---
abstract: 'In this paper, we consider a monolithic approach to handle coupled fluid-structure interaction problems with different hyperelastic models in an all-at-once manner. We apply Newton’s method in the outer iteration dealing with nonlinearities of the coupled system. We discuss preconditioned Krylov subspace, algebraic multigrid and algebraic multilevel methods for solving the linearized algebraic equations. Finally, we compare the results of the monolithic approach with those of the corresponding partitioned approach that was studied in our previous work.'
address:
- 'Johann Radon Institute for Computational and Applied Mathematics (RICAM), Austrian Academy of Sciences, Altenberger Strasse 69, A-4040 Linz, Austria'
- 'Johann Radon Institute for Computational and Applied Mathematics (RICAM), Austrian Academy of Sciences, Altenberger Strasse 69, A-4040 Linz, Austria'
author:
- Ulrich Langer
- Huidong Yang
bibliography:
- 'FSI\_NonLinea.bib'
title: 'Numerical Simulation of Fluid-Structure Interaction Problems with Hyperelastic Models: A Monolithic Approach'
---
Introduction
============
Parallel to the development of the partitioned approach for fluid-structure interaction (FSI) simulation (see, e.g., [@DS:06; @SB09:00; @UK08:00; @Habchi2013306; @NME:NME1792]), the monolithic one has also attracted much interest over the last decade; see, e.g., [@ABXCC10; @EPFL11; @Badia20084216; @NME:NME3001; @YB06:00; @Heil20041; @Razzaq20121156; @CX10]. Compared to the flexibility of the partitioned approach, where existing fluid and structure sub-problem solvers can be directly reused or adapted in an iterative manner, the monolithic one behaves more stably and robustly by dealing with the coupled nonlinear FSI system in an all-at-once manner. Formally speaking, we apply Newton’s method (see [@PD05:00]) in an outer iteration to deal with the nonlinearities originating from the domain movements, convection terms, material laws, transmission conditions and stabilization parameters (which may depend on the solution itself); as a price to pay, a large linearized system has to be solved efficiently at each Newton iteration.
In the monolithic approach, the linearization of the nonlinear coupled system turns out to be a nontrivial task and requires tedious work on both the analytical derivation and the computer implementation. One difficulty considered in this work results from the hyperelastic nonlinear material law for the thick-walled artery with media and adventitia layers (see [@Holzapfel00:00; @Holzapfel06:00]), for which the second and fourth order tensors of the energy functional with respect to the right Cauchy-Green tensor demand a considerable amount of computational effort in each Newton iteration; see, e.g., [@GAH00:00; @JB08:00] for an introduction to the basic tools used to derive these quantities in the Lagrangian framework and, e.g., [@CA14] for the simulation of such arterial tissues. Thanks to our previous work [@ULHY13], the linearization of the hyperelastic models tackled in a partitioned FSI solver is reused in this work. Another difficulty stems from the fluid domain movement handled by the arbitrary Lagrangian-Eulerian (ALE) method, where the fluid domain displacement is introduced as an additional variable; see, e.g., [@TH:81; @LF99:00; @JD04:00]. To formalize the derivative of the fluid sub-problem with respect to the fluid domain displacement, the domain mapping (see, e.g., [@Wick11:00]) and the shape derivative calculus (see, e.g., [@YB06:00; @FENA2005127]) are the two typical robust approaches mainly considered so far. In the domain mapping approach, the fluid sub-problem is mapped to the reference (initial) fluid domain via the ALE mapping, which matches the Lagrangian structure domain on the interface at all times. Therefore, the FSI transmission conditions are defined on the unchanged interface between the fluid and structure reference domains.
By transforming the fluid sub-problem from the current domain (ALE framework) to the reference domain (Lagrangian framework), the fluid domain deformation gradient tensor and its determinant arise, which leads to a formulation similar to the one in the Lagrangian framework usually adopted in continuum mechanics. Thus, for the fluid sub-problem, we follow the same approach to compute the directional derivative with respect to the fluid domain displacement (see related techniques in, e.g., [@GAH00:00; @JB08:00]) as we used for the hyperelastic equations in [@ULHY13]. In the second approach, based on a shape derivative technique (see, e.g., [@JS92]), the derivative of the fluid sub-problem is evaluated by computing the directional derivative with respect to a change of geometry (a small perturbation) of the current domain; see also the use of this technique in the partitioned Newton’s method in [@DS:06; @Yang11:00].
In addition to the effort spent on the linearization of the coupled nonlinear system, the monolithic solver requires properly designed preconditioners and solvers (as inner iterations) for the linearized coupled FSI system at each Newton iteration, which may demand even more effort. In [@Razzaq20121156], preconditioned Krylov subspace methods (see, e.g., [@YS03:00]) and a geometric multigrid method (see, e.g., [@HB03]) with a Vanka-like smoother are employed to solve the linearized and discretized 2D FSI system using the higher order $Q_2-P_1$ stabilized finite element pair. For complex 3D geometries and unstructured meshes, in [@NME:NME3001], the GMRES method (see [@Saad86]) accelerated by a block Gauss-Seidel preconditioner is considered, for which the block inverses are approximated by smoothed aggregation multigrid (see, e.g., [@SMTR08]) for each sub-problem. In order to improve the performance, a monolithic FSI algebraic multigrid (AMG) method using preconditioned Richardson iterations with potentially level-dependent damping parameters as smoothing steps is further developed therein. Moreover, that monolithic solver is shown to be capable of utilizing parallel computing resources. In [@EPFL11], parallel preconditioners for the coupled problem based on algebraic additive Schwarz (see, e.g., [@ATOW05]) preconditioners for the sub-problems are built for both the convective explicit and the geometry-convective explicit time discretized FSI systems. As a 2D counterpart, in [@ABXCC10], a one-level additive Schwarz preconditioner (see, e.g., [@ATOW05]) for the linearized system is considered for the fully implicit time discretized FSI system, which is based on sub-domain preconditioners constructed on extensions of non-overlapping sub-domains to their neighbors.
In this work, we focus on the development and comparison of different monolithic solution methods, namely, the Krylov subspace methods preconditioned by the block $LU$ decomposition of the coupled system, and the AMG and algebraic multilevel (AMLI [@AO89I; @AO90; @PSV08:00; @OA96; @KJMS13; @KJ12], also referred to as K-cycle [@NY08; @NLA542]) methods, applied to the coupled FSI system with nearly incompressible hyperelastic models (see [@Holzapfel00:00; @Holzapfel06:00]). Our solution methods are mainly based on a class of special AMG methods developed in [@FK98:00] and [@WM04:00; @WM06:00] for discrete elliptic and saddle point problems, respectively, where robust matrix-graph based coarsening strategies are proposed in a (purely) algebraic manner. This class of AMG methods has been applied to the sub-problems in fluid-structure interaction simulations; see [@Yang11:00; @YH11:00; @HY10:000; @ULHY13]. In particular, in our recent work [@ULHY13], we have developed this approach by carefully choosing effective smoothers: the Braess-Sarazin smoother (see [@Braess97:00; @WZ00:00]) and the Vanka smoother (see [@Vanka86:00; @TS09:00]), for the linearized Navier-Stokes equations in the ALE framework and the hyperelastic equations in the Lagrangian framework, respectively. In order to further extend this class of AMG methods to the monolithic coupled FSI system after linearization, the two essential components of the AMG methods, the coarsening strategy and the smoother, have to be developed for the coupled system. Namely, a robust coarsening strategy using the stabilized Galerkin projection is constructed based on the $\inf$-$\sup$ condition (see, e.g., [@BF91:00; @VG86]) on coarse levels for the indefinite sub-problems. By this means, we obtain stabilized coupled systems on coarse levels.
The effective smoother is designed as damped block Gauss-Seidel iterations applied to the coupled system, which are based on the AMG cycles for the mesh movement, fluid and structure sub-problems, respectively. According to our numerical experiments, we observe robustness of the damping parameter with respect to the AMG levels and the different hyperelastic models adopted in the FSI simulation. As a variant of our coupled AMG method, we further consider the AMLI method for the coupled FSI system, in which we use the hierarchy of coupled systems constructed in an algebraic manner as in the AMG methods. The smoothing for the coarse grid correction equation is performed by a flexible GMRES (FGMRES [@Saad1993]) scheme preconditioned by the multilevel preconditioner; see, e.g., [@PSV14] for the application to the non-regularized Bingham fluid problem using the geometric multigrid method. In order to improve the performance, we finally consider the GMRES and FGMRES Krylov subspace methods preconditioned with such AMG and AMLI cycles.
The remainder of the paper is organized in the following way. In Section \[sec:pre\], the coupled FSI system using a family of hyperelastic models for a model problem is formulated in a monolithic way. Section \[sec:dis\] deals with the temporal and spatial discretization, and Newton’s method tackling the linearization for the coupled nonlinear FSI system. In Section \[sec:lsm\], several monolithic solution methods for the linearized FSI system are considered in detail. Some numerical experiments are presented in Section \[sec:num\]. Finally, in Section \[sec:con\], some conclusions are drawn.
A model problem {#sec:pre}
===============
Computational domains and mappings
----------------------------------
We consider a model problem in the computational FSI domain $\Omega^t$ at time $t$, decomposed into the fluid domain $\Omega_f^t$ and the structure domain $\Omega_s^t$, i.e., $\overline{\Omega^t}=\overline{\Omega_f^t}\cup\overline{\Omega_s^t}$ and $\Omega_f^t\cap\Omega_s^t=\emptyset$. Let $\Gamma_d^0$ and $\Gamma_n^0$ denote the boundaries with the homogeneous Dirichlet and Neumann condition for the structure sub-problem, respectively, $\Gamma_{in}^t$ and $\Gamma_{out}^t$ the boundaries with the inflow and outflow condition for the fluid sub-problem, respectively, $\Gamma_f^0=\overline{\Gamma_d^0}
\cap(\overline{\Gamma_{in}^0}\cup\overline{\Gamma_{out}^0})$ the fluid boundary with the homogeneous velocity condition, and $\Gamma^t$ the interface between the two domains: $\Gamma^t=\partial\Omega_f^t\cap\partial\Omega_s^t\setminus\Gamma_f^0$. At time $t=0$, we have the initial configurations. See the illustration in Fig. \[fig:dom\].
As usually adopted, we use the Lagrangian mapping (see, e.g., [@GAH00:00; @JB08:00; @WP08]) ${\mathcal L}^t(\cdot):{\mathcal L}^t(x_0)=x_0+d_s(x_0, t)$ for all $x_0\in\Omega_s^0$ and $t\in(0, T)$ to track the motion of the structure body, where $d_s(\cdot, \cdot)$ denotes the structure displacement, i.e., $d_s(\cdot, \cdot):\Omega_s^0\times(0, T]\mapsto{\mathbb{R}}^3$. For the fluid sub-problem, we employ the arbitrary Lagrangian-Eulerian (ALE, see, e.g., [@TH:81; @JD04:00; @LF99:00]) mapping ${\mathcal A}^t(\cdot)$ on $\Omega_f^0$ to track the fluid domain motion, i.e., ${\mathcal A}^t(x_0)=x_0+d_f(x_0, t)$ for all $x_0\in\Omega_f^0$ and $t\in(0, T)$, where the fluid domain displacement $d_f(\cdot, \cdot):\Omega_f^0\times(0, T]\mapsto{\mathbb{R}}^3$ follows the fluid and structure particle motion on $\Gamma^0$, and is arbitrarily extended into the fluid domain $\Omega_f^0$ (see, e.g., [@Wick11:00]). The fluid domain velocity $w_f(\cdot, \cdot):\Omega_f^0\times(0, T]\mapsto{\mathbb{R}}^3$ is then given by $w_f=\partial_t d_f$. With the help of this mapping, the fluid velocity $u(\cdot, \cdot):\Omega_f^0\times(0, T]\mapsto{\mathbb{R}}^3$ and pressure $p(\cdot, \cdot):\Omega_f^0\times(0, T]\mapsto{\mathbb{R}}$ are defined by the transformation:
\[eq:vp\] $$\begin{aligned}
u(x, t)=\tilde{u}(\tilde{x}, t)=\tilde{u}({\mathcal A}^t(x), t),\\
p(x, t)=\tilde{p}(\tilde{x}, t)=\tilde{p}({\mathcal A}^t(x), t),
\end{aligned}$$
for all $x\in\Omega_f^0$ and $\tilde{x}={\mathcal A}^t(x)\in\Omega_f^t$. Here, for simplicity of notation, $\tilde{u}(\cdot, \cdot)$ and $\tilde{p}(\cdot, \cdot)$ denote the corresponding variables in the Eulerian framework.
Basic notations in Kinematics
-----------------------------
In order to formulate the coupled system on the reference domain $\Omega^0$, we first introduce the following basic notations of the kinematics of nonlinear continuum mechanics; see, e.g., [@GAH00:00; @JB08:00; @WP08]. Let $F_f=\partial{\mathcal A}^t/{\partial x}=I+\nabla d_f$ for $x\in\Omega^0_f$ and $F_s=\partial{\mathcal L}^t/{\partial x}=I+\nabla d_s$ for $x\in\Omega^0_s$ denote the fluid and structure deformation gradient tensor, respectively. The determinants are given by $J_f=\text{det}F_f$ and $J_s=\text{det}F_s$, respectively. For the nonlinear hyperelastic models, further notations are used, namely, the right Cauchy-Green tensor $C=F_s^TF_s$ and the three principal invariants $I_1=C:I$, $I_2=0.5(I_1^2-C:C)$ and $I_3=\text{det}C$. Furthermore, the second Piola-Kirchhoff tensor $S$ is defined by $S=2\partial\Psi/\partial C$, where $\Psi$ denotes the invariant dependent energy functional determined by the material properties.
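These quantities are straightforward to evaluate pointwise. The following minimal numpy sketch (the function name is ours, not from the paper's code) computes the deformation gradient, its determinant, the right Cauchy-Green tensor and the three principal invariants from a given displacement gradient:

```python
import numpy as np

def kinematics(grad_d):
    """Given a 3x3 displacement gradient, return the deformation gradient F,
    its determinant J, the right Cauchy-Green tensor C and the three
    principal invariants I1, I2, I3."""
    F = np.eye(3) + grad_d                    # F = I + grad(d)
    J = np.linalg.det(F)                      # J = det F
    C = F.T @ F                               # right Cauchy-Green tensor
    I1 = np.trace(C)                          # I1 = C : I
    I2 = 0.5 * (I1**2 - np.tensordot(C, C))   # I2 = 0.5 (I1^2 - C : C)
    I3 = np.linalg.det(C)                     # I3 = det C = J^2
    return F, J, C, I1, I2, I3
```

Note that $I_3=J_s^2$ holds by construction, which serves as a cheap consistency check in an implementation.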
A family of hyperelastic models {#sec:hypermat}
-------------------------------
A family of hyperelastic models is used in this work, which possess nearly incompressible or anisotropic properties; see, e.g., [@JB08:00; @GAH00:00; @Holzapfel00:00; @Holzapfel06:00; @CA14]. We first consider the Neo-Hookean material model, for which the energy functional is given by $$\label{eq:nhenerg}
\Psi=0.5c_{10}(J_1-3)+0.5\kappa(J_s-1)^2,$$ where $J_1=I_1I_3^{-1/3}$ denotes the modified first invariant, $\kappa$ the bulk modulus and $c_{10}$ the material parameter related to the shear modulus. The second Piola-Kirchhoff tensor is then given by $$\label{eq:nhpk}
S=S^{'} -p_sJ_sC^{-1}$$ with $S^{'}=c_{10}\partial J_1/\partial C$, where the structure pressure $p_s:=p_s(x, t)=-\kappa(J_s-1)$, $p_s:\Omega_s^0\times(0, T]\mapsto{\mathbb{R}}$, is introduced in order to overcome the locking phenomenon for large bulk moduli; see, e.g., [@Klaas96:00; @Maniatty02:00; @Goenezen11:00; @TTE08:00].
We then consider the modified Mooney-Rivlin material model, for which the energy functional is given by $$\label{eq:mrenerg}
\Psi=0.5c_{10}(J_1-3)+0.5c_{01}(J_2-3)+0.5\kappa(J_s-1)^2,$$ where $J_2=I_2I_3^{-2/3}$ denotes the modified second invariant and $c_{01}$ a further material parameter related to the shear modulus. The second Piola-Kirchhoff tensor is then accordingly given by $$\label{eq:mrpk}
S=S^{'}-p_sJ_sC^{-1},$$ where $S^{'}=c_{10}\partial J_1/\partial C +c_{01}\partial J_2/\partial C$.
We finally consider the model of the anisotropic two-layer thick-walled artery; see [@Holzapfel00:00; @Holzapfel06:00]. The energy functional of such an arterial model is defined by $$\label{eq:gaenerg}
\Psi=0.5c_{10}(J_1-3)+\Psi_{\textup{aniso}}+0.5\kappa(J_s-1)^2$$ with $\Psi_{\text{aniso}}=0.5(k_1/k_2)\sum_{i=4, 6}(\exp(k_2(J_i-1)^2)-1)$, where $k_1$ and $k_2$ are a stress-like material parameter and a dimensionless parameter, respectively, associated with the contribution of the collagen fibers to the response, and $J_4>1$ and $J_6>1$ are invariants active in extension, which are defined as $J_4=I_3^{-1/3}A_1:C$ and $J_6=I_3^{-1/3}A_2:C$, respectively. The tensors $A_1=a_{01}\otimes a_{01}$ and $A_2=a_{02}\otimes a_{02}$ are prescribed with the direction vectors $a_{01}=(0, \cos\alpha, \sin\alpha)^T$ and $a_{02}=(0,\cos\alpha, -\sin\alpha)^T$, respectively, where $\alpha\in\{\alpha_M, \alpha_A\}$ denotes the angle between the collagen fibers and the circumferential direction in the media and adventitia, respectively. The second Piola-Kirchhoff tensor $S$ for this hyperelastic model is computed as $$\label{eq:gapk}
S=S^{'}-p_sJ_sC^{-1}$$ with $S^{'}=c_{10}\partial J_1/\partial C+2k_1\sum_{i=4, 6}(J_i-1)\exp(k_2(J_i-1)^2)\,\partial J_i/\partial C$. In our numerical experiments, we use the geometrical configuration and material parameters of a rabbit carotid artery prescribed in [@Holzapfel00:00]. For modeling arteries in FSI simulations considering specific fiber orientations, prestress and zero-stress configurations, and viscoelastic support conditions, we further refer to [@YB06:00; @TET13:00; @YB11:00; @GJF12:00; @GJF13:00; @CNM12:00; @GPlank12:00] for relevant details.
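As a quick numerical illustration of the energy functionals above, the following sketch evaluates $\Psi$ for the Neo-Hookean and the anisotropic model, taking $\Psi_{\text{aniso}}$ in the standard Holzapfel form $k_1/(2k_2)\sum_{i=4,6}(\exp(k_2(J_i-1)^2)-1)$; the parameter values in the test are placeholders, not the rabbit carotid data:

```python
import numpy as np

def psi_neo_hookean(C, c10, kappa):
    """Neo-Hookean energy: 0.5*c10*(J1-3) + 0.5*kappa*(Js-1)^2."""
    I1, I3 = np.trace(C), np.linalg.det(C)
    Js = np.sqrt(I3)                      # Js = det(Fs), since I3 = Js^2
    J1 = I1 * I3 ** (-1.0 / 3.0)          # modified first invariant
    return 0.5 * c10 * (J1 - 3.0) + 0.5 * kappa * (Js - 1.0) ** 2

def psi_anisotropic(C, c10, kappa, k1, k2, alpha):
    """Two-fiber anisotropic energy added on top of the Neo-Hookean part."""
    a1 = np.array([0.0, np.cos(alpha), np.sin(alpha)])    # fiber direction a01
    a2 = np.array([0.0, np.cos(alpha), -np.sin(alpha)])   # fiber direction a02
    I3 = np.linalg.det(C)
    J4 = I3 ** (-1.0 / 3.0) * a1 @ C @ a1                 # J4 = I3^(-1/3) A1:C
    J6 = I3 ** (-1.0 / 3.0) * a2 @ C @ a2                 # J6 = I3^(-1/3) A2:C
    aniso = 0.5 * (k1 / k2) * sum(np.exp(k2 * (Ji - 1.0) ** 2) - 1.0
                                  for Ji in (J4, J6))
    return psi_neo_hookean(C, c10, kappa) + aniso
```

At the stress-free state $C=I$ all three invariant combinations equal their reference values, so both energies vanish, which is a useful sanity check before differentiating $\Psi$ for the second Piola-Kirchhoff tensor.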
Coupled fluid-structure interaction system
------------------------------------------
The coupled FSI system in strong form reads: Find $(d_f, u, p_f, d_s, p_s)$ such that
\[eq:fsi\] $$\begin{aligned}
-\Delta d_f=0&{\textup{ in }} \Omega_f^0, \\
d_f=d_s&{\textup{ on }} \Gamma^0,\\ [0.2cm]
\rho_fJ_f\partial_t u+\rho_fJ_f((u- w_f)\cdot F_f^{-1}\nabla) u \\ \nonumber
-\nabla\cdot (J_f\sigma_f(u, p_f)F_f^{-T})=0&{\textup{ in }}\Omega_f^0,\\
\nabla\cdot (J_fF_f^{-1}u)=0&{\textup{ in }}\Omega_f^0, \\[0.2cm]
\rho_s\partial_{tt}d_s-\nabla\cdot (F_sS)=0&{\textup{ in }}\Omega_s^0,\\
-(J_s-1)-(1/\kappa)p_s=0&{\textup{ in }}\Omega_s^0, \\[0.2cm]
u=\partial_t d_s&{\textup{ on }} \Gamma^0, \\
J_f\sigma_{f}(u, p_f)F_f^{-T}n_f+F_sSn_s=0&{\textup{ on }}\Gamma^0,
\end{aligned}$$
supplemented with the corresponding boundary conditions $d_f=0$ on $\Gamma^0_{in}\cup\Gamma^0_{out}$, $u=0$ on $\Gamma_f^0$, $J_f\sigma_f(u, p_f)F_f^{-T}n_f = g_{in}$ (a given function) on $\Gamma_{in}^0$ and $J_f\sigma_f(u, p_f)F_f^{-T}n_f = 0$ on $\Gamma_{out}^0$, $d_s=0$ on $\Gamma_d^0$ and $F_sSn_s=0$ on $\Gamma_n^0$, as well as the initial conditions $u(x, 0)=0$ for all $x\in\Omega_f^0$ and $d_s(x,0)=\partial_td_s(x, 0)=0$ for all $x\in\Omega_s^0$. Here $\rho_f$ and $\rho_s$ denote the fluid and structure density, respectively, $n_f$ and $n_s$ the fluid and structure outward unit normal vector, respectively, and $\sigma_f(u, p_f):=\mu(\nabla u+(\nabla u)^T)-p_fI$ the Cauchy stress tensor with the dynamic viscosity $\mu$. Note that for the fluid sub-problem, we transform the momentum balance and mass conservation equations from the Eulerian framework to the reference domain using the ALE mapping.
Temporal and spatial discretization and linearization {#sec:dis}
=====================================================
The temporal discretization
---------------------------
For the time discretization of the fluid sub-problem, we use the first order implicit Euler scheme. Let $u^n:=u(\cdot, t^n)$ and $w^n:=w_f(\cdot, t^n)=\partial_t d_f(\cdot, t^n)$ denote the approximations of the fluid velocity and the fluid domain velocity at time level $t^n=n\Delta t$, $n=1,..., N$, $\Delta t=T/N$, i.e., the time interval $(0, T]$ is subdivided into $N$ equidistant sub-intervals. At time level $t^0$, the FSI solution is given by the initial conditions. The time derivatives at level $t^n$ are then approximated by
\[eq:tdfluid\] $$\begin{aligned}
\partial_t u^{n}&\approx&(u^{n}-u^{n-1})/\Delta t, \\
w^{n}&\approx&(d_f^{n}-d_f^{n-1})/\Delta t.
\end{aligned}$$
For the structure sub-problem, a first order Newmark-$\beta$ scheme is used (see [@NM59:00]), i.e.,
\[eq:tdstruc\] $$\begin{aligned}
\ddot{d}_s^{n}&\approx&\frac{1}{\beta\Delta t^2}(d_s^n-d_s^{n-1})
-\frac{1}{\beta\Delta t}\dot{d}_s^{n-1}-\left(\frac{0.5}{\beta}-1\right)\ddot{d}^{n-1}_s, \\
\dot{d}_s^{n}&\approx&\dot{d}_s^{n-1}+\gamma\Delta t\, \ddot{d}_s^n
+(1-\gamma)\Delta t\, \ddot{d}_s^{n-1},
\end{aligned}$$
where $0<\beta\leq 1$ and $0\leq \gamma\leq 1$.
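The update formulas can be sketched as a small helper that recovers the acceleration and velocity at $t^n$ from the newly computed displacement $d_s^n$; the function name and the default parameter values are ours, chosen for illustration only:

```python
def newmark_update(d_new, d_old, v_old, a_old, dt, beta=0.5, gamma=1.0):
    """Newmark-beta recovery step: given the new displacement d^n and the
    old state (d, v, a) at t^{n-1}, return the new velocity and acceleration
    according to the formulas of the text."""
    a_new = ((d_new - d_old) / (beta * dt**2)
             - v_old / (beta * dt)
             - (0.5 / beta - 1.0) * a_old)
    v_new = v_old + dt * (gamma * a_new + (1.0 - gamma) * a_old)
    return v_new, a_new
```

A simple check: for a motion with constant acceleration, both formulas reproduce the exact acceleration and velocity independently of $\beta$ and $\gamma$.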
The time semi-discretized weak formulation
------------------------------------------
In order to find the finite element FSI solution in proper function spaces, we first formulate the weak formulation of the coupled system (\[eq:fsi\]). We introduce the notations $H^1(\Omega_f^0)$, $H^1(\Omega_s^0)$ and $L^2(\Omega_f^0)$ for the standard Sobolev and Lebesgue spaces (see, e.g., [@AF03:00]) on $\Omega_f^0$ and $\Omega_s^0$, respectively. Let $V_m:=H^1(\Omega_f^0)^3$ denote the fluid domain displacement space, and $V_f:=H^1(\Omega_f^0)^3$ and $Q_f:=L^2(\Omega_f^0)$ the fluid velocity and pressure space, respectively. The function spaces $V_s$ and $Q_s$ for the structure displacement and pressure have to be chosen properly with regard to the nonlinearities; see, e.g., [@JMB:76; @PGC88:00]. Incorporating the boundary conditions, we further introduce the following function spaces: $V_{m,D}^n:=\{v\in V_m : v=d_s^n \text{ on } \Gamma^0\}$ and $V_{m,0}:=\{v\in V_m : v=0 \text{ on } \Gamma_{in}^0\cup\Gamma_{out}^0\}$ for the mesh movement sub-problem, $V_{f,0}:=\{v\in V_f : v=0 \text{ on } \Gamma_f^0\}$ for the fluid sub-problem, and $V_{s, D}^n:=\{v\in V_s : v=0 \text{ on } \Gamma_d^0,\; v=u^n\Delta t+d_s^{n-1} \text{ on } \Gamma^0\}$ and $V_{s, 0}:=\{v\in V_s : v=0 \text{ on } \Gamma_d^0\cup\Gamma^0\}$ for the structure sub-problem.
The weak formulation of the coupled system (\[eq:fsi\]) reads: Find $(d^n_f$, $u^n$, $p_f^n$, $d_s^n$, $p_s^n)$ $\in$ $(V_{m, D}^n$, $V_{f,0}$, $Q_f$, $V_{s, D}^n$, $Q_s)$ such that for all $(v_m$, $v_f$, $q_f$, $v_s$, $q_s)$ $\in$ $(V_{m,0}$, $V_{f, 0}$, $Q_f$, $V_{s, 0}$, $Q_s)$
\[eq:weakfsi\] $$\begin{aligned}
(\nabla d_f^n, \nabla v_m)_{\Omega_f^0} = 0&, \\
\rho_f(J_f(u^n-u^{n-1})/\Delta t, v_f)_{\Omega_f^0}\\ \nonumber
+\rho_f(J_f((u^n- (d^n_f-d^{n-1}_f)/\Delta t)\cdot F_f^{-1}\nabla) u^n, v_f)_{\Omega_f^0} \\\nonumber
+(J_f\sigma_f(u^n, p_f^n)F_f^{-T}, \nabla v_f)_{\Omega_f^0}
-\langle g_{in}, v_f\rangle_{\Gamma_{in}^0}=0&,\\
-(\nabla\cdot (J_fF_f^{-1}u^n), q_f)_{\Omega_f^0}=0&,\\
(\beta_2d_s^n, v_s)_{\Omega_s^0}+(S^{'}, F_s^T\nabla v_s)_{\Omega_s^0}\\ \nonumber
-(p_s^nJ_sF_s^{-T}, \nabla v_s)_{\Omega_s^0}-(r_s, v_s)_{\Omega_s^0}=0&,\\
-(J_s-1, q_s)_{\Omega_s^0}-(1/\kappa)(p_s^n, q_s)_{\Omega_s^0}=0&, \\
\langle J_f\sigma_{f}(u^n, p_f^n)F_f^{-T}n_f, v_f\rangle_{\Gamma^0}+
\langle F_sSn_s, v_s\rangle_{\Gamma^0}=0&,
\end{aligned}$$
with $r_s=\beta_2d_s^{n-1}+\beta_1\dot{d}_s^{n-1}+\rho_s(0.5/\beta-1)\ddot{d}_s^{n-1}$, where $\beta_2=\rho_s/(\beta\Delta t^2)$ and $\beta_1=\rho_s/(\beta\Delta t)$. As observed, the fluid sub-problem is coupled with the mesh movement sub-problem in $\Omega_f^0$ and coupled with the structure sub-problem on $\Gamma^0$. The mesh movement sub-problem is coupled with the structure sub-problem on $\Gamma^0$. The equilibrium of surface tractions on $\Gamma^0$ is realized by the equilibrium of the residual of the weak formulation for the fluid and structure momentum equations with non-vanishing test functions $v_f\in V_{f,0}$ and $v_s\in V_{s,0}$ on $\Gamma^0$, where $v_f=v_s$ on $\Gamma^0$; see [@Yang11:00].
The spatial discretization and stabilization
--------------------------------------------
As in [@ULHY13], we use Netgen [@JS97:00] to generate the tetrahedral mesh of the computational FSI domain $\Omega^0$ with grids conforming on the FSI interface and resolving the different structure layers. For the mesh movement sub-problem, we use the $P_1$ finite element on the tetrahedral mesh. For the fluid and structure sub-problems, we use the $P_1-P_1$ finite element pair with stabilization in order to fulfill the $\inf$-$\sup$ or LBB (Ladyzhenskaya-Babuška-Brezzi) stability condition (see, e.g., [@BF91:00; @VG86]), and to tackle the instability in advection dominated regions of the domain. In particular, we employ a unified streamline-upwind and pressure-stabilizing Petrov-Galerkin (SUPG/PSPG) method (see, e.g., [@Hughes89:00; @Brooks82:00; @WD06:00; @FLD09:00; @CF07:00]) to stabilize the $P_1-P_1$ discretized fluid sub-problem. For the structure sub-problem, we use the PSPG method (see, e.g., [@Hughes89:00; @Klaas96:00; @Maniatty02:00; @Goenezen11:00]) to suppress the instability caused by the equal order finite element interpolation spaces for the displacement and pressure. The application of this stabilization technique to the hyperelastic equations of the anisotropic two-layer thick-walled artery is described in [@ULHY13].
Newton’s method for the nonlinear FSI system {#sec:nsm}
--------------------------------------------
Formally speaking, after discretization in time and space of the coupled FSI system (\[eq:fsi\]), we obtain the following nonlinear finite element algebraic equation $$\label{eq:nlinsystem}
{\mathcal R}(X) = 0$$ with $${\mathcal R}(\cdot)=
\left[\begin{array}{c}
R_{ms}(\cdot)\\R_{mfs}(\cdot)\\R_{sf}(\cdot)
\end{array}\right]\text{ and }
X=
\left[\begin{array}{c}
D_m \\ U_f\\ U_s
\end{array}\right],$$ where the subscripts $m$, $f$ and $s$ represent the mesh movement, fluid and structure sub-problems, respectively, and $ms$, $mfs$ and $sf$ the coupling among the corresponding sub-problems. Furthermore, $R_{ms}(X)=0$, $R_{mfs}(X)=0$ and $R_{sf}(X)=0$ stand for the finite element equations of the fluid mesh movement sub-problem coupled with the Dirichlet boundary condition on $\Gamma^0$ from the structure sub-problem, of the fluid sub-problem coupled with the fluid domain displacement from the mesh movement sub-problem and the Neumann boundary condition on $\Gamma^0$ from the structure sub-problem, and of the structure sub-problem coupled with the Dirichlet boundary condition on $\Gamma^0$ from the fluid sub-problem, respectively. The finite element solutions of the fluid domain displacement, the fluid velocity and pressure, and the structure displacement and pressure are denoted by $D_m$, $U_f$ and $U_s$, respectively.
Newton’s method applied to the nonlinear coupled FSI equation (\[eq:nlinsystem\]) is presented in Algorithm \[alg:Newton\], where ${\mathcal J}_k$ denotes the Jacobian matrix and $\delta x_k$ the correction of the finite element solution at the $k$th nonlinear iteration.
Given an initial guess $X_0$, for $k\ge 0$: solve the linearized system ${\mathcal J}_k\,\delta x_k=-{\mathcal R}(X_k)$ for the correction $\delta x_k$, update $X_{k+1}=X_k+\delta x_k$, and repeat until the residual norm $\|{\mathcal R}(X_{k+1})\|$ falls below a prescribed tolerance.
Note that the two terms $\partial R_{ms}/\partial U_s$ and $\partial R_{sf}/\partial U_f$ contain the derivatives with respect to the structure displacement and the fluid velocity, respectively, which correspond to the linearization of the two Dirichlet interface conditions on $\Gamma^0$, between the fluid and structure domain displacements and between the fluid and structure velocities, respectively. The linearization of the Neumann interface condition on $\Gamma^0$ and of the fluid sub-problem is given in the second row of ${\mathcal J}_k$; the linearization of the structure sub-problem is given in the third row of ${\mathcal J}_k$.
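The outer loop of Algorithm \[alg:Newton\] can be sketched generically as follows; the toy residual and Jacobian below merely stand in for ${\mathcal R}$ and ${\mathcal J}$ and have nothing to do with the actual FSI equations:

```python
import numpy as np

def newton(residual, jacobian, X0, tol=1e-10, max_iter=25):
    """Newton's method: solve J_k * dx = -R(X_k), update X_{k+1} = X_k + dx,
    until the residual norm drops below tol."""
    X = X0.astype(float)
    for _ in range(max_iter):
        R = residual(X)
        if np.linalg.norm(R) < tol:
            break
        dX = np.linalg.solve(jacobian(X), -R)  # linearized correction equation
        X = X + dX
    return X

# Hypothetical 2x2 nonlinear system standing in for R(X) = 0
R = lambda X: np.array([X[0]**2 + X[1] - 3.0, X[0] + X[1]**3 - 9.0])
J = lambda X: np.array([[2*X[0], 1.0], [1.0, 3*X[1]**2]])
```

In the FSI setting, the call to `np.linalg.solve` is replaced by one of the monolithic solvers of Section \[sec:lsm\], and assembling `jacobian(X)` is itself the expensive step discussed above.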
Besides the costly assembly of ${\mathcal J}_k$ in (\[eq:jaco\]), the other main cost in Algorithm \[alg:Newton\] is the solution of the linearized equation (\[eq:jacosys\]). More precisely, we arrive at the linearized FSI system in the following reordered form (\[eq:linsystem1\]) that we aim to solve (for notational simplicity, we omit the subscript $k$ and the zero matrix entries): $$\label{eq:linsystem1}
\left[\begin{array}{cccccccc}
A_m^{ii}&A_m^{i\gamma}& & & & & & \\ [0.1cm]
& I & & & & -I& & \\ [0.1cm]
B^i_{fm}& B^\gamma_{fm}& -C_f & B^{i}_{1f} & B^{\gamma}_{1f} & & & \\ [0.1cm]
A^{ii}_{fm}&A^{i\gamma}_{fm}& B^{i}_{2f}&A^{ii}_f&A^{i \gamma}_f& & & \\ [0.1cm]
A^{\gamma i}_{fm}&A^{\gamma\gamma}_{fm}&B^{\gamma}_{2f}
&A^{\gamma i}_f&A^{\gamma\gamma}_f
&A^{\gamma\gamma}_s&A^{\gamma i}_s&B^{\gamma}_{2s}\\ [0.1cm]
& & & & -I & \frac{1}{\Delta t}I & & \\ [0.1cm]
& & & & & A^{i\gamma}_s& A^{ii}_s& B^{i}_{2s}\\ [0.1cm]
& & & & & B^{\gamma}_{1s}& B^{i}_{1s}& -C_s\\
\end{array}
\right]
\left[\begin{array}{c}
\Delta d_m^i\\ [0.1cm]
\Delta d_m^{\gamma} \\ [0.1cm]
\Delta p_f\\ [0.1cm]
\Delta u_f^i \\ [0.1cm]
\Delta u_f^{\gamma}\\ [0.1cm]
\Delta d_s^{\gamma} \\ [0.1cm]
\Delta d_s^{i}\\ \Delta p_s
\end{array}\right]
=
\left[\begin{array}{c}
r_m^i\\ [0.1cm]
r_m^{\gamma} \\ [0.1cm]
r_{p_f}\\ [0.1cm]
r_f^i \\[0.1cm]
r_f^{\gamma}\\[0.1cm]
r_s^{\gamma}\\[0.1cm]
r_s^{i}\\ [0.1cm]
r_{p_s}
\end{array}\right],$$ where the superscripts $\gamma$ and $i$ denote quantities associated with the nodal degrees of freedom (DOF) on the interface and with all remaining DOF in the domain and on the other boundaries, respectively. Furthermore, the quantities with the superscripts $\gamma\gamma$, $ii$, $\gamma i$ and $i\gamma$ indicate that they result from the coupling of the corresponding DOF. The solution components carry the symbol $\Delta$ in front, indicating the DOF of the corrections. It is easy to see from (\[eq:linsystem1\]) how the sub-problems are linearized and coupled in one large FSI system. In the computer implementation, we do not assemble the system (\[eq:linsystem1\]) explicitly, but rather a separate system for each sub-problem. The matching conditions are imposed implicitly by the conforming grids on the interface. This is easily implemented in the preconditioned Krylov subspace methods. The monolithic algebraic multigrid and multilevel approaches for the large coupled system, however, require explicitly formed systems on the coarse levels. Furthermore, a direct solver is usually applied on the coarsest level, which requires an explicitly formed system. Therefore, it is convenient and practical to form the large system explicitly while keeping the flexibility of assembling the system for each sub-problem separately. We thus reformulate the system (\[eq:linsystem1\]) as $$\label{eq:linsystem2}
Kx=b$$ $$\label{eq:linsystem3}
K=
\left[\begin{array}{ccc}
A_m& 0 & A_{ms}\\
A_{fm}& A_{f}& A_{fs} \\
0 &A_{sf}& A_s
\end{array}
\right],
x=
\left[\begin{array}{c}
\Delta d_m
\\ \Delta u_f
\\ \Delta u_s
\end{array}\right],
b=
\left[\begin{array}{c}
r_m
\\ r_f
\\ r_s
\end{array}\right],$$ where $A_m$, $A_f$ and $A_s$ represent the mesh movement, fluid and structure stiffness matrices from the finite element assembly, respectively, which are permuted from the corresponding blocks in (\[eq:linsystem1\]) according to the local nodal numbering of the finite element mesh of each sub-problem. The coupling matrices between $i\in\{m, f, s\}$ and $j\in\{m, f, s\}$ are denoted by $A_{ij}$, $i\neq j$. The solution vectors $\Delta d_m$, $\Delta u_f$ and $\Delta u_s$ contain the DOF of the corrections of the fluid domain displacement, the fluid velocity and pressure, and the structure displacement and pressure, respectively. The residual vectors are denoted by $r_m$, $r_f$ and $r_s$ for the fluid domain movement, fluid and structure sub-problems, respectively. These quantities are similar to the ones in (\[eq:linsystem1\]), except that they are not ordered according to the separation of interface and remaining DOF. For consistency of notation, we will restrict our discussion to the solution methods for the linearized system (\[eq:linsystem2\]) from now on.
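For illustration, a block system with the sparsity pattern of (\[eq:linsystem3\]) can be composed from separately stored sub-problem and coupling matrices, e.g., with `scipy.sparse.bmat`; all matrices below are random stand-ins with made-up sizes, not actual finite element matrices:

```python
import scipy.sparse as sp

n_m, n_f, n_s = 4, 6, 5  # hypothetical DOF counts per sub-problem

# Diagonal sub-problem blocks (stand-ins for the assembled FE matrices)
A_m = sp.identity(n_m, format='csr')
A_f = 2.0 * sp.identity(n_f, format='csr')
A_s = 3.0 * sp.identity(n_s, format='csr')

# Coupling blocks (stand-ins for A_ms, A_fm, A_fs, A_sf)
A_ms = sp.random(n_m, n_s, density=0.2, random_state=1, format='csr')
A_fm = sp.random(n_f, n_m, density=0.2, random_state=2, format='csr')
A_fs = sp.random(n_f, n_s, density=0.2, random_state=3, format='csr')
A_sf = sp.random(n_s, n_f, density=0.2, random_state=4, format='csr')

# Assemble the monolithic matrix K; None marks the structural zero blocks
K = sp.bmat([[A_m, None, A_ms],
             [A_fm, A_f, A_fs],
             [None, A_sf, A_s]], format='csr')
```

The `None` entries keep the zero blocks structural, so each sub-problem assembly remains independent until this final composition step.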
Monolithic solution methods for the coupled system {#sec:lsm}
==================================================
In this section, we discuss and compare different monolithic solution methods, namely, the preconditioned Krylov subspace methods, the algebraic multigrid and algebraic multilevel method, applied to the coupled system (\[eq:linsystem2\]).
The preconditioned Krylov subspace methods {#sec:pky}
------------------------------------------
Because of the block structure of the system matrix $K$ in (\[eq:linsystem2\]), we discuss some preconditioners mainly based on its block $LU$ decomposition (see, e.g., [@YS03:00]). The application of the inverse of each preconditioner to a given vector is easily realized using our efficient AMG methods for the sub-problems (see [@ULHY13]).
### The block-diagonal preconditioner
We first consider the simplest block-diagonal preconditioner $\tilde{P}_D$, that is obtained by approximating $$\label{eq:pred}
P_D = \left[\begin{array}{ccc}
A_m & & \\
& A_f & \\
& & A_s \\
\end{array}\right]
\text{ with }
\tilde{P}_D = \left[\begin{array}{ccc}
\tilde{A}_m & & \\
& \tilde{A}_f & \\
& & \tilde{A}_s \\
\end{array}\right],$$ where $\tilde{A}_{i}=A_i(I-M_i^j)^{-1}$, $i\in\{m, f, s\}$, are the corresponding multigrid preconditioners for each sub-problem; see, e.g., [@UH:02; @UJ:89]. The inverse of $\tilde{P}_D$ is easily evaluated by $$\label{eq:ipred}
\tilde{P}_D^{-1} = \left[\begin{array}{ccc}
\tilde{A}_m^{-1} & & \\
& \tilde{A}_f ^{-1}& \\
& & \tilde{A}_s^{-1} \\
\end{array}\right],$$ which corresponds to one AMG iteration applied to each sub-problem, as developed in [@ULHY13]. This preconditioner completely neglects the coupling conditions among the different sub-problems.
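To make the action of $\tilde{P}_D^{-1}$ concrete: it decomposes into three independent sub-problem solves. The following NumPy sketch uses exact dense solves in place of the sub-problem AMG cycles; the block sizes and matrices are artificial placeholders, not the actual FSI matrices.

```python
import numpy as np

def apply_block_diag_prec(solvers, r, sizes):
    """Apply P_D^{-1} to a residual r: one (approximate) solve per sub-problem."""
    out, start = [], 0
    for solve, n in zip(solvers, sizes):
        out.append(solve(r[start:start + n]))
        start += n
    return np.concatenate(out)

rng = np.random.default_rng(0)
sizes = [4, 6, 5]                     # mock DOF counts: mesh, fluid, structure
blocks = [np.eye(n) * 2 + 0.1 * rng.standard_normal((n, n)) for n in sizes]
solvers = [lambda b, A=A: np.linalg.solve(A, b) for A in blocks]

r = rng.standard_normal(sum(sizes))
z = apply_block_diag_prec(solvers, r, sizes)

# With exact sub-solves this equals a direct solve with the block-diagonal P_D.
P_D = np.zeros((sum(sizes), sum(sizes)))
s = 0
for A, n in zip(blocks, sizes):
    P_D[s:s + n, s:s + n] = A
    s += n
assert np.allclose(z, np.linalg.solve(P_D, r))
```

In the actual method, each `solve` would be one AMG cycle for the corresponding sub-problem rather than an exact solve.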
### The block lower triangular preconditioner
The block lower triangular preconditioner $\tilde{P}_L$ is obtained by approximating $$\label{eq:prel}
P_L=\left[\begin{array}{ccc}
A_m & &\\
A_{fm} & A_f & \\
0 & A_{sf}& A_s \\
\end{array}\right]
\text{ with }
\tilde{P}_L=\left[\begin{array}{ccc}
\tilde{A}_m & &\\
A_{fm} & \tilde{A}_f & \\
0 & A_{sf}& \tilde{A}_s \\
\end{array}\right].$$ It is easy to see that the inverse of $\tilde{P}_L$ is given by $$\label{eq:iprel}
\tilde{P}_L^{-1}=\left[\begin{array}{ccc}
\tilde{A}_m^{-1} & &\\
-\tilde{A}_f^{-1}A_{fm}\tilde{A}_m^{-1} & \tilde{A}_f^{-1} & \\
-\tilde{A}_{s}^{-1}A_{sf}\tilde{A}_f^{-1}A_{fm}\tilde{A}_m^{-1} & -\tilde{A}_s^{-1}A_{sf}\tilde{A}_f^{-1}& \tilde{A}_s^{-1} \\
\end{array}\right],$$ which is nothing but a block Gauss-Seidel iteration (using forward substitution) with zero initial guess applied to (\[eq:linsystem2\]). This is easily computed since we have efficient AMG methods to approximate the inverses of $A_m$, $A_f$ and $A_s$. It is also easy to see that one application of $\tilde{P}_L^{-1}$ only requires (approximately) inverting $A_m$, $A_f$ and $A_s$ once. This preconditioner takes into account the coupling block $A_{fm}$, the directional derivative of the fluid sub-problem with respect to the fluid domain displacement.
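The application of $\tilde{P}_L^{-1}$ is a single block forward substitution. A minimal NumPy sketch (exact solves stand in for the AMG approximations $\tilde{A}_i^{-1}$; all matrices are artificial) verifies the forward sweep against a direct solve with the block lower triangular matrix:

```python
import numpy as np

rng = np.random.default_rng(1)
nm, nf, ns = 3, 4, 2                     # mock block sizes
Am = np.eye(nm) * 3 + 0.1 * rng.standard_normal((nm, nm))
Af = np.eye(nf) * 3 + 0.1 * rng.standard_normal((nf, nf))
As = np.eye(ns) * 3 + 0.1 * rng.standard_normal((ns, ns))
Afm = 0.1 * rng.standard_normal((nf, nm))
Asf = 0.1 * rng.standard_normal((ns, nf))

def apply_PL_inv(rm, rf, rs):
    """Block forward substitution: one sub-solve each with A_m, A_f, A_s."""
    dm = np.linalg.solve(Am, rm)
    uf = np.linalg.solve(Af, rf - Afm @ dm)
    us = np.linalg.solve(As, rs - Asf @ uf)
    return np.concatenate([dm, uf, us])

r = rng.standard_normal(nm + nf + ns)
z = apply_PL_inv(r[:nm], r[nm:nm + nf], r[nm + nf:])

P_L = np.block([[Am, np.zeros((nm, nf)), np.zeros((nm, ns))],
                [Afm, Af, np.zeros((nf, ns))],
                [np.zeros((ns, nm)), Asf, As]])
assert np.allclose(z, np.linalg.solve(P_L, r))
```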
### The block upper triangular preconditioner
We then consider the block upper triangular preconditioner $\tilde{P}_U$ obtained by approximating $$\label{eq:prer}
P_U=\left[\begin{array}{ccc}
A_m & 0 & A_{ms}\\
& A_f & A_{fs}\\
& & A_s \\
\end{array}\right]
\text{ with }
\tilde{P}_U=\left[\begin{array}{ccc}
\tilde{A}_m & 0 & A_{ms}\\
& \tilde{A}_f & A_{fs}\\
& & \tilde{A}_s \\
\end{array}\right].$$ The coupling blocks $A_{ms}$ and $A_{fs}$ are included, which represent the coupling of the Dirichlet interface condition between the fluid and structure domain displacement, and the coupling of the Neumann interface condition between the fluid and the structure surface traction, respectively. The inverse $\tilde{P}_U^{-1}$ is given by $$\label{eq:iprer}
\tilde{P}_U^{-1}=\left[\begin{array}{ccc}
\tilde{A}_m^{-1} & 0 & -\tilde{A}_{m}^{-1}A_{ms}\tilde{A}_s^{-1}\\
& \tilde{A}_f^{-1} & -\tilde{A}_f^{-1}A_{fs}\tilde{A}_s^{-1}\\
& & \tilde{A}_s^{-1} \\
\end{array}\right],$$ which is nothing but a block Gauss-Seidel iteration using backward substitution. As we see, the block $A_{fm}$, the derivative of the fluid sub-problem with respect to the fluid domain displacement, is not taken into account.
### The $SSOR-$preconditioner
We consider a Symmetric Successive Over-Relaxation (SSOR) with a special choice of the relaxation parameter $\omega=1$. The preconditioner is based on the following $LU$ factorization of $P_{SSOR}$ given by $$\label{eq:pres}
\begin{aligned}
P_{SSOR}&=\left[\begin{array}{ccc}
A_m & & \\
A_{fm} & A_f & \\
0 & A_{sf} & A_s \\
\end{array}\right]\times
\left[\begin{array}{ccc}
I& 0& A_m^{-1}A_{ms}\\
& I & A_f^{-1}A_{fs}\\
& & I \\
\end{array}\right]\\
&=\left[\begin{array}{ccc}
A_m & 0 & A_{ms}\\
A_{fm} & A_f & A_{fs}+A_{fm}A_m^{-1}A_{ms} \\
0 & A_{sf} & A_s+A_{sf}A_f^{-1}A_{fs} \\
\end{array}\right].
\end{aligned}$$ This can be reformulated as $P_{SSOR}=K+R_{SSOR}$, where the remainder $R_{SSOR}$ is given by $$R_{SSOR}=\left[\begin{array}{ccc}
0& 0 & 0\\
0& 0 & A_{fm}A_m^{-1}A_{ms} \\
0 & 0 & A_{sf}A_f^{-1}A_{fs} \\
\end{array}\right].$$ The $SSOR$ preconditioner $\tilde{P}_{SSOR}$ is then given by $$\begin{aligned}
\tilde{P}_{SSOR}&=\left[\begin{array}{ccc}
\tilde{A}_m & & \\
A_{fm} & \tilde{A}_f & \\
0 & A_{sf} & \tilde{A}_s \\
\end{array}\right]\times
\left[\begin{array}{ccc}
I& 0& \tilde{A}_m^{-1}A_{ms}\\
& I & \tilde{A}_f^{-1}A_{fs}\\
& & I \\
\end{array}\right].
\end{aligned}$$ The inverse of $\tilde{P}_{SSOR}$ is computed by two block Gauss-Seidel iterations using the backward and forward substitution consecutively: $$\label{eq:ipres}
\begin{aligned}
\tilde{P}_{SSOR}^{-1}=
&\left[\begin{array}{ccc}
I & 0& -\tilde{A}_m^{-1}A_{ms}\\
& I & -\tilde{A}_f^{-1}A_{fs}\\
& & I \\
\end{array}\right]\times
\\
&\left[\begin{array}{ccc}
\tilde{A}_m^{-1} & &\\
-\tilde{A}_f^{-1}A_{fm}\tilde{A}_m^{-1} & \tilde{A}_f^{-1} & \\
-\tilde{A}_{s}^{-1}A_{sf}\tilde{A}_f^{-1}A_{fm}\tilde{A}_m^{-1} & -\tilde{A}_s^{-1}A_{sf}\tilde{A}_f^{-1}& \tilde{A}_s^{-1} \\
\end{array}\right].
\end{aligned}$$ Compared to $\tilde{P}_L^{-1}$ and $\tilde{P}_U^{-1}$, two additional inverse operations (one with $\tilde{A}_m^{-1}$ and one with $\tilde{A}_f^{-1}$) are required.
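The two-stage application of $\tilde{P}_{SSOR}^{-1}$ (a forward block Gauss-Seidel sweep followed by the upper triangular correction) can be checked against the factorization directly. A NumPy sketch with exact sub-solves and artificial blocks:

```python
import numpy as np

rng = np.random.default_rng(4)
nm, nf, ns = 3, 4, 2                     # mock block sizes
def blk(n): return np.eye(n) * 4 + 0.1 * rng.standard_normal((n, n))
Am, Af, As = blk(nm), blk(nf), blk(ns)
Ams = 0.1 * rng.standard_normal((nm, ns))
Afm = 0.1 * rng.standard_normal((nf, nm))
Afs = 0.1 * rng.standard_normal((nf, ns))
Asf = 0.1 * rng.standard_normal((ns, nf))

def apply_PSSOR_inv(rm, rf, rs):
    # forward (block Gauss-Seidel) sweep: apply the inverse of the lower factor
    ym = np.linalg.solve(Am, rm)
    yf = np.linalg.solve(Af, rf - Afm @ ym)
    ys = np.linalg.solve(As, rs - Asf @ yf)
    # backward correction: apply the inverse of the unit upper factor
    xm = ym - np.linalg.solve(Am, Ams @ ys)
    xf = yf - np.linalg.solve(Af, Afs @ ys)
    return np.concatenate([xm, xf, ys])

Z = np.zeros
Lfac = np.block([[Am, Z((nm, nf)), Z((nm, ns))],
                 [Afm, Af, Z((nf, ns))],
                 [Z((ns, nm)), Asf, As]])
Ufac = np.block([[np.eye(nm), Z((nm, nf)), np.linalg.solve(Am, Ams)],
                 [Z((nf, nm)), np.eye(nf), np.linalg.solve(Af, Afs)],
                 [Z((ns, nm)), Z((ns, nf)), np.eye(ns)]])

r = rng.standard_normal(nm + nf + ns)
z = apply_PSSOR_inv(r[:nm], r[nm:nm + nf], r[nm + nf:])
assert np.allclose(z, np.linalg.solve(Lfac @ Ufac, r))
```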
### The $ILU(0)-$preconditioner
We finally consider the $ILU(0)$ preconditioner $\tilde{P}_{ILU}$. This incomplete factorization technique is described in, e.g., [@YS03:00; @VH97:00; @OA96]. Here we apply a block $ILU(0)$ factorization to the coupled FSI system, given by $$\label{eq:preilu}
\begin{aligned}
P_{ILU}&=\left[\begin{array}{ccc}
I & & \\
A_{fm}A_m^{-1} & I & \\
0 & A_{sf}A_f^{-1} & I \\
\end{array}\right]\times
\left[\begin{array}{ccc}
A_m& 0& A_{ms}\\
& A_f & A_{fs}-A_{fm}A_m^{-1}A_{ms}\\
& & A_s \\
\end{array}\right]\\
&=\left[\begin{array}{ccc}
A_m & 0 & A_{ms}\\
A_{fm} & A_f & A_{fs} \\
0 & A_{sf} & A_s+A_{sf}A_f^{-1}(A_{fs}-A_{fm}A_m^{-1}A_{ms}) \\
\end{array}\right],
\end{aligned}$$ that can be rewritten as $P_{ILU}=K+R_{ILU}$, where the remainder $R_{ILU}$ is given by $$R_{ILU}=\left[\begin{array}{ccc}
0& 0 & 0\\
0& 0 & 0 \\
0 & 0 & A_{sf}A_f^{-1}(A_{fs}-A_{fm}A_m^{-1}A_{ms}) \\
\end{array}\right].$$ The preconditioner $\tilde{P}_{ILU}$ is then given by $$\begin{aligned}
\tilde{P}_{ILU}&=\left[\begin{array}{ccc}
I & & \\
A_{fm}\tilde{A}_m^{-1} & I & \\
0 & A_{sf}\tilde{A}_f^{-1} & I \\
\end{array}\right]\times
\left[\begin{array}{ccc}
\tilde{A}_m& 0& A_{ms}\\
& \tilde{A}_f & A_{fs}-A_{fm}\tilde{A}_m^{-1}A_{ms}\\
& & \tilde{A}_s \\
\end{array}\right]\\
&=\left[\begin{array}{ccc}
\tilde{A}_m & 0 & A_{ms}\\
A_{fm} & \tilde{A}_f & A_{fs} \\
0 & A_{sf} & \tilde{A}_s+A_{sf}\tilde{A}_f^{-1}(A_{fs}-A_{fm}\tilde{A}_m^{-1}A_{ms}) \\
\end{array}\right].
\end{aligned}$$ The inverse of $\tilde{P}_{ILU}$ is then computed by two block Gauss-Seidel iterations using the forward and backward substitution consecutively: $$\label{eq:ipreilu}
\begin{aligned}
\tilde{P}_{ILU}^{-1}=
&\left[\begin{array}{ccc}
\tilde{A}_m^{-1} & 0 & -\tilde{A}_m^{-1}A_{ms}\tilde{A}_s^{-1}\\
& \tilde{A}_f^{-1} & -\tilde{A}_f^{-1}(A_{fs}-A_{fm}\tilde{A}_m^{-1}A_{ms})\tilde{A}_s^{-1} \\
& & \tilde{A}_s^{-1}\\
\end{array}\right]\times
\\
&\left[\begin{array}{ccc}
I & & \\
-A_{fm}\tilde{A}_m^{-1}& I & \\
A_{sf}\tilde{A}_f^{-1}A_{fm}\tilde{A}_m^{-1}& -A_{sf}\tilde{A}_f^{-1}& I \\
\end{array}\right].
\end{aligned}$$ Compared to $\tilde{P}_L^{-1}$ and $\tilde{P}_U^{-1}$, two more inverse operations of $\tilde{A}_m^{-1}$ and one more inverse operation of $\tilde{A}_f^{-1}$ are required.
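The identity $P_{ILU}=K+R_{ILU}$ can be verified numerically: with exact inverses, the factored matrix differs from $K$ only in the structure-structure block. A NumPy sketch with artificial blocks:

```python
import numpy as np

rng = np.random.default_rng(2)
nm, nf, ns = 3, 3, 3                     # mock block sizes
def blk(n): return np.eye(n) * 4 + 0.1 * rng.standard_normal((n, n))
Am, Af, As = blk(nm), blk(nf), blk(ns)
Ams = 0.1 * rng.standard_normal((nm, ns))
Afm = 0.1 * rng.standard_normal((nf, nm))
Afs = 0.1 * rng.standard_normal((nf, ns))
Asf = 0.1 * rng.standard_normal((ns, nf))
Z = np.zeros

K = np.block([[Am, Z((nm, nf)), Ams],
              [Afm, Af, Afs],
              [Z((ns, nm)), Asf, As]])

# block ILU(0) factors (exact inverses stand in for the AMG approximations)
L = np.block([[np.eye(nm), Z((nm, nf)), Z((nm, ns))],
              [Afm @ np.linalg.inv(Am), np.eye(nf), Z((nf, ns))],
              [Z((ns, nm)), Asf @ np.linalg.inv(Af), np.eye(ns)]])
U = np.block([[Am, Z((nm, nf)), Ams],
              [Z((nf, nm)), Af, Afs - Afm @ np.linalg.inv(Am) @ Ams],
              [Z((ns, nm)), Z((ns, nf)), As]])
P_ILU = L @ U

# the mismatch P_ILU - K lives only in the structure-structure block
R = P_ILU - K
R33 = Asf @ np.linalg.inv(Af) @ (Afs - Afm @ np.linalg.inv(Am) @ Ams)
assert np.allclose(R[nm + nf:, nm + nf:], R33)
assert np.allclose(R[:nm + nf, :], 0)
```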
Algebraic multigrid method for the coupled FSI system {#sec:amgfsi}
-----------------------------------------------------
The $LU$ factorization is probably the best-known approach for constructing preconditioners for general systems. Unfortunately, the preconditioners discussed in Section \[sec:pky\] for the coupled FSI system are not robust with respect to, e.g., the mesh size. As we observe from numerical experiments, the iteration numbers increase when the mesh is refined. In order to eliminate this mesh dependence, we consider the AMG and AMLI methods. These methods tackle the high and low frequency errors separately by using the smoothing and the coarse grid correction steps, respectively. We discuss two essential components, the coarsening strategy and the smoother, that are used in both the AMG and AMLI methods.
### The coarsening strategy
First of all, we define a full rank prolongation matrix $$\label{eq:pro}
P_{l+1}^l =\left[
\begin{array}{ccc}
P_m^l& &\\
& P_f^l & \\
& & P_s^l
\end{array}\right],$$ where $l=1,...,L-1$, indicates the level of the hierarchy, i.e., index $1$ refers to the finest level and $L$ to the coarsest level. Here $P_m^l:{\mathbb{R}}^{n_m^{l+1}}\mapsto{\mathbb{R}}^{n_m^l}$ denotes the prolongation matrix constructed for the elliptic mesh movement sub-problem as in [@FK98:00], $n_m^l$ the number of DOF of the mesh movement sub-problem on level $l$, with $n_m^{l+1}<n_m^l$. In a similar way, $P_f^l : {\mathbb{R}}^{n_f^{l+1}}\mapsto{\mathbb{R}}^{n_f^l}$ and $P_s^l : {\mathbb{R}}^{n_s^{l+1}}\mapsto{\mathbb{R}}^{n_s^l}$ represent the prolongation matrices constructed for the indefinite fluid and structure sub-problems as in [@WM04:00; @ULHY13], which take the stability into account by proper scaling and avoid a mixture of velocity/displacement and pressure components on coarse levels; $n_f^l$ and $n_s^l$ denote the numbers of DOF of the fluid and structure sub-problems on level $l$, with $n_f^{l+1}<n_f^l$ and $n_s^{l+1}<n_s^l$. Then it is easy to see that $P_{l+1}^l : {\mathbb{R}}^{n_m^{l+1}+n_f^{l+1}+n_s^{l+1}}\mapsto{\mathbb{R}}^{n_m^l+n_f^l+n_s^l}$. More sophisticated and expensive coarsening strategies of the AMG method for saddle point systems arising from the fluid sub-problem can be found in [@BM:13]. In this work, we restrict ourselves to the strategy introduced in [@WM04:00], where a simple scaling technique is applied. We then define a restriction matrix $R_l^{l+1} : {\mathbb{R}}^{n_m^l+n_f^l+n_s^l}\mapsto{\mathbb{R}}^{n_m^{l+1}+n_f^{l+1}+n_s^{l+1}}$ as $$\label{eq:res}
R_l^{l+1} =\left[
\begin{array}{ccc}
R_m^{l+1}& &\\
& R_f^{l+1} & \\
& & R_s^{l+1}
\end{array}\right],$$ where $R_m^{l+1}=(P_m^l)^T$, $R_f^{l+1}=(P_f^l)^T$ and $R_s^{l+1}=(P_s^l)^T$. The system on the finest level $l=1$ is given by (\[eq:linsystem2\]), which we write as $K_1x_1=b_1$. Then the system matrix on the coarser level $l+1$ is obtained by the Galerkin projection, which takes the stability of the indefinite sub-systems on coarse levels into account: $$\label{eq:cosys}
K_{l+1}=R_l^{l+1}K_lP_{l+1}^l=\left[
\begin{array}{ccc}
R_m^{l+1}A_m^lP_m^l&0&R_m^{l+1}A_{ms}^lP_s^l\\
R_f^{l+1}A_{fm}^lP_m^l&R_f^{l+1}A_f^lP_f^l& R_f^{l+1}A_{fs}^lP_s^l\\
0&R_s^{l+1}A_{sf}^lP_f^l&R_s^{l+1}A_s^lP_s^l
\end{array}\right],$$ where $A^l_{i}$, $i\in\{m, ms, fm, f, fs, sf, s\}$ denote the matrices on the level $l$, $l=1,...,L-1$. On the coarsest level $L$, the coupled system is solved by a direct solver.
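A key structural property of the Galerkin projection (\[eq:cosys\]) with block-diagonal transfer operators is that the zero coupling blocks of $K_l$ are inherited by $K_{l+1}$. A NumPy sketch with random (artificial) transfer operators:

```python
import numpy as np

rng = np.random.default_rng(3)
nm, nf, ns = 6, 8, 4        # fine-level block sizes (mock)
cm, cf, cs = 3, 4, 2        # coarse-level block sizes (mock)
Pm = rng.standard_normal((nm, cm))
Pf = rng.standard_normal((nf, cf))
Ps = rng.standard_normal((ns, cs))

# block-diagonal prolongation P_{l+1}^l and restriction R_l^{l+1} = (P_{l+1}^l)^T
P = np.zeros((nm + nf + ns, cm + cf + cs))
P[:nm, :cm], P[nm:nm + nf, cm:cm + cf], P[nm + nf:, cm + cf:] = Pm, Pf, Ps
R = P.T

K = rng.standard_normal((nm + nf + ns, nm + nf + ns))
K[:nm, nm:nm + nf] = 0      # zero mesh-fluid coupling block, as in the FSI system
K[nm + nf:, :nm] = 0        # zero structure-mesh coupling block
Kc = R @ K @ P              # Galerkin coarse-level operator

# the block-diagonal transfer operators preserve the zero coupling blocks
assert np.allclose(Kc[:cm, cm:cm + cf], 0)
assert np.allclose(Kc[cm + cf:, :cm], 0)
```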
### The smoother {#sec:fsism}
To complete the AMG method we need an iterative method (the smoother) for the problem $K_lx_l=b_l$, $l=1,...,L-1$, $$\label{eq:sm}
x_l^{k+1}={\mathcal S}_l(x_l^k, b_l)$$ with $$\label{eq:smvec}
x_l^k=\left[
\begin{array}{c}
\Delta d_{m, l}^k\\ \Delta u_{f, l}^k\\ \Delta u_{s, l}^k
\end{array}\right],
b_l=\left[
\begin{array}{c}
r_{m, l}\\ r_{f, l}\\ r_{s, l}
\end{array}\right],$$ where $k$ is the iteration index.
For this coupled FSI system, we consider the following preconditioned Richardson method, which turns out to be an effective FSI smoother with a sufficiently large number of smoothing steps: For $k\geq 0$, $$\label{eq:smrd}
\left[
\begin{array}{c}
\Delta d_{m, l}^{k+1}\\
\Delta u_{f, l}^{k+1}\\
\Delta u_{s, l}^{k+1}
\end{array}
\right]
=
\left[
\begin{array}{c}
\Delta d_{m, l}^k\\
\Delta u_{f, l}^k\\
\Delta u_{s, l}^k
\end{array}
\right]
+
P_{Rich}^{-1}
\left(
\left[
\begin{array}{c}
r_{m, l}\\
r_{f, l}\\
r_{s, l}
\end{array}
\right]-
K_l
\left[
\begin{array}{c}
\Delta d_{m, l}^k\\
\Delta u_{f, l}^k\\
\Delta u_{s, l}^k
\end{array}
\right]
\right),$$ where the preconditioner is given by $$\label{eq:prich}
P_{Rich}=
\left[
\begin{array}{ccc}
\frac{1}{\omega_m}\tilde{A}_m^l& & \\
A_{fm}^l& \frac{1}{\omega_f}\tilde{A}_f^l& \\
0& A_{sf}^l & \frac{1}{\omega_s}\tilde{A}_s^l\\
\end{array}
\right]$$ with the scaled block diagonal matrices. The inverse of each of these matrices is realized by applying one AMG cycle to each sub-problem, that has been developed in our previous work [@ULHY13]. In principle, these damping parameters $\omega_i$, $i\in\{m, f, s\}$, may be chosen differently. For simplicity, we use $\omega_m=\omega_f=\omega_s=\omega$ in our numerical experiments. This FSI smoother shows numerical robustness with respect to different hyperelastic models considered in the coupled FSI system and the AMG levels, i.e., the same damping parameter $\omega$ has been used in our numerical experiments. It is easy to see, one iteration of the preconditiond Richardson method consists of three steps of the following damped block Gauss-Seidel like iteration, that is demonstrated in Algorithm \[alg:gs\].
Given an initial iterate $x_l^k$, Algorithm \[alg:gs\] performs the damped block Gauss-Seidel sweep described above.
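One step of the preconditioned Richardson iteration (\[eq:smrd\]) with $P_{Rich}$ amounts to a damped block forward substitution followed by an additive correction. A NumPy sketch, with exact solves standing in for the sub-problem AMG cycles and artificial, diagonally dominant blocks:

```python
import numpy as np

rng = np.random.default_rng(5)
nm, nf, ns = 3, 4, 2                     # mock block sizes
def blk(n): return np.eye(n) * 4 + 0.1 * rng.standard_normal((n, n))
Am, Af, As = blk(nm), blk(nf), blk(ns)
Ams = 0.1 * rng.standard_normal((nm, ns))
Afm = 0.1 * rng.standard_normal((nf, nm))
Afs = 0.1 * rng.standard_normal((nf, ns))
Asf = 0.1 * rng.standard_normal((ns, nf))
Z = np.zeros
K = np.block([[Am, Z((nm, nf)), Ams],
              [Afm, Af, Afs],
              [Z((ns, nm)), Asf, As]])

def richardson_smoother(x, b, omega=1.0, steps=3):
    """Preconditioned Richardson iteration with the block lower-triangular
    preconditioner P_Rich (diagonal blocks damped by 1/omega)."""
    for _ in range(steps):
        r = b - K @ x
        dm = omega * np.linalg.solve(Am, r[:nm])
        df = omega * np.linalg.solve(Af, r[nm:nm + nf] - Afm @ dm)
        ds = omega * np.linalg.solve(As, r[nm + nf:] - Asf @ df)
        x = x + np.concatenate([dm, df, ds])
    return x

b = rng.standard_normal(nm + nf + ns)
x = richardson_smoother(np.zeros(nm + nf + ns), b)
assert np.linalg.norm(b - K @ x) < 1e-2 * np.linalg.norm(b)
```

With the weak couplings used here the residual contracts quickly; in the actual smoother, one AMG cycle per sub-problem replaces each exact solve.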
### The algebraic multigrid iteration
The basic AMG iteration is given in Algorithm \[alg:amgit\], where $m_{pre}$ and $m_{post}$ refer to the numbers of pre- and post-smoothing steps. For $\nu=1$ and $\nu=2$, the iterations in Algorithm \[alg:amgit\] are called V- and W-cycles, respectively. In our numerical experiments, we choose the W-cycle. On the coarsest level $L$, we use a direct solver to handle the coupled system.
Algorithm \[alg:amgit\]: pre-smoothing $x_l^{k+1}={\mathcal S}_l(x_l^k, b_l)$; restriction of the residual $b_{l+1}=R_l^{l+1}(b_l-K_lx_l)$; on the coarsest level, solve $K_Lx_L=b_L$; coarse grid correction $x_l=x_l+P_{l+1}^lx_{l+1}$; post-smoothing $x_l^{k+1}={\mathcal S}_l(x_l^k, b_l)$; return $x_l$.
As seen from Algorithm \[alg:amgit\], steps 1-3 and steps 14-16 correspond to the pre-smoothing and post-smoothing, respectively, while steps 4-13 are referred to as the “coarse grid correction”. The full AMG solver is realized by repeated application of this algorithm until a certain stopping criterion is satisfied. The iteration in this algorithm can also be combined with the GMRES [@Saad86] and FGMRES [@Saad1993] methods, which leads to fast convergence of the preconditioned Krylov subspace methods for the coupled FSI system.
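The recursive structure of the AMG cycle can be sketched compactly. The following Python code is a generic (not FSI-specific) illustration; it is exercised on a small 1D Laplacian two-grid hierarchy with a damped Jacobi smoother, which are stand-ins for the FSI matrices and the FSI smoother of Section \[sec:fsism\]:

```python
import numpy as np

def amg_cycle(l, Ks, Ps, Rs, x, b, smooth, nu=2, m_pre=2, m_post=2):
    """One AMG cycle on level l: V-cycle for nu=1, W-cycle for nu=2."""
    if l == len(Ks) - 1:                       # coarsest level: direct solve
        return np.linalg.solve(Ks[l], b)
    for _ in range(m_pre):                     # pre-smoothing
        x = smooth(Ks[l], x, b)
    bc = Rs[l] @ (b - Ks[l] @ x)               # restrict the residual
    xc = np.zeros(Ks[l + 1].shape[0])
    for _ in range(nu):                        # nu recursive coarse-level calls
        xc = amg_cycle(l + 1, Ks, Ps, Rs, xc, bc, smooth, nu, m_pre, m_post)
    x = x + Ps[l] @ xc                         # prolongate and correct
    for _ in range(m_post):                    # post-smoothing
        x = smooth(Ks[l], x, b)
    return x

def jacobi(K, x, b, omega=2.0 / 3.0):          # damped Jacobi smoother
    return x + omega * (b - K @ x) / np.diag(K)

# two-grid test on a 1D Laplacian
n = 7
K0 = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
P0 = np.array([[.5, 0, 0], [1, 0, 0], [.5, .5, 0], [0, 1, 0],
               [0, .5, .5], [0, 0, 1], [0, 0, .5]])   # linear interpolation
R0 = 0.5 * P0.T
Ks, Ps, Rs = [K0, R0 @ K0 @ P0], [P0], [R0]    # Galerkin coarse matrix

b = np.ones(n)
x = np.zeros(n)
for _ in range(25):
    x = amg_cycle(0, Ks, Ps, Rs, x, b, jacobi)
assert np.linalg.norm(b - K0 @ x) < 1e-8 * np.linalg.norm(b)
```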
Algebraic multilevel method for the coupled FSI system {#sec:amlgsi}
------------------------------------------------------
The AMLI method [@AO89I; @AO90; @PSV08:00; @KJMS13], sometimes referred to as the “K-cycle”, can be viewed as a W-cycle with Krylov acceleration at the intermediate levels; see, e.g., [@NY08; @NLA542; @PSV14; @PSV08:00]. Here we combine our monolithic AMG method with the FGMRES Krylov subspace method at the intermediate levels, i.e., we reuse the coarsening strategy and smoothers constructed for the FSI AMG method. Instead of calling the AMG cycle (steps 9-10 in Algorithm \[alg:amgit\]), the AMLI algorithm calls the AMLI cycle recursively $\nu$ times as a preconditioner inside the FGMRES method for the coarse grid correction equations; see step 9 in Algorithm \[alg:amliit\].
Algorithm \[alg:amliit\]: pre-smoothing $x_l^{k+1}={\mathcal S}_l(x_l^k, b_l)$; restriction of the residual $b_{l+1}=R_l^{l+1}(b_l-K_lx_l)$; on the coarsest level, solve $K_Lx_L=b_L$; coarse grid correction $x_l=x_l+P_{l+1}^lx_{l+1}$; post-smoothing $x_l^{k+1}={\mathcal S}_l(x_l^k, b_l)$; return $x_l$.
It is easy to see that this method represents a variant of the W-cycle AMG method in the case of $\nu=2$; see an illustration of such W-cycles with $3$ levels ($L=3$) in Fig. \[fig:amgamli\]. Compared to the AMG W-cycle, two preconditioned FGMRES iterations are called consecutively on the second level of the AMLI W-cycle, which accelerates the convergence.
Numerical experiments {#sec:num}
=====================
Material and geometrical data, meshes and boundary conditions
-------------------------------------------------------------
We use the geometrical data from [@CJC83; @Holzapfel00:00]; see an illustration in Fig. \[fig:artgeo\].
In order to compare FSI simulation using different hyperelastic models (see Section \[sec:hypermat\]), we adopt the same geometrical data (except the angles $\alpha_M$ and $\alpha_A$) for the models of Neo-Hookean and Mooney-Rivlin materials. Furthermore, we set the value of the material parameters for three hyperelastic models as indicated in Tab. \[tab:par\], where $M$ denotes the media and $A$ the adventitia.
----------------------- -------------- -------------- ----------- ----------- ----------------
                               M              A             M           A       $\rho_s$
Neo-Hookean                 $3$ kPa      $0.3$ kPa        $-$         $-$      $1.2$ kg/m$^3$
Mooney-Rivlin               $3$ kPa      $0.3$ kPa     $0.3$ kPa   $0.2$ kPa   $1.2$ kg/m$^3$
Artery                      $3$ kPa      $0.3$ kPa        $-$         $-$      $1.2$ kg/m$^3$
                               M              A             M           A       $\kappa$
Neo-Hookean                   $-$            $-$           $-$         $-$      $10^5$ kPa
Mooney-Rivlin                 $-$            $-$           $-$         $-$      $10^5$ kPa
Artery                   $2.3632$ kPa   $0.5620$ kPa    $0.8393$    $0.7112$   $10^5$ kPa
----------------------- -------------- -------------- ----------- ----------- ----------------
: The value of material parameters for three hyperelastic models.[]{data-label="tab:par"}
We use Netgen [@JS97:00] to generate the finite element meshes for the computational FSI domain, which provides conforming grids on the FSI interface and on the two-layered structure interface. In order to study the robustness of the solvers (see Section \[sec:lsm\]) for the linearized coupled FSI system with respect to the discretization mesh parameter, three finite element meshes are generated using Netgen. In Tab. \[tab:mesh\], we summarize the total numbers of grid nodes (\#Nod), tetrahedra (\#Tet) and degrees of freedom (\#Dof) in the finite element simulation, including the mesh movement, fluid and structure sub-problems.
\#Nod \#Tet \#Dof
------------------- --------- ---------- ----------
Coarse mesh $1034$ $4824$ $6959$
Intermediate mesh $7249$ $38592$ $37909$
Fine mesh $54521$ $308736$ $285167$
: Three finite element meshes.[]{data-label="tab:mesh"}
For the fluid, we set the density $\rho_f=1$ mg/mm$^3$ and the dynamic viscosity $\mu=0.035$ Poise. The fluid Neumann boundary condition on $\Gamma_{in}^t$ is given by $g_{in}=1.332 n_f$ kPa for $t\leq 0.125$ ms and $g_{in}=0$ kPa for $t>0.125$ ms. The remaining boundary conditions are specified in Section \[sec:pre\]. The fluid and the structure are at rest at the initial time. The time step size $\Delta t$ is set to $0.125$ ms. We run the simulation until $12$ ms.
Convergence of Newton’s method {#sec:pnewt}
------------------------------
To verify the linearization of the coupled nonlinear FSI system (see Section \[sec:nsm\]), we show the relative error (err) and the iteration number (\#it) of Newton’s method for the FSI simulation using three different hyperelastic models, namely Neo-Hookean (FSI\_NH), Mooney-Rivlin (FSI\_MR) and artery (FSI\_AR), and three different meshes: coarse (C), intermediate (I) and fine (F); see Tab. \[tab:pnewt\] for details. Note that since we observe the same performance of Newton’s method for solving the nonlinear system at all time steps, only the performance at the first time step is recorded in Tab. \[tab:pnewt\] for simplicity of presentation.
FSI\_NH:

\#it & C & I & F\
1 &$6.2e+01$ &$6.8e+01$ &$6.9e+01$\
2 &$3.8e-02$ &$5.7e-02$ &$6.6e-02$\
3 &$7.4e-06$ &$7.7e-06$ &$4.0e-06$\
4 &$4.3e-09$ &$1.1e-09$ &$1.3e-08$\

FSI\_MR:

\#it & C & I & F\
1 &$6.2e+01$ &$6.8e+01$ &$6.9e+01$\
2 &$3.8e-02$ &$5.7e-02$ &$6.4e-02$\
3 &$7.3e-06$ &$7.3e-06$ &$3.0e-06$\
4 &$4.3e-09$ &$9.7e-10$ &$2.5e-09$\

FSI\_AR:

\#it & C & I & F\
1 &$6.2e+01$ &$6.8e+01$ &$6.9e+01$\
2 &$4.0e-02$ &$6.0e-02$ &$7.0e-02$\
3 &$7.5e-06$ &$8.4e-06$ &$1.2e-05$\
4 &$4.3e-09$ &$1.2e-09$ &$4.0e-09$\
From the convergence history displayed in Tab. \[tab:pnewt\], we observe a (nearly) quadratic convergence rate of Newton’s method, which confirms the derivation of the linearization of the coupled nonlinear FSI system, stemming from the domain movements, convection terms, material laws, transmission conditions and stabilization parameters. We observe nearly the same convergence rate for the nonlinear FSI system using the three different hyperelastic models on the coarse, intermediate and fine mesh. At each iteration of Newton’s method, we use the preconditioned Krylov subspace, algebraic multigrid and multilevel methods to solve the linearized FSI system; see the numerical results in Sections \[sec:plins\] and \[sec:amlins\].
Iteration numbers of preconditioned Krylov subspace methods {#sec:plins}
-----------------------------------------------------------
To compare the performance of the preconditioned Krylov subspace methods for the linearized coupled FSI system, we use the GMRES method combined with the preconditioners from Section \[sec:pky\]. The stopping criterion for the GMRES method is set to a relative error of $10^{-9}$. We compare the total number of GMRES iterations (\#it) needed to reach this criterion for the FSI simulation using the Neo-Hookean (FSI\_NH), Mooney-Rivlin (FSI\_MR) and artery (FSI\_AR) model on the coarse (C), intermediate (I) and fine (F) mesh. The detailed numerical results are shown in Tab. \[tab:pkrylov\]. Note that since the performance is similar for all Newton iterations, we demonstrate the iteration numbers at the first Newton iteration. The inverse of each sub-problem in the preconditioners is realized by calling the corresponding AMG cycle, which has been developed in [@ULHY13].
----------------------------------- ------ ------- ------- ------ ------- ------- ------ ------ -------
Preconditioner                           FSI\_NH                FSI\_MR                FSI\_AR
                                      C      I       F       C      I       F       C     I      F
$\tilde{P}_D$                        $51$   $111$   $217$   $53$   $111$   $227$   $46$  $98$   $189$
$\tilde{P}_L$                        $28$   $58$    $109$   $29$   $60$    $114$   $25$  $50$   $95$
$\tilde{P}_U$                        $28$   $59$    $114$   $28$   $61$    $119$   $25$  $51$   $98$
$\tilde{P}_{SSOR}$                   $27$   $54$    $104$   $28$   $57$    $108$   $24$  $48$   $91$
$\tilde{P}_{ILU}$                    $27$   $54$    $104$   $28$   $57$    $108$   $24$  $48$   $91$
----------------------------------- ------ ------- ------- ------ ------- ------- ------ ------ -------
: The performance of preconditioned GMRES method for the linearized FSI system using three hyperelastic models and meshes.[]{data-label="tab:pkrylov"}
As we observe from the iteration numbers of the linear solvers using different preconditioners in Tab. \[tab:pkrylov\], the solver with the preconditioner $\tilde{P}_D$ requires more iterations than the other four preconditioners. The solvers with the preconditioners $\tilde{P}_L$, $\tilde{P}_U$, $\tilde{P}_{SSOR}$ and $\tilde{P}_{ILU}$ require almost the same number of iterations. As expected, when the mesh is refined, the iteration number of the preconditioned GMRES method increases. We will see in Section \[sec:amlins\] that the mesh dependence is eliminated by using the multigrid and multilevel methods.
Iteration numbers of algebraic multigrid and multilevel methods {#sec:amlins}
---------------------------------------------------------------
In this section, we compare the performance of the AMG and AMLI methods for the linearized coupled FSI system. More precisely, we show the iteration numbers (\#it) of the AMG, AMLI, AMG preconditioned GMRES (AMG\_GMRES), AMG preconditioned FGMRES (AMG\_FGMRES), AMLI preconditioned GMRES (AMLI\_GMRES) and AMLI preconditioned FGMRES (AMLI\_FGMRES) methods, respectively, up to a relative error of $10^{-9}$. We run the FSI simulation using the Neo-Hookean (FSI\_NH), Mooney-Rivlin (FSI\_MR) and artery (FSI\_AR) model on the coarse (C), intermediate (I) and fine (F) mesh, respectively; see Tab. \[tab:pkamg\] for details. We use $8-10$ smoothing steps in the AMG and AMLI cycles, each of which only requires $1$ AMG cycle for the corresponding mesh movement, fluid and structure sub-problems (see Section \[sec:fsism\]). As a preconditioner, we apply only $1$ AMG or AMLI cycle per preconditioned GMRES or FGMRES iteration.
----------------------------------- ----- ----- ------ ----- ----- ------ ----- ----- ------
Method                                   FSI\_NH            FSI\_MR            FSI\_AR
                                      C     I     F      C     I     F      C     I     F
AMG                                  $7$   $8$   $12$   $8$   $8$   $11$   $7$   $8$   $10$
AMG\_GMRES                           $7$   $7$   $8$    $7$   $7$   $8$    $7$   $7$   $8$
AMG\_FGMRES                          $6$   $7$   $8$    $6$   $7$   $9$    $6$   $7$   $8$
AMLI                                 $7$   $8$   $12$   $8$   $8$   $11$   $7$   $8$   $10$
AMLI\_GMRES                          $7$   $7$   $8$    $7$   $7$   $8$    $7$   $7$   $8$
AMLI\_FGMRES                         $6$   $7$   $8$    $6$   $7$   $9$    $6$   $7$   $8$
----------------------------------- ----- ----- ------ ----- ----- ------ ----- ----- ------
: The performance of the AMG, AMLI, and AMG and AMLI preconditioned Krylov subspace method for the linearized FSI system using three hyperelastic models and meshes.[]{data-label="tab:pkamg"}
As we observe from Tab. \[tab:pkamg\], the AMG and AMLI methods require the same iteration numbers for each case. The AMG and AMLI preconditioned GMRES and FGMRES methods show improved performance with fewer iterations than the stand-alone AMG and AMLI methods. When the mesh is refined, the iteration numbers of these methods stay in a very similar range. This demonstrates the robustness of the multigrid and multilevel methods for the coupled FSI system with respect to mesh refinement.
Visualization of the numerical solutions {#sec:vis}
----------------------------------------
In order to demonstrate the numerical simulation results, we visualize the structure deformations and fluid velocity fields in Fig. \[fig:numsol\], which shows the FSI solutions at time level $t=8$ ms using the structure models of the Neo-Hookean material, the Mooney-Rivlin material and the anisotropic two-layer thick walled artery, respectively.
Comparison with the partitioned approach {#sec:comp}
----------------------------------------
In this section, we compare the numerical simulation results obtained by the monolithic approach with the results by the partitioned approach as in [@ULHY13].
We first compare the fluid pressure waves obtained from the FSI simulation using the different structure models. In Fig. \[fig:cmp\_press\_nh\], Fig. \[fig:cmp\_press\_mr\] and Fig. \[fig:cmp\_press\_ga\], we plot the fluid pressure waves along the center line with starting point $(0, 0, 0)$ cm and ending point $(0, 0, 1.8)$ cm, for the model of Neo-Hookean material, Mooney-Rivlin material and the anisotropic two-layer thick walled artery, respectively. In each subplot of these three figures, the horizontal axis represents the center line (in cm) and the vertical axis the pressure (in Pa).
We compare the pressure waves at different time levels using the monolithic and the partitioned approach. According to our experiments, at the first time steps the solutions obtained by the monolithic and the partitioned approach conform to each other very well. With time stepping, the solution obtained by the partitioned approach has a smaller magnitude than the solution by the monolithic approach. This is due to the fact that, at each time level of the partitioned approach, we apply the fixed-point method to the reduced interface equation in an iterative manner, which introduces some additional errors in the solution procedure. These additional errors accumulate with time stepping. For the monolithic approach, however, we solve the coupled system in an all-at-once manner, and such additional errors are eliminated.
Secondly, in order to see the effects of the different structure models applied in the FSI simulation, we also compare the fluid pressure waves extracted from the FSI simulation using the model of Neo-Hookean material (solid lines), Mooney-Rivlin material (dashed lines) and the anisotropic two-layer thick walled artery (dash-dotted lines) in Fig. \[fig:cmp\_press\_threemodel\], where the horizontal axis represents the center line (in cm) and the vertical axis the pressure (in Pa). As we observe, the simulation results obtained from the models of Neo-Hookean and Mooney-Rivlin material are quite similar to each other (in the speed and magnitude of the pressure waves). This is due to the fact that these two models differ by only one term in the energy functional; see (\[eq:nhenerg\]) and (\[eq:mrenerg\]). The pressure waves obtained from the model of the anisotropic two-layer thick walled artery travel at a slower speed and with a smaller magnitude than those of the other two models.
As discussed in [@ULHY13], the partitioned approach needs around $50\sim 55$ fixed-point iterations at each time step, whereas the monolithic approach needs about $4$ Newton iterations. In each fixed-point iteration, about $4-5$ Newton iterations are needed for solving the fluid and structure sub-problems, and in each of these Newton iterations, we apply the AMG sub-problem solvers to the linearized systems. For each Newton iteration in the monolithic approach, we need about $10$ coupled AMG or AMLI iterations, and each coupled AMG or AMLI iteration requires one application of the AMG sub-problem solvers. Altogether we observe almost $50\%$ savings of the computational cost for the monolithic approach in comparison with the partitioned approach. A further reduction of the computational cost can be achieved by parallel computing (see, e.g., [@CCDGHUL03]), which is considered as forthcoming work.
Conclusions {#sec:con}
===========
In this work, we have developed a monolithic approach for solving the coupled FSI problem in an all-at-once manner. Newton’s method for the nonlinear coupled system demonstrates its robustness and efficiency. For solving the linearized FSI system, the preconditioned Krylov subspace, algebraic multigrid and algebraic multilevel methods have shown good performance and robustness. In particular, the monolithic AMG and AMLI methods are more robust than the preconditioned Krylov subspace methods utilizing the block factorization of the coupled system, i.e., their iteration numbers stay in the same range under mesh refinement. Compared to the partitioned approach, the monolithic approach developed in this work shows better robustness and efficiency with respect to the numerical results and solution methods.
|
---
abstract: 'The accumulation of atoms in the lowest energy level of a trap and the subsequent out-coupling of these atoms is a realization of a matter-wave analog of a conventional optical laser. Optical random lasers require materials that provide optical gain but, contrary to conventional lasers, the modes are determined by multiple scattering and not a cavity. We show that a Bose-Einstein condensate can be loaded in a spatially correlated disorder potential prepared in such a way that the Anderson localization phenomenon operates as a band-pass filter. A multiple scattering process selects atoms with certain momenta and determines laser modes which represents a matter-wave analog of an optical random laser.'
author:
- Marcin Płodzień
- Krzysztof Sacha
title: 'Matter-wave analog of an optical random laser'
---
Conventional optical lasers require two ingredients: a material that provides optical gain and an optical cavity responsible for coherent feedback and the selection of resonant laser modes. However, it is also possible to achieve laser action without the optical cavity, provided the gain material is an active medium with disorder [@wiersma08]. Forty years ago, Letokhov analyzed a light diffusion process with amplification and predicted that gain could overcome loss if the volume of a system exceeded a critical value [@letokhov67]. Random lasing (i.e. light amplification in disordered gain media), achieved in a laboratory in the 1990s, attracts much experimental attention and offers possibilities for interesting applications [@lawandy94; @wiersma96; @cao98; @frolov99; @wiersma00; @wiersma08]. Theoretical understanding of this phenomenon is still imperfect. Although the Letokhov model of diffusion with gain is useful in predicting certain properties of random lasers, it neglects coherent phenomena. There are various theoretical models of random lasing, but it is widely accepted that interference in a multiple scattering process determines the spatial and spectral mode structure of a random laser [@wiersma08].
Bose-Einstein condensation (BEC) of dilute atomic gases is a macroscopic accumulation of atoms in the lowest energy level of a trap when the temperature of the gas decreases [@anderson95]. This tendency of occupying a single state through the mechanism of stimulated scattering of bosons is an analog of mode selection in optical lasers due to the stimulated emission of photons. Gradual release of atoms from a trapped BEC allows for the realization of a matter-wave analog of a conventional optical laser [@mewes97; @hagley99; @bloch99; @cennini03; @guerin06; @tomek08]. The atom trap is an analog of the optical cavity. The lowest mode of the trap is a counterpart of an optical resonant mode. In conventional optical lasers the output coupler is usually a partially transmitting mirror. In atom lasers it involves, for example, a change of the internal state of the atom by means of a radio-frequency transition. In the present letter we propose the realization of a matter-wave analog of an optical random laser. Suppose a BEC of a dilute atomic gas has been achieved in a trapping potential. That is, we begin with the accumulation of atoms in a single mode of the resonator (i.e. the lowest eigenstate of the trap). Then, let us turn off the trap and turn on a weak disorder potential. Starting with a BEC, we have a guarantee that the disordered medium is [*pumped*]{} with coherent matter-waves. We would like to raise the question: is it possible to prepare a spatially correlated disorder potential in such a way that narrow peaks can be observed in the spectrum of atoms that are able to leave the area of the disorder potential? In other words: can the multiple scattering of atoms in a disordered medium lead to a selective spectral emission of matter-waves from the medium?
In cold atom physics a disorder potential can be realized by means of an optical speckle potential [@schulte05; @clement06]. Transmission of coherent light through a diffusing plate leads to a random intensity pattern in the far field. Atoms experience the presence of the radiation as an external potential $V({\mathbf{r}})\propto \chi|E({\mathbf{r}})|^2$ proportional to the intensity of the light field $E({\mathbf{r}})$ and to the atomic polarizability $\chi$, whose sign depends on the detuning of the light frequency from the atomic resonance. Diffraction from the diffusive plate to the location of the atoms determines the correlation functions of the speckle potential. We assume that the origin of the energy is shifted so that $\overline{V({\mathbf{r}})}=0$, where the overbar denotes an ensemble average over disorder realizations. The standard deviation $V_0$ of the speckle potential measures the strength of the disorder.
Let us begin with a one-dimensional (1D) problem. In a weak disorder potential atoms with momentum $k$ undergo multiple scattering and diffusive motion, and finally localize with an exponentially decaying density profile due to the Anderson localization process, provided that the system size exceeds the localization length [@anderson58; @lee85; @tiggelen99]. In the Born approximation, to second order in the potential strength, the inverse of the localization length is [@lifshits88] $l_{loc}^{-1}=(mV_0/\hbar^2k)^2{\cal P}(2k)$, where the Fourier transform of the pair correlation function of the speckle potential is $${\cal P}(k)=\int \frac{dq}{2\pi}\,\gamma(q)\gamma(k-q),
\label{pk}$$ and $\gamma(k)=\int dz \tilde\gamma(z)e^{-ikz}$ is the Fourier transform of the complex degree of coherence $\tilde\gamma(z)=\overline{E^*(z+z')E(z')}/\overline{|E(z)|^2}=\int dy {\cal A}(y)e^{izy/\alpha}/\int dy {\cal A}(y)$. ${\cal A}(y)$ describes the aperture of the optics and $\alpha$ is a constant dependent on the wavelength of the laser radiation and the distance of the diffusive plate from the atomic trap. In Ref. [@billy08], where the experimental realization of the Anderson localization of matter-waves is reported (see also [@roati08]), a simple Heaviside step function ${\cal A}(z)=\Theta(R-|z|)$ describes the aperture. The corresponding $\gamma(k)=\pi\sigma_R\Theta(1-|k\sigma_R|)$, where $\sigma_R=\alpha/R$ is the correlation length of the speckle potential. Consequently the power spectrum (\[pk\]) decreases linearly and becomes zero for $|k|\ge 2/\sigma_R$. Thus, the Born approximation predicts an effective mobility edge at $|k|=1/\sigma_R$, i.e. atoms with larger momenta do not localize [@sanchez07; @billy08] (actually, higher order calculations [@lugan09] show they do localize, but with very large localization lengths, much larger than the system size in the experiment).
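The Born-approximation prediction can be reproduced with a few lines of numerics. The sketch below is our own illustration (not the authors' code), with $\hbar=m=1$ and illustrative values of $\sigma_R$ and $V_0$: it builds the box-shaped $\gamma(k)$, evaluates ${\cal P}(k)$ as its autoconvolution, and recovers the effective mobility edge at $|k|=1/\sigma_R$.

```python
import numpy as np

# Born-approximation sketch for the plain slit A(z) = Theta(R-|z|)
# (units hbar = m = 1; sigma_R and V0 are illustrative stand-ins)
sigma_R = 0.066   # speckle correlation length
V0 = 3.5          # disorder strength

def gamma(k):
    # Fourier transform of the degree of coherence: a box of height pi*sigma_R
    return np.pi * sigma_R * (np.abs(k * sigma_R) < 1.0)

def P(k, nq=20001):
    # power spectrum as the autoconvolution P(k) = (1/2pi) int dq gamma(q) gamma(k-q)
    q = np.linspace(-2.0 / sigma_R, 2.0 / sigma_R, nq)
    return np.sum(gamma(q) * gamma(k - q)) * (q[1] - q[0]) / (2.0 * np.pi)

def l_loc(k):
    # l_loc^{-1} = (V0/k)^2 P(2k): diverges wherever P(2k) = 0
    inv = (V0 / k) ** 2 * P(2.0 * k)
    return np.inf if inv == 0.0 else 1.0 / inv

k_edge = 1.0 / sigma_R                 # effective mobility edge
print(l_loc(0.5 * k_edge))             # finite: Anderson localized
print(l_loc(1.2 * k_edge))             # infinite in the Born approximation: escapes
```

The triangular shape of ${\cal P}(k)$ (value $\pi\sigma_R$ at $k=0$, vanishing at $|k|=2/\sigma_R$) emerges automatically from the autoconvolution of the box.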
Hence, neglecting atom interactions, if the width of the initial atomic momentum distribution exceeds the mobility edge, particles at the tail of the distribution avoid the Anderson localization and may leave the disorder area.
![(Color online) Examples of the speckle potential (top panels) and the corresponding localization length (bottom panels) obtained within the Born approximation (dashed black curves) and numerically in the transfer-matrix calculations (solid red curves). Panels (a) and (c) show the results for the single obstacle in the diffusive plate where $\sigma_R=0.066$ (0.31 $\mu$m), $\sigma_R/\sigma_\rho=0.4$ and $V_0=3.5$. Panels (b) and (d) correspond to the case of two obstacles with the same $\sigma_R$ and $V_0$ but $\sigma_R/\sigma_\rho=0.7$ and $\sigma_R/\sigma_\zeta=0.1$. The results are shown for rubidium-87 atoms. Red detuning of the laser radiation from the atomic resonance is assumed. All values are presented in the harmonic oscillator units, i.e. energy $E_0=\hbar\omega$ and length $l_0=\sqrt{\hbar/m\omega}$ where $\omega/2\pi=5.4$ Hz. []{data-label="one"}](Fig1.eps){width="0.95\linewidth"}
Let us modify the experiment reported in Ref. [@billy08] by introducing an obstacle at the center of the diffusive plate, so that the aperture is now described by ${\cal A}(z)=\Theta(R-|z|)-\Theta(\rho-|z|)$ where $\rho<R$. This implies that $$\gamma(k)=\pi\left(\frac{1}{\sigma_R}-\frac{1}{\sigma_\rho}\right)^{-1}\left[\Theta(1-|k\sigma_R|)-\Theta(1-|k\sigma_\rho|)\right],$$ where $\sigma_\rho=\alpha/\rho$. If the size of the obstacle $\rho>R/3$, interference of light passing through such a [*double-slit*]{} diffusive plate creates a peculiar speckle potential. That is, the power spectrum (\[pk\]) vanishes for $|k|\ge 2/\sigma_R$ as previously, but it is also zero for $\frac{1}{\sigma_R}-\frac{1}{\sigma_\rho}<|k|<\frac{2}{\sigma_\rho}$. Thus, according to the Born approximation there is a momentum interval where the localization length diverges. This implies that the Anderson localization process is able to operate as a band-pass filter, letting particles with specific momenta leave the region of the disorder. Detection of escaping atoms should reveal a peak in the momentum spectrum corresponding to the interval where the localization length diverges.
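The band-pass filtering can be verified numerically. The following sketch is an illustration we add here (using the Fig. \[one\]a ratio $\sigma_R/\sigma_\rho=0.4$): the autoconvolution of the double-slit $\gamma(k)$ indeed vanishes inside the window $1/\sigma_R-1/\sigma_\rho<|k|<2/\sigma_\rho$ and is nonzero below it.

```python
import numpy as np

# double-slit aperture A(z) = Theta(R-|z|) - Theta(rho-|z|), with rho > R/3
sigma_R = 0.066
sigma_rho = sigma_R / 0.4             # sigma_R/sigma_rho = 0.4, as in Fig. 1a
norm = np.pi / (1.0 / sigma_R - 1.0 / sigma_rho)

def gamma(k):
    # difference of two boxes: support on 1/sigma_rho < |k| < 1/sigma_R
    return norm * ((np.abs(k * sigma_R) < 1.0).astype(float)
                   - (np.abs(k * sigma_rho) < 1.0).astype(float))

def P(k, nq=40001):
    # power spectrum as the autoconvolution of gamma
    q = np.linspace(-2.0 / sigma_R, 2.0 / sigma_R, nq)
    return np.sum(gamma(q) * gamma(k - q)) * (q[1] - q[0]) / (2.0 * np.pi)

# Born prediction: P(k) = 0 for 1/sigma_R - 1/sigma_rho < |k| < 2/sigma_rho,
# which here is the window 0.6/sigma_R < |k| < 0.8/sigma_R -> l_loc diverges there
k_gap, k_pass = 0.7 / sigma_R, 0.4 / sigma_R
print(P(k_gap), P(k_pass))
```

Since $l_{loc}^{-1}\propto{\cal P}(2k)$, the window above corresponds to escaping atoms with $0.3/\sigma_R<|k|<0.4/\sigma_R$, consistent with the $|k|\approx 5.5$ peak quoted below for $\sigma_R=0.066$.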
![(Color online) Momentum distributions of the atoms localized in the disorder potential (dashed black lines) and the atoms which escaped from the disorder area (solid red lines). Panels (a) and (c) show the results for a single experimental realization of the disorder with parameters as in Fig. \[one\]a,c, while panels (b) and (d) are related to the parameters as in Fig. \[one\]b,d. Panel (a) corresponds to the evolution time $t=100$ (2.9 s), panel (b) to $t=70$ (2 s), panels (c) and (d) to $t=200$ (5.7 s). The fraction of atoms that escaped the disorder region is about 9% in (a) and (b), and 20% in (c) and (d). In order to take into account the experimental resolution, all data have been convolved with a Gaussian of width $\Delta k=0.3$. All values are presented in the harmonic oscillator units, i.e. energy $E_0=\hbar\omega$, length $l_0=\sqrt{\hbar/m\omega}$ and time $t_0=1/\omega$ where $\omega/2\pi=5.4$ Hz.[]{data-label="two"}](Fig2.eps){width="0.9\linewidth"}
Introducing two (or more) obstacles in the diffusive plate, we can increase the number of momentum intervals with diverging $l_{loc}$. In Fig. \[one\] we present examples of the speckle potentials in the single obstacle case and in the case of two obstacles located symmetrically around the plate center. In the latter case the aperture is described by ${\cal A}(z)=\Theta(R-|z|)-\Theta(\rho-|z|)+\Theta(\zeta-|z|)$ where $\zeta<\rho$ and $$\begin{aligned}
\gamma(k)&=&\pi\left(\frac{1}{\sigma_R}-\frac{1}{\sigma_\rho}+\frac{1}{\sigma_\zeta}\right)^{-1}\Bigl[\Theta(1-|k\sigma_R|) \nonumber \\
&&-\Theta(1-|k\sigma_\rho|)+\Theta(1-|k\sigma_\zeta|)\Bigr],\end{aligned}$$ with $\sigma_\zeta=\alpha/\zeta$. The figure also presents localization lengths obtained numerically in the transfer-matrix calculation [@lugan09], which confirm the Born predictions.
To simulate an experiment, we follow the parameters used in Ref. [@billy08], where Anderson localization of matter-waves has been observed. We assume that a BEC of $N=1.7\cdot 10^4$ rubidium-87 atoms is initially prepared in a quasi-1D harmonic trap with longitudinal and transverse frequencies $\omega/2\pi=5.4$ Hz and $\omega_\perp/2\pi=70$ Hz, respectively. In the following we adopt the harmonic oscillator units: $E_0=\hbar\omega$, $l_0=\sqrt{\hbar/m\omega}$ and $t_0=1/\omega$ for energy, length and time, respectively. When the trapping potential is turned off and the speckle potential is turned on, the expansion of the atomic cloud is initially dominated by the particle interactions, until the density drops significantly and the atoms start feeling only the disorder potential. This initial stage of the gas expansion sets the momentum distribution of the atoms, which may be approximated by an inverted parabola with an upper cut-off $k_{max}=2\sqrt{\mu}=12.7$, where $\mu$ is the initial chemical potential of the system [@sanchez07; @miniatura09].
The disorder potentials we choose are attainable in the experiment reported in Ref. [@billy08], i.e. they extend over 862 units (4 mm) along the $z$ direction with the correlation length $\sigma_R=0.066$ (0.31 $\mu$m). We consider the potentials obtained by introducing one or two obstacles in the diffusive plate, as presented in Fig. \[one\]. If the atomic cloud starts at the center of the disorder potentials and if the cut-off of the momentum distribution is $k_{max}\lesssim 13$, then we may expect that the time evolution leads to emission of atoms with $|k|\approx 5.5$ in the single obstacle case (cf. Fig. \[one\]a,c) and with $|k|\approx 9$ in the case of two obstacles (cf. Fig. \[one\]b,d). In the latter case we may also expect a small leakage of atoms with $|k|\approx 3.5$: for $|k|\approx 3.5$ the localization length shown in Fig. \[one\]d reaches locally a maximum value of $l_{loc}\approx 120$, and before the particle interactions become negligible the gas spreads over a significant range of the disorder region; therefore the Anderson localization is not able to completely suppress the leakage of atoms with $|k|\approx 3.5$. Starting with the ground state of the stationary Gross-Pitaevskii equation [@mewes97; @hagley99; @bloch99; @cennini03; @guerin06; @tomek08; @schulte05] in the presence of the harmonic trap, we integrate the time-dependent Gross-Pitaevskii equation when the trap is turned off and the disorder is turned on. Figure \[two\] shows momentum distributions of atoms that escaped from the disorder region and of those that remain localized, for the disorder potentials corresponding to Fig. \[one\], at different moments in time. The expected selective spectral emissions of atoms are apparent in the figure. Interestingly, in Fig. \[two\]d, i.e. for the longer evolution time, small peaks around $|k|\approx 3.5$ become visible.
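For readers who wish to reproduce the qualitative behavior, a minimal split-step Fourier integrator for the 1D time-dependent Gross-Pitaevskii equation can be sketched as follows. This is our own illustration: the grid size, interaction strength $g$ and the exponentially distributed stand-in for the speckle potential are assumptions, not the parameters of the actual simulation.

```python
import numpy as np

# minimal split-step Fourier integrator for the 1D time-dependent
# Gross-Pitaevskii equation  i dpsi/dt = [-(1/2) d2/dz2 + V(z) + g|psi|^2] psi
# in harmonic-oscillator units (all numbers below are illustrative choices)
n, L = 512, 80.0
z = np.arange(n) * (L / n) - L / 2
dz = L / n
k = 2.0 * np.pi * np.fft.fftfreq(n, d=dz)

rng = np.random.default_rng(0)
V = 3.5 * rng.exponential(size=n)      # speckle-like: exponentially distributed intensity
V -= V.mean()                          # shift the origin of energy so mean(V) = 0

def step(psi, dt, g=1.0):
    kick = np.exp(-0.5j * dt * (V + g * np.abs(psi) ** 2))
    psi = kick * psi                                                   # half potential kick
    psi = np.fft.ifft(np.exp(-0.5j * dt * k ** 2) * np.fft.fft(psi))   # kinetic drift
    return np.exp(-0.5j * dt * (V + g * np.abs(psi) ** 2)) * psi       # second half kick

psi = np.exp(-z ** 2 / 2).astype(complex)          # harmonic-trap ground state (g = 0)
psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * dz)

for _ in range(200):
    psi = step(psi, dt=0.01)

norm = np.sum(np.abs(psi) ** 2) * dz
print(norm)                                        # unitary scheme: norm stays 1
```

Because every factor in the scheme is a unit-modulus phase (in position or momentum space), the total norm is conserved, which is a convenient sanity check for any such integration.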
![(Color online) The Boltzmann transport mean-free path (a) and the localization length (b) for atoms in the 2D speckle potential created by transmission of a laser beam through the circularly shaped diffusive plate with the obstacle in the form of a ring, i.e. the aperture of the optics is described by $\Theta(R-|{\mathbf{r}}|)-\Theta(\rho-|{\mathbf{r}}|)+\Theta(\zeta-|{\mathbf{r}}|)$ with $\sigma_R/\sigma_\rho=0.99$ and $\sigma_R/\sigma_\zeta=0.15$. The potential strength corresponds to $\eta=0.15$. The quantities presented in the figure are dimensionless. []{data-label="three"}](Fig3.eps){width="0.9\linewidth"}
Finally let us consider the possibility of realizing an atom analog of an optical random laser in 2D. The Boltzmann transport mean-free path $l_B$ is the characteristic spatial scale beyond which the memory of the initial direction of the particle momentum is lost. In 2D $l_{loc}= l_Be^{\pi kl_B/2}$, and thus the localization length is much larger than $l_B$, in contrast to the 1D case where these two quantities are nearly identical, $l_{loc}=2l_B$ [@tiggelen99]. For the circularly shaped diffusing plate with radius $R$, the classical transport mean-free path [@kuhn05; @miniatura09], to second order in the potential strength, reads $$\frac{1}{kl_B}=\frac{\eta^2}{2\pi}\int_0^{2\pi}d\theta\,(1-\cos\theta)\,{\cal P}\left(2k\sigma_R\sin\frac{\theta}{2}\right),
\label{2dlb}$$ where $\eta=V_0/E_\sigma$ is the ratio of the potential strength to the correlation energy $E_\sigma=\hbar^2/(m\sigma_R^2)$, with $\sigma_R=\alpha/R$. The power spectrum ${\cal P}(k)$ of the optical speckle potential disappears for $k\ge 2/\sigma_R$. Nevertheless, $l_B$ (and consequently also $l_{loc}$) is always finite. In the bulk 2D system, an initially prepared atomic wave-packet follows a diffusive motion at short times, but eventually the dynamics slows down and freezes due to the Anderson localization process [@kuhn05; @miniatura09; @vincent10].
By introducing obstacles in the diffusive plate we are able to shape the power spectrum of the speckle potential. On the one hand, the fact that ${\cal P}(k)$ may vanish on certain momentum intervals does not imply divergence of the corresponding transport mean-free path (\[2dlb\]). On the other hand, any non-monotonic behaviour of $l_B(k)$ is dramatically amplified in the behaviour of $l_{loc}(k)$, because the localization length is an exponential function of $l_B$. In Fig. \[three\] we present an example related to an obstacle in the form of a ring, i.e. the aperture of the optics is described by ${\cal A}({\mathbf{r}})=\Theta(R-|{\mathbf{r}}|)-\Theta(\rho-|{\mathbf{r}}|)+\Theta(\zeta-|{\mathbf{r}}|)$. At $k\sigma_R\approx 0.4$ both $l_B$ and $l_{loc}$ show a maximum. However, while the transport mean-free path changes only by a factor of a few in the neighboring region, the localization length changes by four orders of magnitude. If the width of the momentum distribution of a BEC loaded into such a disorder potential is smaller than $0.6/\sigma_R$, and the radius of the disorder medium is greater than $10^3\sigma_R$ but less than $10^5\sigma_R$, the multiple scattering process leads to an isotropic emission of atoms with $k\approx 0.4/\sigma_R$.
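The exponential sensitivity of $l_{loc}$ to $l_B$ is easy to quantify. The numbers below are an illustrative calculation we add here (dimensionless, in units of $\sigma_R$), not the values of Fig. \[three\]:

```python
import numpy as np

# in 2D the localization length is an exponential function of l_B:
# l_loc = l_B * exp(pi * k * l_B / 2), so a modest change of l_B is
# dramatically amplified (illustrative numbers, lengths in units of sigma_R)
def l_loc_2d(k, l_B):
    return l_B * np.exp(np.pi * k * l_B / 2.0)

k = 0.4                                   # momentum near the maximum of l_B, in 1/sigma_R
amplification = l_loc_2d(k, 25.0) / l_loc_2d(k, 10.0)
print(amplification)   # l_B grows by a factor 2.5, l_loc by over 4 orders of magnitude
```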
We have outlined a proposal for the realization of a matter-wave analog of an optical random laser. A spatially correlated disorder potential for atoms with a peculiar pair correlation function can be created by transmitting a laser beam through a diffusive plate with obstacles. The resulting Anderson localization length reveals non-monotonic behaviour as a function of the particle momentum. This allows for filtering the momenta of particles that leave the area of the disorder, if the size of the disorder medium is suitably chosen. The disorder medium is assumed to be initially loaded with a BEC, which guarantees that the matter-waves emitted from the medium are coherent. We have restricted ourselves to the 1D and 2D cases, but the atom analog of an optical random laser can also be anticipated in 3D. In 3D the Ioffe-Regel criterion discriminates between waves that are Anderson localized ($kl_B\lesssim 1$) and those that are not [@tiggelen99; @kuhn07]. Thus, a spatially correlated disorder potential for which the Ioffe-Regel criterion is not fulfilled for specific momenta should allow for selective emission of matter-waves in 3D.
Our proposal is directly applicable to atomic matter-wave experiments. From the point of view of optical random lasers our analysis is not complete, because it is restricted to passive random materials without gain. There is an interesting question whether a disorder with properties similar to those analyzed here plays a role in optical random lasers, and which modes become important when the gain is included in the system.
The non-monotonic behaviour of the localization length results in the appearance of multiple effective mobility edges if a disorder system is finite. Wave transport is then unusual and interesting in its own right. A shallow non-monotonic behaviour of the Anderson localization length versus energy has also been observed in a classical wave system, see Ref. [@chorwat].
We are grateful to D. Delande for encouraging discussions and to R. Marcinek and J. Zakrzewski for a critical reading of the manuscript. This work is supported by the Polish Government within research projects 2009-2012 (MP) and 2008-2011 (KS).
[*Note added:*]{} After submission of this article, we became aware of a related theoretical study [@laurent11].
[99]{}
D. S. Wiersma, Nature Physics [**4**]{}, 359 (2008).
V. S. Letokhov, Zh. Eksp. Teor. Fiz. [**53**]{}, 1442 (1967); Sov. Phys. JETP [**26**]{}, 835 (1968).
N. M. Lawandy [*et al.*]{}, Nature [**368**]{}, 436 (1994).
D. S. Wiersma and A. Lagendijk, Phys. Rev. E [**54**]{}, 4256 (1996).
H. Cao [*et al.*]{}, Appl. Phys. Lett. [**73**]{}, 3656 (1998); H. Cao [*et al.*]{}, Phys. Rev. Lett. [**82**]{}, 2278 (1999).
S.V. Frolov [*et al.*]{}, Phys. Rev. B [**57**]{}, 9141 (1999); T.V. Shahbazyan [*et al.*]{}, Phys. Rev. B [**61**]{}, 13 266 (2000).
D. S. Wiersma, Nature [**406**]{}, 135 (2000).
M. H. Anderson [*et al.*]{}, Science [**269**]{}, 198 (1995).
M.-O. Mewes [*et al.*]{}, Phys. Rev. Lett. [**78**]{}, 582 (1997).
E. W. Hagley [*et al.*]{}, Science [**283**]{}, 1706 (1999).
I. Bloch [*et al.*]{}, Phys. Rev. Lett. [**82**]{}, 3008 (1999).
G. Cennini [*et al.*]{}, Phys. Rev. Lett. [**91**]{}, 240408 (2003).
W. Guerin [*et al.*]{}, Phys. Rev. Lett. [**97**]{}, 200402 (2006).
A. Couvert [*et al.*]{}, Europhys. Lett. [**83**]{}, 50001 (2008).
J. T. Schulte [*et al.*]{}, Phys. Rev. Lett. [**95**]{}, 170411 (2005).
D Clément [*et al.*]{}, New J. Phys. [**8**]{}, 166 (2006).
P.W. Anderson, Phys. Rev. [**109**]{}, 1492 (1958).
P. A. Lee and T.V. Ramakrishnan, Rev. Mod. Phys. [**57**]{}, 287 (1985).
B. van Tiggelen, in [*Diffuse Waves in Complex Media*]{}, edited by J.-P. Fouque, NATO Advanced Study Institutes, Ser. C, Vol. 531 (Kluwer, Dordrecht, 1999).
I. M. Lifshits, S. Gredeskul, and L. A. Pastur, Introduction to the Theory of Disordered Systems (Wiley, New York, 1988).
J. Billy [*et al.*]{}, Nature [**453**]{}, 891 (2008).
G. Roati [*et al.*]{}, Nature [**453**]{}, 895 (2008).
L. Sanchez-Palencia [*et al.*]{}, Phys. Rev. Lett. [**98**]{}, 210401 (2007).
E. Gurevich and O. Kenneth, Phys. Rev. A [**79**]{}, 063617 (2009); P. Lugan [*et al.*]{}, Phys. Rev. A [**80**]{}, 023605 (2009).
R. C. Kuhn [*et al.*]{}, Phys. Rev. Lett. [**95**]{}, 250403 (2005).
C. Miniatura [*et al.*]{}, Eur. Phys. J. B [**68**]{}, 353 (2009).
M. Robert-de-Saint-Vincent [*et al.*]{}, Phys. Rev. Lett. [**104**]{}, 220602 (2010).
R. C. Kuhn [*et al.*]{}, New J. Phys. [**9**]{}, 161 (2007).
D. Čapeta [*et al.*]{}, Phys. Rev. A [**84**]{}, 011801(R) (2011).
M. Piraud, A. Aspect, L. Sanchez-Palencia, arXiv:1104.2314.
|
---
abstract: 'We investigate chiral properties of the domain-wall fermion (DWF) system by using the four-dimensional hermitian Wilson-Dirac operator. We first derive a formula which connects a chiral symmetry breaking term in the five dimensional DWF Ward-Takahashi identity with the four dimensional Wilson-Dirac operator, and simplify the formula in terms of only the eigenvalues of the operator, using an ansatz for the form of the eigenvectors. For a given distribution of the eigenvalues, we then discuss the behavior of the chiral symmetry breaking term as a function of the fifth dimensional length. We finally discuss the chiral property of the DWF formulation in the limit of infinite fifth dimensional length, in connection with spectra of the hermitian Wilson-Dirac operator in the infinite volume limit as well as in a finite volume.'
address: ' Institute of Physics, University of Tsukuba, Tsukuba, Ibaraki 305-8571, Japan\'
author:
- 'S. Aoki and Y. Taniguchi'
title: |
[hep-lat/0109022]{}\
[UTHEP-447]{}\
[UTCCP-P-112]{}\
Chiral properties of domain-wall fermions\
from eigenvalues of 4 dimensional Wilson-Dirac operator
---
Introduction
============
A suitable definition of chiral symmetry has been a long-standing problem in lattice field theories. Recently an ultimate solution to this problem has appeared in the form of the Ginsparg-Wilson relation [@GW; @Luscher98]. Two explicit examples of lattice fermion operators which satisfy the Ginsparg-Wilson relation have been found so far: one is the perfect lattice Dirac operator constructed via the renormalization group transformation[@perfect1; @perfect2], and the other is the overlap Dirac (OD) operator[@Neuberger98; @KN99] derived from the overlap formalism[@NN94] or from the domain-wall fermion (DWF)[@Kaplan92; @Shamir93; @Shamir95] in the limit of the infinite length of the 5th dimension. Since the explicit form is simpler for the latter, a lot of numerical investigations[@Blum-Soni; @AIKT; @Blum98; @cppacs-dwf; @RBC; @practical; @EHN98; @MILC] as well as analytic considerations[@HJL98; @Kikukawa99] have been carried out for the domain-wall fermion or the overlap Dirac fermion.
Recent numerical investigations for the domain-wall fermion[@cppacs-dwf; @RBC], however, have brought puzzling results, which can be summarized as follows.
Analytic considerations suggest that the overlap or domain-wall fermion works well at sufficiently weak gauge coupling, or equivalently for sufficiently smooth gauge configurations[@HJL98; @Kikukawa99]. Initial numerical investigations supported this result[@Blum-Soni; @AIKT; @Blum98]. On the other hand, further investigations indicate that the domain-wall fermion at stronger coupling ceases to describe a massless fermion even in the $N_5 \rightarrow \infty$ limit[@cppacs-dwf; @RBC], where $N_5$ is the number of sites in the 5th dimension. Analytic results[@IN; @BS; @GS] in the strong coupling limit are controversial. However, the latest one[@BBS] also suggests that the DWF does not work in this limit.
It has been argued[@cppacs-dwf] that this result may be understood through the relation between the phase structure of lattice QCD with the 4 dimensional Wilson fermion[@Aoki-phase] and the zero eigenvalues of the 4 dimensional Wilson-Dirac operator. It is well-known that the zero eigenvalues cause trouble for the domain-wall fermion (or the overlap)[@Shamir95; @HJL98; @Kikukawa99]; therefore it is natural to consider that the success/failure of DWF depends on the absence/presence of the zero eigenvalues. On the other hand, the zero eigenvalues of the 4 dimensional Wilson-Dirac operator, denoted as $D_W$, are related to the parity-flavor breaking order parameter, ${\left\langle \bar q i\gamma_5\tau^3 q \right\rangle}$, where $\tau^3$ is the 2 $\times$ 2 flavor matrix. Introducing an external source $H$ coupled to $\bar q i\gamma_5\tau^3 q$, the following relation is easily derived: $$\begin{aligned}
{\left\langle \bar q i\gamma_5\tau^3 q \right\rangle} &=& -\lim_{H\rightarrow 0^+}
\lim_{V\rightarrow \infty}
\frac{1}{V}{\rm Tr}\frac{i\gamma_5\tau^3}{D_W + i\gamma_5\tau^3 H}
{\nonumber}\\&=&
-\lim_{H\rightarrow 0^+}\lim_{V\rightarrow \infty}\frac{1}{V}{\rm tr}
\left[\frac{i\gamma_5}{D_W + i\gamma_5 H}-
\frac{i\gamma_5}{D_W - i\gamma_5 H}
\right] \nonumber \\
&=& -i \lim_{H\rightarrow 0^+}\lim_{V\rightarrow \infty}\frac{1}{V}{\rm tr}
\left[\frac{1}{H_W + i H}-\frac{1}{H_W - i H}
\right] {\nonumber}\\
&=& -i \lim_{H\rightarrow 0^+}\lim_{V\rightarrow \infty}\frac{1}{V}
\sum_n {\left\langle \lambda_n\left| \frac{1}{\lambda_n + i H}-\frac{1}{\lambda_n - i H} \right| \lambda_n\right\rangle} \nonumber \\
&=& -i \lim_{H\rightarrow 0^+}
\int d\lambda\, \rho_{H_W}
(\lambda)\left[\frac{1}{\lambda + i H}-\frac{1}{\lambda - i H}\right]{\nonumber}\\
&=& -i\int d\lambda\, \rho_{H_W}(\lambda) (-2\pi i)\delta(\lambda)
= -2\pi \rho_{H_W}(0),\end{aligned}$$ where $\lambda_n$ and $\vert \lambda_n\rangle$ are the eigenvalue and the eigenstate of the hermitian Wilson-Dirac operator $H_W =\gamma_5 D_W$ and $\rho_{H_W}(\lambda)$ is the density of eigenvalues of $H_W$, defined by $$\rho_{H_W}(\lambda) =\lim_{V\rightarrow \infty}
\frac{1}{V} \sum_n \delta(\lambda-\lambda_n) .$$
The expected phase structure of lattice QCD with the Wilson fermion [@Aoki-phase] is shown in Fig. \[fig:phase\], where $g$ is the gauge coupling and $M$ is the mass parameter. In the region B, ${\left\langle \bar q i\gamma_5\tau^3 q \right\rangle}\not= 0$, thus the density of the zero eigenvalues is nonzero. According to this phase structure, the DWF is successful for $ 0 < M < 2$ in the weak coupling limit of QCD. This allowed region of $M$ agrees with the analytic consideration of DWF[@Shamir93]. Once the gauge coupling becomes nonzero, the allowed region of $M$ shrinks and moves to larger values. This property has been predicted by the mean-field analysis[@AT] and has been observed numerically[@cppacs-dwf]. If the coupling becomes so large that $\beta=6/g^2 < \beta_c$, the allowed range of $M$ disappears and the massless fermion ceases to exist in domain-wall QCD (DWQCD).
The numerical results mentioned before[@cppacs-dwf; @RBC], namely that DWF does not work in the strong coupling region, seem to agree with the above expectation. This interpretation, however, has difficulties. Let us summarize the recent numerical result[@cppacs-dwf], where quenched DWQCD has been investigated with the RG improved gauge action as well as with the ordinary plaquette gauge action. With the former gauge action quenched DWQCD works at $\beta = 2.6$ and fails at $\beta=2.2$, suggesting that $2.2 < \beta_c < 2.6$, while with the latter gauge action quenched DWQCD fails even at $\beta = 6.0$, indicating $\beta_c > 6.0$. One problem is that the latter condition, $\beta_c > 6.0$ with the plaquette gauge action, contradicts the previous numerical investigation[@AKU], which concludes that the region without parity-flavor breaking (the allowed region in the present case) exists at $\beta = 6.0$. A more serious problem is that the numerical analysis of the density of eigenvalues indicates a non-zero value of $\rho_{H_W}(0)$ at any $\beta$[@EHN99] with the plaquette action, and a similar conclusion is obtained by the same analysis with the RG improved action [@cppacs-nagai]. If this is true, the phase structure in Fig. \[fig:phase\] is incorrect in quenched QCD: the gap of the parity-flavor breaking phase never shows up in the weak coupling region, leading to the conclusion that DWF does not work at all in quenched QCD.
This complicated situation is summarized as follows. While being consistent with the numerical result of DWQCD for the plaquette action[@cppacs-dwf; @RBC], the numerical analysis of the density of the eigenvalues[@EHN99; @cppacs-nagai] contradicts both the phase structure of QCD with the Wilson fermion[@Aoki-phase; @AKU] and the numerical result of DWQCD with the RG action[@cppacs-dwf]. Furthermore, the numerical result of DWQCD with the plaquette action appears inconsistent with the numerical result for the phase structure of the Wilson fermion with the same gauge action.
In this paper, we try to resolve the above mutual inconsistencies by analysing the eigenvalues of the 4 dimensional hermitian Wilson-Dirac operator. In Sec. \[sec:action\] the action and the axial Ward-Takahashi identity are given for later use. In Sec. \[sec:effective\], we derive the formula for the explicit chiral symmetry breaking term of DWQCD, $m_{5q}$, in terms of the eigenvalues and eigenstates of the modified hermitian Wilson-Dirac operator. In Sec. \[sec:expansion\], assuming the form of the eigenstates, the formula derived in Sec. \[sec:effective\] is reduced to a much simpler expression in terms of the eigenvalues only. The dependence of $m_{5q}$ on $N_5$ is discussed in Sec. \[sec:model\] for a given model of the density of the eigenvalues. In Sec. \[sec:analysis\], using the simplified formula, we discuss the chiral properties of DWQCD first in a finite volume, and then consider the infinite volume limit. Some comments are also given on the phase structure and the exceptional configurations in lattice QCD with the Wilson fermion. Our conclusion is presented in Sec. \[sec:conclusion\].
Action and Axial Ward-Takahashi identity {#sec:action}
========================================
We employ Shamir’s domain-wall fermion action[@Shamir93; @Shamir95]. Flipping the sign of the Wilson term and the domain wall height $M$, we write $$\begin{aligned}
S_{DW} &=&
a^4\sum_{x,y,s,s'}
{{\overline{\psi}}}(x,s)\biggl[
\left(D_W-\frac{M}{a}\right)_{x,y}\delta_{s,s'}
+\left(D_5\right)_{s,s'}\delta_{x,y}
\biggr]\psi(y,s'),
\\
D_W &=& \sum_\mu\left(
\gamma_\mu\frac{1}{2}\left(\nabla_\mu^f+\nabla_\mu^b\right)
-\frac{a}{2}\nabla_\mu^f\nabla_\mu^b\right),
\label{eqn:Wilson-Dirac}
\\
D_5 &=& \gamma_5\frac{1}{2}\left(\nabla_5^f+\nabla_5^b\right)
-\frac{a_5}{2}\nabla_5^f\nabla_5^b,\end{aligned}$$ where $x,y$ are four-dimensional space-time coordinates, and $s,s'$ are fifth-dimensional or “flavor” indices, bounded as $1 \le s, s' \le N_5$ with the free boundary condition at both ends (we assume $N_5$ to be even). The domain-wall height $M$ should be taken as $0<M<2$ at tree level to give massless fermion modes. $\nabla_\mu^f$ and $\nabla_\mu^b$ are forward and backward derivative in four dimensions $$\begin{aligned}
&&
\left(\nabla_\mu^f\right)_{x,y}=
\frac{1}{a}\left(\delta_{x+\mu,y}U_\mu(x)-\delta_{x,y}\right),
\\&&
\left(\nabla_\mu^b\right)_{x,y}=
\frac{1}{a}\left(\delta_{x,y}-\delta_{x-\mu,y}U_\mu^\dagger(y)\right)\end{aligned}$$ and $\nabla_5^f$ and $\nabla_5^b$ are those in the fifth dimension with the boundary condition $$\begin{aligned}
\left(\nabla_5^f\right)_{s,s'}
&=& \frac{1}{a_5}\left\{
\begin{array}{lll}
\delta_{s+1,s'} -\delta_{s,s'} &\ & (1\le s<N_5)\\
a_5m_f\delta_{s',1} -\delta_{s,s'} &\ & (s=N_5)
\end{array}
\right.~,
\\
\left(\nabla_5^b\right)_{s,s'}
&=& \frac{1}{a_5}\left\{
\begin{array}{lll}
\delta_{s,s'}-a_5m_f\delta_{s',N_5} &\ & (s=1)\\
\delta_{s,s'} -\delta_{s-1,s'} &\ & (1<s\le N_5)\\
\end{array}
\right.~,\end{aligned}$$ where $m_f$ is the mass for the quark field. The light fermion mode is extracted by the 4-dimensional quark field defined on two boundaries of the fifth dimension, $$\begin{aligned}
q(x) = P_L \psi(x,1) + P_R \psi(x,{N_5}),
{\nonumber}\\
{\overline{q}}(x) = {{\overline{\psi}}}(x,{N_5}) P_L + {{\overline{\psi}}}(x,1) P_R,
\label{eq:quark}\end{aligned}$$ where $P_{R/L}$ is the projection matrix $P_{R/L}=(1\pm\gamma_5)/2$. The quark mass term induces a coupling between these boundary fields through the bare quark mass $m_f$.
In this paper we investigate chiral property of the domain-wall fermion through the axial Ward-Takahashi (WT) identity defined in Ref. [@Shamir95] for the non-singlet axial transformation $$\begin{aligned}
&&
\delta_A^a \psi(x,s) = i\epsilon(N_5+1-2s) T^a\psi(x,s),
\\&&
\delta_A^a {{\overline{\psi}}}(x,s) = -i\epsilon(N_5+1-2s) T^a{{\overline{\psi}}}(x,s),\end{aligned}$$ where $\epsilon (x)$ is a sign function of $x$. The WT identity for some operator ${\cal O}$ is written as follows. $$\begin{aligned}
&&
{\left\langle \nabla^b_\mu A_\mu^a(x) {\cal O} \right\rangle} =
2m_f{\left\langle P^a(x) {\cal O} \right\rangle}
+2{\left\langle J_{5q}^a(x) {\cal O} \right\rangle}
-{\left\langle \delta_A^a {\cal O} \right\rangle},\end{aligned}$$ where an axial vector current $A^a_\mu$, a pseudo scalar density $P^a$, and an explicit chiral symmetry breaking term $J_{5q}$ at finite $N_5$ are given by $$\begin{aligned}
A_\mu^a(x) &=& \sum_{s=1}^{N_5} \epsilon(N_5+1-2s)
\frac{1}{2}\Bigl(
{{\overline{\psi}}}(x,s) T^a(1-\gamma_\mu) U_\mu(x) \psi(x+\mu,s)
{\nonumber}\\&&
-{{\overline{\psi}}}(x+\mu,s) (1+\gamma_\mu) U_\mu^\dagger(x) T^a \psi(x,s)
\Bigr),
\\
P^a(x) &=& {{\overline{q}}}(x)\gamma_5 T^a q(x),
\\
J_{5q}^a(x) &=&
\frac{1}{a_5}\left(
{{\overline{\psi}}}(x,\frac{N_5}{2}+1) T^a P_R \psi(x,\frac{N_5}{2})
-{{\overline{\psi}}}(x,\frac{N_5}{2}) T^a P_L \psi(x,\frac{N_5}{2}+1)
\right)
{\nonumber}\\&=&
\frac{1}{a_5}
{{\overline{\psi}}}'(x,\frac{N_5}{2})T^a\gamma_5\psi'(x,\frac{N_5}{2}).\end{aligned}$$ Here we define the following fields according to [@KN99] $$\begin{aligned}
&&
\psi'(x,\frac{N_5}{2})=P_R\psi(x,\frac{N_5}{2})+P_L\psi(x,\frac{N_5}{2}+1),
\\&&
{{\overline{\psi}}}'(x,\frac{N_5}{2})
={{\overline{\psi}}}(x,\frac{N_5}{2}+1)P_R+{{\overline{\psi}}}(x,\frac{N_5}{2})P_L.\end{aligned}$$ In this paper we consider the identity with ${\cal O}=P^b(y)$ $$\begin{aligned}
{\left\langle \nabla^b_\mu A_\mu^a(x) P^b(y) \right\rangle} &=&
2m_f{\left\langle P^a(x) P^b(y) \right\rangle}
+2{\left\langle J_{5q}^a(x) P^b(y) \right\rangle}
{\nonumber}\\&&
-\frac{1}{a^4}\delta_{x,y}{\left\langle {{\overline{q}}}(y)\{T^a,T^b\}q(y) \right\rangle}
\label{eqn:WTidentity}\end{aligned}$$ and measure the chiral symmetry breaking effect by $$m_{5q}=\lim_{t\to\infty}
\frac{\sum_{\vec x}\left<J_{5q}(t,{\vec x})P(0)\right>}
{\sum_{\vec x}\left<P(t,{\vec x})P(0)\right>},
\label{eq:m5q}$$ which we call an ‘anomalous quark mass’ [@cppacs-dwf]. Note that we have omitted flavor indices, since the flavor factors cancel in the definition of $m_{5q}$.
Effective theory of domain-wall fermions {#sec:effective}
========================================
We now derive the effective theory of the DWF system by integrating out the heavy bulk modes according to Ref. [@KN99]. Our aim is to rewrite the numerator and the denominator of the anomalous quark mass [(\[eq:m5q\])]{} in terms of four dimensional quantities, and hence to relate $m_{5q}$ to the hermitian Wilson-Dirac operator of the four dimensional theory. Introducing source fields for $q(x)$, ${{\overline{q}}}(x)$, $\psi'(x,N_5/2)$, ${{\overline{\psi}}}'(x,N_5/2)$, the propagators necessary for our purpose are derived as[@KN99] $$\begin{aligned}
&&
{\left\langle q(x){{\overline{q}}}(y) \right\rangle}=
\frac{a_5}{a^5}
\frac{1}{1-am_f}\left(D_{N_5}^{-1}-a\right),
\\&&
{\left\langle \psi'(x,\frac{N_5}{2}){{\overline{q}}}(y) \right\rangle}=
\frac{a_5}{a^5}\frac{1}{2\cosh\frac{N_5}{2}a_5{\widetilde{H}}}
D_{N_5}^{-1},
\\&&
{\left\langle q(x){{\overline{\psi}}}'(y,\frac{N_5}{2}) \right\rangle}=
\frac{a_5}{a^5}D_{N_5}^{-1}
\gamma_5\frac{1}{2\cosh\frac{N_5}{2}a_5{\widetilde{H}}}\gamma_5,\end{aligned}$$ where ${\widetilde{H}}$ is the DWF Hamiltonian in the fifth direction, in terms of which the transfer matrix in the fifth direction is given by $$\begin{aligned}
T=e^{-a_5{\widetilde{H}}}.\end{aligned}$$ $D_{N_5}$ is a truncated Ginsparg-Wilson (GW) Dirac operator, which satisfies the GW relation in the $N_5\to\infty$ limit. ${\widetilde{H}}$ and $D_{N_5}$ are related to the four dimensional hermitian Wilson-Dirac operator $H_W$ as $$\begin{aligned}
&&
D_{N_5} = \frac{1}{2a}\left[(1+a m_f)+(1-a m_f)
\gamma_5\tanh\frac{N_5}{2}a_5{\widetilde{H}}\right],
\label{eqn:DN5}
\\&&
{\widetilde{H}} =\frac{1}{a_5}\log \frac{1+aH'}{1-aH'},
\\&&
H'=H_W\frac{1}{2+a\gamma_5H_W},
\label{eqn:H'}
\\&&
H_W=\gamma_5\left(D_W-\frac{M}{a}\right).\end{aligned}$$ Here we adopt Boriçi’s notation [@Borici] for ${\widetilde{H}}$.
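As a consistency sketch of ours (not from Ref. [@KN99]), the factor $\tanh\frac{N_5}{2}a_5{\widetilde{\lambda}}$ in eq. [(\[eqn:DN5\])]{} approaches ${\rm sign}({\widetilde{\lambda}})$ as $N_5\to\infty$, so $D_{N_5}$ reduces eigenvalue-by-eigenvalue to the overlap form; the scalar analogue can be checked numerically (all parameter values are illustrative assumptions):

```python
import numpy as np

a, a5, m_f = 1.0, 1.0, 0.0   # illustrative choices

def D_factor(lam_tilde, N5):
    """Scalar analogue of the truncated operator D_{N_5}: the coefficient
    obtained by replacing H-tilde with one of its eigenvalues lam_tilde."""
    return ((1 + a * m_f)
            + (1 - a * m_f) * np.tanh(N5 / 2 * a5 * lam_tilde)) / (2 * a)

lam = 0.3                                     # assumed eigenvalue of H-tilde
vals = [D_factor(lam, N5) for N5 in (8, 16, 32, 64)]
overlap_limit = (1 + np.sign(lam)) / (2 * a)  # N5 -> infinity (overlap form)
```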
The numerator of $m_{5q}$ can be written in terms of the above propagators, $$\begin{aligned}
X(t) &=& \sum_{\vec{x}}{\left\langle J_{5q}(\vec{x},t)P(\vec{y},0) \right\rangle}
{\nonumber}\\&=&
-\frac{1}{a_5}\sum_{\vec{x}}
{{\rm tr}}\left[\gamma_5{\left\langle \psi'(\vec{x},t,\frac{N_5}{2}){{\overline{q}}}(\vec{y},0) \right\rangle}\gamma_5
{\left\langle q(\vec{y},0){{\overline{\psi}}}'(\vec{x},t,\frac{N_5}{2}) \right\rangle}
\right]
{\nonumber}\\&=&
-\frac{a_5}{a^{10}}\sum_{\vec{x},\alpha,a}\sum_{\beta,b}
{\left\langle I\left| \frac{1}{2\cosh\frac{N_5}{2}a_5{\widetilde{H}}}D_{N_5}^{-1} \right| J\right\rangle}
{\left\langle J\left| \left(D_{N_5}^{-1}\right)^\dagger
\frac{1}{2\cosh\frac{N_5}{2}a_5{\widetilde{H}}} \right| I\right\rangle},
\label{eqn:numerator}\end{aligned}$$ where $I=(\vec{x},t,\alpha,a)$ and $J=(\vec{y},0,\beta,b)$; $\alpha,\beta$ are spinor indices and $a,b$ are color indices. ${\left| I\right\rangle}$ is an eigen-ket in the coordinate, spinor and color spaces. In the last equality we used the relation $\gamma_5 D_{N_5}^{-1}\gamma_5=(D_{N_5}^{-1})^\dagger$. The denominator of $m_{5q}$ is given by $$\begin{aligned}
Y(t) &=& \sum_{\vec{x}}{\left\langle P(\vec{x},t)P(\vec{y},0) \right\rangle}
{\nonumber}\\&=&
-\sum_{\vec{x}}{{\rm tr}}\left[\gamma_5{\left\langle q(\vec{x},t){{\overline{q}}}(\vec{y},0) \right\rangle}
\gamma_5{\left\langle q(\vec{y},0){{\overline{q}}}(\vec{x},t) \right\rangle}\right]
{\nonumber}\\&=&
-\frac{a_5^2}{a^{10}}\frac{1}{(1-a m_f)^2}\sum_{\vec{x},\alpha,a}\sum_{\beta,b}
{\left\langle I\left| \left(D_{N_5}^{-1}-a\right) \right| J\right\rangle}
{\left\langle J\left| \gamma_5\left(D_{N_5}^{-1}-a\right)\gamma_5 \right| I\right\rangle}
{\nonumber}\\&=&
-\frac{a_5^2}{a^{10}}\frac{1}{(1-a m_f)^2}\sum_{\vec{x},\alpha,a}\sum_{\beta,b}
{\left\langle I\left| D_{N_5}^{-1} \right| J\right\rangle}
{\left\langle J\left| \left(D_{N_5}^{-1}\right)^\dagger \right| I\right\rangle}.
\label{eqn:denominator}\end{aligned}$$ Note that ${{\left\langle I | J \right\rangle}}=0$ for $t\neq0$.
Hereafter we set the bare quark mass $m_f=0$ without loss of generality.
Expansion in terms of eigenstates {#sec:expansion}
=================================
We expand $X(t)$ and $Y(t)$ in terms of eigenstates of the hermitian operator $H'$. Since the numerator [(\[eqn:numerator\])]{} has a suppression factor $1/(2\cosh\frac{N_5}{2}a_5{\widetilde{H}})$, only small eigenvalues of ${\widetilde{H}}$, or equivalently those of $H'$, contribute to $m_{5q}$ at large $N_5$. In this case we can expand $H'$ perturbatively in terms of the four dimensional hermitian Wilson-Dirac operator $H_W$ and derive a formula which connects $m_{5q}$ with the eigenvalues of $H_W$ at large $N_5$.
The expansion of $X(t)$ and $Y(t)$ with these eigenstates is done by inserting a complete set of eigenstates $$\begin{aligned}
1=\sum_{n}{\left| n\right\rangle}{\left\langle n\right|},\end{aligned}$$ where ${\left| n\right\rangle}$ is $n$-th eigenstate of $H'$ $$\begin{aligned}
H'{\left| n\right\rangle}=\lambda'_n{\left| n\right\rangle}.\end{aligned}$$ With this substitution, $X(t)$ and $Y(t)$ are written in terms of eigenvalues and eigenfunctions.
In order to further simplify these Green functions into functions of the eigenvalues only, we employ the following assumptions and approximations for the typical form of the eigenfunctions. Recent numerical analyses indicate that the eigenvalues of $H_W$ can be classified into two groups [@HJL98; @Nagai00]: one is a group of small isolated eigenvalues, and the other is a group of almost continuous eigenvalues above the isolated ones. The eigenvectors associated with the isolated eigenvalues are exponentially localized around some center in the coordinate space [@HJL98; @Nagai00], while those for the continuous eigenvalues are rapidly oscillating plane-wave functions in the coordinate space [@Nagai00]. This property of the eigenvalues is expected to hold also for $H'$, since $H'$ can be expanded in terms of $H_W$ perturbatively for small eigenvalues. From this consideration we assume that the eigenvector space of $H'$ is divided into two subspaces ${\cal S}$ and ${\widetilde{\cal S}}$, where ${\cal S}$ is spanned by localized eigenvectors and ${\widetilde{\cal S}}$ is spanned by plane-wave functions.
Localized eigenvectors
----------------------
We adopt two types of approximation for the typical form of the localized eigenvectors: completely localized ones and partially localized ones. A completely localized eigenvector means that the eigenfunction $\psi_n$ of the $n$-th eigenvalue has a non-zero value at a single point $(x_n,\alpha_n,a_n)$ $$\begin{aligned}
\psi_n(x,\alpha,a)=\delta_{x,x_n}\delta_{\alpha,\alpha_n}\delta_{a,a_n},\end{aligned}$$ which gives a unit vector in $(x,\alpha,a)$ space. $\psi_n(x,\alpha,a)$ is given in terms of the $n$-th eigenstate ${\left| n\right\rangle}$ with eigenvalue $\lambda'_n$ $$\begin{aligned}
&&
\psi_n(x,\alpha,a)={{\left\langle x,\alpha,a | n \right\rangle}},
\\&&
H'{\left| n\right\rangle}=\lambda_n'{\left| n\right\rangle},
\\&&
{\widetilde{H}}{\left| n\right\rangle}={\widetilde{\lambda}}(\lambda_n'){\left| n\right\rangle},
\\&&
{\widetilde{\lambda}}(\lambda_n') =\frac{1}{a_5}\log
\frac{1+a\lambda'_n}{1-a\lambda_n'}.\end{aligned}$$ We assume that eigenvectors with different eigenvalues reside at different points $$\begin{aligned}
{{\left\langle n | x,\alpha,a \right\rangle}}{{\left\langle x,\alpha,a | m \right\rangle}}=0\quad
{\rm for}\; n \neq m.\end{aligned}$$ In this case ${\cal S}$ is spanned by a set of basis vectors $$\begin{aligned}
\pmatrix{1\cr0\cr\vdots},\quad
\pmatrix{0\cr1\cr\vdots},\quad\cdots.\end{aligned}$$
On the other hand, a partially localized eigenvector is non-zero only in a small region of $(x,\alpha,a)$ space. We call this small region of volume $h_n$ ${\cal S}_n$, with ${\cal S}=\cup_n{\cal S}_n$. ${\cal S}_n$ is spanned by a set of eigenvectors $\psi_n^{(i)}$, where $i$ runs over $i=1,\cdots,h_n$ and $\psi_n^{(i)}$ is given by $$\begin{aligned}
&&
\psi_n^{(i)}(x,\alpha,a)={{\left\langle x,\alpha,a | n,i \right\rangle}},
\\&&
H'{\left| n,i\right\rangle}=\lambda_n^{'(i)}{\left| n,i\right\rangle}.\end{aligned}$$ We assume that the ${\cal S}_n$ with different $n$ do not overlap with each other; $$\begin{aligned}
{{\left\langle m,i | x,\alpha,a \right\rangle}}{{\left\langle x,\alpha,a | n,j \right\rangle}}=0\end{aligned}$$ for $m \neq n$ and any $i,j$. In the limit $h_n\rightarrow 1$ the partially localized eigenstate reduces to the completely localized one.
The correspondence between the eigenvalues and eigenvectors of $H'$ and $H_W$ is obtained perturbatively for small eigenvalues. We expand $H'$ in terms of $H_W$, $$\begin{aligned}
&&
2 H' = H_W+H_W\sum_{k=1}^{\infty}(-1)^{k}
\left(\frac{a\gamma_5H_W}{2}\right)^{k},\end{aligned}$$ and the eigenvalue $\lambda_n^W$ and eigenstate ${\left| n_W\right\rangle}$ of $H_W$ are defined by $$\begin{aligned}
H_W{\left| n_W\right\rangle}=\lambda^W_n{\left| n_W\right\rangle}.\end{aligned}$$ Standard perturbation theory gives the relation between $\lambda'_n$, ${\left| n\right\rangle}$ and $\lambda_n^W$, ${\left| n_W\right\rangle}$ as follows: $$\begin{aligned}
&&
2\lambda'_n=\lambda_n^W
-\frac{1}{2}\left(\lambda_n^W\right)^2{\left\langle n_W\left| \gamma_5 \right| n_W\right\rangle}
+{\cal O}(\lambda^3),
\\&&
{\left| n\right\rangle}={\left| n_W\right\rangle}
-\frac{1}{2}\lambda_n^W\frac{\phi_n}{\lambda_n^W-H_W}H_W \gamma_5 {\left| n_W\right\rangle}
+{\cal O}(\lambda^3),\end{aligned}$$ where $\phi_n$ is a projection operator onto the space perpendicular to the $n$-th eigenstate: $$\begin{aligned}
\phi_n=1-{\left| n_W\right\rangle}{\left\langle n_W\right|}.\end{aligned}$$
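These perturbative relations can be spot-checked numerically; the sketch below is our own construction, using a small random Hermitian matrix in place of $H_W$ and a diagonal $\pm1$ matrix in place of $\gamma_5$, and verifies the leading-order statement $2\lambda'_n=\lambda_n^W+{\cal O}(\lambda^2)$ together with the hermiticity of $H'$:

```python
import numpy as np

rng = np.random.default_rng(0)
n, a, eps = 6, 1.0, 1e-3          # eps sets the scale of the small eigenvalues

g5 = np.diag([1.0, 1.0, 1.0, -1.0, -1.0, -1.0])  # gamma_5 stand-in
A = rng.standard_normal((n, n))
H_W = eps * (A + A.T) / 2                         # small Hermitian "H_W"

# H' = H_W (2 + a gamma_5 H_W)^{-1}; note (2 + a H_W g5) H_W = H_W (2 + a g5 H_W),
# so H' is exactly Hermitian whenever H_W is.
H_prime = H_W @ np.linalg.inv(2 * np.eye(n) + a * g5 @ H_W)

lam_W = np.sort(np.linalg.eigvalsh(H_W))
lam_p = np.sort(np.linalg.eigvals(H_prime).real)  # H' Hermitian: real spectrum
```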
Plane-wave eigenvector
----------------------
We approximate the plane-wave functions by those of the free theory, but relax the condition on the possible momenta $p$ in the finite box. The operator $H'$ of the free theory in momentum space is given by $$\begin{aligned}
&&
H'(p)=\frac{1}{a}\gamma_5\left(i\sum_\mu\gamma_\mu A_\mu(p)+B(p)\right),
\label{eqn:free-H}
\\&&
A_\mu(p) = \frac{2s_\mu}{s^2+(2+2\hat{s}^2-M)^2},
\\&&
B(p)=\frac{s^2+(2\hat{s}^2-M)(2+2\hat{s}^2-M)}{s^2+(2+2\hat{s}^2-M)^2},\end{aligned}$$ where $s_\mu=\sin (a p_\mu )$, $\hat{s}^2=\sum_\mu\sin^2(ap_\mu/2)$ and $s^2=\sum_\mu s_\mu^2$. The eigenstate of free $H'$ and ${\widetilde{H}}$ becomes $$\begin{aligned}
&&
H'{\left| p,s,a\right\rangle}=\lambda'(p){\left| p,s,a\right\rangle},
\\&&
{\widetilde{H}}{\left| p,s,a\right\rangle}={\widetilde{\lambda}}(p){\left| p,s,a\right\rangle},
\\&&
\lambda'(p)=\pm\frac{1}{a}\sqrt{A^2(p)+B^2(p)},
\\&&
{\widetilde{\lambda}}(p)=
\frac{1}{a_5}\log\frac{1+a\lambda'(p)}{1-a\lambda'(p)},\end{aligned}$$ where the spinor index $s$ runs as $s=1,2,3,4$, and the eigenstate is degenerate in the color index $a$. The eigenvector in $(x,\alpha,a)$ space can be written as $$\begin{aligned}
{{\left\langle x,\alpha,a | p,s,b \right\rangle}}=
\frac{1}{\sqrt{N_p}}U_\alpha(p,s)e^{ipx}\delta_{a,b}
\label{eqn:plane-wave}\end{aligned}$$ within the subspace ${\widetilde{\cal S}}$, where $U_\alpha(p,s)$ is a normalized eigenfunction of free $H'(p)$, given in Appendix \[sec:appendix-a\], and $N_p$ is the total number of the plane-wave eigenvectors. The free truncated overlap Dirac operator and its inverse in momentum space are given by $$\begin{aligned}
{\left\langle k,s,a\left| D_{N_5} \right| p,t,b\right\rangle} &=&
\frac{1}{2a}
\left(i(\gamma_\mu)_{s,t}C_\mu(p) +\delta_{s,t}E(p)\right)
\delta_{k,p}\delta_{a,b},
\\
{\left\langle k,s,a\left| D_{N_5}^{-1} \right| p,t,b\right\rangle} &=&
2a\frac{-i(\gamma_\mu)_{s,t}C_\mu(p)+\delta_{s,t}E(p)}{C^2+E^2}
\delta_{k,p}\delta_{a,b},\end{aligned}$$ where $$\begin{aligned}
&&
C_\mu(p)=\frac{A_\mu(p)}{a|\lambda'(p)|}\tanh\frac{N_5}{2}a_5|{\widetilde{\lambda}}(p)|,
\\&&
E(p)=\left(1+\frac{B(p)}{a|\lambda'(p)|}
\tanh\frac{N_5}{2}a_5|{\widetilde{\lambda}}(p)|\right).\end{aligned}$$
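As a concrete check of ours (with illustrative parameter choices), one can verify at a given momentum that the stated $D_{N_5}^{-1}$ really inverts $D_{N_5}$, using an explicit chiral representation of the Euclidean gamma matrices:

```python
import numpy as np

a = a5 = 1.0; M = 1.2; N5 = 16               # illustrative parameters
p = np.array([0.3, 0.7, 1.1, 0.2])           # an arbitrary test momentum

s = np.sin(a * p); s2 = np.sum(s**2)
sh2 = np.sum(np.sin(a * p / 2)**2)
den = s2 + (2 + 2 * sh2 - M)**2
A = 2 * s / den                              # A_mu(p)
B = (s2 + (2 * sh2 - M) * (2 + 2 * sh2 - M)) / den

lam = np.sqrt(np.sum(A**2) + B**2) / a               # |lambda'(p)|
lam_t = np.log((1 + a * lam) / (1 - a * lam)) / a5   # |lambda-tilde(p)|
th = np.tanh(N5 / 2 * a5 * lam_t)

C = A / (a * lam) * th                       # C_mu(p)
E = 1 + B / (a * lam) * th                   # E(p)

# Euclidean gamma matrices, chiral representation: {g_mu, g_nu} = 2 delta
sx = np.array([[0, 1], [1, 0]], complex)
sy = np.array([[0, -1j], [1j, 0]], complex)
sz = np.array([[1, 0], [0, -1]], complex)
Z2, I2 = np.zeros((2, 2), complex), np.eye(2, dtype=complex)
gamma = [np.block([[Z2, -1j * sk], [1j * sk, Z2]]) for sk in (sx, sy, sz)]
gamma.append(np.block([[Z2, I2], [I2, Z2]]))

slashC = sum(g * c for g, c in zip(gamma, C))
D = (1j * slashC + E * np.eye(4)) / (2 * a)
Dinv = 2 * a * (-1j * slashC + E * np.eye(4)) / (np.sum(C**2) + E**2)
```

The check works because $(i\gamma_\mu C_\mu+E)(-i\gamma_\mu C_\mu+E)=C^2+E^2$ by the Clifford algebra.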
In addition, we assume that the overlap of two eigenfunctions from different groups vanishes: $$\begin{aligned}
{{\left\langle n | x,\alpha,a \right\rangle}}{{\left\langle x,\alpha,a | p,s,b \right\rangle}}=0.\end{aligned}$$
The eigenvalues of the hermitian Wilson-Dirac operator $H_W$ are also given as functions of $p_\mu$ in the free theory. The free $H_W$ in momentum space is given by $$\begin{aligned}
H_W&=&
\gamma_5\left(i\gamma_\mu s_\mu+2\hat{s}^2-M\right).\end{aligned}$$ The eigenvalue $\lambda_W(p)$ of free $H_W$ is given by $$\begin{aligned}
&&
H_W{\left| \lambda_W(p)\right\rangle}=\lambda_W(p){\left| \lambda_W(p)\right\rangle},
\\&&
\lambda_W(p)=\pm\sqrt{s^2+\left(2\hat{s}^2-M\right)^2},\end{aligned}$$ where ${\left| \lambda_W(p)\right\rangle}$ is the corresponding eigenstate. Although the eigenstates of $H'$ and $H_W$ are different, $\lambda'(p)$ of $H'$ is given in terms of $\lambda_W(p)$ and the momentum $p$ as $$\begin{aligned}
\lambda'(p)&=&
\frac{\lambda_W(p)}{\sqrt{\lambda_W(p)^2+4(2\hat{s}^2-M)+4}}.\end{aligned}$$
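This relation between $\lambda'(p)$ and $\lambda_W(p)$ can be confirmed numerically over random momenta (a sketch of ours; the value of $M$ is an illustrative choice):

```python
import numpy as np

rng = np.random.default_rng(1)
M = 1.4                    # assumed domain-wall height, 0 < M < 2
errs = []
for _ in range(100):
    p = rng.uniform(-np.pi, np.pi, 4)
    s2 = np.sum(np.sin(p)**2)
    w = 2 * np.sum(np.sin(p / 2)**2) - M          # 2*shat^2 - M
    den = s2 + (w + 2)**2
    A2 = np.sum((2 * np.sin(p) / den)**2)
    B = (s2 + w * (w + 2)) / den
    lam_prime = np.sqrt(A2 + B**2)                # |lambda'(p)| from H'(p)
    lam_W = np.sqrt(s2 + w**2)                    # |lambda_W(p)|
    errs.append(abs(lam_prime - lam_W / np.sqrt(lam_W**2 + 4 * w + 4)))
max_err = max(errs)
```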
Simplification of the formula of $m_{5q}$
-----------------------------------------
We first consider a system with $N_l$ completely localized eigenvectors and plane-wave functions with $N_p$ degrees of freedom for the momentum $p$. $N_l$ and $N_p$ satisfy the relation $N_l+4N_cN_p=4N_cN_x$, where $N_x$ is the total number of sites of our lattice, $N_x=n_xn_yn_zn_t$. Since the total eigenvector space is separated into the two subspaces ${\cal S}$ and ${\widetilde{\cal S}}$, the complete set of this system is given by $$\begin{aligned}
1=\sum_{n=1}^{N_l}{\left| n\right\rangle}{\left\langle n\right|}
+\sum_{p=1}^{N_p}\sum_{s=1}^4\sum_{a=1}^{N_c}{\left| p,s,a\right\rangle}{\left\langle p,s,a\right|},
\label{eqn:complete-1}\end{aligned}$$ where ${\left| n\right\rangle}\in{\cal S}$ and ${\left| p,s,a\right\rangle}\in{\widetilde{\cal S}}$.
We expand the numerator of $m_{5q}$ with the above complete set: $$\begin{aligned}
X(t) &=&
-\frac{a_5}{a^{10}}\sum_{\vec{x},\alpha,a}\sum_{b,\beta}
{\left\langle I\left| \frac{1}{2\cosh\frac{N_5}{2}a_5{\widetilde{H}}}D_{N_5}^{-1} \right| J\right\rangle}
{\left\langle J\left| \left(D_{N_5}^{-1}\right)^\dagger
\frac{1}{2\cosh\frac{N_5}{2}a_5{\widetilde{H}}} \right| I\right\rangle}
{\nonumber}\\&=&
X_l(t) + X_c(t),\end{aligned}$$ where $X_l(t)$ and $X_c(t)$ are the contributions from the localized eigenvectors and the plane-wave functions, respectively. After a little algebra, the details of which are given in Appendix \[sec:appendix-b\], we obtain $$\begin{aligned}
X_l(t) &=&
\frac{1}{a_5}Y(t)\frac{1}{4N_cN_x}\sum_{n=1}^{N_l}
\left(\frac{1}{2\cosh\frac{N_5}{2}a_5{\widetilde{\lambda}}(\lambda'_n)}\right)^2\end{aligned}$$ for the contribution from the localized eigenvectors, where $Y(t)$ is the denominator of $m_{5q}$.
Similarly, taking the $t\to\infty$ limit, we obtain for the contribution of the continuous eigenvalues $$\begin{aligned}
X_c(t)|_{t\to\infty} &=&
-\frac{a_5}{a^{8}}n_{xyz}'\frac{1}{N_p}\sum_{p}
\left(\frac{1}{2\cosh\frac{N_5}{2}a_5{\widetilde{\lambda}}(p)}\right)^2
F(p),\end{aligned}$$ where $n'_{xyz}$ is the volume of ${\widetilde{\cal S}}$ in $xyz$ space, and $F(p)$ is defined as $$\begin{aligned}
F(p)=
\frac{4N_c}{N_p}\left(\frac{4}{C^2(p)+E^2(p)}\right).\end{aligned}$$
We next evaluate the denominator of $m_{5q}$ in a similar manner: $$\begin{aligned}
Y(t) &=&
-\frac{a_5^2}{a^{10}}\sum_{\vec{x},\alpha,a}\sum_{\beta,b}
{\left\langle I\left| D_{N_5}^{-1} \right| J\right\rangle}
{\left\langle J\left| \left(D_{N_5}^{-1}\right)^\dagger \right| I\right\rangle}
=
Y_l(t) + Y_c(t),\end{aligned}$$ where $$\begin{aligned}
Y_l(t)&=&\frac{N_l}{4N_cN_x}Y(t)\end{aligned}$$ and $$\begin{aligned}
Y_c(t)|_{t\to\infty} &=&
-\frac{a_5^2}{a^{10}}n_{xyz}'\frac{1}{N_p}
\sum_{p}F(p).\end{aligned}$$ These results imply $$\begin{aligned}
Y(t\to\infty)= -\frac{4N_cN_x}{4N_cN_x-N_l}
\frac{a_5^2}{a^{10}}n_{xyz}'
\frac{1}{N_p}\sum_{p}F(p).\end{aligned}$$
One can directly see, as in Fig. \[fig:Fp\], that the function $F(p)$ is almost constant over the whole range of $p$, except at some discrete points. Therefore it is a reasonable approximation to assume that $F(p)$ is independent of $p$. Combining these formulas and approximations, the anomalous quark mass becomes $$\begin{aligned}
m_{5q}&=&\frac{\lim_{t\to\infty}X(t)}{\lim_{t\to\infty}Y(t)}
{\nonumber}\\&=&
\frac{1}{a_5}\frac{1}{4N_cN_x}
\left(
\sum_{n}
\left(\frac{1}{2\cosh\frac{N_5}{2}a_5{\widetilde{\lambda}}(\lambda'_n)}\right)^2
+\sum_{n}\rho_n\left(\frac{1}{2\cosh\frac{N_5}{2}a_5{\widetilde{\lambda}}_n}\right)^2
\right),\end{aligned}$$ where $\rho_n$ is the degeneracy of ${\widetilde{\lambda}}_n$, $$\begin{aligned}
\rho_n=\sum_{p,\alpha,a}\delta_{{\widetilde{\lambda}}_n,{\widetilde{\lambda}}(p)}.\end{aligned}$$ If the degeneracy $\rho_n$ of the free case is resolved by the presence of gauge fields, we can reconstruct $m_{5q}$ by simply summing over all eigenvalues of $H'$, $$\begin{aligned}
m_{5q}=\frac{1}{a_5}\frac{1}{4N_cN_x}\sum_{n}
\left(\frac{1}{2\cosh\frac{N_5}{2}a_5{\widetilde{\lambda}}(\lambda'_n)}\right)^2.\end{aligned}$$
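This reconstruction can be sketched in a few lines; the toy spectrum below (a couple of near-zero localized modes plus a gapped bulk) and the lattice size are purely illustrative assumptions:

```python
import numpy as np

def m5q_from_eigs(lam_tilde, N5, a5=1.0, N_c=3, N_x=16**3 * 32):
    """Sum the suppression factor (2 cosh(N5/2 a5 lambda-tilde))^{-2}
    over all eigenvalues, as in the reconstruction formula above."""
    f = 1.0 / (2 * np.cosh(N5 / 2 * a5 * np.asarray(lam_tilde)))
    return np.sum(f**2) / (a5 * 4 * N_c * N_x)

# toy spectrum: two near-zero localized modes plus a gapped bulk
eigs = np.concatenate([[0.01, -0.02], np.linspace(0.5, 3.0, 1000)])
m16 = m5q_from_eigs(eigs, N5=16)
m32 = m5q_from_eigs(eigs, N5=32)
```

As expected, the near-zero modes dominate the sum at large $N_5$, while the gapped bulk contribution dies off quickly.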
The above result can be generalized to the case with the partially localized eigenstates. The formula becomes $$\begin{aligned}
m_{5q}&=&\frac{\lim_{t\to\infty}X(t)}{\lim_{t\to\infty}Y(t)}
{\nonumber}\\&=&
\frac{1}{a_5}\frac{1}{4N_cN_x}
\left(
\sum_{n=1}^{N_l}\sum_{i=1}^{h_n}\tilde h_n^{(i)}
\left(\frac{1}{2\cosh\frac{N_5}{2}a_5{\widetilde{\lambda}}(\lambda^{'(i)}_n)}\right)^2
+
\sum_{n}\rho_n\left(\frac{1}{2\cosh\frac{N_5}{2}a_5{\widetilde{\lambda}}_n}
\right)^2\right),\end{aligned}$$ where the $n$-th set of localized eigenvectors has (local) support of dimensionless volume $h_n$, and $\tilde h_n^{(i)}$ ($1\le \tilde h_n^{(i)}\le 4^4h_n$) is the enhancement factor for the set, which depends on the shape of the eigenvectors. See Appendix \[sec:appendix-c\] for the details of the derivation. When the degeneracy in the continuous eigenvalues is resolved by the gauge fields, we have $$\begin{aligned}
&&
m_{5q}a_5=
\frac{1}{4N_cN_x}\left(
\sum_{\rm local} \tilde h_n
\left(\frac{1}{2\cosh\frac{N_5}{2}a_5{\widetilde{\lambda}}(\lambda'_n)}\right)^2
+\sum_{\rm continuous}
\left(\frac{1}{2\cosh\frac{N_5}{2}a_5{\widetilde{\lambda}}(\lambda'_n)}\right)^2
\right).
{\nonumber}\\\end{aligned}$$
Model for eigenvalue density {#sec:model}
============================
In this section we consider the relation between the asymptotic behavior of $m_{5q}$ in $N_5$ and the distribution of the continuous eigenvalues. For simplicity we neglect the localized eigenvalues here and will discuss their effect in the next section. In this simple situation we can write $m_{5q}$ as an integral over the continuous eigenvalues $$\begin{aligned}
m_{5q}a_5=
\int_{\lambda_{\rm min}}^{\lambda_{\rm max}}d\lambda
\rho(\lambda)
\left(\frac{1}{2\cosh\frac{N_5}{2}\lambda}\right)^2,\end{aligned}$$ where $\lambda$ is the dimensionless eigenvalue $\lambda=a_5{\widetilde{\lambda}}$, and $\lambda_{\rm min}$ and $\lambda_{\rm max}$ are the minimum and maximum eigenvalues, $0\le\lambda_{\rm min}\le\lambda\le\lambda_{\rm max}$. Without loss of generality we consider non-negative $\lambda$ only, by taking $\rho(\lambda)=\rho_n(\lambda)+\rho_n(-\lambda)$, since $\cosh\frac{N_5}{2}\lambda$ is an even function. We adopt the following three types of $\rho(\lambda)$ $$\begin{aligned}
\rho(\lambda)=\sqrt{R^2-(\lambda-R-\delta)^2}\end{aligned}$$ with
1. $\delta<0$, $\lambda_{\rm min}=0$, $\lambda_{\rm max}=2R-|\delta|$, $\rho(0)\neq0$,
2. $\delta=0$, $\lambda_{\rm min}=0$, $\lambda_{\rm max}=2R$, $\rho(0)=0$,
3. $\delta>0$, $\lambda_{\rm min}=\delta$, $\lambda_{\rm max}=2R+\delta$, $\rho(0)=0$.
A typical form of $\rho(\lambda)$ for each case is given in Fig. \[fig:rho\].
The main support of the integral at large $N_5$ resides near $\lambda_{\rm min}$, and the asymptotic behavior of $m_{5q}$ is evaluated by expanding $\rho(\lambda)$ around $\lambda_{\rm min}$ and keeping the leading term. Since the contribution from the larger eigenvalues is negligible, we extend the integration range to $[\lambda_{\rm min},\infty)$. The integral in $\lambda$ is easily calculated with the following two formulas, $$\begin{aligned}
&&
\left(\frac{1}{2\cosh\frac{N_5}{2}\lambda}\right)^2=
\sum_{n=1}^{\infty}(-)^{n-1}ne^{-nN_5\lambda},
\\&&
\int_{0}^{\infty}d\lambda e^{-x\lambda}\lambda^\alpha
=\frac{\Gamma(\alpha+1)}{x^{\alpha+1}},\end{aligned}$$ which are valid for $\lambda>0$ and for $x>0$, $\alpha>-1$, respectively.
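Both formulas are easy to verify numerically (our sketch, with arbitrary test values for $\lambda$, $N_5$, $x$, and $\alpha$):

```python
import numpy as np
from math import gamma as Gamma
from scipy.integrate import quad

# geometric-series expansion of the squared suppression factor
lam, N5 = 0.37, 10                       # arbitrary test values
lhs = (1.0 / (2 * np.cosh(N5 / 2 * lam)))**2
rhs = sum((-1)**(n - 1) * n * np.exp(-n * N5 * lam) for n in range(1, 200))

# the Gamma-function integral, checked by quadrature
x, alpha = 3.0, 0.5
num, _ = quad(lambda t: np.exp(-x * t) * t**alpha, 0, np.inf)
exact = Gamma(alpha + 1) / x**(alpha + 1)
```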
For $\delta<0$ the asymptotic behavior of the anomalous quark mass becomes $$\begin{aligned}
m_{5q} &\to&
\frac{\rho(0)}{2N_5}+{\cal O}\left(\frac{1}{N_5^2}\right).\end{aligned}$$ For $\delta=0$ $$\begin{aligned}
m_{5q} &\to&
\sqrt{\frac{\pi R}{2}}(1-\sqrt{2})\zeta\left(\frac{1}{2}\right)
\frac{1}{N_5^{3/2}}
+{\cal O}\left(\frac{1}{N_5^{5/2}}\right),\end{aligned}$$ where $\zeta$ is the Riemann’s zeta function. For $\delta>0$ we have $$\begin{aligned}
m_{5q} &\to&
e^{-\delta N_5}\left(
\sqrt{\frac{\pi R}{2}}\frac{1}{N_5^{3/2}}
+{\cal O}\left(\frac{1}{N_5^{5/2}}\right)\right).\end{aligned}$$ We see that $\delta=0$ gives a power law even though the zero-mode density vanishes, $\rho(0)=0$. We need a gap at $\lambda=0$ in the continuous eigenvalues to realize exponential decay. Note that a similar analysis is made in Ref. [@Shamir00]. A typical form of $m_{5q}$ is shown in Fig. \[fig:m5q\] for $R=4$ and $\delta=-0.5$, $0$, $0.5$.
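These asymptotics can be checked by direct numerical integration; the sketch below (ours) evaluates the $\delta=0$ model at large $N_5$ and compares with the $N_5^{-3/2}$ formula, using the numerical value $\zeta(1/2)\approx-1.46035$:

```python
import numpy as np
from scipy.integrate import quad

R = 4.0
ZETA_HALF = -1.4603545088095868          # Riemann zeta(1/2)

def m5q(N5, delta=0.0):
    """Integrate rho(lambda) (2 cosh(N5 lambda / 2))^{-2} near lambda_min;
    the tail beyond lambda_min + 0.2 is exponentially negligible here."""
    lo = max(delta, 0.0)
    rho = lambda l: np.sqrt(max(R**2 - (l - R - delta)**2, 0.0))
    f = lambda l: rho(l) / (2 * np.cosh(N5 / 2 * l))**2
    val, _ = quad(f, lo, lo + 0.2, limit=200)
    return val

N5 = 200
asym = np.sqrt(np.pi * R / 2) * (1 - np.sqrt(2)) * ZETA_HALF / N5**1.5
```

The $\delta>0$ case is suppressed by the extra $e^{-\delta N_5}$, as the second assertion below illustrates.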
Effect of localized eigenstates and the infinite volume limit of the system {#sec:analysis}
===========================================================================
In this section, using the formula for $m_{5q}$ in terms of the eigenvalues, we propose an interpretation which resolves the inconsistency among the numerical simulations of domain-wall QCD mentioned in the introduction.
We consider the situation that the density of the continuous eigenvalues has a gap at zero: $\rho(\lambda) = 0$ for $\vert \lambda \vert \le \delta $ with $\delta > 0$. On the other hand, we do not put such a restriction on the localized eigenvalues, so that they can become almost zero. In this situation, the contribution of the continuous eigenvalues to $m_{5q}$ vanishes exponentially in $N_5$. In the following subsections we will consider the contribution from the localized eigenstates to $m_{5q}$.
$N_5\rightarrow \infty$ at finite volume
----------------------------------------
In this subsection we discuss the behavior of $m_{5q}$ in the large $N_5$ at finite volume. This situation often corresponds to the one encountered in the numerical simulations, and in the infinite $N_5$ limit the domain-wall fermion becomes the overlap Dirac operator, which satisfies the Ginsparg-Wilson relation.
In the finite volume, it is almost impossible to have an [*exact*]{} zero eigenvalue. More precisely, the probability of having an exact zero eigenvalue is zero, since no symmetry assures its existence. Indeed, no exact zero has been found numerically in the evaluations of the eigenvalues of $H_W$. On the other hand, [*almost*]{} zero eigenvalues, $\vert \lambda \vert \simeq 10^{-2}$ or less, do appear, and they become smaller as the volume increases. Therefore it may be reasonable to assume that the average of the smallest eigenvalue vanishes as some power of $1/V$: $\langle \vert \lambda_{\rm min}\vert \rangle = c_0 V^{-c_1}$ with $c_0, c_1 > 0$.
In this situation $m_{5q}$ vanishes exponentially as $N_5$ increases: $$\begin{aligned}
m_{5q} &\propto& \frac{1}{4 N_c V} \exp [ - \frac{ N_5 c_0}{ V^{c_1}} ]\end{aligned}$$ as $N_5\rightarrow\infty$. This means that DWQCD always works in the finite volume: $m_{5q}$ vanishes exponentially in $N_5$. In other words, as long as the continuous eigenvalues have a gap around zero, DWF in the $N_5\rightarrow\infty$ limit, or equivalently, the overlap Dirac fermion, works well to describe the chiral modes in the finite volume, where the smallest eigenvalue of the localized modes is small but non-zero.
Based on this consideration, we speculate how DWQCD behaves as the coupling constant varies. In the strong coupling region, even the continuous eigenvalues have no gap, so that $\vert\lambda_{\rm min}\vert = 0$, and therefore $m_{5q}$ vanishes only as some power of $1/N_5$: DWQCD does not work in the strong coupling region. Once the gap in the continuous eigenvalues opens ($\vert \lambda_{\rm min}\vert > \delta > 0$) at $\beta > \beta_c$ in the weak coupling region, $m_{5q}$ vanishes exponentially in $N_5$, and DWQCD in the finite volume works well to describe the chiral symmetry.
We think that $\beta_c < 6.0$ for the plaquette action and $\beta_c < 2.6$ for the RG improved action. At first sight this seems to contradict the numerical data of $m_{5q}$ for the plaquette action [@cppacs-dwf; @RBC], which do not show exponential decay in $N_5$. This apparent contradiction is explained as follows. At intermediate values of $N_5$, the continuous eigenvalues give the main contribution to $m_{5q}$, so that $$\begin{aligned}
m_{5q} &\sim& C_{\rm cont}\exp [-N_5\vert\lambda_{\rm min}^{\rm cont}\vert],\end{aligned}$$ while as $N_5$ further increases the localized eigenvalues dominate, so that $$\begin{aligned}
m_{5q} &\sim& C_{\rm local}\exp [-N_5\vert\lambda_{\rm min}^{\rm local}\vert] ,\end{aligned}$$ where $C_{\rm cont}$ and $C_{\rm local}$ are proportional to the number of modes near $\lambda_{\rm min}^{\rm cont}$ and $\lambda_{\rm min}^{\rm local}$, respectively. At $\beta=6.0$ for the plaquette action, the transition between the former exponential behavior with $\vert\lambda_{\rm min}^{\rm cont}\vert$ and the latter with $\vert\lambda_{\rm min}^{\rm local}\vert$ can be seen for $N_5 = 10\sim 40$ [@cppacs-dwf; @RBC]. The latter behavior looks almost constant in $N_5$, since $\lambda_{\rm min}^{\rm local}$ is very small. At $\beta=2.6$ for the RG improved action, on the other hand, only the former exponential behavior can be detected for $N_5 = 10\sim 24$ [@cppacs-dwf]. This suggests that the ratio $C_{\rm local}/C_{\rm cont}$ is smaller for the RG action than for the plaquette action. Indeed it has been found numerically [@cppacs-nagai] that the number of localized modes near $\lambda_{\rm min}^{\rm local}$ is much smaller for the RG action, while the number of continuous modes near $\lambda_{\rm min}^{\rm cont}$ is similar in the two actions. (However, $\vert\lambda_{\rm min}^{\rm local}\vert$ seems larger for the RG action.) We then give two predictions, which should be checked in order to test this interpretation: the exponential decay $\exp [-N_5\vert\lambda_{\rm min}^{\rm local}\vert]$ can be seen at larger $N_5 > 60$ for the plaquette action, and the transition between the former and the latter can be seen at $N_5 = 40\sim 60$ for the RG action.
The distribution of the eigenvalues of $H_W$ [@cppacs-nagai] suggests that even $\beta= 5.65$ for the plaquette action or $\beta=2.2$ for the RG action is already in the weak coupling region ($\beta > \beta_c$). Indeed, it seems that the transition between the two exponentials has been observed in the behavior of $m_{5q}$ at these $\beta$ [@cppacs-dwf].
Infinite volume limit at finite $N_5$
-------------------------------------
In the previous subsection we argued that $m_{5q}$ decays exponentially in $N_5$ at finite volume, as long as the distribution of the continuous eigenvalues has a gap around zero. The exponential decay rate $\vert \lambda_{\rm min}^{\rm local}\vert$, however, may vanish in the infinite volume limit. Therefore, it may be the case that $m_{5q}$ does not vanish exponentially if the infinite volume limit is taken before $N_5\rightarrow\infty$. Moreover, the value of $N_5$ necessary to suppress the chiral symmetry breaking $m_{5q}$ may increase as the volume becomes larger. This could be a disaster from the practical point of view. In this subsection, we discuss whether or not the contribution of the localized eigenstates to $m_{5q}$ vanishes in the infinite volume limit.
In the case of the hermitian Wilson-Dirac operator, the plane-wave eigenvalues in the finite volume become continuous spectra in the infinite volume. On the other hand, we cannot predict the nature of the distribution of the localized eigenvalues in the infinite volume limit. Although the localized modes are always discrete in the finite volume, they can become continuous in the limit, as will be discussed later.
We first consider the case where the number of the localized modes around zero does not grow linearly with the volume. More precisely, the number of modes which satisfy $-\epsilon< \lambda < \epsilon$ with $\epsilon > 0$, denoted by $N(-\epsilon, \epsilon)$, is bounded by $ c_0 \epsilon V^{c_1}$ with $c_0 >0$ and $c_1 < 1$. In this case the contribution to $m_{5q}$ in the infinite volume limit becomes $$\begin{aligned}
m_{5q}^{\rm localized}
&\simeq& \lim_{V\rightarrow\infty}
\frac{1}{V} \sum_{n:{\rm localized}} e^{- \vert\lambda_n\vert N_5}
< \lim_{V\rightarrow\infty}
\frac{N(-\epsilon,\epsilon)}{V} + O(e^{-\epsilon N_5}){\nonumber}\\
&<& \lim_{V\rightarrow\infty}
c_0 \epsilon V^{c_1-1} + O(e^{-\epsilon N_5})
= O(e^{-\epsilon N_5}),\end{aligned}$$ where the sum is taken only over the localized modes. This result shows that the contribution from the localized near-zero modes vanishes in the infinite volume limit, and the remaining contribution also vanishes exponentially in $N_5$, as long as the number of localized near-zero modes is bounded by $N(-\epsilon,\epsilon)\le c_0 \epsilon V^{c_1}$ with $c_1 < 1$. DWQCD works well in this case even in the infinite volume limit.
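This bound can be illustrated with a toy simulation (our own; $\epsilon$, $c_0$, $c_1$, and $N_5$ are arbitrary assumed values):

```python
import numpy as np

rng = np.random.default_rng(2)
eps, c0, c1, N5 = 0.05, 1.0, 0.5, 100    # assumed parameters, c1 < 1

def localized_m5q(V):
    """Contribution (1/V) sum_n exp(-|lambda_n| N5) from near-zero
    localized modes, with N(-eps, eps) ~ c0 * eps * V**c1 of them."""
    n_near = int(c0 * eps * V**c1)
    lam = rng.uniform(-eps, eps, n_near)
    return np.sum(np.exp(-np.abs(lam) * N5)) / V

Vs = (10**4, 10**6, 10**8)
vals = [localized_m5q(V) for V in Vs]
```

Since each term is at most 1, the contribution is bounded by $c_0\epsilon V^{c_1-1}$, which vanishes as $V\to\infty$ for $c_1<1$.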
If the number of the localized near-zero modes is proportional to the volume, $N(-\epsilon,\epsilon) = c_0 \epsilon V$, their contribution remains non-zero in the infinite volume limit: $$\begin{aligned}
m_{5q}^{\rm localized}
&\simeq& \lim_{V\rightarrow\infty}
\frac{1}{V} \sum_{n:{\rm localized}} e^{- \vert\lambda_n\vert N_5}
= c_0 \epsilon + O(e^{-\epsilon N_5}) .\end{aligned}$$ DWQCD does not work in this case in the infinite volume limit.
If $N(-\epsilon,\epsilon) = c_0 \epsilon V$, the density of states $\rho(\lambda)$ is (almost) continuous and nonzero at $\lambda = 0$ in the infinite volume limit: $\rho(0)\not= 0$. In other words, a sum of infinitely many $\delta$ functions from the localized modes, normalized by the volume, becomes a continuous function in the infinite volume limit. To be more precise, one should first define the integrated density of states by $$\begin{aligned}
k(\lambda) &=&
\lim_{V\rightarrow\infty}\frac{1}{V} N(-\infty, \lambda).\end{aligned}$$ Since $k(\lambda)$ is a monotonically increasing function of $\lambda$, its derivative, $\displaystyle\frac{d k(\lambda)}{d \lambda}$, becomes a well-defined measure. The previous statement is then equivalent to saying that $\rho_{\rm localized}(\lambda)\equiv
\displaystyle\frac{d k(\lambda)}{d \lambda}$ is continuous and non-zero at $\lambda = 0$.
Let us explain the case $\rho_{\rm localized}(0)\not=0$ more concretely by using a model of the eigenvalue distribution. Suppose that the discrete eigenvalues are uniformly distributed in the interval $ -1 < \lambda < 1$ and that the number of modes is equal to $ 2 c V$, so that the average spacing between two successive eigenvalues becomes $ 1/(c V)$. In the infinite volume limit the eigenvalues are discrete but dense in this interval. If we calculate $m_{5q}$ using this distribution, we have $$\begin{aligned}
m_{5q} &=&
\lim_{V\rightarrow\infty}\frac{2}{V}\sum_{n=1}^{cV}
e^{-\frac{ n }{cV} N_5}
=\lim_{V\rightarrow\infty}\frac{2}{V}\frac{e^{-\frac{N_5}{cV}}
(1-e^{-N_5})}{1-e^{-\frac{N_5}{cV}}}
= \frac{2c}{N_5}(1-e^{-N_5}) .\end{aligned}$$ Therefore $m_{5q}$ does not vanish exponentially in $N_5$ if the infinite volume limit is first taken.
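The geometric sum and its large-volume limit are easy to check numerically (our sketch; the values of $c$ and $N_5$ are arbitrary):

```python
import numpy as np

c, N5 = 1.0, 24                          # assumed model parameters

def m5q_uniform(V):
    """(2/V) sum_{n=1}^{cV} exp(-n N5 / (cV)) for uniformly spaced modes."""
    n = np.arange(1, int(c * V) + 1)
    return (2.0 / V) * np.sum(np.exp(-n * N5 / (c * V)))

limit = (2 * c / N5) * (1 - np.exp(-N5))  # the V -> infinity result
vals = [m5q_uniform(V) for V in (10**3, 10**5, 10**6)]
```

The finite-$V$ sums approach the $2c(1-e^{-N_5})/N_5$ limit from above as the spacing $1/(cV)$ shrinks, exhibiting the power-law (rather than exponential) dependence on $N_5$.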
The appearance of a continuous density of states from the localized modes in the infinite volume is often observed if some kind of randomness is introduced into the interaction [@neuberger; @luscher; @nakamura]. Physically, Anderson localization [@anderson] is one such example. In QCD one can have infinitely many localized modes if infinitely many pairs of instantons and anti-instantons with a fixed topological charge exist. In lattice QCD we have, in addition, very localized modes associated with the dislocations. Since the dislocations are local, the number of discrete modes associated with them can be proportional to the volume [@BNN; @luscher].
Although the contribution of the localized modes to $m_{5q}$ vanishes at finite volume as $N_5\to\infty$, whether it vanishes in the infinite volume limit depends on the density of states of the localized modes in that limit. Previous numerical investigations of $\rho(0)$ [@EHN99; @cppacs-nagai] suggest that $\rho(0)$ is small but non-zero at all $\beta\not=\infty$ in quenched QCD. Further investigations, however, in particular of the volume dependence of $\rho(0)$, are necessary to reach a definite conclusion on whether $\rho(0)$ is non-zero or not. If $\rho(0)\not=0$ at all $\beta\not=\infty$, it is interesting and important to derive the $\beta$ dependence of $\rho(0)$ [@EHN99] theoretically. Note, however, that $\rho(0)$ should vanish in the continuum limit for $ 0 < M < 2$, since no zero mode appears in the free theory.
Phase structure with the Wilson quarks
--------------------------------------
In this subsection we consider the relation between the above interpretation for the behavior of $m_{5q}$ in DWQCD and the parity-flavor breaking in the Wilson quark action in the quenched approximation.
We first show that the contribution of the localized modes to the parity-flavor breaking order parameter vanishes in the infinite volume limit, in the case that $N(-\epsilon,\epsilon) \le c_0 \epsilon V^{c_1}$ with $c_1 < 1$. In this case we have $$\begin{aligned}
\left\vert \langle \bar\psi i\gamma_5\tau^3 \psi \rangle^{\rm localized} \right\vert &=&
\lim_{H\rightarrow 0}\lim_{V\rightarrow\infty}\frac{1}{V}
\sum_{n:{\rm localized}} \frac{2H}{\lambda_n^2 + H^2} {\nonumber}\\
&\le & \lim_{H\rightarrow 0}\lim_{V\rightarrow\infty}\frac{1}{V}
\left[ c_0 \epsilon V^{c_1} \frac{2 H}{H^2}
+\sum_{n: \vert\lambda_n \vert >\epsilon}\frac{2H}{\epsilon^2 + H^2} \right]
{\nonumber}\\
&\le& \lim_{H\rightarrow 0}\lim_{V\rightarrow\infty}c_0\epsilon V^{c_1-1}
\frac{2}{H}
+ \lim_{H\rightarrow 0}\lim_{V\rightarrow\infty}\frac{1}{V}
C(\epsilon) V \frac{2H}{\epsilon^2 + H^2}
{\nonumber}\\
&= & C(\epsilon) \lim_{H\rightarrow 0}\frac{2H}{\epsilon^2 + H^2} = 0,\end{aligned}$$ where $\sum_{n: \vert\lambda_n \vert >\epsilon} 1 \equiv C(\epsilon) V$. The first term vanishes in the $V\rightarrow\infty$ limit since $c_1 < 1$, while the second one vanishes in the $H\rightarrow 0$ limit.
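The sequential limits in the sublinear-density case can also be illustrated numerically. The sketch below (a toy spectrum; the constants $c_0$, $c_1$, $\epsilon$, $H$ are illustrative and not from the text) places $\sim c_0\epsilon V^{c_1}$ near-zero modes uniformly in $(-\epsilon,\epsilon)$ and checks that their contribution per volume stays below the bound $c_0\epsilon V^{c_1-1}\,2/H$ and decreases as $V$ grows at fixed $H$:

```python
import numpy as np

def localized_contrib(V, H, eps=0.1, c0=1.0, c1=0.5):
    """(1/V) * sum of 2H/(lam^2+H^2) over ~c0*eps*V**c1 near-zero modes."""
    n_loc = max(int(c0 * eps * V**c1), 2)
    lam = np.linspace(-eps, eps, n_loc)       # uniform near-zero modes
    return np.sum(2 * H / (lam**2 + H**2)) / V

H = 1e-3
Vs = (10**4, 10**6, 10**8)
vals = [localized_contrib(V, H) for V in Vs]
bounds = [1.0 * 0.1 * V**0.5 * (2 / H) / V for V in Vs]  # c0*eps*V**(c1-1)*2/H
```

With $c_1<1$ the per-volume contribution shrinks with $V$, consistent with the order parameter vanishing once $V\to\infty$ is taken before $H\to 0$.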
We next show that the contribution of the localized modes to the parity-flavor breaking order parameter remains non-zero in the infinite volume limit if the density of such states is non-zero at $\lambda =0$ in that limit: $N(-\epsilon,\epsilon) = c_0 \epsilon V$. To see this, we again use the uniform distribution of eigenvalues between $-1$ and $1$ introduced in the previous subsection. $$\begin{aligned}
\langle \bar\psi i\gamma_5\tau^3 \psi \rangle^{\rm localized} &=&
\lim_{H\rightarrow 0}\lim_{V\rightarrow\infty}\frac{2}{V}
\sum_{n=1}^{cV} \frac{-2H}{(\frac{n}{cV})^2 + H^2}
= -4c\lim_{H\rightarrow 0}\lim_{V\rightarrow\infty}
\sum_{n=1}^{cV}\frac{HcV}{n^2 + (HcV)^2} {\nonumber}\\
&=& -4c\lim_{H\rightarrow 0}\lim_{V\rightarrow\infty}
\left[ \frac{\pi {\rm coth}(\pi HcV)}{2}-\frac{1}{2HcV} + \Delta
\right]
=-2c\pi,\end{aligned}$$ where $\lim_{H\rightarrow 0}\lim_{V\rightarrow\infty} \Delta \le
\lim_{H\rightarrow 0} \pi H/(2\sqrt{1+H^2}) = 0$.
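Conversely, when the near-zero density grows linearly in $V$, the order parameter survives the limits. A numerical sketch of the same uniform model (the values of $c$ and $H$ are illustrative) shows $-4c\sum_{n=1}^{cV}HcV/(n^2+(HcV)^2)$ approaching $-2c\pi$ when $V\to\infty$ is taken before $H\to 0$:

```python
import numpy as np

def order_param(V, H, c=0.5):
    """-4c * sum_{n=1}^{cV} (HcV) / (n^2 + (HcV)^2) for the uniform model."""
    n = np.arange(1, int(c * V) + 1, dtype=float)
    a = H * c * V
    return -4.0 * c * np.sum(a / (n**2 + a**2))

V = 10**7
op_coarse = order_param(V, 1e-2)   # larger H
op_fine = order_param(V, 1e-3)     # smaller H, closer to the limit
target = -2 * 0.5 * np.pi          # -2*c*pi with c = 0.5
```

Decreasing $H$ at large fixed $V$ drives the sum toward $-2c\pi$, in agreement with the closed-form evaluation above.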
The above two considerations show that the correspondence between the failure of DWQCD in the infinite volume limit and a non-zero order parameter of the parity-flavor breaking in the quenched Wilson fermion still holds even in the case that the localized modes dominate the near-zero eigenvalues. The previous investigations[@EHN99; @cppacs-nagai] suggest that the gap in the phase structure in Fig. \[fig:phase\] closes even at $\beta > \beta_c$ in quenched QCD with the Wilson quark. In this case we expect that the non-zero value of $\rho_{\rm localized}(0)$ in the region A is much smaller than $\rho_{\rm continuous}(0)$ in the region B, the true parity-flavor breaking phase[@EHN99], so that $\rho_{\rm localized}(0)$ may be too small to be detected by measuring the pion mass. This may be the reason why the previous investigation indicates that the region $A$ exists at $\beta = 6.0$ for the plaquette action[@AKU]. Finally it is noted that the region A with $\rho_{\rm localized}(0)=0$ should always exist in full QCD, since the appearance of localized near-zero modes is suppressed by the fermion determinant, $\det D_W = \det H_W$.
Exceptional configurations
--------------------------
Finally, let us comment on the exceptional configurations that appear in quenched QCD with the Wilson-type quark action. It is sometimes observed at small quark masses in quenched simulations that the pion propagator receives an anomalously large contribution from a particular configuration, which is called an exceptional configuration and is removed from the statistical average. In ref. [@AKU] several configurations that give a W shape in the pion correlation function in $t$ space have been observed in the region A, where the gap (absence) of the parity-flavor breaking phase is expected. It has been very difficult to understand the W-shape propagator, since it means that the correlation increases as the separation $t$ increases. Now we argue that this may be understood in terms of the localized modes.
The pion correlation function is defined by $$\begin{aligned}
\langle \bar\psi i\gamma_5\tau^a\psi (x)\cdot \bar\psi i\gamma_5\tau^a\psi (y)
\rangle &=& {\left\langle x\left| \frac{1}{H_w} \right| y\right\rangle} {\left\langle y\left| \frac{1}{H_w} \right| x\right\rangle} {\nonumber}\\
&=& \sum_n {{\left\langle x | \lambda_n \right\rangle}}\frac{1}{\lambda_n}{{\left\langle \lambda_n | y \right\rangle}}
\sum_l {{\left\langle y | \lambda_l \right\rangle}}\frac{1}{\lambda_l}{{\left\langle \lambda_l | x \right\rangle}}{\nonumber}\\
&=&
\sum_{n,l}\frac{1}{\lambda_n\lambda_l} \phi_n(x)\phi_l(x)^\dagger
\phi_l(y)\phi_n(y)^\dagger,\end{aligned}$$ where $\phi_n(x)={{\left\langle x | \lambda_n \right\rangle}}$ etc. If some localized eigenstate has a very small eigenvalue $\lambda_n$ on some configuration, the contribution of the mode becomes very large: $$\begin{aligned}
\langle \bar\psi i\gamma_5\tau^a\psi (x)\cdot \bar\psi i\gamma_5\tau^a\psi (y)
&=& \frac{1}{\lambda_n^2}\phi_n(x)\phi_n(x)^\dagger \phi_n(y)\phi_n(y)^\dagger
+ \mbox{other contributions} .\end{aligned}$$ This is the exceptional configuration. Furthermore suppose the eigenstate $\phi_n(\vec x, t)$ is localized at $x = (\vec x_n, t_n)$. Summing over $\vec{x}$ and $\vec{y}$, and taking $x_0=t$ and $y_0=0$, the time correlation is dominated by $$\begin{aligned}
C(t)&\equiv& \sum_{\vec x,\vec y} \frac{1}{\lambda_n^2}
\phi_n(\vec x,t)\phi_n(\vec x,t)^\dagger
\phi_n(\vec y,0)\phi_n(\vec y,0)^\dagger,\end{aligned}$$ and $C(t)$ has a peak around $t = t_n$. Together with the enhancement factor $1/\lambda_n^2$ for small eigenvalues, this explains the W-shape behavior of the pion propagator.
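A small toy model makes this mechanism explicit. The sketch below (all numbers are illustrative assumptions, not measured lattice data) adds to an ordinary periodic $\cosh$-type pion correlator the contribution of a single localized mode with a tiny eigenvalue $\lambda_n$, peaked at $t=t_n$:

```python
import numpy as np

T = 32
t = np.arange(T)
m_pi = 0.5
# ordinary contribution: periodic cosh-type decay, normalized to C_ord(0) = 1
C_ord = np.cosh(m_pi * (t - T / 2)) / np.cosh(m_pi * T / 2)

# one localized mode with small eigenvalue lam_n, |phi_n|^2 peaked at t = t_n
t_n, width, lam_n = 20, 6.0, 1e-4
phi2 = np.exp(-((t - t_n) / width) ** 2)   # sum_x |phi_n(x, t)|^2 (toy shape)
C_loc = phi2 * phi2[0] / lam_n**2          # (1/lam_n^2) * phi2(t) * phi2(0)

C = C_ord + C_loc
```

The correlator first decreases with $t$, passes through a minimum, and then grows toward the peak at $t=t_n$: the characteristic W shape of an exceptional configuration.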
Conclusions {#sec:conclusion}
===========
In this paper we first derive the formula for the chiral symmetry breaking term $m_{5q}$ in DWQCD in terms of the (modified) hermitian Wilson-Dirac operator in 4 dimensions. Using several simplifications and approximations, we explicitly write down the formula for $m_{5q}$ in terms of the eigenvalues only.
The important observation is that there are two different types of eigenvalues of the hermitian Wilson-Dirac operator. One is the continuous type, which corresponds to the plane waves in the free theory; the other is the localized type, associated with instantons or dislocations. We argue that the effect of the latter on physical observables such as $m_{5q}$ or the parity-flavor breaking order parameter vanishes in the infinite volume limit, unless the number of near-zero localized modes increases linearly in the volume. If it does increase linearly, the effect remains non-zero but small in the infinite volume limit.
The message of this paper is that the domain-wall fermion or the overlap Dirac fermion should work well to describe the vector-like chiral symmetry at weak coupling, $\beta > \beta_c$, where the gap in the continuous spectrum opens, as long as the volume is finite. The small eigenvalues that appear in numerical simulations seem to belong to the localized type, hence their contribution vanishes in the $N_5\rightarrow \infty$ limit in a finite volume. Whether domain-wall/overlap fermions are successful or not in the infinite volume, however, depends crucially on the distribution of the localized eigenvalues in that limit.
In practice one can make such small eigenvalues, which appear in a finite volume, larger by hand without changing physical observables[@EH00; @HJL00]. This is a little costly. If the effect of chiral symmetry breaking is small enough, or no dependence of observables on $N_5$ is detected, one may instead perform simulations at a large but numerically affordable value of $N_5$. Except for quantities very sensitive to the small eigenvalues, such as $m_{5q}$, the $N_5$ dependences are expected to be rather weak in general. This expectation is indeed borne out in the case of the quantum Hall effect[@QHE].
Acknowledgments {#acknowledgments .unnumbered}
===============
We thank Profs. Y. Kikukawa, T. Izubuchi and K. I. Nagai for useful discussions. S.A would like to thank Profs. M. Lüscher, H. Neuberger and S. Nakamura for informative discussions, and Y.T would like to thank Prof. T. Onogi for valuable discussions. This work is supported in part by Grants-in-Aid of the Ministry of Education (Nos. 12304011, 12640253, 13135204).
Plane wave {#sec:appendix-a}
==========
The eigenvector [(\[eqn:plane-wave\])]{} of the free Hamiltonian [(\[eqn:free-H\])]{} is given by combining the positive and negative energy eigenvectors $u_\alpha(p,\tau)$ and $v_\alpha(p,\tau)$: $$\begin{aligned}
&&
(H')_{\alpha\beta} v_\beta(p,\tau)=-|\lambda'(p)|v_\alpha(p,\tau),
\\&&
(H')_{\alpha\beta} u_\beta(p,\tau)=|\lambda'(p)|u_\alpha(p,\tau),\end{aligned}$$ where $\alpha$, $\beta$ are spinor indices and $\tau$ runs over $\tau=1,2$. We adopt the following combination in this paper: $$\begin{aligned}
U_\alpha(p,1)=v_\alpha(p,1),\;
U_\alpha(p,2)=v_\alpha(p,2),\;
U_\alpha(p,3)=u_\alpha(p,1),\;
U_\alpha(p,4)=u_\alpha(p,2).\end{aligned}$$ $u$ and $v$ are given as follows with the Pauli matrices $\vec{\sigma}$ and the two-dimensional basis vectors $\xi(\tau)$: $$\begin{aligned}
&&
v_\alpha(p,\tau)=\sqrt{\frac{|\lambda'(p)|-B(p)}{2|\lambda'(p)|}}
\pmatrix{\xi(\tau) \cr
\frac{-i\sigma_\mu^\dagger A_\mu(p)}{B(p)-|\lambda'(p)|}\xi(\tau)}_\alpha,
\\&&
u_\alpha(p,\tau)=\sqrt{\frac{|\lambda'(p)|-B(p)}{2|\lambda'(p)|}}
\pmatrix{\frac{-i\sigma_\mu A_\mu(p)}{B(p)-|\lambda'(p)|}\xi(\tau) \cr
\xi(\tau)}_\alpha,
\\&&
\sigma_\mu=\left(1,-i\vec{\sigma}\right),
\\&&
\xi(1)=\pmatrix{1 \cr 0},\; \xi(2)=\pmatrix{0 \cr 1}.\end{aligned}$$ $U_\alpha(p,s)$ satisfies the orthogonality and completeness conditions $$\begin{aligned}
&&
\sum_{\alpha}U_\alpha^\dagger(p,s)U_\alpha(p,t)=\delta_{s,t},
\\&&
\sum_{s}U_\alpha^\dagger(p,s)U_\beta(p,s)=\delta_{\alpha,\beta}.\end{aligned}$$
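These formulas can be verified numerically. The sketch below builds the four spinors $U(p,s)$ from randomly chosen real numbers standing in for $A_\mu(p)$ and $B(p)$ (their explicit momentum dependence is not needed for this check, only $|\lambda'(p)|=\sqrt{A^2+B^2}$) and confirms the orthogonality and completeness conditions:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=4)              # stand-in for A_mu(p), assumed real
B = rng.normal()                    # stand-in for B(p)
lam = np.sqrt(A @ A + B**2)         # |lambda'(p)| = sqrt(A^2 + B^2)

# sigma_mu = (1, -i sigma_vec); sigma_mu^dagger = (1, +i sigma_vec)
s1 = np.array([[0, 1], [1, 0]], complex)
s2 = np.array([[0, -1j], [1j, 0]], complex)
s3 = np.array([[1, 0], [0, -1]], complex)
sigA = A[0] * np.eye(2) - 1j * (A[1] * s1 + A[2] * s2 + A[3] * s3)
sigAd = sigA.conj().T               # sigma_mu^dagger A_mu (A real)

xi = [np.array([1, 0], complex), np.array([0, 1], complex)]
pref = np.sqrt((lam - B) / (2 * lam))
v = [pref * np.concatenate([x, (-1j * sigAd @ x) / (B - lam)]) for x in xi]
u = [pref * np.concatenate([(-1j * sigA @ x) / (B - lam), x]) for x in xi]

U = np.column_stack(v + u)          # columns are U(p,s), s = 1,...,4
```

Both the orthogonality $\sum_{\alpha}U_\alpha^\dagger(p,s)U_\alpha(p,t)=\delta_{s,t}$ and the completeness relation hold for any real $A_\mu$ and $B$, as the identity $(\sigma_\mu A_\mu)(\sigma_\nu^\dagger A_\nu)=A^2$ guarantees.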
Anomalous quark mass with completely localized and plane-wave eigenvectors {#sec:appendix-b}
==========================================================================
We consider a system with $N_l$ completely localized eigenvectors and plane-wave eigenvectors with $N_p$ degrees of freedom for the momentum $p$.
We expand the two Green functions in $m_{5q}$ with the complete set in eq.(\[eqn:complete-1\]). $$\begin{aligned}
X(t) &=& \sum_{\vec{x}}
{\left\langle J_{5q}(x)P(y) \right\rangle}
{\nonumber}\\&=&
-\frac{a_5}{a^{10}}\sum_{\vec{x},\alpha,a}\sum_{b,\beta}
{\left\langle I\left| \frac{1}{2\cosh\frac{N_5}{2}a_5{\widetilde{H}}}D_{N_5}^{-1} \right| J\right\rangle}
{\left\langle J\left| \left(D_{N_5}^{-1}\right)^\dagger
\frac{1}{2\cosh\frac{N_5}{2}a_5{\widetilde{H}}} \right| I\right\rangle}
{\nonumber}\\&=&
-\frac{a_5}{a^{10}}\sum_{\vec{x},\alpha,a}\sum_{b,\beta}
\Biggl(
\sum_{n,m=1}^{N_l}{{\left\langle I | n \right\rangle}}
{\left\langle n\left| \frac{1}{2\cosh\frac{N_5}{2}a_5{\widetilde{H}}}D_{N_5}^{-1} \right| J\right\rangle}
{\nonumber}\\&&\times
{\left\langle J\left| \left(D_{N_5}^{-1}\right)^\dagger\frac{1}{2\cosh\frac{N_5}{2}a_5{\widetilde{H}}} \right| m\right\rangle}
{{\left\langle m | I \right\rangle}}
{\nonumber}\\&+&
\sum_{p,s,c}\sum_{k,t,d}{{\left\langle I | p,s,c \right\rangle}}
{\left\langle p,s,c\left| \frac{1}{2\cosh\frac{N_5}{2}a_5{\widetilde{H}}}D_{N_5}^{-1} \right| J\right\rangle}
{\nonumber}\\&&\times
{\left\langle J\left| \left(D_{N_5}^{-1}\right)^\dagger\frac{1}{2\cosh\frac{N_5}{2}a_5{\widetilde{H}}} \right| k,t,d\right\rangle}
{{\left\langle k,t,d | I \right\rangle}}
\Biggr),\end{aligned}$$ where $I=(x,\alpha,a)$, $J=(y,\beta,b)$, and we use the relation ${{\left\langle I | n \right\rangle}}{{\left\langle k,t,d | I \right\rangle}}=0$.
The contribution from the localized eigenvectors is given as follows $$\begin{aligned}
X_l(t) &=&
-\frac{a_5}{a^{10}}\sum_{\vec{x},\alpha,a}\sum_{b,\beta}
\sum_{n,m=1}^{N_l}
{{\left\langle I | n \right\rangle}}{{\left\langle m | I \right\rangle}}
{\nonumber}\\&&\times
{\left\langle n\left| \frac{1}{2\cosh\frac{N_5}{2}a_5{\widetilde{H}}}D_{N_5}^{-1} \right| J\right\rangle}
{\left\langle J\left| \left(D_{N_5}^{-1}\right)^\dagger\frac{1}{2\cosh\frac{N_5}{2}a_5{\widetilde{H}}} \right| m\right\rangle}
{\nonumber}\\&=&
-\frac{a_5}{a^{10}}\sum_{n=1}^{N_l}\delta_{t,x_n^0}
\left(\frac{1}{2\cosh\frac{N_5}{2}a_5{\widetilde{\lambda}}(\lambda'_n)}\right)^2
\sum_{b,\beta}
{\left\langle n\left| D_{N_5}^{-1} \right| y,\beta,b\right\rangle}
{\left\langle y,\beta,b\left| \left(D_{N_5}^{-1}\right)^\dagger \right| n\right\rangle},
{\nonumber}\\\end{aligned}$$ where we use a relation $$\begin{aligned}
&&
{{\left\langle I | n \right\rangle}}={{\left\langle x,\alpha,a | n \right\rangle}}=
\delta_{\alpha,\alpha_n}\delta_{a,a_n}\delta_{x,x_n},
\\&&
{{\left\langle I | n \right\rangle}}{{\left\langle m | I \right\rangle}}
=\delta_{m,n}
\delta_{\alpha,\alpha_n}\delta_{a,a_n}\delta_{x,x_n}.\end{aligned}$$ By inserting the identity operator in $(x,\alpha,a)$ space $$\begin{aligned}
1=\sum_{x,\alpha,a}{\left| x,\alpha,a\right\rangle}{\left\langle x,\alpha,a\right|}\end{aligned}$$ we have $$\begin{aligned}
X_l(t) &=&
-\frac{a_5}{a^{10}}\sum_{n=1}^{N_l}\delta_{t,x_n^0}
\left(\frac{1}{2\cosh\frac{N_5}{2}a_5{\widetilde{\lambda}}(\lambda'_n)}\right)^2
{\nonumber}\\&&\times
\sum_{b,\beta}
\sum_{z,\gamma,c}\sum_{w,\delta,d}
{{\left\langle n | z,\gamma,c \right\rangle}}
{\left\langle z,\gamma,c\left| D_{N_5}^{-1} \right| y,\beta,b\right\rangle}
{\left\langle y,\beta,b\left| \left(D_{N_5}^{-1}\right)^\dagger \right| w,\delta,d\right\rangle}
{{\left\langle w,\delta,d | n \right\rangle}}
{\nonumber}\\&=&
-\frac{a_5}{a^{10}}\sum_{n=1}^{N_l}\delta_{t,x_n^0}
\left(\frac{1}{2\cosh\frac{N_5}{2}a_5{\widetilde{\lambda}}(\lambda'_n)}\right)^2
{\nonumber}\\&&\times
\sum_{b,\beta}
{\left\langle x_n,\alpha_n,a_n\left| D_{N_5}^{-1} \right| y,\beta,b\right\rangle}
{\left\langle y,\beta,b\left| \left(D_{N_5}^{-1}\right)^\dagger \right| x_n,\alpha_n,a_n\right\rangle}
{\nonumber}\\&=&
-\frac{a_5}{a^{10}}\frac{1}{4N_cN_x}\sum_{n=1}^{N_l}
\left(\frac{1}{2\cosh\frac{N_5}{2}a_5{\widetilde{\lambda}}(\lambda'_n)}\right)^2
{\nonumber}\\&&\times
\sum_{\vec{x},\alpha,a}
\sum_{b,\beta}
{\left\langle \vec{x},t;\alpha,a\left| D_{N_5}^{-1} \right| y,\beta,b\right\rangle}
{\left\langle y,\beta,b\left| \left(D_{N_5}^{-1}\right)^\dagger \right| \vec{x},t;\alpha,a\right\rangle}
{\nonumber}\\&=&
\frac{1}{a_5}Y(t)\frac{1}{4N_cN_x}\sum_{n=1}^{N_l}
\left(\frac{1}{2\cosh\frac{N_5}{2}a_5{\widetilde{\lambda}}(\lambda'_n)}\right)^2,\end{aligned}$$ where we assume that the centers of the localized solutions $(x_n,\alpha_n,a_n)$ are distributed uniformly in the $(x,\alpha,a)$ space, so that we can take the average over $(x,\alpha,a)$ when a large enough number of configurations is summed in the simulation.
For the contribution from the continuous eigenvalues we take the $t\to\infty$ limit to proceed: $$\begin{aligned}
X_c(t)|_{t\to\infty} &=&
-\frac{a_5}{a^{10}}\sum_{\vec{x},\alpha,a}\sum_{b,\beta}
\sum_{p,s}\sum_{k,s'}
\frac{1}{N_p}U_\alpha(p,s)U_\alpha^\dagger(k,s')e^{i(p-k)x}
{\nonumber}\\&&\times
{\left\langle p,s,a\left| \frac{1}{2\cosh\frac{N_5}{2}a_5{\widetilde{H}}}D_{N_5}^{-1} \right| y,\beta,b\right\rangle}
{\left\langle y,\beta,b\left| \left(D_{N_5}^{-1}\right)^\dagger\frac{1}{2\cosh\frac{N_5}{2}a_5{\widetilde{H}}} \right| k,s',a\right\rangle}
{\nonumber}\\&=&
-\frac{a_5}{a^{10}}n_{xyz}'\frac{1}{N_p}\sum_{p,s,a}
\left(\frac{1}{2\cosh\frac{N_5}{2}a_5{\widetilde{\lambda}}(p)}\right)^2
{\nonumber}\\&&\times
\sum_{\beta,b}{\left\langle p,s,a\left| D_{N_5}^{-1} \right| y,\beta,b\right\rangle}
{\left\langle y,\beta,b\left| \left(D_{N_5}^{-1}\right)^\dagger \right| p,s,a\right\rangle},\end{aligned}$$ where we use the following relations $$\begin{aligned}
&&
{{\left\langle x,\alpha,a | p,s,b \right\rangle}}=
\frac{1}{\sqrt{N_p}}U_\alpha(p,s)e^{ipx}\delta_{a,b},
\\&&
\sum_{\vec{x}}e^{i(\vec{p}-\vec{k})\vec{x}}
=n'_{xyz}\delta_{\vec{p},\vec{k}},
\\&&
\lim_{t\to\infty}e^{it(p^0-k^0)} = \delta_{p^0,k^0},
\\&&
\sum_\alpha U_\alpha^\dagger(p,s)U_\alpha(p,s')=\delta_{s,s'}.\end{aligned}$$ $n'_{xyz}$ is the volume of ${\widetilde{\cal S}}$ in the $xyz$ space. By inserting the complete set [(\[eqn:complete-1\])]{} of eigenstates we have $$\begin{aligned}
X_c(t)|_{t\to\infty} &=&
-\frac{a_5}{a^{10}}n_{xyz}'\frac{1}{N_p}\sum_{p,s,a}
\left(\frac{1}{2\cosh\frac{N_5}{2}a_5{\widetilde{\lambda}}(p)}\right)^2
{\nonumber}\\&\times&
\sum_{\beta,b}\Biggl(
\sum_{n,m=1}^{N_l}{\left\langle p,s,a\left| D_{N_5}^{-1} \right| n\right\rangle}
{{\left\langle n | y,\beta,b \right\rangle}}{{\left\langle y,\beta,b | m \right\rangle}}
{\left\langle m\left| \left(D_{N_5}^{-1}\right)^\dagger \right| p,s,a\right\rangle}
{\nonumber}\\&&+
\sum_{k,s',c}{\left\langle p,s,a\left| D_{N_5}^{-1} \right| k,s',c\right\rangle}
{{\left\langle k,s',c | y,\beta,b \right\rangle}}
{\nonumber}\\&&\quad\times
\sum_{q,s'',d}{{\left\langle y,\beta,b | q,s'',d \right\rangle}}
{\left\langle q,s'',d\left| \left(D_{N_5}^{-1}\right)^\dagger \right| p,s,a\right\rangle}
\Biggr)
{\nonumber}\\&=&
-\frac{a_5}{a^{10}}n_{xyz}'\frac{1}{N_p}\sum_{p}
\left(\frac{1}{2\cosh\frac{N_5}{2}a_5{\widetilde{\lambda}}(p)}\right)^2
{\nonumber}\\&&\times
\Biggl(
\sum_{a,s}\sum_{n=1}^{N_l}\delta_{y,x_n}
{\left\langle p,s,a\left| D_{N_5}^{-1} \right| n\right\rangle}{\left\langle n\left| \left(D_{N_5}^{-1}\right)^\dagger \right| p,s,a\right\rangle}
{\nonumber}\\&&+
\frac{N_c}{N_p}\sum_{s,s'}
2a\left(\frac{-i\gamma_\mu C_\mu(p)+E(p)}{C^2+E^2}\right)_{s,s'}
2a\left(\frac{i\gamma_\mu C_\mu(p)+E(p)}{C^2+E^2}\right)_{s',s}
\Biggr)
{\nonumber}\\&=&
-\frac{a_5}{a^{8}}n_{xyz}'\frac{1}{N_p}\sum_{p}
\left(\frac{1}{2\cosh\frac{N_5}{2}a_5{\widetilde{\lambda}}(p)}\right)^2
F(p),\end{aligned}$$ where $F(p)$ is defined as $$\begin{aligned}
F(p) &=& F_l(p)+F_c(p),
\\
F_l(p) &=& \sum_{n=1}^{N_l}\delta_{y,x_n}\sum_{s,a}\frac{1}{a^2}
{\left\langle p,s,a\left| D_{N_5}^{-1} \right| n\right\rangle}{\left\langle n\left| \left(D_{N_5}^{-1}\right)^\dagger \right| p,s,a\right\rangle},
\\
F_c(p) &=& \frac{4N_c}{N_p}\left(\frac{4}{C^2(p)+E^2(p)}\right).\end{aligned}$$ We can show that $F_l(p)$ vanishes as follows. We extend the orthogonality assumption of the eigenvectors, ${{\left\langle n | p,s,a \right\rangle}}=0$, to the case with $\gamma_5$: ${\left\langle n\left| \gamma_5 \right| p,s,a\right\rangle}=0$. This is plausible since the eigenfunction $\psi_n(x)={{\left\langle x | n \right\rangle}}$ is localized in space-time and the overlap is suppressed even if we multiply by $\gamma_5$. Then, by using the explicit form [(\[eqn:DN5\])]{}, the matrix of the truncated overlap Dirac operator $D_{N_5}$ turns out to be block diagonal: only the matrix elements ${\left\langle n\left| D_{N_5} \right| m\right\rangle}$ and ${\left\langle p,s,a\left| D_{N_5} \right| k,t,b\right\rangle}$ are non-zero, while the off-diagonal part ${\left\langle p,s,a\left| D_{N_5} \right| n\right\rangle}$ vanishes. This block diagonal property is preserved under inversion, so that the off-diagonal part of the inverse also vanishes: $$\begin{aligned}
{\left\langle p,s,a\left| D_{N_5}^{-1} \right| n\right\rangle}=0.\end{aligned}$$ It is noted that in the free theory the relation $D_W^\dagger D_W=D_WD_W^\dagger$ is satisfied and ${\left\langle n\left| D_{N_5}^{-1} \right| m\right\rangle}=0$ is shown exactly for non-degenerate $|\lambda'_n|\neq|\lambda'_m|$.
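The statement that block diagonality survives inversion can be seen in a small numerical sketch (the block sizes and random entries below are illustrative): if $D$ has no matrix elements connecting the localized and plane-wave subspaces, neither has $D^{-1}$:

```python
import numpy as np

rng = np.random.default_rng(1)
nl, npw = 3, 5                      # localized / plane-wave subspace dimensions
N = nl + npw
D = np.zeros((N, N), complex)
D[:nl, :nl] = rng.normal(size=(nl, nl)) + 1j * rng.normal(size=(nl, nl))
D[nl:, nl:] = rng.normal(size=(npw, npw)) + 1j * rng.normal(size=(npw, npw))

Dinv = np.linalg.inv(D)
# elements of D^{-1} connecting the two subspaces: all compatible with zero
off = max(np.abs(Dinv[nl:, :nl]).max(), np.abs(Dinv[:nl, nl:]).max())
```

The inverse of a block diagonal matrix is the block diagonal matrix of the block inverses, so $\langle p,s,a|D_{N_5}^{-1}|n\rangle$ vanishes whenever $\langle p,s,a|D_{N_5}|n\rangle$ does.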
The pion propagator in the denominator of $m_{5q}$ is expanded similarly with eigenstates as $$\begin{aligned}
Y(t) &=& \sum_{\vec{x}}{\left\langle P(x)P(y) \right\rangle}
{\nonumber}\\&=&
-\frac{a_5^2}{a^{10}}\sum_{\vec{x},a,\alpha}\sum_{b,\beta}
{\left\langle I\left| D_{N_5}^{-1} \right| J\right\rangle}{\left\langle J\left| (D_{N_5}^{-1})^\dagger \right| I\right\rangle}
{\nonumber}\\&=&
-\frac{a_5^2}{a^{10}}\sum_{\vec{x},a,\alpha}\sum_{b,\beta}
\Biggl(
\sum_{n,m=1}^{N_l}
{{\left\langle I | n \right\rangle}}{\left\langle n\left| D_{N_5}^{-1} \right| J\right\rangle}
{\left\langle J\left| (D_{N_5}^{-1})^\dagger \right| m\right\rangle}{{\left\langle m | I \right\rangle}}
{\nonumber}\\&+&
\sum_{p,s,c}\sum_{k,s',d}
{{\left\langle I | p,s,c \right\rangle}}{\left\langle p,s,c\left| D_{N_5}^{-1} \right| J\right\rangle}
{\left\langle J\left| (D_{N_5}^{-1})^\dagger \right| k,s',d\right\rangle}{{\left\langle k,s',d | I \right\rangle}}
\Biggr).\end{aligned}$$ The contribution from the localized eigenvectors is given by $$\begin{aligned}
Y_l(t)&=&
-\frac{a_5^2}{a^{10}}\sum_{n=1}^{N_l}
\delta_{t,x^0_n}\sum_{b,\beta}
{\left\langle n\left| D_{N_5}^{-1} \right| y,\beta,b\right\rangle}
{\left\langle y,\beta,b\left| (D_{N_5}^{-1})^\dagger \right| n\right\rangle}
{\nonumber}\\&=&
-\frac{a_5^2}{a^{10}}\sum_{n=1}^{N_l}
\delta_{t,x^0_n}\sum_{b,\beta}
\sum_{z,\gamma,c}{{\left\langle n | z,\gamma,c \right\rangle}}
{\left\langle z,\gamma,c\left| D_{N_5}^{-1} \right| y,\beta,b\right\rangle}
{\nonumber}\\&&\times
\sum_{w,\delta,d}{\left\langle y,\beta,b\left| (D_{N_5}^{-1})^\dagger \right| w,\delta,d\right\rangle}
{{\left\langle w,\delta,d | n \right\rangle}}
{\nonumber}\\&=&
-\frac{a_5^2}{a^{10}}\sum_{n=1}^{N_l}
\delta_{t,x^0_n}\sum_{b,\beta}
{\left\langle x_n,\alpha_n,a_n\left| D_{N_5}^{-1} \right| y,\beta,b\right\rangle}
{\left\langle y,\beta,b\left| (D_{N_5}^{-1})^\dagger \right| x_n,\alpha_n,a_n\right\rangle}
{\nonumber}\\&=&
-\frac{a_5^2}{a^{10}}
\sum_{n=1}^{N_l}
\frac{1}{4N_cN_x}\sum_{\vec{x},\alpha,a}\sum_{\beta,b}
{\left\langle \vec{x},t,\alpha,a\left| D_{N_5}^{-1} \right| y,\beta,b\right\rangle}
{\left\langle y,\beta,b\left| (D_{N_5}^{-1})^\dagger \right| \vec{x},t,\alpha,a\right\rangle}
{\nonumber}\\&=&
\frac{N_l}{4N_cN_x}Y(t).\end{aligned}$$ Taking the $t\to\infty$ limit, the contribution from the continuous sector becomes $$\begin{aligned}
Y_c(t)|_{t\to\infty} &=&
-\frac{a_5^2}{a^{10}}\sum_{\vec{x},a,\alpha}\sum_{b,\beta}
\sum_{p,s}\sum_{k,s'}
\frac{1}{N_p}U_\alpha(p,s)U_\alpha^\dagger(k,s')e^{i(p-k)x}
{\nonumber}\\&&\times
{\left\langle p,s,a\left| D_{N_5}^{-1} \right| y,\beta,b\right\rangle}
{\left\langle y,\beta,b\left| (D_{N_5}^{-1})^\dagger \right| k,s',a\right\rangle}
{\nonumber}\\&=&
-\frac{a_5^2}{a^{10}}n_{xyz}'
\frac{1}{N_p}\sum_{p,s,a}\sum_{b,\beta}
{\left\langle p,s,a\left| D_{N_5}^{-1} \right| y,\beta,b\right\rangle}
{\left\langle y,\beta,b\left| (D_{N_5}^{-1})^\dagger \right| p,s,a\right\rangle}
{\nonumber}\\&=&
-\frac{a_5^2}{a^{10}}n_{xyz}'
\frac{1}{N_p}\sum_{p,s,a}
{\nonumber}\\&\times&
\sum_{b,\beta}\Biggl(
\sum_{n,m=1}^{N_l}{\left\langle p,s,a\left| D_{N_5}^{-1} \right| n\right\rangle}{{\left\langle n | y,\beta,b \right\rangle}}
{{\left\langle y,\beta,b | m \right\rangle}}{\left\langle m\left| (D_{N_5}^{-1})^\dagger \right| p,s,a\right\rangle}
{\nonumber}\\&&
+\sum_{k,s',c}{\left\langle p,s,a\left| D_{N_5}^{-1} \right| k,s',c\right\rangle}
{{\left\langle k,s',c | y,\beta,b \right\rangle}}
{\nonumber}\\&&\quad\times
\sum_{q,s'',d}{{\left\langle y,\beta,b | q,s'',d \right\rangle}}
{\left\langle q,s'',d\left| (D_{N_5}^{-1})^\dagger \right| p,s,a\right\rangle}
\Biggr)
{\nonumber}\\&=&
-\frac{a_5^2}{a^{10}}n_{xyz}'\frac{1}{N_p}
\sum_{p}F(p).\end{aligned}$$ As a consequence we have a relation $$\begin{aligned}
Y(t\to\infty)=
-\frac{4N_cN_x}{4N_cN_x-N_l}
\frac{a_5^2}{a^{10}}n_{xyz}'
\frac{1}{N_p}\sum_{p}F(p).\end{aligned}$$
To proceed further we need to know the behavior of $F(p)$ as a function of $p$. We can directly see that the function $F(p)=F_c(p)$ is almost constant over the whole range of $p$, as shown in Fig. \[fig:Fp\]. We therefore assume that $F(p)$ is independent of $p$.
By adopting this assumption the anomalous quark mass becomes $$\begin{aligned}
m_{5q}&=&\frac{\lim_{t\to\infty}X(t)}{\lim_{t\to\infty}Y(t)}
{\nonumber}\\&=&
\frac{1}{a_5}\frac{1}{4N_cN_x}\sum_{n}
\left(\frac{1}{2\cosh\frac{N_5}{2}a_5{\widetilde{\lambda}}(\lambda'_n)}\right)^2
{\nonumber}\\&&
-\frac{1}{Y(t\to\infty)}
\frac{a_5}{a^{10}}n_{xyz}'\frac{1}{N_p}\sum_{p}
\left(\frac{1}{2\cosh\frac{N_5}{2}a_5{\widetilde{\lambda}}(p)}\right)^2
F(p)
{\nonumber}\\&=&
\frac{1}{a_5}\frac{1}{4N_cN_x}\sum_{n}
\left(\frac{1}{2\cosh\frac{N_5}{2}a_5{\widetilde{\lambda}}(\lambda'_n)}\right)^2
{\nonumber}\\&&
+\frac{4N_cN_x-N_l}{4N_cN_x}
\frac{1}{a_5}
\frac{1}{\sum_{k}F(k)}
\sum_{p}
\left(\frac{1}{2\cosh\frac{N_5}{2}a_5{\widetilde{\lambda}}(p)}\right)^2
F(p)
{\nonumber}\\&=&
\frac{1}{a_5}\frac{1}{4N_cN_x}
\left(
\sum_{n}\left(\frac{1}{2\cosh\frac{N_5}{2}a_5{\widetilde{\lambda}}(\lambda'_n)}\right)^2
+4N_c\sum_{p}\left(\frac{1}{2\cosh\frac{N_5}{2}a_5{\widetilde{\lambda}}(p)}\right)^2
\right)
{\nonumber}\\&=&
\frac{1}{a_5}\frac{1}{4N_cN_x}
\left(
\sum_{n}
\left(\frac{1}{2\cosh\frac{N_5}{2}a_5{\widetilde{\lambda}}(\lambda'_n)}\right)^2
+
\sum_{n}\rho_n\left(\frac{1}{2\cosh\frac{N_5}{2}a_5{\widetilde{\lambda}}_n}\right)^2
\right),\end{aligned}$$ where $\rho_n$ is the number of degenerate states at ${\widetilde{\lambda}}_n$, $$\begin{aligned}
\rho_n=\sum_{p,\alpha,a}\delta_{{\widetilde{\lambda}}_n,{\widetilde{\lambda}}(p)}.\end{aligned}$$
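The structure of this final formula can be explored with a short numerical sketch (the eigenvalue set, $a_5$, and the $N_5$ values below are illustrative assumptions): a gapped set of continuous-type eigenvalues contributes terms that fall off exponentially in $N_5$, while a single near-zero localized eigenvalue contributes a term that stays close to $1/4$ until $N_5\widetilde{\lambda}\sim 1$:

```python
import numpy as np

a5 = 1.0

def term(lam, N5):
    """One contribution (2 cosh(N5*a5*lam/2))^(-2) to the m_5q formula."""
    return (2.0 * np.cosh(0.5 * N5 * a5 * lam)) ** -2

lam_cont = np.linspace(0.4, 2.0, 200)   # gapped continuous-type eigenvalues
lam_loc = 1e-3                          # one near-zero localized eigenvalue

N5s = (10, 20, 40)
conts = [term(lam_cont, N5).sum() for N5 in N5s]
locs = [term(lam_loc, N5) for N5 in N5s]
```

Doubling $N_5$ suppresses the continuous contribution by orders of magnitude but leaves the localized one essentially unchanged; this is the origin of an $m_{5q}$ that does not decay exponentially in $N_5$.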
Anomalous quark mass with partially localized and plane-wave eigenvectors {#sec:appendix-c}
=========================================================================
We consider a system with $N_l$ subspaces ${\cal S}_n$ ($n=1,\cdots,N_l$), each of which is spanned by partially localized eigenvectors $\psi_n^{(i)}$. We assume that the subspaces ${\cal S}_n$ do not overlap with each other and that the volume of ${\cal S}_n$ is $h_n$. The remaining subspace ${\widetilde{\cal S}}$ of the system is spanned by plane-wave eigenvectors as in the previous appendix. If we set the size of ${\widetilde{\cal S}}$ to $4N_cN_p$, the total number of degrees of freedom of the system becomes $$\begin{aligned}
4N_cN_x=\sum_{n=1}^{N_l}h_n+4N_cN_p.\end{aligned}$$
The partially localized eigenvector is given by $$\begin{aligned}
&&
H'{\left| n,i\right\rangle}=\lambda_n^{(i)}{\left| n,i\right\rangle},
\\&&
{{\left\langle I | n,i \right\rangle}}=\frac{1}{\sqrt{h_n}}\psi_n^{(i)}(I),\end{aligned}$$ where the function $\psi_n^{(i)}(I)$ is non-zero only within ${\cal S}_n$. We then have $$\begin{aligned}
\psi_n^{(i)\dagger}(I)\psi_m^{(j)}(I)=0\end{aligned}$$ for $n\neq m$ and any $i,j$. The orthogonality relation is $$\begin{aligned}
\sum_{I}\psi_n^{(i)\dagger}(I)\psi_m^{(j)}(I)
=h_n \delta_{n,m}\delta_{i,j}.\end{aligned}$$ The plane-wave eigenvector is the same as in the previous appendix [(\[eqn:plane-wave\])]{} with the constraint $$\begin{aligned}
{{\left\langle n,i | I \right\rangle}}{{\left\langle I | p,s,b \right\rangle}}=0.\end{aligned}$$ The completeness condition of this system is given by $$\begin{aligned}
1=\sum_{n=1}^{N_l}\sum_{i=1}^{h_n}{\left| n,i\right\rangle}{\left\langle n,i\right|}
+\sum_{l,s}\sum_{p}\sum_{a}{\left| p,l,s,a\right\rangle}{\left\langle p,l,s,a\right|}
\label{eqn:complete-2}\end{aligned}$$ with $$\begin{aligned}
\sum_{n=1}^{N_l}\sum_{i=1}^{h_n}\frac{1}{h_n}\psi_n^{(i)}(I)
\psi_n^{(i)\dagger}(J)
=\delta_{I,J}.\end{aligned}$$
We expand the two Green functions in $m_{5q}$ with the complete set in the above. $$\begin{aligned}
X(t) &=& \sum_{\vec{x}}{\left\langle J_{5q}(x)P(y) \right\rangle}
{\nonumber}\\&=&
-\frac{a_5}{a^{10}}\sum_{\vec{x},\alpha,a}\sum_{b,\beta}
{\left\langle I\left| \frac{1}{2\cosh\frac{N_5}{2}a_5{\widetilde{H}}}D_{N_5}^{-1} \right| J\right\rangle}
{\left\langle J\left| \left(D_{N_5}^{-1}\right)^\dagger
\frac{1}{2\cosh\frac{N_5}{2}a_5{\widetilde{H}}} \right| I\right\rangle}
{\nonumber}\\&=&
-\frac{a_5}{a^{10}}\sum_{\vec{x},\alpha,a}\sum_{b,\beta}\Biggl(
\sum_{n,m=1}^{N_l}\sum_{i,j=1}^{h_n}
{{\left\langle x,\alpha,a | n,i \right\rangle}}
{\left\langle n,i\left| \frac{1}{2\cosh\frac{N_5}{2}a_5{\widetilde{H}}}D_{N_5}^{-1} \right| J\right\rangle}
{\nonumber}\\&&\times
{\left\langle J\left| \left(D_{N_5}^{-1}\right)^\dagger\frac{1}{2\cosh\frac{N_5}{2}a_5{\widetilde{H}}} \right| m,j\right\rangle}
{{\left\langle m,j | x,\alpha,a \right\rangle}}
{\nonumber}\\&+&
\sum_{p,s,c}\sum_{k,t,d}
{{\left\langle x,\alpha,a | p,s,c \right\rangle}}
{\left\langle p,s,c\left| \frac{1}{2\cosh\frac{N_5}{2}a_5{\widetilde{H}}}D_{N_5}^{-1} \right| J\right\rangle}
{\nonumber}\\&&\times
{\left\langle J\left| \left(D_{N_5}^{-1}\right)^\dagger\frac{1}{2\cosh\frac{N_5}{2}a_5{\widetilde{H}}} \right| k,t,d\right\rangle}
{{\left\langle k,t,d | x,\alpha,a \right\rangle}}
\Biggr).\end{aligned}$$ The contribution $X_c$ from the continuous eigenvalues is the same as in the previous appendix except for the definition of $F(p)$: $$\begin{aligned}
F(p)&=& F_l(p)+F_c(p),
\\
F_l(p) &=&
\sum_{n=1}^{N_l}\sum_{i,j=1}^{h_n}\sum_{p,s,a}
{\left\langle p,s,a\left| D_{N_5}^{-1} \right| n,i\right\rangle}
\frac{1}{h_n}\psi_n^{(i)\dagger}(J)\psi_n^{(j)}(J)
{\left\langle n,j\left| \left(D_{N_5}^{-1}\right)^\dagger \right| p,s,a\right\rangle},
\\
F_c(p)&=&
\frac{4N_c}{N_p}\left(\frac{4}{C^2(p)+E^2(p)}\right).\end{aligned}$$ Here we use the block diagonal condition $$\begin{aligned}
{\left\langle p,s,a\left| D_{N_5}^{-1} \right| n,i\right\rangle}=0\end{aligned}$$ by assuming the $\gamma_5$ orthogonality ${\left\langle n,i\left| \gamma_5 \right| p,s,a\right\rangle}=0$. $X_c$ is written in terms of the function $F(p)=F_c(p)$.
Contribution from the localized mode becomes $$\begin{aligned}
X_l(t) &=&
-\frac{a_5}{a^{10}}
\sum_{b,\beta}
\sum_{n=1}^{N_l}\sum_{i,j=1}^{h_n}
\frac{1}{h_n}f_n^{i,j}(t)
{\nonumber}\\&&\times
{\left\langle n,i\left| \frac{1}{2\cosh\frac{N_5}{2}a_5{\widetilde{H}}}D_{N_5}^{-1} \right| J\right\rangle}
{\left\langle J\left| \left(D_{N_5}^{-1}\right)^\dagger\frac{1}{2\cosh\frac{N_5}{2}a_5{\widetilde{H}}} \right| n,j\right\rangle}
{\nonumber}\\&=&
-\frac{a_5}{a^{10}}\sum_{n=1}^{N_l}\sum_{i,j=1}^{h_n}\frac{1}{h_n}f_n^{i,j}(t)
\left(\frac{1}{2\cosh\frac{N_5}{2}a_5{\widetilde{\lambda}}(\lambda^{'(i)}_n)}\right)
\left(\frac{1}{2\cosh\frac{N_5}{2}a_5{\widetilde{\lambda}}(\lambda^{'(j)}_n)}\right)
{\nonumber}\\&&\times
\sum_{b,\beta}
\sum_{z,\gamma,c}\sum_{w,\delta,d}
{{\left\langle n,i | z,\gamma,c \right\rangle}}
{\left\langle z,\gamma,c\left| D_{N_5}^{-1} \right| J\right\rangle}
{\left\langle J\left| \left(D_{N_5}^{-1}\right)^\dagger \right| w,\delta,d\right\rangle}
{{\left\langle w,\delta,d | n,j \right\rangle}}
{\nonumber}\\&=&
-\frac{a_5}{a^{10}}
\sum_{n=1}^{N_l}\sum_{i,j=1}^{h_n}\frac{1}{h_n}f_n^{i,j}(t)
\left(\frac{1}{2\cosh\frac{N_5}{2}a_5{\widetilde{\lambda}}(\lambda^{'(i)}_n)}\right)
\left(\frac{1}{2\cosh\frac{N_5}{2}a_5{\widetilde{\lambda}}(\lambda^{'(j)}_n)}\right)
{\nonumber}\\&&\times
\sum_{b,\beta}
\sum_{z,\gamma,c}\sum_{w,\delta,d}\frac{1}{h_n}
\psi_n^{(i)\dagger}(z,\gamma,c)
{\left\langle z,\gamma,c\left| D_{N_5}^{-1} \right| J\right\rangle}
{\left\langle J\left| \left(D_{N_5}^{-1}\right)^\dagger \right| w,\delta,d\right\rangle}
\psi_n^{(j)}(w,\delta,d),
{\nonumber}\\\end{aligned}$$ where we assumed orthogonality between different subspaces $n\neq m$; $f_n^{i,j}$ is defined by $$\begin{aligned}
\sum_{\vec{x}}\sum_{\alpha,a}\psi_n^{(i)}(I)\psi_m^{(j)\dagger}(I)
=\delta_{n,m}f_n^{i,j}(t).\end{aligned}$$ $f_n^{i,j}(t)$ is non-zero when $t\in{\cal S}_n$. One comment is in order here. It is plausible to assume that the eigenfunction $\psi_n^{(i)}$ becomes plane-wave-like but rapidly oscillating inside ${\cal S}_n$ for large $i$. Using this fact we see that $f_n^{i,j}$ is suppressed for different $i$, $j$ even if the summation over $t$ is not taken.
We need more information on the propagator multiplied with eigenvectors $$\begin{aligned}
\frac{1}{h_n}\sum_{I}\sum_{K}
\psi_n^{(i)\dagger}(I)
{\left\langle I\left| D_{N_5}^{-1} \right| J\right\rangle}
{\left\langle J\left| \left(D_{N_5}^{-1}\right)^\dagger \right| K\right\rangle}
\psi_n^{(j)}(K)
\label{eqn:propXeigen}\end{aligned}$$ to discuss the detailed properties of $X_l$ further. We start by adopting the assumption that the propagator $$\begin{aligned}
f(I,K)=
{\left\langle I\left| D_{N_5}^{-1} \right| J\right\rangle} {\left\langle J\left| \left(D_{N_5}^{-1}\right)^\dagger \right| K\right\rangle}\end{aligned}$$ is a slowly varying function of $I$ and $K$. We then investigate the behavior of [(\[eqn:propXeigen\])]{} for two typical forms of eigenfunctions. The eigenfunctions are classified into two types according to the value of the following sum of a single eigenfunction, $$\begin{aligned}
C=\sum_{I}\psi_n^{(i)}(I).\end{aligned}$$
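The two types can be contrasted numerically; the one-dimensional lattice, the width $\delta$, and the particular mode shapes below are our illustrative choices.

```python
import numpy as np

# The classification criterion C = sum_I psi(I): a localized exponential
# mode gives C != 0, while a rapidly oscillating (plane-like) mode gives
# C ~ 0.  1d lattice of N sites; delta and the mode shapes are toy choices.
N, delta = 256, 8.0
xs = np.arange(N)

psi_loc = np.exp(-np.abs(xs - N / 2) / delta) / np.sqrt(delta)   # type (i)
psi_osc = np.sqrt(2.0 / N) * np.sin(2 * np.pi * 20 * xs / N)     # type (ii)

C_loc = psi_loc.sum()    # ~ 2*sqrt(delta), clearly non-zero
C_osc = psi_osc.sum()    # ~ 0 (full periods cancel)
print(C_loc, C_osc)
```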
\(i) Eigenfunction $\psi_n^{(i)}$ with $C\neq0$, which may be typical for the lowest mode $i=1$. It seems to be plausible [@HJL98; @Nagai00] to approximate this lowest mode with the exponential form. For simplicity we consider a one-dimensional case $$\begin{aligned}
\psi(x)=\frac{1}{\sqrt{\delta}}e^{-|x-x_0|/\delta},\end{aligned}$$ where $\delta$ is the width and $x_0$ the center of the eigenfunction. Since the integral $\int dx\,\psi(x)$ takes a non-zero value, multiplication with a smooth function $f(x,y)$ produces an enhancement factor $4\delta$, proportional to the width of the eigenfunction: $$\begin{aligned}
&&
\frac{1}{\delta}\int_{-\infty}^{\infty}dx dy e^{-|x-x_0|/\delta}f(x,y)
e^{-|y-x_0|/\delta}
{\nonumber}\\&&=
\frac{1}{\delta}\int_{-\infty}^{\infty}dz dw e^{-|z|/\delta}
f(z+x_0,w+x_0)e^{-|w|/\delta}
{\nonumber}\\&&=
\frac{1}{\delta}\int_{-\infty}^{\infty}dz dw e^{-|z|/\delta}
f(x_0,x_0)e^{-|w|/\delta}
+{\cal O}({\partial}f(x_0,x_0))
=(4\delta)\ f(x_0,x_0).\end{aligned}$$
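The factor $4\delta\,f(x_0,x_0)$ can be checked numerically; the grid, the cut-off, and the particular slowly varying $f$ in the following sketch are our illustrative choices, not quantities from the text.

```python
import numpy as np

# Numerical check of the enhancement factor: for a slowly varying f(x,y),
#   (1/delta) Int dx dy  e^{-|x-x0|/delta} f(x,y) e^{-|y-x0|/delta}
#     ~  4*delta * f(x0, x0).
delta, x0 = 0.5, 0.0
L = 15.0                                          # cut-off, >> delta
x, dx = np.linspace(-L, L, 4001, retstep=True)
f = lambda u, v: 1.0 / (1.0 + 0.01 * (u**2 + v**2))   # slowly varying

X, Y = np.meshgrid(x, x, indexing="ij")
w = np.exp(-np.abs(X - x0) / delta) * f(X, Y) * np.exp(-np.abs(Y - x0) / delta)
val = w.sum() * dx * dx / delta                   # Riemann sum of the integral

print(val, 4 * delta * f(x0, x0))                 # both close to 2.0
```

The residual difference is of order $\delta^2\,\partial^2 f$, consistent with the ${\cal O}({\partial}f(x_0,x_0))$ correction above.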
The extension to four-dimensions is given by $$\begin{aligned}
\psi_n^{(1)}(x)=\frac{1}{\delta^2}e^{-\sum_i|x_i-(x_n)_i|/\delta}\end{aligned}$$ and [(\[eqn:propXeigen\])]{} becomes $$\begin{aligned}
&&
\frac{1}{h_n}\sum_{I}\sum_{K}
\psi_n^{(i)\dagger}(I){\left\langle I\left| D_{N_5}^{-1} \right| J\right\rangle}
{\left\langle J\left| \left(D_{N_5}^{-1}\right)^\dagger \right| K\right\rangle}\psi_n^{(j)}(K)
{\nonumber}\\&&
=4^4h_n\delta_{i,j}
{\left\langle I_n\left| D_{N_5}^{-1} \right| J\right\rangle}{\left\langle J\left| \left(D_{N_5}^{-1}\right)^\dagger \right| I_n\right\rangle},\end{aligned}$$ where $I_n$ is the peak of the exponential in $\psi_n^{(i)}(I)$ and $h_n$ is given by $h_n=\delta^4$. Here we use the property that eigenfunctions $\psi_n^{(i)}$ with higher $i>1$ tend to be oscillating, so that the single summation $\sum_{I}\psi_n^{(i)}(I)$ is suppressed. Together with the suppression of $f_n^{i,j}$ for $i \neq j$, this yields the factor $\delta_{i,j}$ above.
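The weight $4^4h_n$ follows from the factorised four-dimensional sum; a quick numerical check (with our own toy lattice and width $\delta$) confirms that $\left(\sum_I\psi_n^{(1)}(I)\right)^2$ approaches $4^4\delta^4=4^4h_n$:

```python
import numpy as np

# For the four-dimensional exponential mode
#   psi(x) = delta^{-2} exp(-sum_i |x_i - c_i| / delta),
# the single sum S = sum_x psi(x) factorises into four 1d sums (~ 2*delta
# each), so S^2 / h_n -> 4^4 = 256 with h_n = delta^4.
delta = 8.0
xs = np.arange(-200, 201)                   # 1d lattice, wide enough
s1 = np.exp(-np.abs(xs) / delta).sum()      # ~ 2*delta per dimension
S = s1**4 / delta**2                        # 4d sum of psi
weight = S**2 / delta**4                    # -> 4^4 = 256
print(weight)
```

The small excess over 256 is the discretisation correction $s_1=\coth(1/2\delta)\simeq 2\delta+1/6\delta$, which disappears in the continuum limit.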
The contribution of the lowest eigenvector $i=1$ to the numerator term $X_l$ becomes $$\begin{aligned}
X_l(t) &=&
-\frac{a_5}{a^{10}}\sum_{n=1}^{N_l}\sum_{i=1}^{h_n}\delta_{i,1}4^4f_n^{i,i}(t)
\left(\frac{1}{2\cosh\frac{N_5}{2}a_5{\widetilde{\lambda}}(\lambda^{'(i)}_n)}\right)^2
\sum_{b,\beta}
{\left\langle I_n\left| D_{N_5}^{-1} \right| J\right\rangle}
{\left\langle J\left| \left(D_{N_5}^{-1}\right)^\dagger \right| I_n\right\rangle}
{\nonumber}\\&=&
-\frac{a_5}{a^{10}}\frac{1}{4N_cn_{xyz}}
\sum_{n=1}^{N_l}
\frac{4^4h_n}{h_{n,t}}\delta_{t\in{\cal S}_n}
\left(\frac{1}{2\cosh\frac{N_5}{2}a_5{\widetilde{\lambda}}(\lambda^{'(1)}_n)}\right)^2
{\nonumber}\\&&\times
\sum_{\vec{x}}\sum_{\alpha,a}
\sum_{b,\beta}
{\left\langle \vec{x},t,\alpha,a\left| D_{N_5}^{-1} \right| y,\beta,b\right\rangle}
{\left\langle y,\beta,b\left| \left(D_{N_5}^{-1}\right)^\dagger \right| \vec{x},t,\alpha,a\right\rangle},
{\nonumber}\\&=&
\frac{1}{a_5}Y(t)
\frac{1}{4N_cN_{x}}\sum_{n=1}^{N_l}4^4h_n
\left(\frac{1}{2\cosh\frac{N_5}{2}a_5{\widetilde{\lambda}}(\lambda^{'(1)}_n)}\right)^2.\end{aligned}$$ Here we use a relation for $i=j$ $$\begin{aligned}
&&
\frac{1}{h_n}f_n^{i,i}(t)=\frac{1}{h_{n,t}}\delta_{t\in{\cal S}_n},\end{aligned}$$ where $h_{n,t}$ is the width of ${\cal S}_n$ in the $t$ direction.
\(ii) Plane-like but rapidly oscillating eigenvector $\psi_n^{(i)}$ with $C\simeq0$, which is typical for the excited modes with $i>1$. In this case, as mentioned above, the single summation $\sum_{I}\psi_n^{(i)}(I)$ is suppressed, and [(\[eqn:propXeigen\])]{} takes a non-zero value only when $I=K$, $$\begin{aligned}
&&
\frac{1}{h_n}\sum_{I}\sum_{K}
\psi_n^{(i)\dagger}(I){\left\langle I\left| D_{N_5}^{-1} \right| J\right\rangle}
{\left\langle J\left| \left(D_{N_5}^{-1}\right)^\dagger \right| K\right\rangle}\psi_n^{(j)}(K)
{\nonumber}\\&&=
\frac{1}{h_n}\sum_{I}
\psi_n^{(i)\dagger}(I){\left\langle I\left| D_{N_5}^{-1} \right| J\right\rangle}
{\left\langle J\left| \left(D_{N_5}^{-1}\right)^\dagger \right| I\right\rangle}\psi_n^{(j)}(I)
{\nonumber}\\&&=
\delta_{i,j}
{\left\langle I\left| D_{N_5}^{-1} \right| J\right\rangle}{\left\langle J\left| \left(D_{N_5}^{-1}\right)^\dagger \right| I\right\rangle},\end{aligned}$$ where $I$ and $J$ are not summed over. $X_l$ then becomes $$\begin{aligned}
X_l(t) &=&
-\frac{a_5}{a^{10}}\sum_{n=1}^{N_l}\sum_{i>1}^{h_n}\frac{1}{h_n}f_n^{i,i}(t)
\left(\frac{1}{2\cosh\frac{N_5}{2}a_5{\widetilde{\lambda}}(\lambda^{'(i)}_n)}\right)^2
\sum_{b,\beta}
{\left\langle I\left| D_{N_5}^{-1} \right| J\right\rangle}
{\left\langle J\left| \left(D_{N_5}^{-1}\right)^\dagger \right| I\right\rangle}
{\nonumber}\\&=&
-\frac{a_5}{a^{10}}\frac{1}{4N_cn_{xyz}}
\sum_{n=1}^{N_l}\sum_{i>1}^{h_n}\frac{1}{h_{n,t}}\delta_{t\in{\cal S}_n}
\left(\frac{1}{2\cosh\frac{N_5}{2}a_5{\widetilde{\lambda}}(\lambda^{'(i)}_n)}\right)^2
{\nonumber}\\&&\times
\sum_{\vec{x}}\sum_{\alpha,a}
\sum_{b,\beta}
{\left\langle \vec{x},t,\alpha,a\left| D_{N_5}^{-1} \right| y,\beta,b\right\rangle}
{\left\langle y,\beta,b\left| \left(D_{N_5}^{-1}\right)^\dagger \right| \vec{x},t,\alpha,a\right\rangle},
{\nonumber}\\&=&
\frac{1}{a_5}Y(t)
\frac{1}{4N_cN_{x}}\sum_{n=1}^{N_l}\sum_{i>1}^{h_n}
\left(\frac{1}{2\cosh\frac{N_5}{2}a_5{\widetilde{\lambda}}(\lambda^{'(i)}_n)}\right)^2 .\end{aligned}$$ In this case $m_{5q}$ is given by the same formula as in the previous section for the completely localized eigenvectors. In addition, we assume that the spin and color parts of the eigenfunctions are always rapidly oscillating and set their enhancement factor to unity, as discussed above.
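The reduction used in case (ii) — a contraction of orthonormal oscillating modes against a slowly varying weight collapsing to $\delta_{i,j}$ times an average — can be illustrated numerically; the modes and the weight $g(I)$ below are toy choices standing in for the propagator factor at $I=K$.

```python
import numpy as np

# For orthonormal, rapidly oscillating modes and a slowly varying weight
# g(I), the contraction  sum_I psi_i^*(I) g(I) psi_j(I)  is
# ~ delta_{ij} * <g>: the diagonal element equals the mean of g exactly,
# while off-diagonal elements pick up only the tiny Fourier content of g
# at the momentum difference.
N = 256
I = np.arange(N)
g = 1.0 / (1.0 + 0.3 * np.sin(np.pi * I / N) ** 2)   # slowly varying

def psi(i):
    return np.exp(2j * np.pi * i * I / N) / np.sqrt(N)

diag = np.sum(np.conj(psi(20)) * g * psi(20))   # = mean of g
off = np.sum(np.conj(psi(20)) * g * psi(23))    # ~ 0
print(diag.real, abs(off))
```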
The pion propagator in the denominator is expanded as $$\begin{aligned}
Y(t) &=& \sum_{\vec{x}}{\left\langle P(x)P(y) \right\rangle}
{\nonumber}\\&=&
-\frac{a_5^2}{a^{10}}\sum_{\vec{x},a,\alpha}\sum_{b,\beta}
\Biggl(
\sum_{n,m=1}^{N_l}\sum_{i,j=1}^{h_n}
\frac{1}{h_n}\psi_n^{(i)}(I)\psi_m^{(j)}(I)
{\left\langle n,i\left| D_{N_5}^{-1} \right| J\right\rangle}
{\left\langle J\left| (D_{N_5}^{-1})^\dagger \right| m,j\right\rangle}
{\nonumber}\\&+&
\sum_{p,s,c}\sum_{k,s',d}
\frac{1}{N_p}U_\alpha(p,s)U_\alpha^\dagger(k,s')e^{i(p-k)x}
{\left\langle p,s,a\left| D_{N_5}^{-1} \right| J\right\rangle}
{\left\langle J\left| (D_{N_5}^{-1})^\dagger \right| k,s',b\right\rangle}
\Biggr).\end{aligned}$$ By inserting a complete set $1=\sum{\left| I\right\rangle}{\left\langle I\right|}$, the contribution from the localized eigenvectors is rewritten in terms of $Y(t)$: $$\begin{aligned}
Y_l(t)&=&
-\frac{a_5^2}{a^{10}}\sum_{\vec{x},a,\alpha}
\sum_{n,m=1}^{N_l}\sum_{i,j=1}^{h_n}\frac{1}{h_n}\psi_n^{(i)}(I)\psi_m^{(j)}(I)
\sum_{b,\beta}
\sum_{z,\gamma,c}{{\left\langle n,i | z,\gamma,c \right\rangle}}
{\left\langle z,\gamma,c\left| D_{N_5}^{-1} \right| J\right\rangle}
{\nonumber}\\&&\times
\sum_{w,\delta,d}{\left\langle J\left| (D_{N_5}^{-1})^\dagger \right| w,\delta,d\right\rangle}
{{\left\langle w,\delta,d | m,j \right\rangle}}
{\nonumber}\\&=&
-\frac{a_5^2}{a^{10}}\sum_{\vec{x},a,\alpha}
\sum_{n,m=1}^{N_l}\sum_{i,j=1}^{h_n}\frac{1}{h_n}\psi_n^{(i)}(I)\psi_m^{(j)}(I)
{\nonumber}\\&&\times
\sum_{b,\beta}
\sum_{z,\gamma,c}
\frac{1}{h_n}\psi_n^{(i)\dagger}(z,\gamma,c)
{\left\langle z,\gamma,c\left| D_{N_5}^{-1} \right| J\right\rangle}
\sum_{w,\delta,d}{\left\langle J\left| (D_{N_5}^{-1})^\dagger \right| w,\delta,d\right\rangle}
\psi_m^{(j)}(w,\delta,d)
{\nonumber}\\&=&
-\frac{a_5^2}{a^{10}}\sum_{\vec{x},a,\alpha}
\sum_{b,\beta}
{\left\langle I\left| D_{N_5}^{-1} \right| J\right\rangle}{\left\langle J\left| (D_{N_5}^{-1})^\dagger \right| I\right\rangle}_{I\in\cup_n{\cal S}_n}
{\nonumber}\\&=&
\frac{\sum_{n=1}^{N_l} h_n}{4N_cN_x}Y(t),\end{aligned}$$ where we take the average over $I\in\cup_n{\cal S}_n$.
The contribution from the plane-wave modes is the same as in the completely localized case, by the off-diagonal property ${\left\langle p,s,a\left| D_{N_5}^{-1} \right| n,i\right\rangle}=0$. We now have a relation that expresses $Y(t)$ in terms of $F(p)$ only, $$\begin{aligned}
&&
Y(t\to\infty)=
-\frac{4N_cN_x}{4N_cN_x-hN_l}
\frac{a_5^2}{a^{10}}n_{xyz}'
\frac{4N_c}{N_p}\sum_{p}F(p),\end{aligned}$$ where $ h N_l \equiv \sum_{n=1}^{N_l} h_n$, and the anomalous quark mass $m_{5q}$ can be written as $$\begin{aligned}
m_{5q}&=&\frac{\lim_{t\to\infty}X(t)}{\lim_{t\to\infty}Y(t)}
{\nonumber}\\&=&
\frac{1}{a_5}\frac{1}{4N_cN_x}
\Biggl(
\sum_{n=1}^{N_l}4^4h_n
\left(\frac{1}{2\cosh\frac{N_5}{2}a_5{\widetilde{\lambda}}(\lambda^{'(1)}_n)}\right)^2
+\sum_{n=1}^{N_l}\sum_{i>1}^{h_n}
\left(\frac{1}{2\cosh\frac{N_5}{2}a_5{\widetilde{\lambda}}(\lambda^{'(i)}_n)}\right)^2
{\nonumber}\\&&
+4N_c\sum_{n}\rho_n\left(\frac{1}{2\cosh\frac{N_5}{2}a_5{\widetilde{\lambda}}_n}
\right)^2
\Biggr) .\end{aligned}$$
In the actual case, the weight factor for the contribution from partially localized eigenstates lies between 1 and $4^4h_n$ depending on the index $i$. Therefore we adopt $\tilde h_n^{(i)}$ such that $1 \le \tilde h_n^{(i)} \le 4^4h_n$ as the weight factor in the text.
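The structure of the final formula for $m_{5q}$ can be made concrete with a small numerical sketch: each mode contributes $\bigl(1/2\cosh\frac{N_5}{2}a_5\widetilde{\lambda}\bigr)^2$, so modes with $a_5\widetilde{\lambda}$ away from zero are exponentially suppressed as $N_5$ grows. The eigenvalue sample, the weight factors, and the dropped overall normalisation below are invented for illustration only.

```python
import numpy as np

# Toy evaluation of the mode sum in the m_5q formula (overall factors such
# as 1/(4*Nc*Nx) omitted).  Contributions fall off exponentially in N5.
def m5q(N5, a5, lams, weights):
    c = 1.0 / (2.0 * np.cosh(0.5 * N5 * a5 * lams))
    return np.sum(weights * c**2) / a5

a5 = 1.0
lams = np.array([0.05, 0.10, 0.20, 0.40])   # toy |tilde-lambda| values
w = np.array([4.0, 2.0, 1.0, 1.0])          # toy weight factors

print(m5q(16, a5, lams, w), m5q(32, a5, lams, w))   # decreases with N5
```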
[99]{}
P. Ginsparg and K. Wilson, [[[Phys. Rev.]{}]{} [D25]{}, 2649 (1982)]{}.
M. Lüscher, [[[Phys. Lett.]{}]{} [B428]{}, 342 (1998)]{}.
P. Hasenfratz, [[[Nucl. Phys.]{}]{} [B525]{}, 401 (1998)]{}. P. Hasenfratz, V. Laliena and F. Niedermayer, [[[Phys. Lett.]{}]{} [B427]{}, 125 (1998)]{}.
H. Neuberger, [[[Phys. Lett.]{}]{} [B417]{}, 141 (1998)]{}; [[[Phys. Lett.]{}]{} [B427]{}, 353 (1998)]{}; [[[Phys. Rev.]{}]{} [D57]{}, 5417 (1998)]{}.
Y. Kikukawa and T. Noguchi, hep-lat/9902022.
R. Narayanan and H. Neuberger, [[[Nucl. Phys.]{}]{} [B412]{}, 574 (1994)]{}; [[[Nucl. Phys.]{}]{} [B443]{}, 305 (1995)]{}.
D. Kaplan, [[[Phys. Lett.]{}]{} [B288]{}, 342 (1992)]{}.
Y. Shamir, [[[Nucl. Phys.]{}]{} [B406]{}, 90 (1993)]{}.
V. Furman and Y. Shamir, [[[Nucl. Phys.]{}]{} [B439]{}, 54 (1995)]{}.
T. Blum and A. Soni, [[[Phys. Rev.]{}]{} [D56]{}, 174 (1997)]{}; [[[Phys. Rev. Lett.]{}]{} [79]{}, 3595 (1997)]{}; hep-lat/9712004.
S. Aoki, T. Izubuchi, Y. Kuramashi and Y. Taniguchi, [[[Phys. Rev.]{}]{} [D62]{}, 094502 (2000)]{}.
For a review, see T. Blum, [[[Nucl. Phys. B (Proc. Suppl.)]{}]{} [73]{}, 167 (1999)]{} and references therein.
CP-PACS Collaboration, A. Ali Khan [*et al.*]{}, [[[Phys. Rev.]{}]{} [D63]{}, 114504 (2001)]{}; [[[Nucl. Phys. B (Proc. Suppl.)]{}]{} [83-84]{}, 591 (2000)]{}.
T. Blum, P. Chen, N. Christ, C. Cristian, C. Dawson, G. Fleming, A. Kaehler, X. Liao, G. Liu, C. Malureanu, R. Mawhinney, S. Ohta, G. Siegert, A. Soni, C. Sui, P. Vranas, M. Wingate, L. Wu and Y. Zhestkov, hep-lat/0007038; P. Chen, N. Christ, G. Fleming, A. Kaehler, C. Malureanu, R. Mawhinney, G. Siegert, C. Sui, P. M. Vranas, Y. Zhestkov, hep-lat/9812011; L. Wu [*et al*]{}, [[[Nucl. Phys. B (Proc. Suppl.)]{}]{} [83-84]{}, 224 (2000)]{}; G. T. Fleming [*et al*]{}, [[[Nucl. Phys. B (Proc. Suppl.)]{}]{} [83-84]{}, 363 (2000)]{}.
H. Neuberger, [[[Phys. Rev. Lett.]{}]{} [81]{}, 4060 (1998)]{}.
R. G. Edwards, U. M. Heller and R. Narayanan, [[[Nucl. Phys.]{}]{} [B540]{}, 457 (1999)]{}; [[[Phys. Rev.]{}]{} [D59]{}, 094510 (1999)]{}.
T. DeGrand for MILC collaboration, [[[Phys. Rev.]{}]{} [D63]{}, 034503 (2001)]{}.
P. Hernandez, K. Jansen and M. Lüscher, [[[Nucl. Phys.]{}]{} [B552]{}, 363 (1999)]{}.
Y. Kikukawa, [[[Nucl. Phys.]{}]{} [B584]{}, 511 (2000)]{}.
I. Ichinose and K. Nagao, [[[Nucl. Phys.]{}]{} [B577]{}, 279 (2000)]{}; hep-lat/0001030.
R. C. Brower and B. Svetitsky, [[[Phys. Rev.]{}]{} [D61]{}, 114511 (2000)]{}.
M. Golterman and Y. Shamir, [[JHEP]{} [0009]{}, 006 (2000)]{}.
F. Berruto, R. C. Brower and B. Svetitsky, hep-lat/0105016.
S. Aoki, [[[Phys. Rev.]{}]{} [D30]{}, 2653 (1984)]{}; [[[Phys. Rev. Lett.]{}]{} [57]{}, 3136 (1986)]{}; [[[Nucl. Phys.]{}]{} [B314]{}, 79 (1989)]{}.
S. Aoki and Y. Taniguchi, [[[Phys. Rev.]{}]{} [D59]{}, 054510 (1999)]{}.
S. Aoki, T. Kaneda and A. Ukawa, [[[Phys. Rev.]{}]{} [D56]{}, 1808 (1997)]{}.
R. G. Edwards, U. M. Heller and R. Narayanan, [[[Nucl. Phys.]{}]{} [B535]{}, 403 (1998)]{}; [[[Phys. Rev.]{}]{} [D60]{}, 034502 (1999)]{}.
CP-PACS Collaboration, A. Ali Khan [*et al.*]{}, [[[Nucl. Phys. B (Proc. Suppl.)]{}]{} [94]{}, 725 (2001)]{}.
A. Borici, hep-lat/9912040.
CP-PACS Collaboration, in preparation.
Y. Shamir, [[[Phys. Rev.]{}]{} [D62]{}, 054513 (2000)]{}.
H. Neuberger, private communication.
M. Lüscher, private communication.
S. Nakamura, private communication. W. Kirsch, “Random Schrödinger operators”, in Lecture Notes in Physics 345 (Springer-Verlag).
P. W. Anderson, [[[Rev. Mod. Phys.]{}]{} [50]{}, 191 (1978)]{}.
F. Berruto, R. Narayanan and H. Neuberger, [[[Phys. Lett.]{}]{} [B489]{}, 243 (2000)]{}.
R. G. Edwards and U. M. Heller, [[[Phys. Rev.]{}]{} [D63]{}, 094505 (2001)]{}.
P. Hernandez, K. Jansen and M. Lüscher, hep-lat/0007015.
R. B. Laughlin, [[[Phys. Rev.]{}]{} [B23]{}, 5632 (1981)]{}.
---
abstract: 'We review the basic elements of the Minimal Geometric Deformation approach in details. This method has been successfully used to generate brane-world configurations from general relativistic perfect fluid solutions.'
author:
- |
J Ovalle$^{ab}$[^1] $\,$ R Casadio$^{cd}$[^2] $\,$ A Sotomayor$^{e}$[^3]\
\
$^a$[*Departamento de Física, Universidad Simón Bolívar,*]{}\
[*AP 89000, Caracas 1080A, Venezuela*]{}\
$^b$[*The Institute for Fundamental Study, Naresuan University*]{}\
[*Phitsanulok 65000, Thailand*]{}\
$^c$[*Dipartimento di Fisica e Astronomia, Alma Mater Università di Bologna*]{}\
[*via Irnerio 46, 40126 Bologna, Italy*]{}\
$^d$[*Istituto Nazionale di Fisica Nucleare, Sezione di Bologna, I.S. FLAG*]{}\
[*viale Berti Pichat 6/2, 40127 Bologna, Italy*]{}\
$^e$[*Departamento de Matemáticas, Universidad de Antofagasta*]{}\
[*Antofagasta, Chile*]{}
title: |
**The Minimal Geometric Deformation Approach:\
a brief introduction**
---
Introduction
============
General Relativity (GR) represents one of the pillars of modern Physics. The predictions made by this theory include the perihelion shift of Mercury, the deflection of light and gravitational lensing, the gravitational redshift and time delay, and the existence of black holes. The observation of these effects, as well as the recent detection of the gravitational waves GW150914 [@ligo1] and GW151226 [@ligo2], has given GR the status of the benchmark theory of the gravitational interaction (for an excellent review, see Ref. [@cmw] and references therein). Why, then, do we want to find new gravitational theories beyond GR? The reason has to do with some fundamental questions associated with the gravitational interaction which GR does not seem able to answer satisfactorily. One is the problem of dark matter and dark energy, which requires introducing some unknown matter-energy to reconcile GR predictions with the observations of galactic rotation curves and the accelerated expansion of the universe, respectively. Then, there is the difficulty of reconciling GR with the Standard Model of particle physics, or equivalently, the failure to quantise GR by the same successful scheme used with the other fundamental interactions. Such issues have motivated the search for a new gravitational theory beyond GR that could help to explain part of the problems mentioned above. Indeed, there is already a long list of alternative theories, like $f(R)$ and higher curvature theories, Galileon theories, scalar-tensor theories, (new and topological) massive gravity, Chern-Simons theories, higher spin gravity theories, Horava-Lifshitz gravity, extra-dimensional theories, torsion theories, Horndeski’s theory, etc. (see, for instance, Refs. [@Ber]–[@Bellorin]). Nonetheless, quantum gravity is still an open problem, and dark matter and dark energy remain a mystery so far.
The MGD was originally proposed [@jo1] in the context of the Randall-Sundrum brane-world [@lisa1; @lisa2] and extended to investigate new black hole solutions [@MGDextended1; @MGDextended2]. While the brane-world is still an attractive scenario, since it explains the hierarchy of fundamental interactions in a simple way, finding interior solutions for self-gravitating systems is a difficult task, mainly due to the existence of non-linear terms in the matter fields. In addition, the effective four-dimensional Einstein equations are not a closed system, since the extra-dimensional effects give rise to terms undetermined by the four-dimensional equations. Despite these complications, the MGD has proven to be useful, among other things, to derive exact and physically acceptable solutions for spherically symmetric and non-uniform stellar distributions [@jo2; @jo5]; to express the tidal charge in the metric found in Ref. [@dmpr] in terms of the usual Arnowitt-Deser-Misner (ADM) mass [@jo6]; to study microscopic black holes [@jo7]; to clarify the role of exterior Weyl stresses acting on compact stellar distributions [@jo8; @jo9]; to extend the concept of variable tension introduced in Refs. 
[@gergely2009] by analysing the shape of the black string in the extra dimension [@jo10]; to prove, contrary to previous claims, the consistency of a Schwarzschild exterior [@jo11] for a spherically symmetric self-gravitating system made of regular matter in the brane-world; to derive bounds on extra-dimensional parameters [@jo12] from the observational results of the classical tests of GR in the Solar system; to investigate the gravitational lensing phenomena beyond GR [@roldaoGL]; to determine the critical stability region for Bose-Einstein condensates in gravitational systems [@rrplb]; to study Dark $SU(N)$ glueball stars on fluid branes [@rolsun] as well as the correspondence between sound waves in a de Laval propelling nozzle and quasinormal modes emitted by brane-world black holes [@rol].
This brief review is organised as follows: the simplest ways to modify gravity are presented in Section \[s2\], emphasising some problems that arise when the GR limit is considered; in Section \[s3\], we recall the Einstein field equations on the brane for a spherically symmetric and static distribution of density $\rho$ and pressure $p$; in Section \[s4\], the GR limit is discussed, and the basic elements of the MGD are presented in Section \[s5\]; in Section \[s6\], we review the matching conditions between the interior and exterior space-time of self-gravitating systems within the MGD, and a recipe with the basic steps to implement the MGD is described in Section \[s7\]; finally, some conclusions are presented in the last section.
GR simplest extensions and their GR limit {#s2}
=========================================
This Section is devoted to describing, in a qualitative way, the so-called GR-limit problem, which arises when an extension of GR is considered. An explicit and quantitative description of this problem, as well as an explicit solution, is developed throughout the rest of the review.
One cannot try to change GR without considering the well-established and very useful Lovelock theorem [@lovelock], which severely restricts the possible ways of modifying GR in four dimensions. We will now describe the simplest generic way.
Any extension to GR will eventually produce new terms in the effective four-dimensional Einstein equations. These “corrections” are usually handled as part of an effective energy-momentum tensor and appear in such a way that they should vanish or be negligible in an appropriate limit. For instance, they must vanish (or be negligible) at solar system scales, where GR has been successfully tested so far.[^4] This limit represents not only a critical point for a consistent extension of GR, but also a non-trivial problem that must be treated carefully.
The simplest way to extend GR is by considering a modified Einstein-Hilbert action, $$\label{corr1}
S
=
\int\left[\frac{R}{2\,k^2}+{\cal L}\right]
\sqrt{-g}\,d^4x
+\alpha\,({\rm correction})
\ ,$$ where $\alpha$ is a free parameter associated with the new gravitational sector not described by GR, as is schematically shown in Fig. \[fig:extended\]. The explicit form corresponding to the generic correction shown in Eq. (\[corr1\]) should be, of course, a well-justified and physically motivated expression. At this stage the GR limit, obtained by setting $\alpha = 0$, is just a trivial issue, so everything looks consistent. Indeed, we may go further and calculate the equations of motion by setting the variation $\delta S = 0$ for this new theory, $$\label{corr2}
R_{\mu\nu}-\frac{1}{2}\,R\, g_{\mu\nu}
=
k^2\,T_{\mu\nu}
+\alpha\,({\rm new\ terms})_{\mu\nu}
\ .$$ The new terms in Eq. (\[corr2\]) may be viewed as part of an effective energy-momentum tensor, whose explicit form may contain new fields, like scalar, vector and tensor fields, all of them coming from the new gravitational sector not described by Einstein’s theory. At this stage the GR limit, again, is a trivial issue, since $\alpha = 0$ leads to the standard Einstein’s equations $G_{\mu\nu}=k^2\,T_{\mu\nu}$.
All the above seems to tell us that the consistency problem, namely the GR limit, is trivial. However, when the system of equations given by the expression (\[corr2\]) is solved, the result may tell a completely different story. In general, and this is very common, the solution eventually found cannot reproduce the GR limit by simply setting $\alpha = 0$. The cause of this problem is the non-linearity of Eq. (\[corr2\]), and it should not come as a surprise. To clarify this point, let us consider a spherically symmetric perfect fluid, for which GR uniquely determines the metric component $$\label{g11-1}
g^{-1}_{rr} = 1 - \frac{2\, m(r)}{r}
\ ,$$ where $m$ is the mass function of the self-gravitating system. Now, let us consider the same perfect fluid in the “new” gravitational theory (\[corr1\]). When Eq. (\[corr2\]) is solved, we obtain an expression which generically may be written as $$\label{g11def}
g^{-1}_{rr} = 1 - \frac{2\, m(r)}{r} + ({\rm geometric\ deformation})
\ ,$$ where by [*geometric deformation*]{} one should understand the deformation of the metric component (\[g11-1\]) due to the generic extension (\[corr1\]) of GR (“deformation” hence means a deviation from the GR solution). It is now very important to note that the deformation (\[g11def\]) always produces [*anisotropic consequences*]{} on the perfect fluid, namely, the radial and tangential pressures are no longer the same, and in consequence the self-gravitating system is no longer described by a perfect fluid. Indeed, and this is a critical point in our analysis, the anisotropy produced by the geometric deformation always takes the form (see Eqs. (\[ppf\])-(\[ppf3\]) for an explicit calculation) $$\label{any}
{\cal P} = A + \alpha\,B
\ .$$ This expression is very significant, since it shows that the GR limit cannot be recovered [*a posteriori*]{} by setting $\alpha=0$, because the “sector” denoted by $A$ in the anisotropy (\[any\]) does not depend on $\alpha$. Consequently, the perfect fluid GR solution ($A=0$) is not trivially contained in this extension, and one might say that we have an extension of GR which does not contain GR. This is of course a contradiction, or more properly a consistency problem, whose source can be precisely traced back to the [*geometric deformation*]{} shown in Eq. (\[g11def\]). The latter always takes the form (see Eq. (\[fsolution\]) for an explicit expression) $$\label{def}
({\rm geometric\ deformation}) = X + \alpha\,Y
\ ,$$ which contains a “sector” $X$ that does not depend on $\alpha$. This is again obviously inconsistent, since the deformation undergone by GR must depend smoothly on $\alpha$ and vanish with it. At the level of solutions, the source of this problem is the high non-linearity of the effective Einstein equations (\[corr2\]), which, we want to emphasise, has nothing to do with any specific extension of GR. Indeed, it is a characteristic of any highly non-linear system.
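The consistency argument can be phrased as a two-line numerical toy model; the mass function and the deformation profiles $X$, $Y$ below are invented purely for illustration.

```python
# Toy model of Eq. (def): a naive extended solution carries a geometric
# deformation X(r) + alpha*Y(r) with X independent of alpha, so its
# alpha -> 0 limit misses GR; the MGD prescription enforces X = 0.
def m(r):                        # some GR mass function (toy choice)
    return 0.1 * r**3 / (1.0 + r**2)

def g_rr_inv_gr(r):              # GR metric component, Eq. (g11-1)
    return 1.0 - 2.0 * m(r) / r

def g_rr_inv_naive(r, alpha):    # deformation X + alpha*Y, X != 0
    return g_rr_inv_gr(r) + 0.05 * r**2 + alpha * 0.2 * r

def g_rr_inv_mgd(r, alpha):      # MGD: X = 0, only alpha*Y survives
    return g_rr_inv_gr(r) + alpha * 0.2 * r

r = 1.0
print(g_rr_inv_naive(r, 0.0) - g_rr_inv_gr(r))  # residual X(r): GR missed
print(g_rr_inv_mgd(r, 0.0) - g_rr_inv_gr(r))    # 0: GR limit recovered
```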
![The new gravitational sector outside GR is parametrised by $\alpha$, so that GR represents a “sub-space" in the extended theory of gravity. When the free parameter $\alpha$ is turned off, we should automatically recover the domain of GR.[]{data-label="fig:extended"}](Ch1-1.eps "fig:")\
A method that solves the non-trivial issue of consistency with GR described above is the so-called [*Minimal Geometric Deformation*]{} (MGD) approach [@jo1]. The idea is to keep under control the anisotropic consequences on GR appearing in the extended theory, in such a way that the $\alpha$-independent sector of the geometric deformation, shown as $X$ in Eq. (\[def\]), always vanishes. Correspondingly, the $\alpha$-independent sector $A$ of the anisotropy in Eq. (\[any\]) will also vanish. This ensures a consistent extension that recovers GR in the limit $\alpha\to 0$. In this approach, the generic expression $Y$ in Eq. (\[def\]) represents the [*minimal geometric deformation*]{} undergone by the radial metric component, and the generic expression $B$ in Eq. (\[any\]) the [*minimal anisotropic consequence*]{} undergone by GR due to the correction terms in the modified Einstein-Hilbert action (\[corr1\]). The next key point is how to make sure that $X = 0$ in Eq. (\[def\]) in order to obtain a consistent extension of GR. This is accomplished when a GR solution is forced to remain a solution in the extended theory. Roughly speaking, we need to introduce the GR solution into the new theory, as far as possible, as suggested in Fig. \[fig:MGD\]. This provides the foundation for the MGD approach. We want to emphasise that the GR solution used to set $X = 0$ in Eq. (\[def\]) will eventually be modified by using, for instance, the matching conditions at the surface of a self-gravitating system. One will therefore obtain physical variables that depend on the free parameter of the theory, here generically named $\alpha$. This free parameter could be, for instance, the one that measures the deviation from GR in $f(R)$ theories, the brane tension in the brane-world, and so on.
![When a GR solution is forced to be a solution in the new gravitational sector by the MGD, the $\alpha$-independent terms in the extended solution are eliminated, and the GR limit is recovered.[]{data-label="fig:MGD"}](Ch1-2.eps "fig:")\
Extra-dimensional gravity: the brane-world {#s3}
==========================================
In the generalised RS brane-world scenario, gravity lives in five dimensions and affects the gravitational dynamics in the (3+1)-dimensional universe accessible to all other physical fields, the so-called brane. The 5-dimensional Einstein equations projected on the brane give rise to the modified 4-dimensional Einstein equations [@maartRev2004x; @maartRev2010x; @smsx] [^5] $$G_{\mu \nu }
=
-k^{2}\,T_{\mu \nu }^{\mathrm{eff}}-\Lambda \,g_{\mu \nu }
\ ,
\label{4Dein}$$ where $G_{\mu\nu}$ is the 4-dimensional Einstein tensor. The effective energy-momentum tensor is given by $$T_{\mu \nu }^{\mathrm{eff}}
=
T_{\mu \nu }
+\frac{6}{\sigma }\,S_{\mu \nu }+\frac{1}{8\,\pi}\,\mathcal{E}_{\mu \nu }
+\frac{4}{\sigma }\,\mathcal{F}_{\mu \nu }
\ ,
\label{tot}$$ where $\sigma $ is the brane tension (which plays the role of the parameter $\alpha$ of the previous Section) and $$T_{\mu \nu }=(\rho +p)\,u_{\mu }\,u_{\nu }-p\,g_{\mu \nu }
\label{perfect}$$ is the 4-dimensional energy-momentum tensor of brane matter described by a perfect fluid with 4-velocity field $u^\mu$, density $\rho$ and isotropic pressure $p$. The extra term $$S_{\mu \nu }
=
\frac{T}{12}\,T_{\mu \nu}
-\frac{1}{4}\,T_{\mu \alpha }\,T_{\ \nu}^{\alpha }
+\frac{g_{\mu \nu }}{24}\left( 3\,T_{\alpha \beta}\,T^{\alpha \beta }-T^{2}\right)$$ represents a local high-energy correction quadratic in $T_{\mu\nu}$ (with $T=T_{\alpha }^{\ \alpha }$), whereas $$k^{2}\,\mathcal{E}_{\mu \nu }
=
\frac{6}{\sigma }\left[
\mathcal{U}
\left(
u_{\mu }\,u_{\nu }
+\frac{1}{3}\,h_{\mu \nu }\right)
+\mathcal{P}_{\mu \nu }
+\mathcal{Q}_{(\mu }\,u_{\nu )}\right]$$ contains the Kaluza-Klein corrections and acts as a non-local source arising from the 5-dimensional Weyl curvature. Here $\mathcal{U}$ is the bulk Weyl scalar, $\mathcal{P}_{\mu \nu }$ the Weyl stress tensor and $\mathcal{Q}_{\mu }$ the Weyl energy flux, and $h_{\mu\nu}=g_{\mu\nu}-u_\mu u_\nu$ denotes the projection tensor orthogonal to the fluid lines. Finally, the extra term $\mathcal{F}_{\mu \nu }$ contains contributions from all non-standard model fields possibly living in the bulk, but it does not include the 5-dimensional cosmological constant $\Lambda_5$, which is fine-tuned to $\sigma$ in order to generate a small 4-dimensional cosmological constant $$\Lambda
=
\frac{\kappa_5^{2}}{2}\left(\Lambda_{5}+\frac{1}{6}\,\kappa_5^{2}\,\sigma^{2}\right)
\simeq
0
where the 5-dimensional gravitational coupling is given by $$\kappa^4_5
=
6\,\frac{k^2}{\sigma}
\ .$$ In particular, we shall only allow for a cosmological constant in the bulk, hence $${\cal F}_{\mu\nu}=0
\ ,$$ which implies the conservation equation $$\nabla_\nu\,T^{\mu\nu}=0
\ ,
\label{dT0}$$ and there will be no exchange of energy between the bulk and the brane.
In this review, we are mostly interested in spherically symmetric and static configurations, for which the Weyl energy flux vanishes, $$\mathcal{Q}_\mu =0\ ,$$ and the Weyl stress can be written as $${\cal P}_{\mu\nu}
={\cal P}\left(r_\mu\, r_\nu+\frac{1}{3}\,h_{\mu\nu}\right)
\ ,$$ where $r_\mu$ is a unit radial vector. In Schwarzschild-like coordinates, the spherically symmetric metric reads $$ds^{2}
=
e^{\nu (r)}\,dt^{2}-e^{\lambda (r)}\,dr^{2}
-r^{2}\left( d\theta^{2}+\sin ^{2}\theta \,d\phi ^{2}\right)
\ ,
\label{metric}$$ where $\nu =\nu (r)$ and $\lambda =\lambda (r)$ are functions of the areal radius $r$ only, ranging from $r=0$ (the star’s centre) to some $r=R$ (the star’s surface), and the fluid 4-velocity field is given by $u^{\mu }=e^{-\nu /2}\,\delta _{0}^{\mu }$ for $0\le r\le R$.
The metric (\[metric\]) must satisfy the effective Einstein equations (\[4Dein\]), which, for $\Lambda=0$, explicitly read [@germ; @matt; @fran] $$\begin{aligned}
\label{ec1}
&&
k^2
\left[ \rho
+\strut\displaystyle\frac{1}{\sigma}\left(\frac{\rho^2}{2}+\frac{6}{k^4}\,\cal{U}\right)
\right]
=
\strut\displaystyle\frac 1{r^2}
-e^{-\lambda }\left( \frac1{r^2}-\frac{\lambda'}r\right)
\\
&&
\label{ec2}
k^2
\strut\displaystyle
\left[p+\frac{1}{\sigma}\left(\frac{\rho^2}{2}+\rho\, p
+\frac{2}{k^4}\,\cal{U}\right)
+\frac{4}{k^4}\frac{\cal{P}}{\sigma}\right]
=
-\frac 1{r^2}+e^{-\lambda }\left( \frac 1{r^2}+\frac{\nu'}r\right)
\\
&&
\label{ec3}
k^2
\strut\displaystyle\left[p
+\frac{1}{\sigma}\left(\frac{\rho^2}{2}+\rho\, p
+\frac{2}{k^4}\cal{U}\right)
-\frac{2}{k^4}\frac{\cal{P}}{\sigma}\right]
=
\frac 14e^{-\lambda }\left[ 2\,\nu''+\nu'^2-\lambda'\,\nu'
+2\,\frac{\nu'-\lambda'}r\right]
\ .\end{aligned}$$ Moreover, the conservation Eq. (\[dT0\]) yields $$\label{con1}
p'=-\strut\displaystyle\frac{\nu'}{2}(\rho+p)
\ ,$$ where $f'\equiv \partial_r f$. We then note that the 4-dimensional GR equations are formally recovered for $\sigma^{-1}\to 0$, in which case the conservation equation (\[con1\]) becomes a linear combination of Eqs. (\[ec1\])-(\[ec3\]).
By simple inspection of the field equations (\[ec1\])-(\[ec3\]), we can identify an effective density $$\tilde{\rho}
=
\rho
+\frac{1}{\sigma }
\left( \frac{\rho ^{2}}{2}+\frac{6}{k^{4}}\,\mathcal{U}\right)
\ ,
\label{efecden}$$ an effective radial pressure $$\tilde{p}_{r}
=
p
+\frac{1}{\sigma }\left( \frac{\rho ^{2}}{2}+\rho \,p
+
\frac{2}{k^{4}}\,\mathcal{U}\right)
+\frac{4}{k^4\,\sigma }\,\mathcal{P}
\ ,
\label{efecprera}$$ and an effective tangential pressure $$\tilde{p}_{t}
=
p+\frac{1}{\sigma }\left( \frac{\rho ^{2}}{2}+\rho \,p
+\frac{2}{k^{4}}\,\mathcal{U}\right)
-\frac{2}{k^{4}\,\sigma}\,\mathcal{P}
\ .
\label{efecpretan}$$ This clearly illustrates that extra-dimensional effects generate an anisotropy $$\Pi
\equiv
\tilde{p}_{r}-\tilde{p}_{t}
=
\frac{6}{k^{4}\,\sigma}\,\mathcal{P}
\ ,$$ inside the stellar distribution. A GR isotropic stellar distribution (perfect fluid) therefore becomes an anisotropic stellar system on the brane.
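The effective fluid variables and the induced anisotropy can be checked with a short numerical sketch; the input values for $\rho$, $p$, ${\cal U}$, ${\cal P}$ and $k$ below are arbitrary illustrative numbers.

```python
# Effective fluid of Eqs. (efecden)-(efecpretan) and the anisotropy
#   Pi = p_r - p_t = 6 P / (k^4 sigma),
# which vanishes in the GR limit sigma -> infinity.
def effective(rho, p, U, P, k, sigma):
    common = p + (rho**2 / 2 + rho * p + 2 * U / k**4) / sigma
    rho_eff = rho + (rho**2 / 2 + 6 * U / k**4) / sigma
    p_r = common + 4 * P / (k**4 * sigma)     # effective radial pressure
    p_t = common - 2 * P / (k**4 * sigma)     # effective tangential pressure
    return rho_eff, p_r, p_t

rho, p, U, P, k = 1.0, 0.3, 0.2, 0.1, 1.0
for sigma in (10.0, 1e8):                     # finite tension vs GR limit
    rho_eff, p_r, p_t = effective(rho, p, U, P, k, sigma)
    print(sigma, p_r - p_t, 6 * P / (k**4 * sigma))
```

At large $\sigma$ the effective variables reduce to the perfect-fluid ones and the anisotropy disappears, as stated in the text.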
Eqs. (\[ec1\])-(\[con1\]) contain six unknown functions, namely: two physical variables, the density $\rho(r)$ and pressure $p(r)$; two geometric functions, the temporal metric function $\nu(r)$ and the radial function $\lambda(r)$; and two extra-dimensional fields, the Weyl scalar function ${\cal U}$ and the anisotropy ${\cal P}$. These equations therefore form an indeterminate system on the brane; this is an open problem, and solving it requires more information about the bulk geometry and a better understanding of how our 4-dimensional spacetime is embedded in the bulk [@cmazza; @darocha2012]. Since the source of this problem is directly related to the projection ${\cal E}_{\mu\nu}$ of the bulk Weyl tensor on the brane, the first logical step to overcome this issue would be to impose the constraint ${\cal E}_{\mu\nu}=0$ on the brane. However, it was shown in Ref. [@koyama05] that this condition is incompatible with the Bianchi identity on the brane, and a different and less radical restriction must thus be implemented. Another option that has led to some success consists in discarding only the anisotropic stress associated with ${\cal E}_{\mu\nu}$, that is, setting ${\cal P}_{\mu\nu}=0$. This constraint, which is useful to overcome the non-closure problem [@shtanov07], is nonetheless still too strong, since some anisotropic effects on the brane are generically expected as a consequence of the “deformation” induced on the 4-dimensional geometry by 5-dimensional gravity [@jo1].
The GR limit in the brane-world
===============================
\[s4\] Despite the non-closure problem that plagues the effective 4-dimensional Einstein equations, we shall see that it is possible to determine a brane-world version of every known GR perfect fluid solution. In order to do so, the first step is to rewrite the equations (\[ec1\])-(\[ec3\]) in a suitable way. First of all, by combining Eqs. (\[ec2\]) and (\[ec3\]), we obtain the Weyl anisotropy $$\label{pp}
\frac{6\,{\cal P}}{k^2\,\sigma}
=
G_{\ 1}^{1}-G_{\ 2}^2$$ and Weyl scalar $$\frac{6\,{\cal U}}{k^4\,\sigma}
=
-\frac{3}{\sigma}\left(\frac{\rho^2}{2}+\rho\,p\right)
+\frac{1}{k^2}\left(2\,G_{\ 2}^2+G_{\ 1}^1\right)-3\,p
\ ,
\label{uu}$$ where $$\label{g11}
G_{\ 1}^1
=
-\frac 1{r^2}+e^{-\lambda }\left( \frac 1{r^2}+\frac{\nu'}r\right)\ ,$$ and $$\label{g22}
G_{\ 2}^2
=
\frac 14\,e^{-\lambda }\left( 2\,\nu''+\nu'^2-\lambda'\,\nu'+2\, \frac{\nu'-\lambda'}{r}
\right)
\ .$$ Eqs. (\[pp\])-(\[g22\]) are equivalent to Eqs. (\[ec1\])-(\[con1\]) and we still have an open system of equations for the three unknown functions $\{p, \rho, \nu\}$ satisfying the conservation equation (\[con1\]). Next, we can proceed by plugging Eq. (\[uu\]) into Eq. (\[ec1\]), which leads to a first order linear differential equation for the metric function $\lambda$, $$\begin{aligned}
\label{edlrw}
e^{-\lambda}
\left(r\,\lambda'-1\right)
+1
-r^2\,k^2\,\rho
&\!\!=\!\!&
e^{-\lambda}
\left[\left(r^2\,\nu''+r^2\,\frac{\nu'^2}{2}+2\,r\,\nu'+1\right)
-r\,\lambda'\left(r\,\frac{\nu'}{2}+1\right)
\right]
-1
\nonumber
\\
&&
-r^2\,k^2\left[
3\,p
-\frac{\rho}{\sigma}\left(\rho+3\,p\right)
\right]
\ ,\end{aligned}$$ where the l.h.s. would be the standard GR equation if the extra-dimensional terms in the r.h.s. vanished. It is clear that not all of the latter terms are manifestly bulk contributions, since only the high-energy terms are explicitly proportional to $\sigma^{-1}$. The general solution is given by $$\begin{aligned}
\label{primsol}
e^{I(r)}\,e^{-\lambda(r)}
=
\int_{r_0}^r
\frac{2\,x\,e^{I(x)}}{x\,\nu'+4}
\left\{\frac{2}{x^2}
-k^2\left[
\rho-3\,p-\frac{1}{\sigma}
\left(\rho^2+3\,\rho\,p\right)
\right]
\right\}
dx+
\beta
\ ,\end{aligned}$$ with $$\begin{aligned}
\label{I}
I(r)
\equiv
\int_{r_0}^{r}
\frac{2\,x^2\,\nu''+x^2\,{\nu'}^2+4\,x\,\nu'+4}{x\,(x\,\nu'+4)}\,
dx
\ ,\end{aligned}$$ where $r_0$ and the integration constant $\beta$ will have to be determined according to the specific system at hand. For example, for a star centred around $r=0$, we will have $r_0=0$.
Given a solution $\{p,\rho,\nu\}$ of the conservation equation (\[con1\]), we could determine the corresponding $\lambda$, ${\cal P}$ and ${\cal U}$ by means of Eqs. (\[primsol\]), (\[pp\]) and (\[uu\]) respectively. However, it was shown in Ref. [@jo1] that this way does not lead in general to a metric function having the expected form (\[g11def\]), which now reads $$\begin{aligned}
\label{expectx}
e^{-\lambda(r)}
=
1-\frac{k^2}{r}
\int_0^r
x^2\,\rho\,dx
+\frac{1}{\sigma}({\rm bulk\ effects})
\ .\end{aligned}$$ In turn, if Eq. (\[expectx\]) does not hold, the GR limit cannot be recovered by simply taking $\alpha\equiv \sigma^{-1}\to 0$. The problem originates from the solution (\[primsol\]), which contains a mix of GR terms and non-local bulk terms that makes it impossible to regain GR from an arbitrary brane-world solution.
The Minimal Geometric Deformation {#s5}
=================================
As we argued in Section \[s2\], GR must be recovered in the limit $\alpha\equiv \sigma^{-1}\to 0$. Since a brane-world observer should also see a geometric deformation due to the existence of the fifth dimension, we restrict our search to $\{p,\rho,\nu\}$ that, besides being conserved according to Eq. , are such that the corresponding metric function $\lambda$ takes the form , which we rewrite as $$\begin{aligned}
\label{expectg}
e^{-\lambda(r)}
=
\mu(r)+f(r)
\ ,\end{aligned}$$ where $$\label{standardGR}
\mu(r)
\equiv
1-\frac{k^2}{r}\int_0^r x^2\,\rho\, dx
=1-\frac{2\,m(r)}{r}$$ is the standard GR solution containing the mass function $m$, and the unknown [*geometric deformation*]{}, described by the function $f$, stems from two sources: the extrinsic curvature and the 5-dimensional Weyl curvature.
Upon substituting (\[expectg\]) into Eq. (\[edlrw\]), we obtain the first order differential equation $$\label{diffeqtof}
f'
+
\frac{2\,r^2\,\nu''+r^2\,{\nu'}^2+4\,r\,\nu'+4}{r\,(r\,\nu'+4)}\,f
=
\frac{2\,r}{r\,\nu'+4}
\left[
\frac{k^2}{\sigma}\,\rho\,(\rho+3\,p)-H(p,\rho,\nu)
\right]
\ ,$$ where $$\label{H}
H(p,\rho,\nu)
\equiv
3\,k^2\,p
-\left[\mu'\left(\frac{\nu'}{2}+\frac{1}{r}\right)
+\mu\left(\nu''+\frac{\nu'^2}{2}+\frac{2\,\nu'}{r}+\frac{1}{r^2}\right)
-\frac{1}{r^2}\right]
\ .$$ Solving Eq. (\[diffeqtof\]) yields $$\label{fsolution}
f(r)
=
e^{-I(r)}\int_{0}^r
\frac{2\,x\,e^{I(x)}}{x\,\nu'+4}
\left[H(p,\rho,\nu)
+\frac{k^2}{\sigma}\left(\rho^2+3\,\rho\, p\right)\right]dx
+\beta(\sigma)\,e^{-I(r)}
\ ,$$ where the function $I=I(r)$ is again given in Eq. (\[I\]) with $r_0=0$ and the integration constant $\beta$ is taken to depend on the brane tension in such a way that it vanishes in the GR limit $\sigma^{-1}\to 0$.
Upon comparison with Eqs. (\[g11\]) and (\[g22\]), with $\mu$ given in Eq. , one can see that the non-local function $$\label{H2}
H(p,\rho,\nu)
=
3\,k^2 p-\left.\left(2\,G_2^2+G_1^1\right)\right|_{\sigma^{-1}\to 0}
\ ,$$ clearly corresponds to an anisotropic term, since it vanishes in the GR case with a perfect fluid. This feature can also be seen explicitly by computing Eq. (\[pp\]), which now reads $$\begin{aligned}
\label{ppf}
\frac{6\,{\cal P}}{k^2}
=
-\frac{1}{r^2}
+\left(\frac{1}{r^2}+\frac{\nu'}{2\,r}-\frac{\nu''}{2}-\frac{{\nu'}^2}{4}\right)(\mu+f)
-\left(\nu'+\frac{2}{r}\right)\frac{\mu'+f'}{4}
\ .\end{aligned}$$
In order to recover GR, the geometric deformation (\[fsolution\]) must vanish for $\sigma^{-1}\to 0$. This is achieved provided $\beta(\sigma)\to 0$ and $$\label{constraintf}
\lim_{{\sigma}^{-1}\to 0}
\int_0^r\frac{2\,x\,e^{I(x)}}{x\,\nu'+4}\,H(p,\rho,\nu)\,dx
=0\ ,$$ which can be interpreted as a constraint for physically acceptable solutions. A crucial observation is now that, for any given (spherically symmetric) perfect fluid solution of GR, one obtains $$H(p,\rho,\nu)=0
\ ,
\label{H=0}$$ which means that every (spherically symmetric) perfect fluid solution of GR will produce a [*minimal*]{} deformation in the radial metric component (\[expectg\]) given by $$\label{fsolutionmin}
f^{*}(r)
=
\frac{2\,k^2}{\sigma}\,
e^{-I(r)}\int_0^r
\frac{x\,e^{I(x)}}{x\,\nu'+4}\left(\rho^2+3\,\rho\, p\right)
dx
\ .$$ We would like to stress that this deformation is minimal in the sense that all sources of the deformation in Eq. (\[fsolution\]) have been removed, except for those produced by the density and pressure, which will always be present in a realistic stellar distribution [^6]. The function $f^{*}(r)$ will therefore produce, from the GR point of view, a “minimal distortion” of the GR solution one wishes to consider. The corresponding anisotropy induced on the brane is also minimal, as can be seen from comparing its explicit form obtained from Eq. (\[pp\]), $$\begin{aligned}
\label{ppf3}
\frac{6\,{\cal P}}{k^2\,\sigma}
=
\left(\frac{1}{r^2}+\frac{\nu'}{2\,r}-\frac{\nu''}{2}-\frac{{\nu'}^2}{4}\right)f^{*}
-\left(\nu'+\frac{2}{r}\right)\frac{(f^*)'}{4}
\ ,\end{aligned}$$ with the general expression (\[ppf\]). In particular, the constraint (\[H=0\]) represents a condition of isotropy in GR, and it therefore becomes a natural way to generalise perfect fluid solutions (GR) in the context of the brane-world in such a way that the inevitable anisotropy induced by the extra dimension vanishes for $\sigma^{-1}\to 0$ (see Fig. \[fig1-4\]).
![For $H(p,\rho,\nu)=0$, the extra-dimensional effects on the variables $(p,\rho,\nu)$, namely $(\delta\,p,\delta\rho,\delta\nu)$, do not produce any further anisotropy. Hence the anisotropy remains minimal on the brane, and this is what ensures that the low-energy limit is given by GR. []{data-label="fig1-4"}](Ch1-4.eps "fig:")\
Matching condition for stellar distributions {#s6}
============================================
An important aspect of the study of stellar distributions concerns the matching conditions at the star surface ($r=R$) between the interior ($r<R$) and the exterior ($r>R$) geometry.
In our case, the interior stellar geometry is given by the MGD metric $$ds^{2}
=
e^{\nu^{-}(r)}\,dt^{2}
-\left(1-\frac{2\,\tilde{m}(r)}{r}\right)^{-1}dr^2
-r^{2}\left(d\theta ^{2}+\sin {}^{2}\theta d\phi ^{2}\right)
\ ,
\label{mgdmetric}$$ where the interior mass function is given by $$\label{effecmass}
\tilde{m}(r)
=
m(r)-\frac{r}{2}\,f^{*}(r)
\ ,$$ with $m$ given by the standard GR expression (\[standardGR\]) and $f^{*}$ the minimal geometric deformation in Eq. (\[fsolutionmin\]). Moreover, Eq. (\[fsolutionmin\]) implies that $$f^{\ast }(r)\geq 0
\ ,
\label{f*>0}$$ so that the effective interior mass (\[effecmass\]) is always reduced by the extra-dimensional effects.
The inner metric (\[mgdmetric\]) should now be matched with an outer vacuum geometry, with $p^+=\rho^+=0$, but where we can in general have a Weyl fluid described by the scalars $\mathcal{U}^{+}$ and $\mathcal{P}^{+}$ [@germ]. The outer metric can be written as $$ds^{2}
=
e^{\nu^{+}(r)}\,dt^{2}
-e^{\lambda^{+}(r)}\,dr^{2}
-r^{2}\left(d\theta ^{2}+\sin {}^{2}\theta d\phi ^{2}\right)
\ ,
\label{genericext}$$ where the explicit form of the functions $\nu ^{+}$ and $\lambda ^{+}$ are obtained by solving the effective 4-dimensional vacuum Einstein equations $$R_{\mu \nu }-\frac{1}{2}\,R^\alpha_{\ \alpha}\,g_{\mu \nu}
=
\mathcal{E}_{\mu \nu }
\qquad
\Rightarrow
\qquad R^\alpha_{\ \alpha}=0
\ ,$$ where we recall that extra-dimensional effects are contained in the projected Weyl tensor $\mathcal{E}_{\mu \nu }$. Only a few such analytical solutions are known to date [@MGDextended1; @MGDextended2; @dmpr; @germ; @fabbri]. Continuity of the first fundamental form at the star surface $\Sigma$ defined by $r=R$ reads $$\left[ ds^{2}\right] _{\Sigma }=0
\ ,
\label{match1}$$ where $[F]_{\Sigma }\equiv F(r\rightarrow R^{+})-F(r\rightarrow R^{-})\equiv F_{R}^{+}-F_{R}^{-}$, for any function $F=F(r)$, which yields $${\nu ^{-}(R)}
=
{\nu ^{+}(R)}
\ ,
\label{ffgeneric1}$$ and $$1-\frac{2\,M}{R}+f_{R}^{*}
=
e^{-\lambda ^{+}(R)}
\ ,
\label{ffgeneric2}$$ where $M=m(R)$. Likewise, continuity of the second fundamental form at the star surface reads $$\left[G_{\mu \nu }\,r^{\nu }\right]_{\Sigma }
=
0
\ ,
\label{matching1}$$ where $r_{\mu }$ is a unit radial vector. Using Eq. (\[matching1\]) and the general Einstein equations (\[4Dein\]), we then find $$\left[T_{\mu \nu }^{\rm eff}\,r^{\nu }\right]_{\Sigma}
=
0
\ ,
\label{matching2}$$ which leads to $$\left[ p
+\frac{1}{\sigma }\left( \frac{\rho ^{2}}{2}+\rho \,p+\frac{2}{k^{4}}\,\mathcal{U}\right)
+\frac{4\,\mathcal{P}}{k^4\,\sigma }\right]_{\Sigma }
=
0
\ .
\label{matching3}$$ Since we assumed the star is only surrounded by a Weyl fluid characterised by $\mathcal{U}^{+}$, $\mathcal{P}^{+}$, this matching condition takes the final form $$p_{R}
+\frac{1}{\sigma }\left( \frac{\rho _{R}^{2}}{2}+\rho _{R}\,p_{R}+\frac{2}{k^{4}}\,\mathcal{U}_{R}^{-}\right)
+\frac{4\,\mathcal{P}_{R}^{-}}{k^4\,\sigma }
=
\frac{2\,\mathcal{U}_{R}^{+}}{k^4\,\sigma }
+\frac{4\,\mathcal{P}_{R}^{+}}{k^4\,\sigma }
\ ,
\label{matchingf}$$ where $p_{R}\equiv p^{-}(R)$ and $\rho _{R}\equiv \rho^{-}(R)$. Finally, by using Eqs. (\[uu\]) and (\[ppf3\]) in the condition (\[matchingf\]), the second fundamental form can be written in terms of the MGD at the star surface, denoted by $f_{R}^{\ast }$, as $$p_{R}
+\frac{f_{R}^{*}}{k^2}\left( \frac{\nu _{R}^{\prime }}{R}+\frac{1}{R^{2}}\right)
=
\frac{2\,\mathcal{U}_{R}^{+}}{k^4\,\sigma }
+\frac{4\,\mathcal{P}_{R}^{+}}{k^4\,\sigma }
\ ,
\label{sfgeneric}$$ where $\nu _{R}^{\prime }\equiv \partial _{r}\nu^{-}|_{r=R}$. Eqs. (\[ffgeneric1\]), (\[ffgeneric2\]) and (\[sfgeneric\]) are the necessary and sufficient conditions for the matching of the interior MGD metric (\[mgdmetric\]) to a spherically symmetric “vacuum” filled by a brane-world Weyl fluid.
The matching condition (\[sfgeneric\]) yields an important result: if the outer geometry is given by the Schwarzschild metric, one must have $\mathcal{U}^{+}=\mathcal{P}^{+}=0$, which then leads to $$p_{R}
=
-\frac{f_{R}^{\ast }}{k^2}
\left( \frac{\nu _{R}^{\prime }}{R}+\frac{1}{R^{2}}\right)
\ .
\label{pnegative}$$ Given the positivity of $f^*$, Eq. (\[f\*>0\]), an outer Schwarzschild vacuum can only be supported in the brane-world by exotic stellar matter, with $p_{R}<0$ at the star surface.
The recipe {#s7}
==========
Let us conclude this brief introduction of the MGD approach by listing the basic steps to implement it:
Step 1:
: pick a known perfect fluid solution $\{p,\rho,\nu\}$ of the conservation equation (\[con1\]). This solution will ensure that $H(p,\rho,\nu)=0$ and the radial metric component $\lambda$ will be given by Eq. (\[expectg\]) with $f=f^*$ in Eq. (\[fsolutionmin\]). The GR solution will be recovered in the limit $\sigma^{-1}\to 0$ by construction.
Step 2:
: determine the Weyl functions ${\cal P}$ and ${\cal U}$ from Eqs. (\[pp\]) and (\[uu\]).
Step 3:
: use the second fundamental form given in Eq. (\[sfgeneric\]) to express any GR constant $C$ as a function of the brane tension $\sigma$, that is, $C(\sigma)$. Then we are able to find the bulk effect on pressure $p$ and density $\rho$, that is, $p(\sigma)$ and $\rho(\sigma)$.
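As a numerical illustration of Step 1, the minimal deformation $f^{*}(r)$ in Eq. (\[fsolutionmin\]) can be evaluated by quadrature once a GR solution $\{p,\rho,\nu\}$ has been chosen. The plain-Python sketch below uses the trapezoidal rule and exploits the fact that any constant shift of $I(r)$ cancels in $f^{*}$, so the lower integration limit can be taken at the first grid point instead of $r=0$; the profiles used in any test are illustrative placeholders, not exact solutions of Eq. (\[con1\]).

```python
import math

def minimal_deformation(r_grid, nu_p, nu_pp, rho, p, k2=1.0, sigma=100.0):
    """Evaluate the minimal geometric deformation f*(r) by trapezoidal
    quadrature, given callables nu'(r), nu''(r), rho(r) and p(r).

    r_grid must be increasing, with r_grid[0] > 0; the (small)
    contribution to the integrals from [0, r_grid[0]] is neglected.
    """
    # Integrand of I(r).  A constant shift of I cancels in f*, since I
    # only enters through exp(I(x) - I(r)), so we measure I from r_grid[0].
    def J(x):
        a, b = nu_p(x), nu_pp(x)
        return (2*x*x*b + x*x*a*a + 4*x*a + 4) / (x*(x*a + 4))

    # Cumulative I(r) on the grid.
    I = [0.0]
    for a, b in zip(r_grid[:-1], r_grid[1:]):
        I.append(I[-1] + 0.5*(J(a) + J(b))*(b - a))

    # Source term x e^{I(x)} (rho^2 + 3 rho p) / (x nu' + 4),
    # integrated cumulatively from the inner boundary.
    def src(x, Ix):
        return x*math.exp(Ix)*(rho(x)**2 + 3*rho(x)*p(x)) / (x*nu_p(x) + 4)

    F = [0.0]
    for (a, Ia), (b, Ib) in zip(zip(r_grid[:-1], I[:-1]),
                                zip(r_grid[1:], I[1:])):
        F.append(F[-1] + 0.5*(src(a, Ia) + src(b, Ib))*(b - a))

    # f*(r) = (2 k^2 / sigma) e^{-I(r)} F(r)
    return [(2*k2/sigma)*math.exp(-Ii)*Fi for Ii, Fi in zip(I, F)]
```

By construction, $f^{*}$ scales linearly with $\sigma^{-1}$, which reflects both the positivity (\[f\*>0\]) and the GR limit $f^{*}\to 0$ for $\sigma^{-1}\to 0$.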
Conclusions {#s8}
===========
In the context of the Randall-Sundrum brane-world, a brief but detailed description of the basic elements of the MGD approach was presented. The explicit form of the anisotropic stress ${\cal P}$ was obtained in terms of the geometric deformation $f$ undergone by the radial metric component, thus showing the role played by this deformation as a source of anisotropy inside stellar distributions. It was shown that this geometric deformation is minimal when a GR solution is considered; therefore, any perfect fluid solution in GR belongs to a subset of brane-world solutions producing a minimal anisotropy onto the brane. It was shown that with this approach it is possible to generate the brane-world version of any known GR solution, thus overcoming the non-closure problem of the effective 4-dimensional Einstein equations. A simple recipe showing the basic steps to implement the MGD approach was finally presented. A final natural question arises: is the MGD a useful approach only for the effective 4-dimensional Einstein equations in the brane-world context? The answer is no. Indeed, we have found [@conf] that any modification of general relativity can be studied by means of the MGD, provided that such a modification can be represented by a traceless energy-momentum tensor. This means that the MGD is particularly useful as long as the new gravitational sector is associated with a conformal gravitational sector.
Competing Interests {#competing-interests .unnumbered}
===================
The authors declare that there is no conflict of interest regarding the publication of this paper.
Acknowledgements
================
A.S. is partially supported by Project Fondecyt 1161192, Chile.
[99]{}
B. P. Abbott et al. \[LIGO Scientific and Virgo Collaborations\], Phys. Rev. Lett. [**116**]{} (2016) no.6, 061102; arXiv:1602.03837 \[gr-qc\].

B. P. Abbott et al. \[LIGO Scientific and Virgo Collaborations\], Phys. Rev. Lett. [**116**]{} (2016) no.24, 241103; arXiv:1606.04855 \[gr-qc\].

Clifford M. Will, Living Rev. Rel. [**9**]{} (2006).

Eric A. Bergshoeff, Olaf Hohm, Paul K. Townsend, [*Massive Gravity in Three Dimensions*]{}, [*Phys. Rev. Lett.*]{} [**102**]{}, 201301 (2009); arXiv:0901.1766.
Claudia de Rham, [*Massive gravity*]{}, [*Living Rev. Relativity*]{} [**17**]{}, 7 (2014); arXiv:1401.4173v2 \[hep-th\].

Eugeny Babichev, Kazuya Koyama, David Langlois, Ryo Saito, Jeremy Sakstein, [*Relativistic Stars in Beyond Horndeski Theories*]{}, Class. Quantum Grav. [**33**]{}, 235014 (2016); arXiv:1606.06627v3 \[gr-qc\].

Martin Krššák, Emmanuel N. Saridakis, [*The covariant formulation of f(T) gravity*]{}, Class. Quantum Grav. [**33**]{}, 115009 (2016); arXiv:1510.08432v2 \[gr-qc\].

Manuel Hohmann, [*Parameterized post-Newtonian limit of Horndeski’s gravity theory*]{}, Phys. Rev. D [**92**]{}, 064019 (2015); arXiv:1506.04253v2 \[gr-qc\].

Nathan Chow, Justin Khoury, [*Galileon Cosmology*]{}, [*Phys. Rev. D*]{} [**80**]{}, 024037 (2009); arXiv:0905.1325v4 \[hep-th\].

Antonio De Felice, Shinji Tsujikawa, [*f(R) theories*]{}, [*Living Rev. Rel.*]{} [**13**]{}, 3 (2010); arXiv:1002.4928 \[gr-qc\].
Thomas P. Sotiriou and Valerio Faraoni, [*f(R) Theories of Gravity*]{}, [*Rev. Mod. Phys.*]{} [**82**]{}, 451 (2010); arXiv:0805.1726 \[gr-qc\].
Sumanta Chakraborty and Soumitra SenGupta, [*Solving higher curvature gravity theories*]{}, [*Eur. Phys. J. C*]{} [**76**]{}, 552 (2016); arXiv:1604.05301v2 \[gr-qc\].
Salvatore Capozziello, Mariafelicia De Laurentis, [*Extended Theories of Gravity*]{}, [*Phys. Rept.*]{} [**509**]{}, 167 (2011); arXiv:1108.6266v2 \[gr-qc\].

S. Capozziello, Vincenzo F. Cardone, A. Troisi, [*Reconciling dark energy models with f(R) theories*]{}, [*Phys. Rev. D*]{} [**71**]{}, 043503 (2005); arXiv:astro-ph/0501426v1.

Timothy Clifton, Pedro G. Ferreira, Antonio Padilla, Constantinos Skordis, [*Modified Gravity and Cosmology*]{}, [*Phys. Rept.*]{} [**513**]{}, 1 (2012); arXiv:1106.2476v3 \[astro-ph.CO\].
Petr Horava, [*Quantum Gravity at a Lifshitz Point*]{}, [*Phys. Rev. D*]{} [**79**]{}, 084008 (2009); arXiv:0901.3775v2 \[hep-th\].

Jorge Bellorin, Alvaro Restuccia, [*On the consistency of the Horava Theory*]{}, Int. J. Mod. Phys. [**D21**]{}, 1250029 (2012); arXiv:1004.0055 \[hep-th\].

J. Ovalle, [*Braneworld stars: anisotropy minimally projected onto the brane*]{}, in [*Gravitation and Astrophysics*]{} (ICGA9), Ed. J. Luo, World Scientific, Singapore, 173-182 (2010); arXiv:0909.0531v2 \[gr-qc\].
L. Randall and R. Sundrum, [*A Large mass hierarchy from a small extra dimension*]{}, [*Phys. Rev. Lett*]{}. [**83**]{}, 3370 (1999); arXiv:hep-ph/9905221v1.
L. Randall and R. Sundrum, [*An Alternative to compactification*]{}, [*Phys. Rev. Lett*]{} [**83**]{}, 4690 (1999); arXiv:hep-th/9906064v1.
Roberto Casadio, Jorge Ovalle, Roldao da Rocha, [*The Minimal Geometric Deformation Approach Extended*]{}, Class. Quantum Grav. [**32**]{}, 215020 (2015); arXiv:1503.02873v2 \[gr-qc\].

J. Ovalle, [*Extending the geometric deformation: New black hole solutions*]{}, Int. J. Mod. Phys. Conf. Ser. [**41**]{}, 1660132 (2016); arXiv:1510.00855v2 \[gr-qc\].

J. Ovalle, [*Searching Exact Solutions for Compact Stars in Braneworld: a conjecture*]{}, [*Mod. Phys. Lett. A*]{} [**23**]{}, 3247 (2008); arXiv:gr-qc/0703095v3.
J. Ovalle, [*Non-uniform Braneworld Stars: an Exact Solution*]{}, [*Int. J. Mod. Phys. D*]{} [**18**]{}, 837 (2009); arXiv:0809.3547 \[gr-qc\].

J. Ovalle, [*The Schwarzschild’s Braneworld Solution*]{}, [*Mod. Phys. Lett. A*]{} [**25**]{}, 3323 (2010); arXiv:1009.3674 \[gr-qc\].

J. Ovalle, [*Effects of density gradients on brane-world stars*]{}, in [*Proceedings of the Twelfth Marcel Grossmann Meeting on General Relativity*]{}, eds. Thibault Damour, Robert T. Jantzen and Remo Ruffini, ISBN 978-981-4374-51-4 (World Scientific, Singapore, 2012), p. 2243-2245.

N. Dadhich, R. Maartens, P. Papadopoulos, V. Rezania, [*Black holes on the brane*]{}, Phys. Lett. B [**487**]{}, 1-6 (2000); arXiv:hep-th/0003061v3.

R. Casadio, J. Ovalle, [*Brane-world stars and (microscopic) black holes*]{}, [*Phys. Lett. B*]{} [**715**]{}, 251 (2012); arXiv:1201.6145 \[gr-qc\].

R. Casadio, J. Ovalle, [*Brane-world stars from minimal geometric deformation, and black holes*]{}, [*Gen. Relat. Grav.*]{} [**46**]{}, 1669 (2014); arXiv:1212.0409v2 \[gr-qc\].
J. Ovalle, F. Linares, [*Tolman IV solution in the Randall-Sundrum Braneworld*]{}, [*Phys. Rev. D*]{} [**88**]{}, 104026 (2013); arXiv:1311.1844v1 \[gr-qc\].

J. Ovalle, F. Linares, A. Pasqua, A. Sotomayor, [*The role of exterior Weyl fluids on compact stellar structures in Randall-Sundrum gravity*]{}, [*Class. Quantum Grav.*]{} [**30**]{}, 175019 (2013); arXiv:1304.5995v2 \[gr-qc\].

László Á. Gergely, [*Friedmann branes with variable tension*]{}, Phys. Rev. D [**78**]{}, 084006 (2008); arXiv:0806.3857v3 \[gr-qc\].

R. Casadio, J. Ovalle, R. da Rocha, [*Black Strings from Minimal Geometric Deformation in a Variable Tension Brane-World*]{}, [*Class. Quantum Grav.*]{} [**30**]{}, 175019 (2014); arXiv:1310.5853 \[gr-qc\].

J. Ovalle, L.A. Gergely, R. Casadio, [*Brane-world stars with solid crust and vacuum exterior*]{}, [*Class. Quantum Grav.*]{} [**32**]{}, 045015 (2015); arXiv:1405.0252v2 \[gr-qc\].

R. Casadio, J. Ovalle, R. da Rocha, [*Classical Tests of General Relativity: Brane-World Sun from Minimal Geometric Deformation*]{}, [*Europhys. Lett.*]{} [**110**]{}, 40003 (2015); arXiv:1503.02316 \[gr-qc\].

R. T. Cavalcanti, A. Goncalves da Silva, Roldao da Rocha, [*Strong deflection limit lensing effects in the minimal geometric deformation and Casadio–Fabbri–Mazzacurati solutions*]{}, Class. Quantum Grav. [**33**]{}, 215007 (2016); arXiv:1605.01271v2 \[gr-qc\].

Roberto Casadio, Roldao da Rocha, [*Stability of the graviton Bose-Einstein condensate in the brane-world*]{}, Phys. Lett. B [**763**]{}, 434 (2016); arXiv:1610.01572 \[hep-th\].

Roldao da Rocha, [*Dark SU(N) glueball stars on fluid branes*]{}, arXiv:1701.00761 \[hep-ph\].

Roldao da Rocha, [*Black hole acoustics in the minimal geometric deformation of a de Laval nozzle*]{}, arXiv:1703.01528 \[hep-th\].

D. Lovelock, J. Math. Phys. [**12**]{}, 498 (1971).

R. Maartens, [*Brane-world gravity*]{}, Living Rev. Rel. [**7**]{} (2004).

R. Maartens, K. Koyama, [*Brane-world gravity*]{}, arXiv:1004.3962v1 \[hep-th\].

T. Shiromizu, K. Maeda and M. Sasaki, [*The Einstein Equations on the 3-Brane World*]{}, [*Phys. Rev. D*]{} [**62**]{}, 024012 (2000); arXiv:gr-qc/9910076v3.

C. Germani, R. Maartens, [*Stars in the brane-world*]{}, Phys. Rev. D [**64**]{}, 124010 (2001); arXiv:hep-th/0107011v3.

Tiberiu Harko, Matthew J. Lake, [*Null fluid collapse in brane world models*]{}, Phys. Rev. D [**89**]{}, 064038 (2014); arXiv:1312.1420v3 \[gr-qc\].

Francisco X. Linares, Miguel A. Garcia-Aspeitia, L. Arturo Ureña-Lopez, [*Stellar models in Brane Worlds*]{}, Phys. Rev. D [**92**]{}, 024037 (2015); arXiv:1501.04869v1 \[gr-qc\].

R. Casadio, L. Mazzacurati, [*Bulk shape of brane world black holes*]{}, Mod. Phys. Lett. A [**18**]{}, 651-660 (2003); arXiv:gr-qc/0205129v2.

R. da Rocha and J. M. Hoff da Silva, [*Black string corrections in variable tension brane-world scenarios*]{}, Phys. Rev. D [**85**]{}, 046009 (2012); arXiv:1202.1256v1 \[gr-qc\].

K. Koyama and R. Maartens, [*Structure formation in the DGP cosmological model*]{}, JCAP [**0601**]{}, 016 (2006); arXiv:astro-ph/0511634v1.

A. Viznyuk and Y. Shtanov, [*Spherically symmetric problem on the brane and galactic rotation curves*]{}, Phys. Rev. D [**76**]{}, 064009 (2007).

R. Casadio, A. Fabbri and L. Mazzacurati, [*New black holes in the brane world?*]{}, Phys. Rev. D [**65**]{}, 084040 (2002); arXiv:gr-qc/0111072.

J. Ovalle, R. Casadio and A. Sotomayor, [*Searching for modified gravity: a conformal sector?*]{}; arXiv:1702.05580 \[gr-qc\].
[^1]: jovalle@usb.ve
[^2]: casadio@bo.infn.it
[^3]: adrian.sotomayor@uantof.cl
[^4]: Of course, any deviation from GR/Newtonian theory at i) very short distances or ii) beyond the Solar System scale is welcome, as long as it can deal with the quantum problem or the dark matter problem.
[^5]: We use units with $G$ the 4-dimensional Newton constant, $k^{2}=8\,\pi \,G$, and $\Lambda$ the 4-dimensional cosmological constant.
[^6]: There is a MGD solution in the case of a dust cloud, with $p=0$, but we will not consider it in the present work.
---
abstract: 'Safety in autonomous systems has been mostly studied from a human-centered perspective. Besides the loads they may carry, autonomous systems are also valuable property, and self-preservation mechanisms are needed to protect them in the presence of external threats, including malicious robots and antagonistic humans. We present a biologically inspired risk-based triggering mechanism to initiate self-preservation strategies. This mechanism considers environmental and internal system factors to measure the overall risk at any moment in time, to decide whether behaviours such as fleeing or hiding are necessary, or whether the system should continue on its task. We integrated our risk-based triggering mechanism into a delivery rover that is being attacked by a drone and evaluated its effectiveness through systematic testing in a simulated environment in Robot Operating System (ROS) and Gazebo, with a variety of different randomly generated conditions. We compared the use of the triggering mechanism and different configurations of self-preservation behaviours to not having any of these. Our results show that triggering self-preservation increases the distance between the drone and the rover for many of these configurations, and, in some instances, the drone does not catch up with the rover. Our study demonstrates the benefits of embedding risk awareness and self-preservation into autonomous systems to increase their robustness, and the value of using bio-inspired engineering to find solutions in this area.'
author:
- 'Sing-Kai Chiu, Dejanira Araiza-Illan, and Kerstin Eder[^1]'
bibliography:
- 'references.bib'
title: |
Risk-based Triggering of\
Bio-inspired Self-Preservation to\
Protect Robots from Threats
---
Introduction
============
Autonomous systems such as delivery drones, self-driving cars and robotic assistants are becoming an affordable reality in our daily life. Safety aspects so far have been studied from a human-centered perspective, i.e. keeping people and people’s property safe, exemplified by safety standards for robots that interact and collaborate with people (e.g. ISO/TS 15066:2016 Robots and robotic devices – Collaborative robots). Nonetheless, as robots and autonomous systems are also valuable property, and so are the loads they carry, they will need to look after their own safety if possible; i.e. they will need self-preservation mechanisms in the presence of external threats, such as vandalism and theft [@Bruscic2015; @Salvini2010].
Nature has evolved a range of strategies to survive in a dangerous environment, including morphological, ecological and behavioural adaptations. Animals utilize multiple environmental cues to assess whether they are at risk [@Stankowich2005]. The plasticity to exhibit behaviours in response to a potential threat is crucial for survival. Anti-predatory strategies with no detrimental effects on the predator, such as taking refuge, and last resort fleeing mechanisms such as protean flight, provide a source of bio-inspired behaviour for safe robotic threat avoidance, as they ensure safety for both the robot and its antagonist.
Although many strategies such as stealth navigation [@Tews2004] and fleeing behaviours have been designed and implemented for mobile autonomous systems [@Araiza2012; @Curiac2015] to avoid dangerous encounters, mechanisms to trigger one or several of these self-preservation strategies to achieve an adequate and timely response to the threats still need to be developed. In nature, the instant of evasion initiation depends on many biological and environmental factors [@Cooper2010; @Domenici2011]. How can we use this knowledge for the design of more competent and fully autonomous systems, able to respond to threats towards robust self-preservation?
In this paper, we propose a novel biologically inspired mechanism that emulates environmental and biological evasion initiation factors, to trigger self-preservation response behaviours based on a risk analysis of the dangerous situation. We demonstrate the construction and implementation of such a mechanism through a case study consisting of a delivery rover and an attacking drone. To evaluate the proposed risk-based triggering mechanism within a cost-effective realistic framework, we implemented a simulator in the Robot Operating System (ROS) [^2] and the 3D physics simulator Gazebo [^3]. In a simulation, the drone pursues the delivery rover either persistently or constrained within a time bound. The rover tries to avoid theft or damage by choosing from a variety of predefined response behaviours such as fleeing or seeking refuge, once it has evaluated the risk in the environment in the context of its internal state.
We compared the use of the triggering mechanism and different configurations of response behaviours to not using it at all, i.e. a rover that is unaware of the risk and cannot trigger self-preservation responses. Our results show that, overall, the triggering mechanism coupled with self-preservation responses has the potential to increase the rover’s success in reaching a delivery location, or at least the distance between the threat and the rover. This demonstrates the benefits of embedding risk awareness and self-preservation strategies into autonomous systems to increase their robustness, and the usefulness of employing bio-inspired engineering solutions towards achieving true autonomy.
Related Work
============
Anti-predator individual mechanisms are divided into different categories: detection avoidance, behavioural vigilance, warning signals, defensive adaptations, and last resort behaviours [@Caro2005]. Detection avoidance and defensive adaptations comprise morphological traits such as crypsis (matching the background of the environment), weaponry on the body (e.g. spines) and the release of chemicals [@Barnett2007] to conceal the animal’s presence or to deceive and mislead predators [@Caro2014], as well as behaviours such as crouching to conceal the body, seeking refuge [@Martin1999], mobbing and distraction. Among warning signals, vocal signals alert other animals to a predator’s presence, whereas displays of coloration advertise potential chemical defences to dissuade predators. Morphological adaptations are difficult to implement within the design of robots, although some are emerging, e.g. robots that match their background [@Wang2016]. Avoidance and defensive behaviours do not negatively impact on the safety of the antagonist.
Last resort behaviours involve increasing the distance between prey and predator. Examples are protean behaviour, i.e. fleeing in a zigzag (irregular) manner [@Humphries1970], and freezing (immobility), with extreme forms such as thanatosis (feigning death) and autotomy (leaving a limb behind). Fleeing, freezing and proteanism are well suited to autonomous navigation tasks [@Araiza2012; @Curiac2015].
Animals need to recognize the risk of predation. Vigilance is a behavioural adaptation where animals alternate between foraging and scanning for potential threats [@Caro2005]. Factors and cues such as predator size, approach velocity, perceived sounds, or physical weaponry, influence the choice of response behaviours once a threat has been detected [@Amo2004; @Chivers2014; @Helfman1989; @Smith2001; @Stankowich2005; @Stankowich2006]. As with animals, basic capabilities to assess risk from sensed environmental threats are necessary for robot autonomy.
Risk assessment procedures provide a systematic approach to guide developers in creating autonomous robots that are safe and dependable from a human-centric perspective (i.e. for safe human-robot interactions) at design time [@MartinG2010; @Woodman2012; @Dogramadzi2014; @Rezazadegan2015]. Environmental risk analyses can be adopted at runtime, e.g. as on-line risk monitors, to control the execution of self-preservation strategies, and even to trigger adaptation and learning towards dealing with threats in the environment, as in [@Arcaini2015]. These domains, nonetheless, could benefit from considering biologically inspired mechanisms for efficient self-preservation responses, as well as risk measures and factors.
This paper proposes such a bio-inspired runtime self-preservation mechanism to trigger different response behaviours according to perceived threats from the environment. Selecting a response behaviour might mean giving up on other behaviours, such as the delivery of a package or reaching a final destination, either in the short or the longer term. The decision to trigger self-preservation behaviours is critical. A mechanism is needed to assess whether and when the danger from the environment implies a greater risk, and consequently the potential for greater costs and losses, than not reacting to it.
Mechanism to Trigger Self-preservation Behaviours According to Threats {#sc:mechanism}
======================================================================
The threats and dangerous situations in the environment that may affect an autonomous system differ widely, depending on the system’s application. Hazard analysis, as part of a rigorous and systematic risk assessment, involves customers, stakeholders and system designers in the identification and evaluation of the relevant threats and dangers, taking into consideration severity of the harm and the likelihood of it occurring, which results in a risk rating, from low, via moderate and high, to extreme. We assume that a set of possible threats has been identified using such a process, and that system designers have equipped the autonomous system with means, including sensing and real-time processing, to detect these in a timely manner.
For example, the analysis in Table \[table:analysis\] shows possible generic threats with their risk rating for a delivery rover, according to some hazard analysis, for different types of environments, together with the bio-inspired self-preservation response behaviours to mitigate these, such as fleeing, seeking refuge, thanatosis and autotomy. Physical harassment by small animals or children may not pose much of a threat, and adequate responses would include moving away or shutting down for some time (an implementation of thanatosis), in urban environments. If the rover is likely to be stolen with its contents, a distraction could be achieved by safely releasing the parcel it carries whilst fleeing (an implementation of autotomy). A cross is used to indicate a potentially beneficial response behaviour for the combination of threat and environment.
-- -- ---------------- ------- -------------- --------
Urban Open terrain Indoor
Fleeing X
Seeking refuge X X X
Thanatosis X X X
Autotomy
Fleeing X
Seeking refuge X X X
Thanatosis
Autotomy X X
Fleeing X X
Seeking refuge X X X
Thanatosis
Autotomy
Fleeing X
Seeking refuge X X X
Thanatosis
Autotomy X X X
Fleeing X
Seeking refuge X X X
Thanatosis
Autotomy X X X
-- -- ---------------- ------- -------------- --------
: Analysis of environmental threats and suitable response behaviours in different environments for a delivery rover \[table:analysis\]
Qualitative processes to grade the risk of hazards provide metrics to classify their consequences according to their severity and likelihood of occurrence [@Woodman2012]. For example, [@Woodman2012] shows a risk classification matrix based on the one in the safety standard IEC 61508 'Functional Safety of Electrical/Electronic/Programmable Electronic Safety-related Systems', in which four risk classes are possible, from the most severe (Class I) to the least severe (Class IV). In our proposed mechanism, we have adapted these qualitative processes to compute a measure that triggers pertinent response behaviours against threats in the environment.
Following an analysis like the one in Table \[table:analysis\], where adequate self-preservation behaviours are chosen as responses to particular threats, the next step is the implementation of a mechanism to trigger the start of such responses, once the risk level is assessed and deemed to be at the corresponding level. We propose the computation of a quantitative measure of risk with respect to the hazards in the environment, and with respect to other system-related internal factors that should be accounted for in terms of system safety; the latter emulate internal biological factors that influence the initiation of defensive mechanisms in animals. We consider the existence of $N$ risk factors derived from environment sensing information collected by an autonomous system, which indicate the type of hazard or threat from the environment towards the system, and hence its risk rating, and $M$ further factors that assess relevant data about the current state of the system (e.g. battery life, distance to the destination, proximity to good users). Each factor is evaluated through a metric $r_i$, $i = 1, \ldots , N, N+1,\ldots, N+M$, $r_i \in \mathbb{R}$, a function over measured or sensed system variables $\bar{x}=[x_1,\ldots,x_j]$ that produces a score, i.e. $r_i: \bar{x} \rightarrow \mathbb{R}$. An overall risk score $r_{TOTAL}$ can be computed as the (weighted) accumulation of all these $r_i$ factors, e.g. $$r_{TOTAL}= \sum_{i=1}^{N+M} w_i\cdot r_i,$$ to provide a mapping between a level of threat and a response $a \in \mathcal{A}$, i.e. $r: \mathbb{R} \rightarrow \mathcal{A}$, where $\mathcal{A}$ is the set of all implemented possible response behaviours, such as fleeing or freezing (thanatosis).
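As a concrete illustration, the weighted accumulation above can be sketched in a few lines of Python; the function name and its argument layout are hypothetical, not part of any robot framework:

```python
# Minimal sketch of the weighted risk accumulation r_TOTAL; the
# function name and argument layout are illustrative assumptions.
def total_risk(scores, weights):
    """Compute r_TOTAL = sum_i w_i * r_i over the N + M factor scores."""
    assert len(scores) == len(weights)
    return sum(w * r for w, r in zip(weights, scores))
```

For instance, with four equally weighted factors, `total_risk([0.4, 0.1, 0.2, 0.05], [0.25] * 4)` yields the scalar risk that is then compared against behaviour thresholds.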
Case Study {#sc:casestudy}
==========
As a case study to evaluate the proposed risk-based self-preservation response triggering mechanism, we continue with the delivery rover example, pursued by an autonomous drone. Three particular scenarios from Table \[table:analysis\] were employed to create a risk scoring model, for which environmental and internal factors to sense and measure were derived, to compute a risk rating as explained in Section \[sc:mechanism\]. Additionally, these scenarios were used to choose and implement pre-defined response behaviours to be triggered according to the computed risk, by the response triggering mechanism:
1. The drone is at a long distance from the rover, where attempts to hack the rover’s control towards stealing the delivery consignment can be made. Fleeing has been chosen as the rover’s response behaviour by the designer.
2. The drone is harassing the rover at a closer distance, for which fleeing with proteanism could provide means to confuse the drone.
3. The drone is seeking to damage the rover, approaching until physical contact is made, for which refuge against the drone needs to be sought.
Note that as the distance between the rover and the drone decreases, the intentions of the drone might become more sinister and the perceived risk of damage to the rover increases accordingly.
After designing the risk scoring model for the triggering mechanism, a simulator was implemented in ROS and Gazebo. We used available robot models corresponding to real hardware platforms, to provide realism and validity to the experiments, at a computational cost.
Instantiation of the Triggering Mechanism for the Case Study {#scc:instmechanism}
------------------------------------------------------------
According to the scenarios, four main environmental and internal factors have been considered for the mechanism to trigger self-preservation responses: the perceived distance between the rover and the drone, the perceived drone sound, the perceived drone speed, and the rover's battery life, i.e. $N=3$ and $M=1$. Each of these cues is considered to have equal impact on the measured total risk $r_{TOTAL}$. In practice, different scenarios may require a different weighting of the risk factors, and a different number of environmental and internal cues, depending on the environment and on what an autonomous system can detect and sense. The total risk $r_{TOTAL}$ is computed as the accumulation of the relevant individual risk metrics (from the distance $r_d$, sound $r_p$, speed $r_v$ and battery life $r_b$, respectively), each weighted by 0.25, $$r_{TOTAL} = 0.25r_d + 0.25r_p + 0.25r_v+ 0.25r_b.$$
Consider the Euclidean distance between the rover in location $(x,y,z)$ and the drone in location $(x_d,y_d,z_d)$ (all in meters) in the 3D space at time $t$, defined as $$\label{eq:distance}
d(t) = \sqrt{(x - x_d )^2 + (y - y_d )^2 + (z - z_d)^2}.$$ We assign a score $s(t)$ that is inversely proportional to the distance $d(t)$; it increases if the drone approaches the rover, and decreases if the rover moves away, $$s(t)=\frac{100}{d(t)}.$$ We then compute five consecutive distance scores, i.e. samples $i=1,\ldots,5$ at times $t_1,\ldots,t_5$ (e.g. every second), and consider the gradient of these samples, $$\nabla s=\frac{\sum_{i=1}^{5}(s(t_i)-\mu_s)(t_i-\mu_t)}{\sum_{i=1}^{5}(s(t_i)-\mu_s)^2},$$ where $\mu_s$ is the average of the distance scores over the samples $i=1,\ldots,5$, and $\mu_t=3$ (the average of the five sampling times). If the gradient is positive, the rover is at greater risk of an attack, as the drone has moved closer; if the gradient is negative, the rover is no longer at as high a risk as it was before. Consequently, we propose the computation of the risk given a distance change through the metric $$r_d = \left\lbrace \begin{array}{ll}\beta_d \nabla s & \qquad \text{if the gradient is positive} \\ 0 & \qquad \text{if the gradient is negative} \end{array} \right.,$$ where $\beta_d$ is a coefficient that normalizes $r_d$ to a value between 0 and 1.
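The same gradient-based construction recurs in the sound and velocity metrics below, so it can be captured once in a small Python sketch that follows the expression above term by term; the function `gradient_risk` and its signature are illustrative assumptions, not code from the simulator:

```python
# Illustrative implementation of the gradient-based risk metric; it
# follows the text's expression literally (numerator and denominator
# as written above), clipping negative trends to zero risk.
def gradient_risk(samples, times, beta):
    mu_s = sum(samples) / len(samples)
    mu_t = sum(times) / len(times)
    num = sum((s - mu_s) * (t - mu_t) for s, t in zip(samples, times))
    den = sum((s - mu_s) ** 2 for s in samples)
    if den == 0:
        return 0.0  # constant scores: no trend, hence no added risk
    gradient = num / den
    return beta * gradient if gradient > 0 else 0.0
```

With scores sampled once per second, `gradient_risk([s1, s2, s3, s4, s5], [1, 2, 3, 4, 5], beta_d)` gives $r_d$; the same call with pressure or velocity samples gives $r_p$ and $r_v$.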
The sound pressure $p$ at time $t$ is calculated from the measured distance $d(t)$ defined in (\[eq:distance\]), $$p(t) = \frac{60}{d(t)}.$$ The sound pressure increases if the drone gets closer to the rover. Note that this measure does not take into account how sound reflects from surfaces, nor the presence of objects in between the origin of sound and the sensor. The risk given the sound pressure change is also computed from the gradient of five pressure samples, $$r_p=\left\lbrace \begin{array}{ll}\beta_p \frac{\sum_{i=1}^{5}(p(t_i)-\mu_p)(t_i-\mu_t)}{\sum_{i=1}^{5}(p(t_i)-\mu_p)^2} & \qquad \text{if the gradient is positive} \\ 0 & \qquad \text{if the gradient is negative} \end{array} \right.,$$ where $\mu_p$ is the average over the pressure samples, and $\beta_p$ normalizes $r_p$ to a value between 0 and 1.
To calculate an approximation of the relative approach velocity, a sample of the distance $d(t)$ in meters is taken every two seconds (where $d(t_2)$ is the most recent sample, and $d(t_1)$ is the previous sample), and we use the standard definition of the velocity as the difference of the distance over a period of time (in this case 2s), $$v(t)=\frac{d(t_2)-d(t_1)}{2}.$$ The risk, given the velocity change, is also computed from the gradient of five approximations, $$r_v=\left\lbrace \begin{array}{ll}\beta_v \frac{\sum_{i=1}^{5}(v(t_i)-\mu_v)(t_i-\mu_t)}{\sum_{i=1}^{5}(v(t_i)-\mu_v)^2}& \qquad \text{if the gradient is positive} \\ 0 & \qquad \text{if the gradient is negative} \end{array} \right.,$$ where $\mu_v$ is the average over five velocity approximations, and $\beta_v$ normalizes $r_v$ to a value between 0 and 1. In general, the velocity of the drone remains constant once the maximum has been reached, with changes only at the initial lift from the ground, and when performing a rotation to face the rover’s direction.
Monitoring the battery life is analogous to biological internal factors such as hunger or health status, which influence the kind of triggered anti-predator strategies. A battery depletion score at time $t$ is computed considering a total capacity of $B_{TOTAL}$ and a linear discharge rate $\phi$, $$b(t) = 100-\frac{B_{TOTAL}-t \phi}{6}.$$ The risk given the battery life is calculated according to the energy level, $$r_b=\beta_b b(t),$$ where $\beta_b$ normalizes $r_b$ to a value between 0 and 1.
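Under this linear discharge model the battery factor reduces to two one-line functions; the defaults below take $B_{TOTAL}=600$, $\phi=1$ and $\beta_b=1/100$ from the experimental setup, and the function names are assumptions for the sketch:

```python
# Sketch of the battery-life risk factor under the linear discharge
# model; default constants follow the experimental setup section.
def battery_level(t, b_total=600.0, phi=1.0):
    """Depletion score b(t) = 100 - (B_TOTAL - t * phi) / 6."""
    return 100.0 - (b_total - t * phi) / 6.0

def battery_risk(t, beta_b=1.0 / 100.0):
    """r_b = beta_b * b(t), in [0, 1] for the default capacity."""
    return beta_b * battery_level(t)
```

With these constants, the risk starts at 0 and rises linearly to 1 as the battery drains over 600 seconds.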
Based on the computed total risk $r_{TOTAL}$, different sets of response behaviours can be programmed, to be triggered when risk thresholds are met. For example, the rover decides to pursue its delivery goal if $r_{TOTAL} < \gamma_{flee}$, flee towards the delivery goal if $r_{TOTAL} \geq \gamma_{flee}$, flee with proteanism if $r_{TOTAL} \geq \gamma_{prot}$, or seek refuge if $ r_{TOTAL} \geq \gamma_{ref}$, with $\gamma_{flee} \leq \gamma_{prot} \leq \gamma_{ref}$ as thresholds of risk. Alternatively, the rover could perform only one response behaviour, e.g. fleeing when $ r_{TOTAL} \geq \gamma_{flee}$.
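The threshold logic just described can be made concrete as follows; the behaviour labels and the illustrative thresholds $0.2 \leq 0.3 \leq 0.4$ are assumptions for the sketch (the experiments use their own values):

```python
# Hedged sketch of threshold-based response selection; behaviour
# labels and default thresholds are illustrative assumptions.
def select_behaviour(r_total, g_flee=0.2, g_prot=0.3, g_ref=0.4):
    """Highest threshold met wins, per g_flee <= g_prot <= g_ref."""
    if r_total >= g_ref:
        return "seek_refuge"
    if r_total >= g_prot:
        return "flee_protean"
    if r_total >= g_flee:
        return "flee"
    return "continue_task"
```

Configuration B from the experiments corresponds to collapsing the upper two branches into plain fleeing.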
This risk measuring model reflects the scenarios and the designer’s intentions regarding response strategies to avoid financial loss and damage. After executing a response behaviour for some time, the total risk $r_{TOTAL}$ is recomputed to determine if a change in behaviour is needed, i.e. if the rover should continue with the original task (e.g. reaching a delivery goal), try another response behaviour, or continue with the same response, as per the design.
Implementation of the Simulator in ROS and Gazebo
-------------------------------------------------
The ROS framework offers a platform to develop modular software for robots and autonomous systems, consisting of ‘nodes’ (concurrent programs in e.g. Python and C++), ‘topics’ (broadcast messages) and ‘services’ (one-to-one communication). ROS allows distributed computation through a server-client architecture. Gazebo is a 3D physics simulator compatible with ROS. Many robotic platforms are freely available in simulation for ROS and Gazebo. We constructed a simulator that uses the Clearpath Robotics Jackal as the rover[^4], and the Hector quadrotor[^5] as the drone. An example of both robots visualized in a Gazebo simulation is shown in Figure \[fig:robots\].
![Visualization of the 3D simulation of the Jackal rover and the Hector Quadrotor in Gazebo. \[fig:robots\]](roverdrone.png){width="70.00000%"}
The structure of our simulator is shown in Figure \[fig:simulator\]. The Drone and Rover Model Nodes (in dark gray) comprise the Gazebo 3D models, and the low-level motion control for the actuators (e.g. rotation of the wheels). The implemented bio-inspired risk-based triggering mechanism (as a single node) is shown in light gray, with data inputs from the sensor nodes, and outputs to the Rover Navigation Nodes. Other developed nodes (Drone and Rover Navigation and Sensors) are shown in white. All the developed nodes were implemented in Python.
![Structure of the ROS and Gazebo simulator for the case study, comprising a delivery rover and a drone trying to steal from, or vandalize the rover. A bio-inspired risk-based triggering mechanism selects adequate self-preservation response behaviours in the rover.\[fig:simulator\]](rosdiagram.png){width="70.00000%"}
The Drone Navigation Nodes control the quadrotor’s linear and angular velocity according to readings of the rover’s current location in the Gazebo model, aiming to minimize the distance between itself and the rover, $d(t)$. The drone indicates if the rover has managed to hide or reach its delivery goal, or if it has been successfully reached, i.e. if the distance $d(t)$ is smaller than a minimal threshold, $d_{capture}$. The drone rotates over the vertical axis at an angular speed $\omega_d$ to change its orientation towards the rover, and at the linear speed of $v_d$ to pursue the rover. A “persistent” drone has an infinite amount of battery charge, and will pursue the rover until a drone-rover interaction is finished. A “cautious” drone considers its finite battery charge, deciding to stop pursuit after some time has elapsed to be able to return to its base safely. Notice that the latter mode is more realistic than the former, as it represents a modelling refinement that considers individual costs that the threats in the environment would also need to consider.
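Stripped of ROS plumbing, the drone's pursuit logic amounts to turning toward the rover and advancing, with capture declared below a distance threshold. The following plain-Python sketch is an assumption about that logic, not the actual node code:

```python
import math

# Plain-Python stand-in for the Drone Navigation Nodes' control logic;
# function names and the simple saturating turn law are assumptions.
def pursuit_command(drone_xy, rover_xy, heading, v_d=0.5, w_d=0.4):
    """Return (linear, angular) velocity: advance at v_d, turn toward
    the rover at up to w_d rad/s."""
    bearing = math.atan2(rover_xy[1] - drone_xy[1],
                         rover_xy[0] - drone_xy[0])
    error = math.atan2(math.sin(bearing - heading),
                       math.cos(bearing - heading))  # wrap to [-pi, pi]
    return v_d, max(-w_d, min(w_d, error))

def captured(drone_xy, rover_xy, d_capture=0.15):
    """True when the planar drone-rover distance falls below d_capture."""
    return math.dist(drone_xy, rover_xy) < d_capture
```

A "cautious" drone would additionally stop issuing pursuit commands once its elapsed flight time exceeds a return-to-base budget.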
The Jackal rover has been programmed to navigate autonomously towards a delivery goal, a location $(x_g,y_g)$, by iteratively performing angle correction at a speed of $\omega$, followed by a linear displacement at a speed $v$. If a self-preservation response behaviour is triggered by the risk-based mechanism, the Rover Navigation Nodes execute a combination of fleeing (moving faster towards the delivery goal), fleeing with proteanism (following sub-goals with randomized orientation angles but avoiding the pursuer, as proposed in [@Araiza2012]), or seeking refuge (navigating towards a refuge in a fixed location), all of these at an increased linear speed of $2v$. By decoupling the navigation nodes from the triggering mechanism, the modular structure allows testing different complex self-preservation behaviours.
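Fleeing with proteanism, as used above, can be approximated by repeatedly picking sub-goals with a randomized heading biased away from the pursuer. This sketch of the idea from [@Araiza2012] uses hypothetical names and a uniform ±90° jitter as assumptions:

```python
import math
import random

# Illustrative protean sub-goal generator: head roughly away from the
# pursuer, with a randomized (irregular) turn; all names are assumptions.
def protean_subgoal(rover_xy, pursuer_xy, step=1.0, rng=random):
    away = math.atan2(rover_xy[1] - pursuer_xy[1],
                      rover_xy[0] - pursuer_xy[0])
    theta = away + rng.uniform(-math.pi / 2, math.pi / 2)
    return (rover_xy[0] + step * math.cos(theta),
            rover_xy[1] + step * math.sin(theta))
```

Calling this every navigation cycle yields the irregular, hard-to-predict path that characterizes protean fleeing, while still drifting away from the pursuer.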
Sensors Nodes emulate real sensing by reading data from the Gazebo models, such as the location of the drone, and through models of the rover’s internal state, such as the state of the battery charge. The sensing output is used by the risk-based metrics, embedded in the triggering mechanism node, to trigger adequate response behaviours.
Experiments and Results
=======================
Experiments in simulation were conducted to evaluate a self-preservation triggering mechanism presented in Section \[sc:mechanism\], and instantiated for the case study in Section \[scc:instmechanism\].
Setup
-----
Two different self-preservation configurations were tested in simulation. In configuration A, the rover chooses fleeing, proteanism or seeking refuge if $r_{TOTAL}$ exceeds the risk thresholds $\gamma_{flee} \leq \gamma_{prot} \leq \gamma_{ref}$, respectively. In configuration B, the rover chooses fleeing if $ r_{TOTAL} \geq \gamma_{flee}$. In configuration C, the rover has neither a triggering mechanism nor self-preservation behaviours. Two drone pursuit modes, persistent and cautious, were tested in combination with the three configurations described above.
We generated (pseudorandomly) 150 sets of initial locations for the rover, the drone, the hideaway, and the rover's delivery goal. The initial locations were restricted so that the rover and the drone were sufficiently far apart at the start of a simulation. Each of these initial location sets was applied to each configuration A to C, in combination with a persistent or cautious pursuer, for a total of $150 \times 3 \times 2$ simulations. This allowed a fair comparison of all the configurations A to C for the different kinds of pursuers. A simulation is run with each set, lasting an allowed maximum of 80 seconds (plus 20 seconds of launching overhead and 45 seconds of termination). The rover stops moving if it is reached by the drone, or if it safely reaches the delivery location.
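The rejection-sampling step used to keep the rover and the drone sufficiently far apart at the start of each run can be sketched as follows; the arena size, minimum separation and dictionary layout are illustrative assumptions, not the values used in the experiments:

```python
import math
import random

# Hypothetical generator of initial location sets with a minimum
# rover-drone separation, mimicking the experimental setup.
def sample_locations(n_sets=150, area=20.0, min_sep=8.0, seed=42):
    rng = random.Random(seed)
    def point():
        return (rng.uniform(-area, area), rng.uniform(-area, area))
    sets = []
    while len(sets) < n_sets:
        rover, drone = point(), point()
        if math.dist(rover, drone) < min_sep:
            continue  # rejected: starting positions too close
        sets.append({"rover": rover, "drone": drone,
                     "refuge": point(), "goal": point()})
    return sets
```

Fixing the seed makes the 150 location sets reproducible across the configurations being compared.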
Other setup parameters for the triggering mechanism comprised $\beta_p = \frac{1}{8}$, $\beta_d= \frac{1}{14}$, $\beta_v = \frac{1}{4}$, $\beta_b = \frac{1}{100}$, $B_{TOTAL}=600$, $\phi=1$, $\gamma_{flee}=0.2$, $\gamma_{prot}= 30$, and $\gamma_{ref}=40$. For the drone, we used $\omega_d=0.4$ rad/s, and $v_d=0.5$ m/s, with a $d_{capture}=0.15$ m. The rover navigates with $v=0.5$ m/s and $\omega=1.0$ rad/s.
The simulations ran on a PC with Intel 3230M 2.60 GHz CPU, 8 GB of RAM, 64-bit Ubuntu 14.04, ROS Indigo, and Gazebo 2.2.5. For each simulation, we collected the sets of initial parameters, type of triggered self-preservation strategy according to $r_{TOTAL}$, and conclusion of the encounter (distances and elapsed simulation time). All the logged data and examples of simulations with varied initial conditions and observed behaviours are openly available online [^6].
Results
-------
We considered the following success criteria during a simulation: reaching the consignment delivery location before capture (strong success); increasing the distance between the drone and the rover when not captured (success); and changing the outcome to reaching the delivery location with configurations A and B, compared to being captured with configuration C, for the same initial condition (relative success). We expected that, in general, the first two types of success would be more frequent in the simulations when using the triggering mechanism and the self-preservation behaviours than without using any self-preservation at all. We also expected that using configuration A or B would make the rover reach the delivery goal in instances where it would not without self-preservation. Furthermore, we expected that using self-preservation would grant more success when the rover was pursued by a cautious drone than by a persistent one, as the rover would have the opportunity to reach the delivery goal once the drone gave up.
**Persistent drone**

|                                                    | A (All)                       | B (Fleeing)                   | C (None)           |
|----------------------------------------------------|-------------------------------|-------------------------------|--------------------|
| Delivery goal reached (strong success)             | 116/150                       | 138/150$^\dagger$             | 138/150$^\dagger$  |
| Distance increased (success)                       | 97/118$^\ddagger$             | 114/138$^\ddagger$            | –                  |
| Rover was captured (strong failure)                | 32/150$^{*}$                  | 12/150                        | 11/150             |
| Not captured, goal not reached (inconclusive)      | 2/150                         | 0/150                         | 1/150              |
| Outcome changed to goal reached (relative success) | 8/11$^\circ$                  | 6/11$^\circ$                  | –                  |
| Outcome changed to capture                         | 29/138$^\bullet$              | 7/138                         | –                  |

**Cautious drone**

|                                                    | A (All)                       | B (Fleeing)                   | C (None)           |
|----------------------------------------------------|-------------------------------|-------------------------------|--------------------|
| Delivery goal reached (strong success)             | 143/150                       | 145/150$^\dagger$             | 145/150$^\dagger$  |
| Distance increased (success)                       | 87/144$^{\ddagger,\diamond}$  | 60/148$^{\ddagger,\diamond}$  | –                  |
| Rover was captured (strong failure)                | 6/150$^{*}$                   | 2/150$^\$$                    | 5/150              |
| Not captured, goal not reached (inconclusive)      | 1/150                         | 3/150                         | 0/150              |
| Outcome changed to goal reached (relative success) | 5/5                           | 5/5$^\#$                      | –                  |
| Outcome changed to capture                         | 6/145                         | 2/145$^\#$                    | –                  |

: Outcomes of 150 simulations per configuration (A: all behaviours, B: only fleeing, C: no self-preservation), under persistent and cautious drone pursuit \[table:results\]
Table \[table:results\] shows the number of simulations that were successful (according to the success criteria), inconclusive (i.e. by the end of the time limit the rover had neither been captured nor reached the delivery goal), or failed (i.e. the rover was captured), for a drone in the two pursuit modes (persistent or cautious), over 150 simulations with different initial locations (for the rover, drone, delivery goal and refuge), with and without the triggering mechanism and with different types of self-preservation behaviours. We also recorded which self-preservation strategies were triggered in each simulation, shown in Table \[table:strategies\], to confirm the correct functioning of the triggering mechanism.
The results show that, in general, the combination of the triggering mechanism and only fleeing (configuration B) is more successful than combining the triggering mechanism with the multiple anti-predator behaviours of configuration A, and than not reacting to the threat. We observed that seeking refuge sometimes led the rover to move closer to the drone. Additionally, we observed an oscillation between navigation objectives (moving towards a refuge or trying to reach the delivery goal) as the risk increased and decreased, which in some cases caused the rover to become 'stuck' in a particular segment of the environment, allowing the drone to get closer. These issues are reflected in the strong failure results (see $^*$ in Table \[table:results\]).
In terms of the different drone pursuit behaviours, persistent and cautious, the mechanism in configuration B was as strongly successful (i.e. it reached the delivery goal) as a rover without any self-preservation (see $^\dagger$ in Table \[table:results\]). Nonetheless, in terms of increased overall distance between the drone and the rover by the end of a simulation, any of the self-preservation configurations A or B achieved better results for a persistent drone, than for a cautious drone (see $^\ddagger$ in Table \[table:results\]), which was contrary to our expectations. The behaviours in configurations A or B are triggered for longer and at a higher frequency for a persistent drone, which leads to more instances of success than for a cautious drone. Furthermore, only fleeing (configuration B) for longer under a persistent drone threat is more efficient at increasing the distance between the rover and the drone, than a combination of self-preservation behaviours (configuration A). The opposite happens for a cautious drone, where configuration A outperforms configuration B (see $^\diamond$ in Table \[table:results\]). This highlights the usefulness of self-preservation behaviours that momentarily change the navigation goals (proteanism or seeking refuge) when the threats in the environment are limited by the management of their own resources.
A rover with configuration A was more successful than one with configuration B at changing the simulation outcomes to reaching the delivery goal for a persistent drone, for the same starting conditions where the rover would be captured with configuration C (see $^\circ$ in Table \[table:results\]). Nonetheless, new and more capture instances were introduced with configuration A (see $^\bullet$ in Table \[table:results\]). Only configuration B achieved some relative success, for a cautious drone (see $^\#$ in Table \[table:results\]), coupled to the most reduced strong failure results (see $^\$$ in Table \[table:results\]).
**Persistent drone**

|                                | A (All) | B (Only fleeing) |
|--------------------------------|---------|------------------|
| Use of simple fleeing          | 148/150 | 148/150          |
| Use of fleeing with proteanism | 59/150  | –                |
| Use of refuge seeking          | 25/150  | –                |
| No behaviours triggered        | 2/150   | 2/150            |

**Cautious drone**

|                                | A (All) | B (Only fleeing) |
|--------------------------------|---------|------------------|
| Use of simple fleeing          | 141/150 | 142/150          |
| Use of fleeing with proteanism | 41/150  | –                |
| Use of refuge seeking          | 2/150   | –                |
| No behaviours triggered        | 9/150   | 8/150            |

: Self-preservation strategies triggered over 150 simulations per configuration, under persistent and cautious drone pursuit \[table:strategies\]
The results in Table \[table:strategies\] show that the triggering of self-preservation behaviours indeed takes place in the majority of the simulations. Note that, in configuration A, different anti-predator behaviours were allowed per simulation. The smaller number of simulations in which protean fleeing and refuge seeking were triggered indicates that performing fleeing beforehand helps to reduce the risk.
Discussion
----------
As shown by the results in the previous section, the use of the triggering mechanism in combination with self-preservation behaviours was successful (i.e. increased the distance between the rover and the pursuer in more than half of the simulations with a variety of initial conditions) for a persistent pursuer, and also was strongly successful (i.e. allowed the rover to get to the delivery goal in more instances) for a cautious pursuer, compared to not reacting to the threats. Nonetheless, particular combinations of self-preservation behaviours were less strongly successful against a persistent pursuer, whereas against a cautious pursuer fleeing alone was less successful. Also, the relative success results were varied. These results, mixed with respect to our expectations, require further examination of combinations of fleeing and refuge seeking behaviours to provide a more conclusive evaluation of the triggering mechanism. Furthermore, anti-predator behaviours coupled with the triggering mechanism should be designed so that they are more effective than 'doing nothing'.
An element that influences the functioning of the triggering mechanism is the number and inter-relationships of the risk factors. Variations of the risk models in Section \[scc:instmechanism\], such as the use of different weights and coefficients, would need to be explored further. There are evidently trade-offs between avoiding an attack and achieving a successful delivery. Hence, suitable models and computation of the risk factors need to be explored further, e.g. multi-objective optimization. Additionally, more sophisticated mechanisms could be used to enhance the risk computation, such as prediction models for the drone.
Threats to the validity of the case study and its results include, besides the limited number of combinations of self-preservation behaviours and risk factors, the definition of 'success' used for the evaluation and result reporting. The selection of some success metrics or criteria over others has an impact on the reported results. Whereas considering only reaching the delivery goal as 'success' is intuitive, it leaves out other aspects of the encounter, such as significantly increasing the distance between the drone and the rover, or getting outside the drone's line of view. These latter aspects can also be considered successful encounters from the rover's perspective, and altogether provide a better picture of the effect of the triggering mechanism and the self-preservation behaviours, towards a more holistic evaluation methodology.
Conclusions and Future Work
===========================
We presented a biologically inspired risk-based triggering mechanism to initiate self-preservation strategies. This mechanism considers environmental and internal system factors to measure the overall risk at any moment in time, to decide whether behaviours such as fleeing or hiding are necessary, or whether the system should continue with its task. This emulates animal anti-predator behaviour initiation. The mechanism’s design is based on risk assessment methodologies for robotics design, complementing traditional human-centered safety analyses towards systems with more autonomy and self-preservation.
A case study was developed to evaluate such a triggering mechanism coupled with different self-preservation strategies, compared against not reacting to threats. In the case study, a delivery rover is attacked by a drone in a simulated environment in ROS and Gazebo, with a variety of different randomly generated conditions such as initial locations, and delivery goals.
Our study demonstrates the need for embedding risk awareness and self-preservation towards successful autonomous systems, and the usefulness of bio-inspired engineering solutions. In general, the triggering mechanism coupled with self-preservation strategies increases the distance between the threat of the drone and the rover. Nonetheless, some of the self-preservation behaviours lower the frequency of reaching the delivery goal.
As future work, an extensive study of combinations of adequate and optimized self-preservation behaviours is necessary to determine which actions lead to achieving a delivery objective while increasing the distance between the threat and the rover. Additionally, new risk metrics that consider more complex factors, such as probable future actions (i.e. prediction) of the threats, could be incorporated into the mechanism to obtain a more robust risk measure.
[**Acknowledgement:**]{} The work by D. Araiza-Illan and K. Eder was funded by the EPSRC project “Robust Integrated Verification of Autonomous Systems” (ref. EP/J01205X/1).
[^1]: Department of Computer Science, University of Bristol, United Kingdom. E-mails:
[^2]: http://www.ros.org/
[^3]: http://gazebosim.org/
[^4]: http://wiki.ros.org/Robots/Jackal
[^5]: http://wiki.ros.org/hector\_quadrotor
[^6]: https://github.com/riveras/self-preservation
---
author:
- 'C. Gadermaier$^{1}$, A. S. Alexandrov$^{2,1}$, V. V. Kabanov$^{1}$, P. Kusar$^{1}$, T. Mertelj$^{1}$, X. Yao$^{3}$, C. Manzoni$^{4}$, D. Brida$^{4}$, G. Cerullo$^{4}$, and D. Mihailovic$^{1}$'
title: 'Electron-phonon coupling in cuprate high-temperature superconductors determined from exact electron relaxation rates. Supplementary material.'
---
Sample preparation
------------------
The YBa$_{2}$Cu$_{3}$O$_{6.5}$ (YBCO) single crystal used here was grown by top-seeded solution growth using a Ba$_{3}$Cu$_{5}$O solvent[@Yao]. The as-grown single crystal was first annealed at 700 °C for 70 h with flowing oxygen and quenched down to room temperature. The La$_{1.85}$Sr$_{0.15}$CuO$_{4}$ (LSCO) single crystal was synthesised by a travelling-solvent-floating-zone method utilizing infrared radiation furnaces (Crystal system, FZ-T-4000) and annealed in oxygen gas under ambient pressure at 600 °C for 7 days[@sugai].
Femtosecond pump-probe set-up
-----------------------------
A detailed description of the set-up used is found in[@manzoni]. In a pump-probe experiment, a pump pulse excites the sample and the induced change in transmission or reflection of a delayed probe pulse monitors the relaxation behaviour. In the linear approximation $\frac{\Delta R}{R}$ directly tracks the electronic relaxation processes, and the time constants obtained from fits of its dynamics are the characteristic times of the underlying relaxation processes. In our data, this approximation is justified by two essential characteristics: (i) the $\frac{\Delta R}{R}$ amplitude is linear in the excitation intensity (see Figure 1a for LSCO), and (ii) the same decay times appear independently of the probe wavelength, only with different spectral weights of the individual components.
In order to resolve the dynamics of fast processes, very short pulses are necessary, since the instrumental response function is given by the cross-correlation between the pump and probe pulses. We use sub-10 fs probe pulses from an ultrabroadband (covering the spectral range from 500 to 700 nm) non-collinear optical parametric amplifier (NOPA) and $\thicksim$15 fs pump pulses from a narrower-band (wavelength-tunable, in our case centred at 530 nm) NOPA. The seed pulses for the NOPAs and the amplified pulses are steered and focussed exclusively with reflective optics to avoid pulse chirping.
![Block scheme of the experimental setup. BS: beam splitter. SHG: second harmonics generation. (redrawn from [@manzoni]).](FigS1){width="70mm"}
A schematic of the experimental apparatus is shown in Fig. 1. The laser source is a regeneratively amplified mode-locked Ti:sapphire laser (Clark-MXR Model CPA-1), delivering pulses at 1 kHz repetition rate with 780 nm centre wavelength, 150 fs duration, and 500 $\mu$J energy. Both NOPAs are pumped by the second harmonic of the Ti:sapphire laser, which is generated in a 1-mm-thick lithium triborate (LBO) crystal, cut for type-I phase matching in the XY plane ($\theta=90^{\circ}$, $\varphi=31.68^{\circ}$, Shandong Newphotons).
![Setup of the noncollinear optical parametric amplifier. BPF: short pass filter that transmits only the visible part of the white light continuum and cuts out the fundamental and infrared components. (redrawn from [@manzoni]).](FigS2){width="70mm"}
The ultrabroadband visible NOPA that generates the probe pulses has been described in detail before [@giulio]; a schematic is shown in Fig. 2. The white-light continuum seed pulses are generated by focusing a small fraction of the fundamental beam into a 1-mm-thick sapphire plate. Parametric gain is achieved in a 1-mm-thick BBO crystal, cut at $\theta=32^{\circ}$, which is the angle giving the broadest phase-matching bandwidth for the noncollinear type-I interaction geometry; a single-pass configuration is used to maximize the gain bandwidth. The amplified pulses have an energy of approximately 2 $\mu$J and peak-to-peak fluctuations of less than 5%.
The compressor for the ultrabroadband NOPA consists of two custom-designed double-chirped mirrors (DCMs), manufactured by ion-beam sputtering (Nanolayers GmbH), which are composed of 30 pairs of alternating SiO$_2$/TiO$_2$ quarter-wave layers in which both the Bragg wavelength and the layer duty cycle are varied from layer pair to layer pair. The DCMs introduce a highly controlled negative group delay (GD) over bandwidths approaching 200 THz, compensating for the GD of the NOPA pulses.
The narrower-bandwidth NOPA providing the pump pulses is built identically to the one described above, except that the amplified bandwidth is reduced by choosing a suitable non-optimum angle between the pump and signal beams (dashed path of the blue pump in Figure 2). The pulses are compressed by DCMs similar to those used for the broadband NOPA.
![Schematics of the experimental apparatus used for auto/ cross-correlation and pump-probe experiments. AF: attenuating filter. IF: interference filter. PD: photodiode. (redrawn from [@manzoni]).](FigS3){width="70mm"}
A schematic of the apparatus used for pulse characterization and pump-probe experiments (correlator in Fig. 1) is shown in Fig. 3. The delay line is formed by two 90° turning mirrors mounted on a precision translation stage with 0.1 $\mu$m positioning accuracy (Physik Instrumente GmbH, model M-511.DD), which corresponds to 0.66 fs time resolution. The two pulses are combined and focused on the sample by a silver spherical mirror (R = 200 mm). The non-collinear configuration makes it possible to spatially separate the pump and probe beams. Upon reflection from the sample, the probe beam is selected by an iris and steered to the detector, either a silicon photodiode preceded by a 10 nm spectral-width interference filter or an optical multichannel analyzer (OMA). The differential reflection (transmission) signal is obtained via synchronous detection (lock-in amplifier Stanford SR830 for the photodiode, custom-made software for the OMA) referenced to the modulation of the pump beam at 500 Hz by a mechanical chopper (Thorlabs MC1000). This allows detection of differential reflection ($\Delta$R/R) signals as low as 10$^{-4}$.
Estimate of the electron-electron relaxation time
-------------------------------------------------
The applicability of the FL theory has been somewhat controversial in cuprate superconductors. The recent unambiguous observation of de Haas–van Alphen oscillations [@hussey] has shown that the Fermi surface is almost cylindrical. In this quasi-two-dimensional case the electrons can be described as a FL if $r_{s}=a/a_{B}<37$, with $a$ the mean electron distance and $a_{B}$ the Bohr radius [@tanatar]. We determine $a=1/\sqrt{n}=\sqrt{2\pi}/k_{F}$, with $k_{F}=$ 7.4 nm$^{-1}$ from [@hussey]. Using $a_{B}=\hbar^{2}\epsilon_{eff}/m^{*}e^{2}$, with $m^{*}=4m_{e}$ and the effective dielectric constant $\epsilon_{eff}\approx30$, we obtain $r_{s}\approx1$, safely in the FL regime.
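As a sanity check, the $r_{s}$ estimate above can be reproduced numerically. This is a rough sketch: the physical constants are standard SI values, and the inputs ($k_{F}$, $m^{*}$, $\epsilon_{eff}$) are the ones quoted in the text.

```python
# Rough numerical check of the r_s estimate quoted above, using
# k_F = 7.4 nm^-1, m* = 4 m_e and eps_eff ~ 30 from the text.
import math

hbar = 1.054571e-34   # J s
m_e = 9.109383e-31    # kg
e = 1.602177e-19      # C
eps0 = 8.854188e-12   # F/m

k_F = 7.4e9           # m^-1
m_star = 4 * m_e
eps_eff = 30.0

a = math.sqrt(2 * math.pi) / k_F   # mean electron distance in 2D, a = sqrt(2 pi)/k_F
a_B = 4 * math.pi * eps0 * eps_eff * hbar**2 / (m_star * e**2)  # effective Bohr radius
r_s = a / a_B
print(f"a = {a*1e9:.2f} nm, a_B = {a_B*1e9:.2f} nm, r_s = {r_s:.2f}")
```

The result is $r_{s}\approx0.9$, i.e. $r_{s}\approx1$ as stated, far below the two-dimensional FL bound of 37.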
Compared to a simple FL, e-e correlations increase the effective mass of the carriers (or decrease the bare band-width), and heavier carriers form lattice polarons at a smaller value of $\lambda$. Both spin and lattice polarons have the same Fermi surface as free electrons, but the Fermi energy is reduced [@para]. Our (Boltzmann) relaxation theory is based on the existence of the Fermi surface and the Pauli exclusion principle, which the dressed (polaronic) carriers obey like free electrons. Therefore, Equation 3 of the main paper and all our subsequent considerations are valid also for polarons in the presence of strong electron correlations.
In the weak photoexcitation regime, where only a small fraction of the conduction electrons is excited ($k_{B}T_{e}\ll E_{F}$), e-e scattering is impeded since the Pauli exclusion principle strongly limits the number of available final states. The e-e relaxation time $\tau_{e-e}$ can be estimated as $1/\tau_{e-e}\approx\pi^{3}\mu_{c}^{2}(k_{B}T_{e})^{2}/4\hbar E_{F}$ [@Ashcroft], where $\mu_{c}=r_{s}/2\pi$ is the Coulomb pseudopotential characterising the electron-electron interaction. Even in the weak photoexcitation regime, the effective $T_{e}$ after excitation can differ significantly from room temperature. Assuming an electronic specific heat of 1.4 mJ g-atom$^{-1}$ K$^{-2}$ [@loram] and a pump laser penetration depth of 150 nm, one can estimate $T_{e}\thickapprox$ 400 K, $\tau_{e-e}\thickapprox$ 1.4 ps for the lowest and $T_{e}\thickapprox$ 800 K, $\tau_{e-e}\thickapprox$ 350 fs for the highest pump fluence used in Fig. 1a of the main manuscript.
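A minimal numerical sketch of this estimate, assuming $r_{s}=1$ (so $\mu_{c}=1/2\pi$) and taking $E_{F}=\hbar^{2}k_{F}^{2}/2m^{*}$ from the $k_{F}$ and $m^{*}$ quoted above — the text does not state $E_{F}$ explicitly, so this value is an assumption:

```python
# Sketch of the tau_e-e estimate 1/tau = pi^3 mu_c^2 (k_B T_e)^2 / (4 hbar E_F),
# with mu_c = r_s/(2 pi), r_s = 1, and E_F = hbar^2 k_F^2 / (2 m*) as an
# assumption (k_F = 7.4 nm^-1 and m* = 4 m_e from the text).
import math

hbar = 1.054571e-34   # J s
k_B = 1.380649e-23    # J/K
m_e = 9.109383e-31    # kg

k_F = 7.4e9           # m^-1
m_star = 4 * m_e
E_F = (hbar * k_F) ** 2 / (2 * m_star)   # ~0.5 eV
mu_c = 1.0 / (2 * math.pi)               # r_s/(2 pi) with r_s = 1

def tau_ee(T_e):
    """Electron-electron relaxation time (s) at effective temperature T_e (K)."""
    rate = math.pi**3 * mu_c**2 * (k_B * T_e) ** 2 / (4 * hbar * E_F)
    return 1.0 / rate

print(f"tau_e-e(400 K) ~ {tau_ee(400)*1e12:.1f} ps")   # ~1.5 ps (cf. 1.4 ps quoted)
print(f"tau_e-e(800 K) ~ {tau_ee(800)*1e15:.0f} fs")   # ~370 fs (cf. 350 fs quoted)
```

With these inputs the sketch reproduces the picosecond/sub-picosecond values quoted above to within a few percent.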
Exact relaxation rates
----------------------
Here, differently from previous studies based on the two-temperature model (TTM), we analyze pump-probe relaxation rates using an analytical approach to the Boltzmann equation [@kabAlex], which is free of any quasi-equilibrium approximation. Due to the complex lattice structure of cuprate superconductors, the characteristic phonon frequencies spread over a wide interval, $\hbar\omega/k_{B}\approx200$–$1000$ K. Very fast oxygen vibrations with frequencies $\omega\gtrsim0.1$ fs$^{-1}$ do not contribute to the relaxation on the relevant time scale, but merely dress the carriers. For the remaining part of the spectrum we can apply the Landau–Fokker–Planck expansion, expanding the e-ph collision integral at room or higher temperatures in powers of the relative electron energy change in a collision with a phonon, $\hbar\omega/(\pi k_{B}T)\lesssim1$. The integral Boltzmann equation for the nonequilibrium part of the electron distribution function $\phi(\xi,t)=f(\xi,t)-f_{0}(\xi)$ then reduces to a partial differential equation in time–energy space [@kabAlex]: $$\gamma^{-1}\dot{\phi}(\xi,t)=\frac{\partial}{{\partial\xi}}\left[\tanh(\xi/2)\phi(\xi,t)+\frac{\partial}{{\partial\xi}}\phi(\xi,t)\right],$$ where $f(\xi,t)$ is the non-equilibrium distribution function and $\gamma=\pi\hbar\lambda\langle\omega^{2}\rangle/k_{B}T$. The electron energy $\xi$, relative to the equilibrium Fermi energy, is measured in units of $k_{B}T$. Here $\lambda\langle\omega^{2}\rangle$ is the second moment of the familiar Eliashberg spectral function [@Eliashberg], $\alpha^{2}F(\omega)$, defined for any phonon spectrum as: $$\lambda\langle\omega^{n}\rangle\equiv2\int_{0}^{\infty}d\omega\frac{\alpha^{2}F(\omega)\omega^{n}}{{\omega}}.$$ The coupling constant $\lambda$, which determines the critical temperature of BCS superconductors, is $\lambda=2\int_{0}^{\infty}d\omega\alpha^{2}F(\omega)/\omega$.
Multiplying Eq. (1) by $\xi$ and integrating over all energies yields the rate of the energy relaxation: $$\dot{E}_{e}(t)=-\gamma\int_{-\infty}^{\infty}d\xi\tanh(\xi/2)\phi(\xi,t),$$ where $E_{e}(t)=\int_{-\infty}^{\infty}d\xi\,\xi\phi(\xi,t)$ and $\phi(\xi,t)$ is the solution of Eq. (1). Apart from a numerical coefficient, the characteristic e-ph relaxation rate (proportional to $\gamma$) is about the same as the TTM energy relaxation rate [@allen], $\gamma_{T}=3\hbar\lambda\langle\omega^{2}\rangle/\pi k_{B}T$.
{width="70mm"}
To establish the numerical coefficient we solved Eq. (1) and fitted the numerically exact energy relaxation by a near-exponential decay $E(t)=E_{0}\exp{(-at\gamma-bt^{2}\gamma^{2})}$, with the coefficients $a=0.14435$ and $b=0.00267$, which are virtually independent of the initial distribution $\phi(\xi,0)$ and the pump energy $E_{0}$. The exact relaxation time $\tau=1/a\gamma$ turns out to be longer than the TTM relaxation time $\gamma_{T}^{-1}$ by a factor of about 2 (see Fig. 4): $\tau=2\pi k_{B}T/3\hbar\lambda\langle\omega^{2}\rangle$. This, together with the shorter relaxation times observed in our ultrafast pump-probe measurements, leads to essentially higher values of the EPI coupling constants compared with previous studies.
Exact transient electron distribution function
----------------------------------------------
In the TTM, e-e collisions create a quasi-equilibrium electron distribution that subsequently cools down via e-ph interaction and electron diffusion. In our relaxation scheme, by contrast, which is dominated by the e-ph interaction, the electron distribution during relaxation is not a quasi-equilibrium one. The *deviation* of both functions from equilibrium is shown in Fig. 5 for $t=\gamma^{-1}$. Since we limit our discussion to the linear regime, the TTM correction to the distribution function is $\Delta f\varpropto x/\cosh^{2}(x/2)$, where $x=\xi/k_{B}T$. The width of the two lobes in $\Delta f$ indicates the effective electronic temperature. The significantly narrower lobes for the TTM compared to the exact distribution illustrate how the TTM underestimates the transient electronic temperature and hence the relaxation time. However, the e-ph dominated relaxation is not just a slower relaxation through the same quasi-equilibrium states as assumed by the TTM. We illustrate this by fitting our non-equilibrium distribution with a quasi-equilibrium one (dashed line in Fig. 5). Since the Fokker–Planck equation describes diffusion in energy space, high-energy tails are clearly seen in our distribution, which, compared to the Fermi–Dirac distribution, has a much gentler fall-off towards high electron energies.
{width="70mm"}
![a) Numerically exact distribution function $f(\xi,t=\gamma^{-1})$ compared to a Fermi–Dirac distribution function with the same effective temperature. b) ARPES spectrum of Bi$_{2}$Sr$_{2}$CaCu$_{2}$O$_{8}$ immediately after excitation and fit to a Fermi–Dirac function (redrawn from [@perfetti]).](FigS6){width="70mm"}
While the difference between the two models is clearly visible in the deviation from the equilibrium distribution, the Fermi–Dirac distribution and the one obtained from solving the Fokker–Planck equation themselves look rather similar, except for the high-energy tail, as can be seen in Fig. 6a. This sheds new light on fs ARPES experiments, which directly measure the transient electron distributions. Until now it was customary to use the TTM to describe electron relaxation; ARPES data were in good agreement with it and were invoked as confirmation of ultrafast e-e thermalization. We have now shown that a *similar* distribution can also be obtained as a result of e-ph scattering. In Fig. 6b we redraw the best available time-resolved ARPES data for cuprate superconductors [@perfetti] together with their fit to a quasi-equilibrium Fermi–Dirac distribution, from which the authors estimate a hot-electron temperature. The spectrum was taken immediately after excitation (at the end of the pump-probe pulse overlap, which is about $\tau/2$ of their decay time after maximum pump-probe overlap, i.e. on average the electrons are probed $\tau/2$ after their excitation). Compared to the Fermi–Dirac curve, their data show a high-energy tail very similar to the exact non-equilibrium distribution we calculated. This means that our relaxation scheme is not only better justified than the TTM on the basis of the physical reasoning described in the main text; it also agrees better with experimental data in the literature. This should in no way diminish the work done before our model was available; however, we propose to reassess quantitative conclusions along the lines of our relaxation scheme (see section G).
Predictions of BCS theory
-------------------------
{width="70mm"}
The most common expression relating EPI to $T_{c}$ is the BCS–McMillan formula $k_{B}T_{c}=\hbar\omega_{0}\exp[-(1+\lambda)/\lambda]$ (if any repulsive Coulomb pseudopotential is neglected), where $\omega_{0}$ is a characteristic phonon frequency. Formally setting $\lambda\langle\omega^{2}\rangle=\lambda\omega_{0}^{2}$, one can rewrite this as a function of $\lambda\langle\omega^{2}\rangle$ and $\lambda$: $k_{B}T_{c}=\hbar\sqrt{\lambda\langle\omega^{2}\rangle/\lambda}\exp[-(1+\lambda)/\lambda]$. Since $\lambda\langle\omega^{2}\rangle$ is known from experiment, we keep it fixed and find the maximum of $T_{c}(\lambda)$ (by finding a zero of the first derivative) at $\lambda=2$. $T_{c}$ as a function of $\lambda$ is shown in Fig. 7 for both YBCO and LSCO; the maximum critical temperatures are $T_{c}^{max}=$ 52 K for LSCO and $T_{c}^{max}=$ 37 K for YBCO. For the more realistic estimates given in the main text, $\lambda=0.5$ for LSCO and $\lambda=0.25$ for YBCO, we obtain lower $T_{c}$ values of 23 K and 3 K, respectively. Hence, contrary to experiment, BCS theory predicts a lower $T_{c}$ for YBCO than for LSCO. It cannot explain the high $T_{c}$ value of YBCO (even for $\lambda=2$), and the reasonable agreement for LSCO is probably a coincidence.
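These numbers can be reproduced in a few lines. This is a sketch: $\lambda\langle\omega^{2}\rangle$ is taken in meV$^{2}$, i.e. as $(\hbar\omega)^{2}$, and the maximum over $\lambda$ is located on a grid rather than analytically (it sits at $\lambda=2$, where $d/d\lambda\,[-\tfrac{1}{2}\ln\lambda-1/\lambda]=0$):

```python
# Numerical check of the BCS-McMillan estimates quoted above:
# k_B T_c = sqrt(lam_w2 / lam) * exp(-(1 + lam)/lam), with lam_w2 the
# measured lambda<omega^2> in meV^2 (i.e. (hbar omega)^2 in meV^2).
import math

k_B = 0.08617333  # meV/K

def T_c(lam, lam_w2_meV2):
    return math.sqrt(lam_w2_meV2 / lam) * math.exp(-(1 + lam) / lam) / k_B

# locate the maximum over lam on a fine grid (analytically at lam = 2)
for name, lw2 in [("LSCO", 800.0), ("YBCO", 400.0)]:
    lams = [i / 1000 for i in range(50, 8000)]
    lam_max = max(lams, key=lambda l: T_c(l, lw2))
    print(name, round(lam_max, 2), round(T_c(lam_max, lw2)), "K")  # lam_max ~ 2

print(round(T_c(0.5, 800.0)), "K")   # LSCO, lam = 0.5  -> 23 K
print(round(T_c(0.25, 400.0)), "K")  # YBCO, lam = 0.25 -> 3 K
```

The grid search recovers $\lambda=2$, $T_{c}^{max}\approx52$ K (LSCO) and $\approx37$ K (YBCO), and the realistic-$\lambda$ values of 23 K and 3 K quoted above.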
Assessment of lambda values obtained with the TTM and with our model
--------------------------------------------------------------------
We now assess the quantitative differences between the values of $\lambda\langle\omega^{2}\rangle$ (and consequently the estimates of $\lambda$ that are usually made on their basis) obtained from data analysis using the TTM and using our model. The analytic expressions linking $\tau_{e-ph}$ and $\lambda\langle\omega^{2}\rangle$ (equations 1 and 2 of the main manuscript) differ by a factor of 2, and $\tau_{e-ph}$ scales with the electron temperature $T_{e}$ in the TTM but with the lattice temperature $T_{l}$ in our model. As described in section C, $T_{l}$ is usually very close to the sample temperature without photoexcitation, while $T_{e}$ can be several 100 K higher, depending on the excitation conditions. For the fluences we used, $T_{e}$ = 600 $\pm$ 200 K, which introduces an additional uncertainty in $\lambda\langle\omega^{2}\rangle$ if we use the TTM estimate. The factor of 2 in the equations incidentally cancels with the factor of 2 in the temperatures.
| material | $T_{e}$ (K) | $\tau_{e-e}$ (fs) | $\tau_{e-ph}$ (fs) | $\lambda\langle\omega^{2}\rangle_{TTM}$ (meV$^{2}$) | $\lambda\langle\omega^{2}\rangle_{K-A}$ (meV$^{2}$) |
|---|---|---|---|---|---|
| YBa$_{2}$Cu$_{3}$O$_{6.5}$ | 400-800 | 350-1400 | 100 | 400$\pm$150 | 400$\pm$100 |
| La$_{1.85}$Sr$_{0.15}$CuO$_{4}$ | 400-800 | 350-1400 | 45 | 800$\pm$300 | 800$\pm$200 |
| Cu | 590 | 3300 | 1400 | 29 | 29 |
| Au | 650 | 1700 | 1900 | 23 | 21 |
| Cr | 720 | | 380 | 130 | 110 |
| W | 1200 | | 710 | 110 | 60 |
| V | 700 | | 170 | 280 | 240 |
| Nb | 790 | 1000 | 170 | 320 | 240 |
| Ti | 820 | | 160 | 350 | 260 |
| Pb | 570 | 6400 | 840 | 45 | 47 |
| NbN | 1070 | | 110 | 640 | 360 |
| V$_{3}$Ga | 1110 | | 200 | 370 | 200 |
: $\lambda\langle\omega^{2}\rangle$ obtained via the TTM and the Kabanov-Alexandrov (K-A) model from our data on YBa$_{2}$Cu$_{3}$O$_{6.5}$ and La$_{1.85}$Sr$_{0.15}$CuO$_{4}$ and from [@brorson]. $\tau_{e-e}$ are calculated as in section C, $\tau_{e-ph}$ from [@brorson] are recalculated from published $\lambda\langle\omega^{2}\rangle$ and $T_{e}$ values.
Brorson et al. studied a series of metallic superconductors using a quite diverse range of excitation conditions [@brorson]. We list their data together with ours in Table 1. Their $T_{e}$ values scatter from 570 to 1200 K; hence the differences between the TTM and our model can be up to a factor of 2. Unfortunately, they did not study any intensity dependence, so we cannot use this criterion to decide which relaxation scheme is more appropriate. However, we estimate $\tau_{e-e}$ as described in section C for four of their samples, using $r_{s}$ and $E_{F}$ from [@Ashcroft], and obtain $\tau_{e-e}>\tau_{e-ph}$ in three cases and $\tau_{e-e}\approx\tau_{e-ph}$ for Au. Therefore, one can assume that, except for materials with very low EPI, our model is the more appropriate one, as has already been established in textbooks such as [@Ashcroft].
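For the two cuprates, the K-A column of Table 1 follows from inverting the exact relaxation time $\tau=2\pi k_{B}T/3\hbar\lambda\langle\omega^{2}\rangle$ derived in section D. The sketch below assumes a lattice temperature $T_{l}=300$ K:

```python
# Reproduce the K-A values of Table 1 for the cuprates by inverting
# tau = 2 pi k_B T_l / (3 hbar lambda<omega^2>), with T_l = 300 K assumed.
import math

hbar = 6.582120e-13   # meV s
k_B = 8.617333e-2     # meV/K

def lam_w2(tau_s, T_l=300.0):
    """lambda<omega^2> in meV^2 from the measured e-ph relaxation time (s)."""
    return 2 * math.pi * k_B * T_l * hbar / (3 * tau_s)

print(round(lam_w2(100e-15)))  # YBCO, tau = 100 fs -> ~360 meV^2 (table: 400 +/- 100)
print(round(lam_w2(45e-15)))   # LSCO, tau =  45 fs -> ~790 meV^2 (table: 800 +/- 200)
```

Both values land inside the quoted K-A error bars, confirming the internal consistency of the table.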
Universality of the observed behavior
-------------------------------------
In the main paper we present intensity- and wavelength-dependent pump-probe time traces and show that the dynamics do not change with intensity and that the same characteristic time scales are found at different wavelengths. However, there we show only a few selected wavelengths out of a more comprehensive dataset. To illustrate that the behaviour discussed in the main paper is universal, we show two-dimensional maps of the photoinduced $\Delta R/R$ as a function of probe wavelength and delay for both materials in Figure 8.
{width="11cm"}
X. Yao, T. Mizukoshi, M. Egami, Y. Shiohara, *Physica C* **263**, 197 (1996).
S. Sugai, H. Suzuki, Y. Takayanagi, T. Hosokawa, N. Hayamizu, *Phys. Rev. B* **68**, 184504 (2003).
C. Manzoni, D. Polli, G. Cerullo, *Rev. Sci. Instrum.* **77**, 023103 (2006).
G. Cerullo, S. De Silvestri, *Rev. Sci. Instrum.* **74**, 1 (2003).
B. Vignolle *et al*., *Nature* **455**, 952 (2008).
B. Tanatar and D. M. Ceperley, *Phys. Rev. B* **39**, 5005 (1989).
N. W. Ashcroft and N. D. Mermin, *Solid State Physics* (Saunders College, Philadelphia, 1976).
V. V. Kabanov, A. S. Alexandrov, *Phys. Rev. B* **78**, 174514 (2008).
G. M. Eliashberg, *Zh. Eksp. Teor. Fiz.* **38**, 966 (1960); **39**, 1437 (1960) [*Sov. Phys. JETP* **11**, 696 (1960); **12**, 1000 (1960)].
P. B. Allen, *Phys. Rev. Lett.* **59**, 1460 (1987).
L. Perfetti, *et al*., *Phys. Rev. Lett.* **99**, 197001 (2007).
A. Paramekanti, M. Randeria, and N. Trivedi, *Phys. Rev. Lett.* **87**, 217002 (2001).
J. W. Loram, K. A. Mirza, J. R. Cooper, and W. Y. Liang, *Phys. Rev. Lett.* **71**, 1740 (1993).
S. D. Brorson *et al*., *Phys. Rev. Lett.* **64**, 2172 (1990).
---
abstract: 'Video popularity is an essential reference for optimizing resource allocation and video recommendation in online video services. However, there is still no convincing model that can accurately depict a video’s popularity evolution. In this paper, we propose a dynamic popularity model by modeling the video information diffusion process driven by various forms of recommendation. Through fitting the model with real traces collected from a practical system, we can quantify the strengths of the recommendation forces. Such quantification can lead to characterizing video popularity patterns, user behaviors and recommendation strategies, which is illustrated by a case study of TV episodes.'
author:
-
bibliography:
- 'sigproc.bib'
title: Modeling and Quantifying the Forces Driving Online Video Popularity Evolution
---
---
abstract: |
We describe an algorithm for determining whether two convex polytopes $P$ and $Q$, embedded in a lattice, are isomorphic with respect to a lattice automorphism. We extend this to a method for determining if $P$ and $Q$ are equivalent, i.e. whether there exists an affine lattice automorphism that sends $P$ to $Q$. Methods for calculating the automorphism group and affine automorphism group of $P$ are also described.
An alternative strategy is to determine a normal form such that $P$ and $Q$ are isomorphic if and only if their normal forms are equal. This is the approach adopted by Kreuzer and Skarke in their [[<span style="font-variant:small-caps;">Palp</span>]{}]{} software. We describe the Kreuzer–Skarke method in detail, and give an improved algorithm when $P$ has many symmetries. Numerous examples, plus two appendices containing detailed pseudo-code, should help with any future reimplementations of these techniques. We conclude by explaining how to define and calculate the normal form of a Laurent polynomial.
address:
- |
Trinity College\
University of Cambridge\
Cambridge, CB$2$ $1$TQ\
UK
- |
Department of Mathematics\
Imperial College London\
London, SW$7$ $2$AZ\
UK
author:
- Roland Grinis
- 'Alexander M. Kasprzyk'
title: Normal forms of convex lattice polytopes
---
Introduction
============
Determining whether two convex polytopes $P$ and $Q$, embedded in a lattice $\Lambda$, are isomorphic with respect to a lattice automorphism is a fundamental computational problem. For example, in toric geometry lattice polytopes form one of the key constructions of projective toric varieties, and any classification must somehow address the issue of whether there exists an automorphism of the underlying lattice sending $P$ to $Q$. In general, any isomorphism problem can be solved in one of two ways: on a case-by-case basis by constructing an explicit isomorphism between the two objects, or by determining a normal form for each isomorphism class.
The first approach – dynamically constructing a lattice-preserving isomorphism in ${\mathrm{GL}}_n({\mathbb{Z}})$ between the two polytopes – is discussed in §\[sec:iso\_via\_face\_graph\]. We describe one possible way to determine isomorphism of polytopes via the labelled face graph ${\mathscr{G}\mleft({P}\mright)}$ (see §\[subsec:labelled\_face\_graph\]). This has the advantage that it works equally well for rational polytopes and for polytopes of non-zero codimension. By reducing the problem to a graph isomorphism question, well-developed tools such as Brendan McKay’s <span style="font-variant:small-caps;">Nauty</span> software [@McKay81; @Nauty] can then be applied.
Because our approach to isomorphism testing works equally well for rational polytopes, we are able to answer when two polytopes are equivalent, i.e. when there exists an isomorphism $B\in{\mathrm{GL}}_n({\mathbb{Z}})$ and lattice translation $c\in\Lambda$ such that $PB+c=Q$. This is discussed in §\[subsec:equivalence\]. We can also calculate the automorphism group ${\mathrm{Aut}\mleft({P}\mright)}\leq{\mathrm{GL}}_n({\mathbb{Z}})$ of $P$: this is a subgroup of the automorphism group of ${\mathscr{G}\mleft({P}\mright)}$, as explained in §\[subsec:aut\_P\]. Since our methods make no assumptions on the codimension of $P$, by considering the automorphism group of $P\times\{1\}$ in $\Lambda\times{\mathbb{Z}}$ we are able to calculate the group of affine automorphisms ${\mathrm{AffAut}\mleft({P}\mright)}\leq{\mathrm{GL}}_n({\mathbb{Z}})\ltimes\Lambda$. As an illustration of our methods, we calculate the order of the automorphism group for each of the $473,\!800,\!776$ four-dimensional reflexive polytopes [@KS00]: see Table \[tab:num\_4topes\].
The second approach – to compute a normal form ${\mathrm{NF}\mleft({P}\mright)}$ for each isomorphism class – is discussed in §\[sec:palp\_normal\_form\]. This is the approach adopted by Kreuzer and Skarke in their [[<span style="font-variant:small-caps;">Palp</span>]{}]{} software [@KS04], and was used to construct the classification of three- and four-dimensional reflexive polytopes [@KS98b; @KS00]. Briefly, row and column permutations are applied to the vertex–facet pairing matrix ${P\!M}$ of $P$, placing it in a form ${{{P\!M}^\text{max}}}$ that is maximal with respect to a certain ordering. This in turn defines an order in which to list the vertices of $P$; the choice of basis is fixed by taking the Hermite normal form. In §\[subsec:affine\_normal\_form\] we address how this can be modified to give an affine normal form for $P$, and in §\[subsec:palp\_normal\_form\] we describe how [[<span style="font-variant:small-caps;">Palp</span>]{}]{} applies an additional reordering of the columns of ${{{P\!M}^\text{max}}}$ before computing the normal form. The [[<span style="font-variant:small-caps;">Palp</span>]{}]{} source code for computing ${\mathrm{NF}\mleft({P}\mright)}$ is analyzed in detail in Appendix \[apx:palp\_source\_code\].
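The maximization step can be illustrated with a brute-force sketch. This is exponential in the matrix size and hence practical only for tiny examples; we assume the ordering is lexicographic on the rows read left to right, whereas [[<span style="font-variant:small-caps;">Palp</span>]{}]{}'s exact conventions differ in details discussed in the appendices.

```python
# Brute-force PM^max: maximize the vertex-facet pairing matrix over all
# row and column permutations, assuming a lexicographic order on the rows
# read left-to-right, top-to-bottom. For illustration only -- real
# implementations exploit symmetries instead of enumerating permutations.
from itertools import permutations

def pm_max(PM):
    nrows, ncols = len(PM), len(PM[0])
    best = None
    for rp in permutations(range(nrows)):
        for cp in permutations(range(ncols)):
            cand = tuple(tuple(PM[i][j] for j in cp) for i in rp)
            if best is None or cand > best:
                best = cand
    return best

# pairing matrix of the reflexive triangle conv{(1,0),(0,1),(-1,-1)}:
# the entries <u_F, v> + 1 vanish exactly when vertex v lies on facet F
PM = [[0, 0, 3],
      [0, 3, 0],
      [3, 0, 0]]
print(pm_max(PM))  # ((3, 0, 0), (0, 3, 0), (0, 0, 3))
```

The maximal matrix fixes an ordering of the vertices; the choice of lattice basis would then be fixed by a Hermite-normal-form computation, which is not shown here.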
In §\[sec:exploiting\_aut\] we address the problem of calculating ${{{P\!M}^\text{max}}}$. We describe an inductive algorithm which attempts to exploit automorphisms of the matrix in order to simplify the calculation; pseudo-code is given in Appendix \[apx:matrix\_isomorphism\]. Applying our algorithm to smooth Fano polytopes [@Obr07], which often have large numbers of symmetries, illustrates the advantage of this approach: see §\[subsec:analysis\_smooth\_db\] and Table \[tab:timings\]. We end by giving, in §\[sec:laurent\_normal\_form\], an application of normal form to Laurent polynomials.
A note on implementation {#a-note-on-implementation .unnumbered}
------------------------
The algorithms described in §\[sec:iso\_via\_face\_graph\] were implemented using [[[Magma]{}]{}]{} in $2008$ and officially released as part of [[[Magma]{}]{}]{} V$2.16$ [@Magma; @ConvChap]; [[<span style="font-variant:small-caps;">Palp</span>]{}]{} normal form was introduced by Kreuzer and Skarke in their [[<span style="font-variant:small-caps;">Palp</span>]{}]{} software [@KS04] and reimplemented natively in [[[Magma]{}]{}]{} V$2.18$ by the authors. The [[[Magma]{}]{}]{} algorithms[^1], including the reimplementation of [[<span style="font-variant:small-caps;">Palp</span>]{}]{} normal form, have recently been ported to the [[[Sage]{}]{}]{} project [@sage] by Samuel Gonshaw[^2], assisted by Tom Coates and the second author, and should appear in the $5.6.0$ release.
Acknowledgments {#acknowledgments .unnumbered}
---------------
This work was motivated in part by discussions with Max Kreuzer during August and September 2010, shortly before his death that November. We are honoured that he found the time and energy for these conversations during this period. It forms part of the collaborative [$++$]{} project envisioned in [@palp++].
Our thanks to Tom Coates for many useful discussions, to Harald Skarke and Dmitrii Pasechnik for several helpful comments on a draft of this paper, to John Cannon for providing copies of the computational algebra software [[[Magma]{}]{}]{}, and to Andy Thomas for technical assistance. The first author was funded by a Summer Studentship as part of Tom Coates’ Royal Society University Research Fellowship. The second author is supported by EPSRC grant EP/I008128/1.
Isomorphism testing via the face graph {#sec:iso_via_face_graph}
======================================
Conventions {#conventions .unnumbered}
-----------
Throughout this section we work with very general convex polytopes; we assume only that $P\subset\Lambda_{\mathbb{Q}}:=\Lambda\otimes{\mathbb{Q}}$ is a (non-empty) rational convex polytope, not necessarily of maximum dimension in the ambient lattice $\Lambda$. The dual lattice ${\mathrm{Hom}}(\Lambda,{\mathbb{Z}})$ is denoted by $\Lambda^*$.
Given two polytopes $P$ and $P'$, how can we decide whether they are isomorphic and, if they are, how can we construct an isomorphism between them? There are, of course, some obvious checks that can quickly provide a negative answer. We give a few examples, although this list is far from comprehensive.
- Do the dimensions of the polytopes agree?
- Does $P$ contain the origin in its relative interior? Is the same true for $P'$?
- Are $P$ and $P'$ both lattice polytopes?
- Are the $f$-vectors of $P$ and $P'$ equal?
- Do $P$ and $P'$ have the same number of primitive vertices?
- Are $P$ and $P'$ simplicial? Are they simple?
- If $P$ is of codimension one then there exists a unique hyperplane $H\subset\Lambda_{\mathbb{Q}}$ containing $P$, where $H=\{v\in\Lambda_{\mathbb{Q}}\mid{\langle{v},{u}\rangle}=k\}$ for some non-negative rational value $k$ and primitive dual lattice point $u\in\Lambda^*$. In particular, $k$ is invariant under change of basis. Does $k$ agree for both $P$ and $P'$?
- If $P$ is of maximum dimension, any facet $F$ can be expressed in the form $F=\{v\in P\mid{\langle{v},{u_F}\rangle}=-c_F\}$, where $u_F\in\Lambda^*$ is a primitive inward-pointing vector normal to $F$, and $c_F\in{\mathbb{Q}}$ is the lattice height of $F$ over the origin. The value of $c_F$ is invariant under change of basis. Do the facet heights of $P$ and $P'$ agree, up to permutation?
- If $P$ is a rational polytope, let $r_P$ be the smallest positive integer such that the dilation $r_PP$ is a lattice polytope. Do $r_P$ and $r_{P'}$ agree?
From a computational point of view, the intention with the above list is to suggest tests that are easy to perform. We assume that data such as the vertices and supporting hyperplanes of $P$ have already been calculated. Some computations, such as finding the $f$-vector, are more involved, but since the calculations will be required in what follows it seems sensible to use them at this stage.
In practice a number of other invariants may already be cached and could also be used: the volume ${\mathrm{Vol}\mleft({P}\mright)}$ or boundary volume ${\mathrm{Vol}\mleft({\partial P}\mright)}$; the number of lattice points ${\left\vert{P\cap\Lambda}\right\vert}$ or boundary lattice points ${\left\vert{\partial P\cap\Lambda}\right\vert}$; the Ehrhart $\delta$-vector; information about the polar polyhedron $P^*$. In particular cases some of this additional data may be easy to calculate; in general they are usually more time-consuming to compute than the isomorphism test described below.
There are a few potential catches for the unwary when considering rational polytopes with $\dim{P}<\dim{\Lambda}$. For example, care needs to be taken when defining the supporting hyperplanes. Also, the notion of (normalised) volume ${\mathrm{Vol}\mleft({P}\mright)}$ requires some attention: the affine sublattice ${\mathrm{aff}\mleft({P}\mright)}\cap\Lambda$ may be empty, forcing us to either accept that ${\mathrm{Vol}\mleft({P}\mright)}$ can be undefined, or to employ interpolation. There is a natural dichotomy between those polytopes whose affine span contains the origin and those where $0\notin{\mathrm{aff}\mleft({P}\mright)}$. In the latter case, it is often better to consider the cone $C_P:={\mathrm{cone}\mleft({P}\mright)}$ equipped with an appropriate grading such that dilations of $P$ can be realised by taking successive slices through $C_P$.
The labelled face graph {#subsec:labelled_face_graph}
-----------------------
In order to determine isomorphism we make use of the face graph $G(P)$ of $P$.
Let $P$ be an $n$-dimensional polytope with $f$-vector $(f_{-1},f_0,\ldots,f_n)$, where $f_k$ denotes the number of $k$-faces of $P$. By convention we set $f_{-1}=f_n=1$, representing, respectively, the empty set $\varnothing$ and the polytope $P$. The *face graph* $G(P)$ is the graph consisting of $f_{-1}+f_0+\ldots+f_n$ vertices, where each vertex $v$ corresponds to a face $F_v$. Two vertices $v$ and $v'$ are connected by an edge if and only if $F_{v'}\subset F_v$ and $\dim{F_{v'}}=\dim{F_v}+1$. Here the dimension of the empty face $\varnothing$ is taken to be ${-1}$.
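For a polygon, the construction of the face graph can be sketched as follows. The data structures here are hypothetical minimal ones, not the [[[Magma]{}]{}]{} or [[[Sage]{}]{}]{} implementations; faces are stored as vertex-index sets, the face set is closed under intersection, and the dimension function is special-cased to two dimensions (a general implementation would read dimensions off inclusion chains and attach the labels of Definition \[def:labelled\_face\_graph\]).

```python
# Sketch: build the face graph G(P) of a polygon from its vertex-facet
# (here vertex-edge) incidences. Graph vertices are faces (frozensets of
# polytope-vertex indices); graph edges join faces of adjacent dimension
# under inclusion.
from itertools import combinations

def face_graph(facets, nverts):
    top = frozenset(range(nverts))
    faces = {top, frozenset()} | {frozenset(f) for f in facets}
    changed = True
    while changed:                      # close the face set under intersection
        changed = False
        for a, b in combinations(list(faces), 2):
            if a & b not in faces:
                faces.add(a & b)
                changed = True
    def dim(f):                         # polygon-specific shortcut:
        if f == top:                    # empty -> -1, vertex -> 0, edge -> 1
            return 2
        return len(f) - 1
    edges = [(a, b) for a in faces for b in faces
             if a < b and dim(b) == dim(a) + 1]
    return faces, edges

# unit square: the four facets (edges) given by their vertex indices
faces, edges = face_graph([(0, 1), (1, 2), (2, 3), (3, 0)], 4)
print(len(faces), len(edges))  # 10 faces (f-vector (1,4,4,1)), 16 inclusion edges
```

For the square this yields $1+4+4+1=10$ graph vertices and $4+8+4=16$ inclusion edges, as expected.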
The face graph of a polytope is completely determined by the vertex–facet relations, and is the standard tool for determining combinatorial isomorphism of polytopes. We augment $G(P)$ by assigning labels to the vertices determined by some invariants of the corresponding face. Reducing a symmetry problem to the study of a (labelled) graph is a well-established computational technique: see, for example, [@KaSc03; @Pug05; @MBW09; @BSPRS12]. The intention is to decorate the graph with data capturing how $P$ lies in the underlying lattice $\Lambda$. To that end, we make the following definition.
For a point $v\in\Lambda_{\mathbb{Q}}$, let $u\in\Lambda$ be the unique primitive lattice point such that $v=\lambda u$ for some non-negative $\lambda\in{\mathbb{Q}}$ (set $u=0$, $\lambda=0$ if $v=0$). We define $\tilde{v}:=\lceil\lambda\rceil u$; that is, $\tilde{v}$ is the first lattice point at or after $v$ on the ray through $v$. Let $P\subset\Lambda_{\mathbb{Q}}$ be a polytope with vertices ${\mathcal{V}\mleft({P}\mright)}$. Then the *index* ${{\left\vert{{P}:\Lambda}\right\vert}}$ of $P$ is the index of the sublattice generated by $\left\{\tilde{u}\mid u\in{\mathcal{V}\mleft({P}\mright)}\right\}$ in ${\mathrm{span}\mleft({P}\mright)}\cap\Lambda$.
\[def:labelled\_face\_graph\] Let $P$ be an $n$-dimensional polytope with face graph $G(P)$. To each vertex $v$ of $G(P)$ we assign the label $$\left\{\begin{array}{rl}
(\dim{F_v}),&\text{ if }F_v=\varnothing\text{ or }F_v=P;\\
(\dim{F_v},{{\left\vert{{F_v}:\Lambda}\right\vert}}),&\text{ otherwise.}\\
\end{array}\right.$$ We denote this labelled graph by ${\mathscr{G}\mleft({P}\mright)}$.
In place of the index ${{\left\vert{{F_v}:\Lambda}\right\vert}}$, it is tempting to use the volume ${\mathrm{Vol}\mleft({F_v}\mright)}$. However, computing the index is basic linear algebra, whereas computing the volume is generally difficult.
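For a simplex the construction of ${\mathscr{G}\mleft({P}\mright)}$ is particularly transparent, since the faces are exactly the subsets of the vertex set. The following Python sketch (the function name and the restriction to lattice triangles in ${\mathbb{Z}}^2$ are ours, purely for illustration) assembles the labelled face graph of a triangle, computing the index of a vertex as the gcd of its coordinates and of an edge as the absolute determinant of its two vertices:

```python
from itertools import combinations
from math import gcd

def labelled_face_graph(verts):
    """Labelled face graph of a lattice triangle in Z^2.  (A sketch: for
    a simplex the faces are exactly the subsets of the vertex set, so no
    general face-lattice machinery is needed.)"""
    faces = [frozenset(c) for k in range(4) for c in combinations(verts, k)]

    def label(F):
        d = len(F) - 1                      # dimension of a simplex face
        if d in (-1, 2):                    # empty face and P get label (dim)
            return (d,)
        if d == 0:                          # vertex v = lambda*u: index lambda
            (x, y), = F
            return (0, gcd(x, y))
        (a, b), (c, e) = F                  # edge: index = |det| of vertices
        return (1, abs(a * e - b * c))

    # graph edges: containments with dimension difference one
    edges = [(F, G) for F in faces for G in faces
             if G < F and len(F) - len(G) == 1]
    return {F: label(F) for F in faces}, edges

labels, edges = labelled_face_graph([(1, 0), (0, 1), (-2, -3)])
```

For this triangle the three edge labels are $(1,1)$, $(1,2)$ and $(1,3)$, and the graph has $8$ nodes and $12$ containment edges.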
Additional labels
-----------------
When $P$ contains the origin strictly in its interior, we can make use of the *special facets*. Recall from [@Obr07 §3.1] that a facet $F$ is said to be special if $u\in{\mathrm{cone}\mleft({F}\mright)}$, where $u:=\sum_{v\in{\mathcal{V}\mleft({P}\mright)}}v$ is the sum of the vertices of $P$. Since $P$ contains the origin, there exists at least one special facet; we can extend the labelling to indicate which vertices of ${\mathscr{G}\mleft({P}\mright)}$ correspond to a special facet.
The polytope $P:={\mathrm{conv}\mleft\{{(1,0),(0,1),(-2,-3)}\mright\}}$ and its labelled face graph ${\mathscr{G}\mleft({P}\mright)}$ are depicted below. In the graph, the top-most vertex represents $P$ and the bottom vertex $\varnothing$. The sum of the vertices is $(-1,-2)$, so there is a unique special facet: the edge joining vertices $(1,0)$ and $(-2,-3)$ of index three, labelled $(1,3,1)$ in ${\mathscr{G}\mleft({P}\mright)}$. The edge joining vertices $(1,0)$ and $(0,1)$ is of index one and labelled $(1,1,0)$; the remaining edge is of index two and labelled $(1,2,0)$. The final entry of each facet label is used to indicate whether this is a special facet.
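The edge indices quoted above can be checked directly from the definition: for lattice vertices the rounded points $\tilde{u}$ are the vertices themselves, and the index of the sublattice they generate inside ${\mathrm{span}\mleft({F}\mright)}\cap\Lambda$ is the product of the non-zero elementary divisors of the vertex matrix. A sketch assuming SymPy's `smith_normal_form` (the helper name is ours):

```python
from sympy import Matrix
from sympy.matrices.normalforms import smith_normal_form

def index_in_lattice(points):
    """Index |F : Lambda| of a face F with *lattice* vertices: the
    product of the non-zero elementary divisors (Smith normal form) of
    the matrix whose rows are the vertices."""
    S = smith_normal_form(Matrix([list(p) for p in points]))
    result = 1
    for i in range(min(S.shape)):
        if S[i, i] != 0:
            result *= abs(S[i, i])
    return result

# The three edges of P = conv{(1,0), (0,1), (-2,-3)}:
assert index_in_lattice([(1, 0), (-2, -3)]) == 3   # the special facet
assert index_in_lattice([(1, 0), (0, 1)]) == 1
assert index_in_lattice([(0, 1), (-2, -3)]) == 2
```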

If $P$ is a rational polytope, the vertices of $P$ provide an augmentation to the labelling of ${\mathscr{G}\mleft({P}\mright)}$. For any vertex $v\in{\mathcal{V}\mleft({P}\mright)}$ there exists a primitive lattice point $u\in\Lambda$ and a non-negative rational value $\lambda$ such that $v=\lambda u$. Since $\lambda$ is invariant under change of basis, the corresponding labels can be extended with this information. (Note that $\lambda={{\left\vert{{v}:\Lambda}\right\vert}}$ when $v$ is a lattice point, so this only provides additional information in the rational case.)
We do not claim that these are the only easily-computed invariants that can be associated with ${\mathscr{G}\mleft({P}\mright)}$. Other possibilities include encoding the linear relations between the vertices ${\mathcal{V}\mleft({P}\mright)}$ of $P$ in the graph labelling, and, in the maximum dimensional case, adding information about the lattice height of the supporting hyperplanes for each face.
Recovering the isomorphism
--------------------------
We now describe our algorithm for computing an isomorphism between two polytopes $P$ and $P'$. The initial step is to normalise the polytopes. If $P$ and $P'$ are not of maximum dimension in the ambient lattice $\Lambda$, then we first restrict to the sublattice ${\mathrm{span}\mleft({P}\mright)}\cap\Lambda$ (and, respectively, ${\mathrm{span}\mleft({P'}\mright)}\cap\Lambda$). It is possible that, even after restriction, $P$ and $P'$ are of codimension one. In that case, we work with the convex hull ${\mathrm{conv}\mleft({P\cup\{0\}}\mright)}$ (and similarly for $P'$). The important observations are that, after normalisation, $P$ is of maximum dimension, and that there exists at least one facet $F_0$ of $P$ such that $0\notin{\mathrm{aff}\mleft({F_0}\mright)}$.
Now we calculate an arbitrary graph isomorphism $\phi:{\mathscr{G}\mleft({P}\mright)}\rightarrow{\mathscr{G}\mleft({P'}\mright)}$. By restricting to the vertices of ${\mathscr{G}\mleft({P}\mright)}$ corresponding to the vertices ${\mathcal{V}\mleft({P}\mright)}$ of $P$, $\phi$ induces a map from the vertices of $P$ to the vertices of $P'$. The two polytopes $P$ and $P'$ are isomorphic only if $\phi$ exists, and any isomorphism $\Phi:\Lambda\rightarrow\Lambda$ mapping $P$ to $P'$ can be factored as $\phi\circ\chi$, where $\chi\in{\mathrm{Aut}\mleft({{\mathscr{G}\mleft({P}\mright)}}\mright)}$.
It remains to decide whether a particular choice of $\chi\in{\mathrm{Aut}\mleft({{\mathscr{G}\mleft({P}\mright)}}\mright)}$ determines a lattice isomorphism $\phi\circ\chi:\Lambda\rightarrow\Lambda$ sending $P$ to $P'$. For this we make use of the facet $F_0$. By construction $F_0$ is of codimension one, and does not lie in a hyperplane containing the origin. Hence there exists a choice of vertices $v_1,\ldots,v_n$ of $F_0$ which generate $\Lambda_{\mathbb{Q}}$ (over ${\mathbb{Q}}$). Denote the image of $v_i$ in $P'$ by $v'_i$, and consider the $n\times n$ matrices $V$ and $V'$ whose rows are given by, respectively, the $v_i$ and the $v'_i$. In order for $\phi\circ\chi$ to be a lattice map we require that $B:=V^{-1}V'\in{\mathrm{GL}}_n({\mathbb{Z}})$. In order for this to be an isomorphism from $P$ to $P'$ we require that $\{vB\mid v\in{\mathcal{V}\mleft({P}\mright)}\}={\mathcal{V}\mleft({P'}\mright)}$.
We make two brief observations. First, in practice the automorphism group ${\mathrm{Aut}\mleft({{\mathscr{G}\mleft({P}\mright)}}\mright)}$ is often small. Second, it is an easy exercise in linear algebra to undo our normalisation process, lifting $B$ back to act on the original polytope.
Testing for equivalence {#subsec:equivalence}
-----------------------
Recall that two polytopes $P,P'\subset\Lambda_{\mathbb{Q}}$ are said to be *equivalent* if there exists an isomorphism $B\in{\mathrm{GL}}_n({\mathbb{Z}})$ and a translation $c\in\Lambda$ such that $PB + c=P'$.
Let ${\mathcal{V}\mleft({P}\mright)}$ be the set of vertices of a polytope $P$. Then the *vertex average* of $P$ is the point $$b_P:=\frac{1}{{\left\vert{{\mathcal{V}\mleft({P}\mright)}}\right\vert}}\sum_{v\in{\mathcal{V}\mleft({P}\mright)}}v\in\Lambda_{\mathbb{Q}}.$$
Two polytopes $P$ and $P'$ are equivalent if and only if $b_P-b_{P'}\in\Lambda$ and $P-b_P$ is isomorphic to $P'-b_{P'}$.
Consider the simplices $$\begin{aligned}
P&:={\mathrm{conv}\mleft\{{(0,0,0),(2,1,1),(1,2,1),(1,1,2)}\mright\}},\\
P'&:={\mathrm{conv}\mleft\{{(0,1,2),(1,0,0),(3,1,4),(4,2,6)}\mright\}}.\end{aligned}$$ The vertex averages are $b_P=(1,1,1)$ and $b_{P'}=(2,1,3)$, and $(P - b_P)B = P' - b_{P'}$, where $$B:=\small\begin{pmatrix}
2&1&3\\
-2&0&-1\\
1&0&1
\end{pmatrix}\normalsize$$ Hence $P$ and $P'$ are equivalent.
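The calculation in this example is easy to verify in a few lines of Python (a sketch; the helper names are ours):

```python
from fractions import Fraction

def vertex_average(verts):
    """The vertex average b_P, as a tuple of exact rationals."""
    n = len(verts)
    return tuple(Fraction(sum(c), n) for c in zip(*verts))

def image(v, B):
    """Row vector v times the matrix B (given as a list of rows)."""
    return tuple(sum(v[i] * B[i][j] for i in range(len(B)))
                 for j in range(len(B[0])))

P  = [(0, 0, 0), (2, 1, 1), (1, 2, 1), (1, 1, 2)]
Pp = [(0, 1, 2), (1, 0, 0), (3, 1, 4), (4, 2, 6)]
B  = [[2, 1, 3], [-2, 0, -1], [1, 0, 1]]

bP, bPp = vertex_average(P), vertex_average(Pp)
assert bP == (1, 1, 1) and bPp == (2, 1, 3)
assert all((a - b).denominator == 1 for a, b in zip(bP, bPp))  # b_P - b_P' in the lattice
lhs = {image(tuple(x - c for x, c in zip(v, bP)), B) for v in P}
rhs = {tuple(x - c for x, c in zip(v, bPp)) for v in Pp}
assert lhs == rhs    # (P - b_P)B = P' - b_P', so P and P' are equivalent
```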
Determining the automorphism group of a polytope {#subsec:aut_P}
------------------------------------------------
We can use the labelled face graph ${\mathscr{G}\mleft({P}\mright)}$ to compute the automorphism group ${\mathrm{Aut}\mleft({P}\mright)}$. We simply use the elements $\chi$ of ${\mathrm{Aut}\mleft({{\mathscr{G}\mleft({P}\mright)}}\mright)}$ to construct ${\mathrm{Aut}\mleft({P}\mright)}\le{\mathrm{GL}}_n({\mathbb{Z}})$. Notice that there is no requirement that $P$ is of maximum dimension in the ambient lattice $\Lambda$. Given this, we can also compute the affine automorphism group ${\mathrm{AffAut}\mleft({P}\mright)}$. Begin by embedding $P$ at height one in the lattice $\Lambda\times{\mathbb{Z}}$ (equivalently, consider the cone $C_P$ spanned by $P$ with appropriate grading). We refer to this embedded image of $P$ as $\tilde{P}$. The action of the automorphism group $\mathrm{Aut}\larger(\tilde{P}\larger)$ on $\tilde{P}$ restricts to an action on $P$, realising the full group of affine lattice automorphisms of $P$. A detailed discussion of polyhedral symmetry groups and their applications can be found in [@BEK84; @BSS09; @BSPRS12].
\[ex:involution\] Let $P$ be the three-dimensional simplicial polytope with seven vertices given by $(\pm1,0,0)$, $(0,\pm1,0)$, $(0,0,1)$, $(1,1,0)$, $(0,-1,-1)$. This is sketched below; the $f$-vector is $(1,7,15,10,1)$. The index ${{\left\vert{{F}:\Lambda}\right\vert}}$ of each face $F$ is one (in fact $P$ is a smooth Fano polytope[^3]), and $P$ has four special facets (the four facets incident to the vertex $(1,0,0)$). The resulting labelled graph ${\mathscr{G}\mleft({P}\mright)}$ has automorphism group of order four, however ${\mathrm{Aut}\mleft({P}\mright)}$ has order two, and is generated by the involution $(0,0,1)\mapsto(0,-1,-1)$.

\[ex:24cell\] The four-dimensional centrally symmetric polytope $P$ with vertices $$\begin{aligned}
&\pm(1,0,0,0), \pm(0,1,0,0), \pm(0,0,1,0), \pm(0,0,0,1),\\
&\pm(1,-1,0,0), \pm(1,0,-1,0), \pm(1,0,0,-1), \pm(0,1,-1,0), \pm(0,1,0,-1),\\
&\pm(1,0,-1,-1), \pm(0,1,-1,-1), \pm(1,1,-1,-1)\end{aligned}$$ is the reflexive realisation of the $24$-cell, with $f$-vector $(1,24,96,96,24,1)$. It is unique amongst all $473,\!800,\!776$ reflexive polytopes in having ${\left\vert{{\mathrm{Aut}\mleft({P}\mright)}}\right\vert}=1152$; in fact ${\mathrm{Aut}\mleft({P}\mright)}$ is isomorphic to the Weyl group $W(F_4)$. In particular, $P$ must be self-dual. The number of four-dimensional reflexive polytopes with ${\left\vert{{\mathrm{Aut}\mleft({P}\mright)}}\right\vert}$ of given size are recorded in Table \[tab:num\_4topes\].
  ${\left\vert{{\mathrm{Aut}\mleft({P}\mright)}}\right\vert}$   $\# P$      ${\left\vert{{\mathrm{Aut}\mleft({P}\mright)}}\right\vert}$   $\# P$
  ------------------------------------------------------------- ----------- ------------------------------------------------------------- --------
  1                                                             467705246   36                                                            11
  2                                                             5925190     48                                                            79
  3                                                             1080        64                                                            5
  4                                                             151416      72                                                            10
  6                                                             8218        96                                                            22
  8                                                             6935        120                                                           2
  10                                                            4           128                                                           2
  12                                                            1509        144                                                           2
  16                                                            756         240                                                           4
  18                                                            2           288                                                           2
  20                                                            4           384                                                           6
  24                                                            247         1152                                                          1
  32                                                            23

  : The number $\# P$ of four-dimensional reflexive polytopes with automorphism group of size ${\left\vert{{\mathrm{Aut}\mleft({P}\mright)}}\right\vert}$.[]{data-label="tab:num_4topes"}
\[ex:affaut\] Let $P={\mathrm{conv}\mleft\{{(0,0),(1,0),(0,1)}\mright\}}$ be the empty simplex in ${\mathbb{Z}}^2$. Then ${\mathrm{Aut}\mleft({P}\mright)}$ is of order two, corresponding to reflection in the line $x=y$. To compute the affine automorphism group ${\mathrm{AffAut}\mleft({P}\mright)}$ of $P$, we consider $\tilde{P}={\mathrm{conv}\mleft\{{(0,0,1),(1,0,1),(0,1,1)}\mright\}}$. The group $\mathrm{Aut}\larger(\tilde{P}\larger)$ is of order six, generated by $$\small\begin{pmatrix}-1&0&0\\-1&1&0\\1&0&1\end{pmatrix}
\normalsize\quad\text{and}\quad
\small\begin{pmatrix}0&-1&0\\1&-1&0\\0&1&1\end{pmatrix}.$$ The first generator corresponds to the involution exchanging the vertices $(0,0)$ and $(1,0)$ of $P$, whilst the second generator corresponds to rotation of $P$ about its barycentre $(1/3,1/3)$, given by $$(x,y)\mapsto
(x-1/3,y-1/3)\begin{pmatrix}
0&-1\\
1&-1
\end{pmatrix}+(1/3,1/3)=(x,y)\begin{pmatrix}
0&-1\\
1&-1
\end{pmatrix}+(0,1).$$
Normal forms {#sec:palp_normal_form}
============
The method for determining isomorphism adopted by Kreuzer and Skarke in the software package [[<span style="font-variant:small-caps;">Palp</span>]{}]{} [@KS04] is to generate a *normal form* for the polytope $P$. We shall briefly sketch their approach. Their algorithm is described in detail in Appendix \[apx:palp\_source\_code\].
Throughout we require that the polytope $P\subset\Lambda_{\mathbb{Q}}$ is a lattice polytope of maximum dimension. It is essential to the algorithm that the vertices are lattice points; one could dilate a rational polytope by a sufficiently large factor to overcome this restriction, but in practice the resulting large vertex coefficients can cause computational problems of their own. Let $n$ denote the dimension of $P$, and $n_v$ be the number of vertices ${\mathcal{V}\mleft({P}\mright)}$. We can represent $P$ by an $n\times n_v$ matrix $V$ whose columns are given by the vertices. Obviously $V$ is uniquely defined only up to permutations $\sigma\in S_{n_v}$ of the columns.
Given any matrix $V$ with integer entries, we can compute its Hermite normal form $H(V)$. This has the property that $H(V)=H(B\cdot V)$ for all $B\in{\mathrm{GL}}_n({\mathbb{Z}})$; permuting the columns of $V$, however, will result in different Hermite normal forms. Naïvely one could define the normal form ${\mathrm{NF}\mleft({P}\mright)}$ of $P$ to be $$\min\left\{H(\sigma V)\mid\sigma\in S_{n_v}\right\},$$ where $\sigma V$ denotes the matrix obtained by permuting the columns of $V$ by $\sigma$, and the minimum is taken with respect to some ordering of the set of $n\times n_v$ integer matrices (say, lexicographic ordering). Unfortunately ${\left\vert{S_{n_v}}\right\vert}=n_v!$ grows too quickly for this to be a practical algorithm.
The pairing matrix
------------------
The key to making this approach tractable is the vertex–facet pairing matrix.
\[defn:PM\] Let $P$ be a lattice polytope with vertices $v_j$, and let $(w_i,c_i)\in\Lambda^*\times{\mathbb{Z}}$ define the supporting hyperplanes of $P$; each $w_i$ is a primitive inward-pointing vector normal to the facet $F_i$ of $P$, such that ${\langle{w_i},{v}\rangle}=-c_i$ for all $v\in F_i$. The *vertex–facet pairing matrix* ${P\!M}$ is the $n_f\times n_v$ matrix with integer coefficients $${P\!M}_{ij}:={\langle{w_i},{v_j}\rangle}+c_i.$$
In other words, the $ij$-th entry of ${P\!M}$ is the lattice height of $v_j$ above the facet $F_i$. This is clearly invariant under the action of ${\mathrm{GL}}_n({\mathbb{Z}})$. It is also invariant under (lattice) translation of $P$. Permuting the vertices of $P$ corresponds to permuting the columns of ${P\!M}$, and permuting the facets of $P$ corresponds to permuting the rows of ${P\!M}$. Thus there is an action of $S_{n_f}\times S_{n_v}$ on ${P\!M}$: given $\sigma=(\sigma_f,\sigma_v)\in S_{n_f}\times S_{n_v}$, $$(\sigma{P\!M})_{ij}:={P\!M}_{\sigma_f(i),\sigma_v(j)}.$$ There is a corresponding action on $V$ given by restriction: $$\sigma V:=\sigma_v V.$$
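For a lattice polygon the pairing matrix can be computed directly from the vertices. A sketch (plain Python; the function name is ours, and we assume the vertices are given in counter-clockwise order, so that the inward normal of each edge is the edge direction rotated through a quarter turn):

```python
from math import gcd

def pairing_matrix_2d(verts):
    """Vertex-facet pairing matrix of a lattice polygon whose vertices
    are listed in counter-clockwise order: PM[i][j] is the lattice
    height of vertex j over edge i."""
    n = len(verts)
    PM = []
    for i in range(n):
        (x0, y0), (x1, y1) = verts[i], verts[(i + 1) % n]
        dx, dy = x1 - x0, y1 - y0
        g = gcd(abs(dx), abs(dy))
        w = (-dy // g, dx // g)      # primitive inward normal (CCW input)
        c = -(w[0] * x0 + w[1] * y0)
        PM.append([w[0] * x + w[1] * y + c for (x, y) in verts])
    return PM

# Edges of conv{(1,0), (0,1), (-2,-3)}, vertices in CCW order:
print(pairing_matrix_2d([(1, 0), (0, 1), (-2, -3)]))
# [[0, 0, 6], [3, 0, 0], [0, 2, 0]]
```

Each entry is non-negative, and the zeros in row $i$ pick out exactly the vertices lying on edge $i$.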
Let ${{{P\!M}^\text{max}}}$ denote the maximal matrix (ordered lexicographically) obtained from ${P\!M}$ by the action of $S_{n_f}\times S_{n_v}$, realised by some element $\sigma_\text{max}$. Let ${\mathrm{Aut}\mleft({{{{P\!M}^\text{max}}}}\mright)}\le S_{n_f}\times S_{n_v}$ be the automorphism group of ${{{P\!M}^\text{max}}}$. Then:
\[defn:normal\_form\] The *normal form* of $P$ is $${\mathrm{NF}\mleft({P}\mright)}=\min\left\{H(\sigma\circ\sigma_\text{max} V)\mid\sigma\in{\mathrm{Aut}\mleft({{{{P\!M}^\text{max}}}}\mright)}\right\}.$$
Let $G$ be the group generated by the action of ${\mathrm{Aut}\mleft({{P\!M}}\mright)}$ on the columns of ${P\!M}$. Then ${\mathrm{Aut}\mleft({P}\mright)}\leq G$. Hence we have an alternative method for constructing the automorphism group when $P$ is a lattice polytope of maximum dimension.
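For very small polytopes, ${{{P\!M}^\text{max}}}$ can be computed by brute force straight from the definition: for a fixed column permutation the optimal row permutation simply sorts the rows in decreasing order, so it suffices to enumerate column permutations. A sketch, using the triangle ${\mathrm{conv}\mleft\{{(1,0),(0,1),(-2,-3)}\mright\}}$ from earlier, whose pairing matrix (for one ordering of edges and vertices) is the $3\times3$ matrix below:

```python
from itertools import permutations

def pm_max_bruteforce(PM):
    """Naive PM^max: try every column permutation (feasible only for a
    handful of vertices); for fixed columns the lexicographically best
    row order is a descending sort."""
    n_v = len(PM[0])
    best = None
    for sigma in permutations(range(n_v)):
        cand = sorted((tuple(row[j] for j in sigma) for row in PM),
                      reverse=True)
        if best is None or cand > best:
            best = cand
    return [list(row) for row in best]

PM = [[0, 0, 6], [3, 0, 0], [0, 2, 0]]   # triangle conv{(1,0),(0,1),(-2,-3)}
print(pm_max_bruteforce(PM))             # [[6, 0, 0], [0, 3, 0], [0, 0, 2]]
```

The algorithm of §\[sec:exploiting\_aut\] exists precisely because this enumeration over $S_{n_v}$ is hopeless once $n_v$ grows.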
\[eg:normal\_form\] Consider the three-dimensional polytope $P$ with vertices $(1,0,0)$, $(0,1,0)$, $(0,0,1)$, $(-1,0,1)$, $(0,1,-1)$, $(0,-1,0)$, $(0,0,-1)$; $P$ is isomorphic to the polytope in Example \[ex:involution\] via the change of basis $$\small\begin{pmatrix}
0&-1&-1\\
1&0&0\\
0&-1&0
\end{pmatrix}.$$ With the vertices in the order written above, and some choice of order for the facets, the vertex–facet pairing matrix is given by $${P\!M}=\small\begin{pmatrix}
1&0&0&0&1&2&2\\
0&0&0&1&1&2&2\\
2&0&1&0&0&2&1\\
0&0&1&2&0&2&1\\
0&2&0&1&3&0&2\\
1&2&0&0&3&0&2\\
0&1&2&3&0&1&0\\
0&2&2&3&1&0&0\\
3&2&2&0&1&0&0\\
3&1&2&0&0&1&0
\end{pmatrix}\normalsize.\phantom{{P\!M}=}$$ The maximum vertex–facet pairing matrix is $${{{P\!M}^\text{max}}}=
\small\begin{pmatrix}
3&2&2&1&0&0&0\\
3&2&2&0&1&0&0\\
1&2&0&3&0&2&0\\
1&2&0&0&3&2&0\\
1&0&2&1&0&0&2\\
1&0&2&0&1&0&2\\
0&1&0&3&0&2&1\\
0&1&0&0&3&2&1\\
0&0&1&2&0&1&2\\
0&0&1&0&2&1&2\\
\end{pmatrix}\normalsize,\phantom{{{{P\!M}^\text{max}}}=}$$ realised by, for example, the permutation $\left((1\ 5\ 2\ 6)(3\ 9)(4\ 10\ 7\ 8),(1\ 4\ 5)(3\ 6\ 7)\right)$ of ${P\!M}$. The automorphism group of ${{{P\!M}^\text{max}}}$ is of order two, generated by $$\left((1\ 2)(3\ 4)(5\ 6)(7\ 8)(9\ 10),(4\ 5)\right).$$ We see that ${\mathrm{NF}\mleft({P}\mright)}$ is equal to $$\small\begin{pmatrix}
1&0&1&0&-1&-1&0\\
0&1&-1&0&1&1&-1\\
0&0&0&1&-1&0&0\\
\end{pmatrix}\normalsize,$$ corresponding to the sequence of vertices $(1,0,0)$, $(0,1,0)$, $(1,-1,0)$, $(0,0,1)$, $(-1,1,-1)$, $(-1,1,0)$, and $(0,-1,0)$. In this example ${\mathrm{Aut}\mleft({{{{P\!M}^\text{max}}}}\mright)}\cong{\mathrm{Aut}\mleft({{\mathrm{NF}\mleft({P}\mright)}}\mright)}$, and acts by exchanging the vertices $(0,0,1)$ and $(-1,1,-1)$.
\[eg:matrix\_aut\_ne\_poly\_aut\] Let $P:={\mathrm{conv}\mleft\{{(-1,-2,-2),(1,0,0),(0,2,1),(0,0,1)}\mright\}}$ be a three-dimensional reflexive polytope. This has $$\phantom{,}{{{P\!M}^\text{max}}}=\small\begin{pmatrix}
4&0&0&0\\
0&4&0&0\\
0&0&4&0\\
0&0&0&4\\
\end{pmatrix}\normalsize,\phantom{{{{P\!M}^\text{max}}}=}$$ with ${\mathrm{Aut}\mleft({{{{P\!M}^\text{max}}}}\mright)}\cong S_4$ of order $24$. However, ${\left\vert{{\mathrm{Aut}\mleft({P}\mright)}}\right\vert}=8$; with ordering as above, the action on the vertices is given by the permutation group with generators $(1\ 4\ 2\ 3)$ and $(3\ 4)$.
Lattice polytopes of non-zero codimension
-----------------------------------------
Suppose that $P$ is a lattice polytope such that $\dim{P}<\dim{\Lambda}$. We can still define a normal form: how we proceed depends on whether $0\in{\mathrm{aff}\mleft({P}\mright)}$.
First suppose that $0\in{\mathrm{aff}\mleft({P}\mright)}$, so that ${\mathrm{aff}\mleft({P}\mright)}={\mathrm{span}\mleft({P}\mright)}$. Set $d=\dim{P}$. We restrict $P$ to the sublattice ${\mathrm{span}\mleft({P}\mright)}\cap\Lambda\cong{\mathbb{Z}}^d$ and calculate the normal form there. The result can be embedded back into $\Lambda$ via $$(a_1,\ldots,a_d)\mapsto(0,\ldots,0,a_1,\ldots,a_d).$$ Now suppose that $0\not\in{\mathrm{aff}\mleft({P}\mright)}$. In this case we consider the polytope $P_0:={\mathrm{conv}\mleft({P\cup\{0\}}\mright)}$. The normal form ${\mathrm{NF}\mleft({P_0}\mright)}$ can be calculated and then the origin discarded.
Let $P:={\mathrm{conv}\mleft\{{(-1,1,1,0),(1,1,1,1),(0,0,0,-1)}\mright\}}$ be a lattice polygon of codimension two. The three-dimensional sublattice ${\mathrm{span}\mleft({P_0}\mright)}\cap{\mathbb{Z}}^4$ has generators $(1,0,0,0)$, $(0,1,1,0)$, and $(0,0,0,1)$. Let $\varphi:{\mathbb{Z}}^3\rightarrow{\mathbb{Z}}^4$ be the embedding given by right multiplication by the matrix $$\small\begin{pmatrix}
1&0&0&0\\
0&1&1&0\\
0&0&0&1\\
\end{pmatrix}\normalsize.$$ Then $\varphi^*P_0$ has vertices $(-1,1,0)$, $(1,1,1)$, $(0,0,-1)$, and $(0,0,0)$, with normal form given by $(0,0,0)$, $(1,0,0)$, $(0,1,0)$, and $(1,1,2)$. Hence ${\mathrm{NF}\mleft({P}\mright)}$ corresponds to the vertices $(0,1,0,0)$, $(0,0,1,0)$, and $(0,1,1,2)$. In fact $P$ is isomorphic to ${\mathrm{NF}\mleft({P}\mright)}$ via the change of basis $$\phantom{\in{\mathrm{GL}}_4({\mathbb{Z}}).}\small\begin{pmatrix}
0&0&1&1\\
1&1&1&1\\
-1&0&0&0\\
0&-1&-1&-2\\
\end{pmatrix}\normalsize\in{\mathrm{GL}}_4({\mathbb{Z}}).$$
Affine normal form {#subsec:affine_normal_form}
------------------
The normal form can be adapted to give an *affine normal form* ${\mathrm{AffNF}\mleft({P}\mright)}$ such that ${\mathrm{AffNF}\mleft({P}\mright)}={\mathrm{AffNF}\mleft({P'}\mright)}$ if and only if the polytopes $P$ and $P'$ are equivalent. One could simply define $${\mathrm{AffNF}\mleft({P}\mright)}:=\min\left\{{\mathrm{NF}\mleft({P-v}\mright)}\mid v\in{\mathcal{V}\mleft({P}\mright)}\right\}.$$ However, since the relative height of a vertex over a facet is unchanged by lattice translation, we have that ${{{P\!M}^\text{max}}}$ is invariant. Hence $${\mathrm{AffNF}\mleft({P}\mright)}=\min\left\{H\mleft(\sigma\circ\sigma_\text{max} (V-v)\mright)\mid\sigma\in{\mathrm{Aut}\mleft({{{{P\!M}^\text{max}}}}\mright)}, v\in{\mathcal{V}\mleft({P}\mright)}\right\}.$$
\[eg:affine\_normal\_form\] Returning to the polytope in Example \[eg:normal\_form\] we obtain $$\phantom{.}{\mathrm{AffNF}\mleft({P}\mright)}=\small\begin{pmatrix}
0&1&0&0&3&2&1\\
0&0&1&0&2&1&2\\
0&0&0&1&-1&0&0\\
\end{pmatrix}\normalsize.\phantom{{\mathrm{AffNF}\mleft({P}\mright)}=}$$
The [[<span style="font-variant:small-caps;">Palp</span>]{}]{} normal form {#subsec:palp_normal_form}
--------------------------------------------------------------------------
Kreuzer and Skarke’s [[<span style="font-variant:small-caps;">Palp</span>]{}]{} normal form applies an additional modification to the order of the columns of the maximum vertex–facet pairing matrix ${{{P\!M}^\text{max}}}$. For any $n_f\times n_v$ matrix $M$, let $c_M(j):=\max\left\{M_{ij}\mid 1\leq i\leq n_f\right\}$, and $s_M(j):=\sum_{i=1}^{n_f}M_{ij}$, where $1\leq j\leq n_v$. The following pseudo-code describes how the columns of ${{{P\!M}^\text{max}}}$ (or, equivalently, the vertices of $P$) are rearranged.
    M ← PM^max
    for i = 1, …, n_v do
        k ← i
        for j = i + 1, …, n_v do
            if (c_M(j), s_M(j)) < (c_M(k), s_M(k)) lexicographically then
                k ← j
        M ← SwapColumn(M, i, k)
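A short Python version of this rearrangement (the comparison order, smallest $(c_M, s_M)$ first, is our reading of the procedure; it reproduces the column order of Example \[eg:palp\_normal\_form\] below):

```python
def palp_column_order(M):
    """Selection sort of the columns of PM^max on the key (c_M, s_M),
    smallest first; c_M(j) is the maximum and s_M(j) the sum of column j."""
    M = [list(row) for row in M]
    c = lambda j: max(row[j] for row in M)
    s = lambda j: sum(row[j] for row in M)
    for i in range(len(M[0])):
        k = i
        for j in range(i + 1, len(M[0])):
            if (c(j), s(j)) < (c(k), s(k)):
                k = j
        for row in M:                        # SwapColumn(M, i, k)
            row[i], row[k] = row[k], row[i]
    return M

PM_MAX = [[3, 2, 2, 1, 0, 0, 0],
          [3, 2, 2, 0, 1, 0, 0],
          [1, 2, 0, 3, 0, 2, 0],
          [1, 2, 0, 0, 3, 2, 0],
          [1, 0, 2, 1, 0, 0, 2],
          [1, 0, 2, 0, 1, 0, 2],
          [0, 1, 0, 3, 0, 2, 1],
          [0, 1, 0, 0, 3, 2, 1],
          [0, 0, 1, 2, 0, 1, 2],
          [0, 0, 1, 0, 2, 1, 2]]
print(palp_column_order(PM_MAX)[0])          # first row: [2, 2, 0, 0, 0, 3, 1]
```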
\[eg:palp\_normal\_form\] We revisit Example \[eg:normal\_form\]. In this case, ${{{P\!M}^\text{max}}}$ is modified by applying the permutation $(1\ 6\ 3\ 2)(4\ 7)$ to the columns, giving $$\phantom{.}\small\begin{pmatrix}
2&2&0&0&0&3&1\\
2&2&0&0&1&3&0\\
2&0&2&0&0&1&3\\
2&0&2&0&3&1&0\\
0&2&0&2&0&1&1\\
0&2&0&2&1&1&0\\
1&0&2&1&0&0&3\\
1&0&2&1&3&0&0\\
0&1&1&2&0&0&2\\
0&1&1&2&2&0&0\\
\end{pmatrix}\normalsize.$$ The resulting [[<span style="font-variant:small-caps;">Palp</span>]{}]{} normal form corresponds to the sequence of vertices $(1,0,0)$, $(0,1,0)$, $(0,-1,0)$, $(-1,0,0)$, $(0,0,1)$, $(1,1,0)$, and $(0,-1,-1)$.
\[eg:palp\_affine\_normal\_form\] The affine normal form for the polytope in Example \[eg:normal\_form\] with modified ${{{P\!M}^\text{max}}}$ is given by $$\phantom{.}{\mathrm{AffNF}\mleft({P}\mright)}=\small\begin{pmatrix}
0&1&1&2&0&0&2\\
0&0&2&2&0&-1&3\\
0&0&0&0&1&0&-1\\
\end{pmatrix}\normalsize.\phantom{{\mathrm{AffNF}\mleft({P}\mright)}=}$$
Exploiting the automorphism group of the pairing matrix {#sec:exploiting_aut}
=======================================================
A crucial part of the normal form algorithm described in §\[sec:palp\_normal\_form\] is the ability to efficiently calculate the maximum vertex–facet pairing matrix ${{{P\!M}^\text{max}}}$. One also needs to know a permutation $\sigma$ such that $\sigma{P\!M}={{{P\!M}^\text{max}}}$, and to be able to calculate ${\mathrm{Aut}\mleft({{{{P\!M}^\text{max}}}}\mright)}$. These data can be constructed as ${{{P\!M}^\text{max}}}$ is calculated – this is the approach taken by the [[<span style="font-variant:small-caps;">Palp</span>]{}]{} source code described in Appendix \[apx:palp\_source\_code\] – or recovered later. This section focuses on this second approach. A detailed algorithm is given in Appendix \[apx:matrix\_isomorphism\].
Consider the case in which ${P\!M}$ is very symmetric, so that the order of ${\mathrm{Aut}\mleft({{P\!M}}\mright)}$ is large (for example, the vertex–facet pairing matrix for the $n$-dimensional polytope associated with projective space ${\mathbb{P}}^n$ has ${\left\vert{{\mathrm{Aut}\mleft({{P\!M}}\mright)}}\right\vert}=(n+1)!$). In such situations the [[<span style="font-variant:small-caps;">Palp</span>]{}]{} algorithm is highly inefficient: whilst computing ${{{P\!M}^\text{max}}}$ the symmetries are not taken into account, so the algorithm needlessly explores equivalent permutations. Intuitively, one should be able to improve on the [[<span style="font-variant:small-caps;">Palp</span>]{}]{} algorithm by exploiting the automorphism group of ${P\!M}$.
Given an $n_r\times n_c$ matrix ${P\!M}$ and a group of possible column permutations $S$ (initially set to $S_{n_c}$), one can inductively convert this into ${{{P\!M}^\text{max}}}$ as follows:
1. If $n_r=1$ then ${{{P\!M}^\text{max}}}=\max\left\{\sigma{P\!M}\mid\sigma\in S\right\}$.
2. If ${\left\vert{S}\right\vert}=1$ then no permutations of the columns of ${P\!M}$ are possible, and ${{{P\!M}^\text{max}}}$ is given by sorting the rows of ${P\!M}$ in decreasing order.
3. Suppose now that $n_r>1$ and ${\left\vert{S}\right\vert}>1$.
1. Let ${R^\text{max}}:=\max\left\{\sigma{P\!M}_i\mid\sigma\in S, 1\leq i\leq n_r\right\}$ be the largest row in ${P\!M}$, up to the action of $S$.
2. Set $S':=\{\sigma\in S\mid \sigma{R^\text{max}}={R^\text{max}}\}$.
3. For each row $1\leq i\leq n_r$ such that there exists a permutation $\sigma\in S$ with $\sigma{P\!M}_i={R^\text{max}}$, consider the matrix $M_{(i)}$ obtained from $\sigma{P\!M}$ by deleting the $i$-th row. If $M_{(i)}\cong M_{(j)}$ for some $j<i$, then skip this case. Otherwise let ${M_{(i)}^\text{max}}$ be the $(n_r-1)\times n_c$ matrix obtained by inductively applying this process with ${P\!M}\gets M_{(i)}$ and $S\gets S'$.
4. Set ${M^\text{max}}$ to be the maximum of all such ${M_{(i)}^\text{max}}$. Then $${{{P\!M}^\text{max}}}=\left(\begin{array}{c}
{R^\text{max}}\\
\hline
{M^\text{max}}
\end{array}\right).$$
Test case: the database of smooth Fano polytopes {#subsec:analysis_smooth_db}
------------------------------------------------
The algorithm described in Appendix \[apx:matrix\_isomorphism\], which we shall hereafter refer to as <span style="font-variant:small-caps;">Symm</span>, was implemented by the authors and compared against the [[<span style="font-variant:small-caps;">Palp</span>]{}]{} algorithm. As Examples \[eg:symmetry\_vs\_palp\] and \[eg:palp\_vs\_symmetry\] illustrate, the difference in run-time between the two approaches can be considerable.
\[eg:symmetry\_vs\_palp\] Let $P$ be the six-dimensional polytope[^4] with $14$ vertices $$\begin{aligned}
&\pm(1,0,0,0,0,0), \pm(0,1,0,0,0,0), \pm(0,0,1,0,0,0), \pm(0,0,0,1,0,0),\\
&\pm(0,0,0,0,1,0), \pm(0,0,0,0,0,1), \pm(1,1,1,1,1,1).\end{aligned}$$ The automorphism group ${\mathrm{Aut}\mleft({{P\!M}}\mright)}$ is of order $10,\!080$. On our test machine the [[<span style="font-variant:small-caps;">Palp</span>]{}]{} algorithm took $512.88$ seconds, whereas the <span style="font-variant:small-caps;">Symm</span> algorithm took only $5.83$ seconds.
\[eg:palp\_vs\_symmetry\] Let $P$ be the six-dimensional polytope[^5] with $12$ vertices $$\begin{aligned}
&(1,0,0,0,0,0), (0,1,0,0,0,0), (0,0,1,0,0,0), (0,0,0,1,0,0), (0,0,0,0,1,0),\\
&(0,0,0,0,0,1), (-1,-1,-1,1,1,1), (0,0,1,-1,0,0), (0,0,-1,0,0,0),\\
&(0,1,1,-1,-1,-1), (0,-1,-1,0,0,0), (0,0,0,0,-1,-1).\end{aligned}$$ The automorphism group ${\mathrm{Aut}\mleft({{P\!M}}\mright)}$ is of order $16$; the [[<span style="font-variant:small-caps;">Palp</span>]{}]{} algorithm took $0.55$ seconds whilst the <span style="font-variant:small-caps;">Symm</span> algorithm took $4.30$ seconds.
Table \[tab:timings\] contains timing data comparing the [[<span style="font-variant:small-caps;">Palp</span>]{}]{} algorithm with the <span style="font-variant:small-caps;">Symm</span> algorithm. This data was collected by sampling polytopes from [Ø]{}bro’s classification of smooth Fano polytopes [@Obr07]. For each smooth polytope $P$ selected, the calculation was performed for both $P$ and $P^*$. In small dimensions the number of polytopes, and the time required for the computations, is small enough that the entire classification can be used. It is important to emphasise that the smooth Fano polytopes are atypical in that they can be expected to have a large number of symmetries, and so favour <span style="font-variant:small-caps;">Symm</span>. Experimental evidence suggests that the ratio $r:={\left\vert{{\mathrm{Aut}\mleft({{P\!M}}\mright)}}\right\vert}/{n_v}$ is a good proxy for deciding between the two choices. When $r<1$ the [[<span style="font-variant:small-caps;">Palp</span>]{}]{} algorithm often performs better, whereas larger values indicate <span style="font-variant:small-caps;">Symm</span> should be used.
  Dim.   $\# P$   Palp (total)   Palp (mean)   Symm (total)   Symm (mean)   Best (total)   Best (mean)
  ------ -------- -------------- ------------- -------------- ------------- -------------- -------------
  4      248      6.28           0.03          4.48           0.02          3.41           0.01
  5      1732     98.30          0.06          59.53          0.03          46.17          0.03
  6      15244    6148.45        0.40          1510.32        0.10          1214.25        0.08
  7      150892   152279.91      1.01          45230.55       0.30          34818.32       0.23
  8      281629   611795.13      2.17          152902.73      0.54          111426.70      0.40
: Timing data, in seconds, for the [[<span style="font-variant:small-caps;">Palp</span>]{}]{} algorithm and for the <span style="font-variant:small-caps;">Symm</span> algorithm. The best possible time if one could infallibly choose the faster of the two algorithms is recorded by <span style="font-variant:small-caps;">Best</span>.[]{data-label="tab:timings"}
Applications to Laurent polynomials {#sec:laurent_normal_form}
===================================
Let $f\in{\mathbb{C}}[x_1^{\pm1},\ldots,x_n^{\pm1}]$ be a Laurent polynomial in $n$ variables, and let $P:={\mathrm{Newt}\mleft({f}\mright)}$ denote the Newton polytope of $f$. We require throughout that $\dim{P}=n$, i.e. that $P$ is of maximum dimension in the ambient lattice. An element $B\in{\mathrm{GL}}_n({\mathbb{Z}})$ corresponds to the invertible monomial transformation $$\label{eq:cob}
\begin{array}{r@{\hspace{2pt}}c@{\hspace{2pt}}l}
\varphi_B:({\mathbb{C}}^*)^n&\rightarrow&({\mathbb{C}}^*)^n\\
x_j&\mapsto&x_1^{B_{1j}}\cdots x_n^{B_{nj}},
\end{array}$$ and $g=\varphi_B^*f$ is also a Laurent polynomial. In particular, ${\mathrm{Newt}\mleft({g}\mright)}=P\cdot B$.
As when working with lattice polytopes, it can be advantageous to be able to present $f$ in a normal form with respect to transformations of type \[eq:cob\].
\[defn:laurent\_ordering\] Given two Laurent polynomials $f$ and $g$ such that ${\mathrm{Newt}\mleft({f}\mright)}={\mathrm{Newt}\mleft({g}\mright)}$, we define an order $\preceq$ on $f$ and $g$ as follows. Let $v_1<v_2<\ldots<v_k$ be the lattice points in ${\mathrm{Newt}\mleft({f}\mright)}$, listed in lexicographic order. To each point $v_i$ there corresponds a (possibly zero) coefficient $c_i$ of $x^{v_i}$ in $f$, and a coefficient $d_i$ in $g$. Define ${\mathrm{coeffs}\mleft({f}\mright)}:=(c_1,c_2,\ldots,c_k)$ and, similarly, ${\mathrm{coeffs}\mleft({g}\mright)}:=(d_1,d_2,\ldots,d_k)$. We write $f\preceq g$ if and only if ${\mathrm{coeffs}\mleft({f}\mright)}\leq{\mathrm{coeffs}\mleft({g}\mright)}$.
Any Laurent polynomial $f$ determines a pair $({\mathrm{coeffs}\mleft({f}\mright)},{\mathrm{Newt}\mleft({f}\mright)})$. Conversely, given any pair $(c,P)$, where $c\in{\mathbb{C}}^k$ and $P\subset\Lambda_{\mathbb{Q}}$ is a maximum-dimensional lattice polytope such that $k={\left\vert{P\cap\Lambda}\right\vert}$, we can associate a Laurent polynomial. If we insist that the $c_i$ associated with the vertices ${\mathcal{V}\mleft({P}\mright)}$ are non-zero then we have a one-to-one correspondence.
\[defn:laurent\_normal\_form\] Let $f$ be a Laurent polynomial, and set $P:={\mathrm{Newt}\mleft({f}\mright)}$. Let $B\in{\mathrm{GL}}_n({\mathbb{Z}})$ be such that $P\cdot B={\mathrm{NF}\mleft({P}\mright)}$. The *normal form* for $f$ is $${\mathrm{NF}\mleft({f}\mright)}:=\mathrm{min}_\preceq\left\{\varphi_A\circ\varphi_B(f)\mid A\in{\mathrm{Aut}\mleft({{\mathrm{NF}\mleft({P}\mright)}}\mright)}\right\}.$$
\[eg:laurent\_normal\_form\] Consider the Laurent polynomial $$f=2x^2y+\frac{1}{x}+\frac{3}{xy}.$$ Then ${\mathrm{NF}\mleft({P}\mright)}$ has vertices $(1,0)$, $(0,1)$, and $(-1,-1)$, with corresponding transformation matrix $$B=\small\begin{pmatrix}
0&-1\\-1&1
\end{pmatrix}\normalsize\in{\mathrm{GL}}_2({\mathbb{Z}}).$$ Under this transformation, $$\varphi_B^*f=3x+y+\frac{2}{xy}$$ and ${\mathrm{coeffs}\mleft({\varphi_B^*f}\mright)}=(2,0,1,3)$. The automorphism group ${\mathrm{Aut}\mleft({{\mathrm{NF}\mleft({P}\mright)}}\mright)}\cong S_3$ acts by permuting the non-zero elements in the coefficient vector, hence $${\mathrm{NF}\mleft({f}\mright)}=3x+2y+\frac{1}{xy}.$$
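The coefficient vector of Definition \[defn:laurent\_ordering\] can be computed directly in two variables; the naive bounding-box enumeration below (with a point-in-polygon test; all names are ours) is only suitable for small polygons, as the next section explains.

```python
from itertools import product

def lattice_points_2d(verts):
    """Lattice points of a convex polygon (vertices in ccw order), lex sorted."""
    def inside(p):
        # p lies in the polygon iff it is left of (or on) every ccw edge
        m = len(verts)
        for k in range(m):
            (x0, y0), (x1, y1) = verts[k], verts[(k + 1) % m]
            if (x1 - x0) * (p[1] - y0) - (y1 - y0) * (p[0] - x0) < 0:
                return False
        return True
    xs = [v[0] for v in verts]
    ys = [v[1] for v in verts]
    box = product(range(min(xs), max(xs) + 1), range(min(ys), max(ys) + 1))
    return sorted(p for p in box if inside(p))

def coeffs(f, verts):
    """coeffs(f) with respect to the polygon spanned by verts."""
    return [f.get(p, 0) for p in lattice_points_2d(verts)]

# g = phi_B^* f = 3x + y + 2/(xy), with Newton polytope NF(P)
g = {(1, 0): 3, (0, 1): 1, (-1, -1): 2}
print(coeffs(g, [(1, 0), (0, 1), (-1, -1)]))   # [2, 0, 1, 3]
```

The lattice points of ${\mathrm{NF}\mleft({P}\mright)}$ in lexicographic order are $(-1,-1)$, $(0,0)$, $(0,1)$, $(1,0)$, recovering the vector $(2,0,1,3)$ of the example.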
A naïve implementation of Laurent normal form faces a serious problem: listing the points in a polytope is computationally expensive, and will often be the slowest part of the algorithm by many orders of magnitude. With a little care this can be avoided. What is really needed in Definition \[defn:laurent\_normal\_form\] is not the entire coefficient vector, but the closure of the non-zero coefficients under the action of ${\mathrm{Aut}\mleft({{\mathrm{NF}\mleft({P}\mright)}}\mright)}$. We illustrate this observation with an example.
\[eg:orbit\_closure\] Consider the Laurent polynomial $$f=x^{50}y^{50}z^{50} + x^{50}y^{30} + \frac{x^{30}z^{30}}{y^{40}} + \frac{x^{10}}{y^{40}z^{20}} + xyz + \frac{y^{40}z^{20}}{x^{10}} + \frac{y^{40}}{x^{30}z^{30}} + \frac{1}{x^{50}y^{30}} + \frac{1}{x^{50}y^{50}z^{50}}.$$ Set $P={\mathrm{Newt}\mleft({f}\mright)}$. Notice that ${\left\vert{P\cap\Lambda}\right\vert}=285241$; enumerating the points in $P$ is clearly not the correct approach. The normal form ${\mathrm{NF}\mleft({P}\mright)}$ is given by change of basis $$B=\small\begin{pmatrix}
-3&-4&-6\\
5&7&10\\
-12&-16&-23\\
\end{pmatrix}\normalsize\in{\mathrm{GL}}_3({\mathbb{Z}}),$$ with $$\begin{aligned}
g:=\varphi_B^*f=x^{650}y^{880}z^{1270} + x^{500}y^{650}z^{950} + x^{10} + y^{10} + &\frac{1}{y^{10}} +\\
\frac{1}{x^{10}} + \frac{1}{x^{10}y^{13}z^{19}} + &\frac{1}{x^{500}y^{650}z^{950}} + \frac{1}{x^{650}y^{880}z^{1270}}.\end{aligned}$$ The automorphism group $G:={\mathrm{Aut}\mleft({{\mathrm{NF}\mleft({P}\mright)}}\mright)}$ is of order two, generated by the involution $u\mapsto -u$. We consider the closure of the nine lattice points corresponding to the exponents of $g$ under the action of $G$. The only additional point is $(10,13,19)$. Thus we can express ${\mathrm{coeffs}\mleft({g}\mright)}$ with respect to these ten points: $${\mathrm{coeffs}\mleft({g}\mright)}=(1,1,1,1,1,1,1,0,1,1).$$ The key observation is that the action of $G$ on $g$ will not introduce any additional points, hence the lexicographically smallest coefficient sequence with respect to these points will also be the smallest coefficient sequence with respect to all the points of ${\mathrm{NF}\mleft({P}\mright)}$. By applying the involution we obtain the smaller coefficient sequence $(1,1,0,1,1,1,1,1,1,1)$, hence $$\begin{aligned}
{\mathrm{NF}\mleft({f}\mright)}=x^{650}y^{880}z^{1270} + x^{500}y^{650}z^{950} + x^{10}y^{13}z^{19} + &x^{10} + y^{10} +\\
\frac{1}{y^{10}} + \frac{1}{x^{10}} + &\frac{1}{x^{500}y^{650}z^{950}} + \frac{1}{x^{650}y^{880}z^{1270}}.\end{aligned}$$
We conclude this section by remarking that the automorphism group ${\mathrm{Aut}\mleft({f}\mright)}\leq{\mathrm{GL}}_n({\mathbb{Z}})$ of a Laurent polynomial $f$ can easily be constructed from ${\mathrm{Aut}\mleft({{\mathrm{Newt}\mleft({f}\mright)}}\mright)}$ by restricting to the subgroup that leaves ${\mathrm{coeffs}\mleft({f}\mright)}$ invariant.
The Kreuzer–Skarke algorithm {#apx:palp_source_code}
============================
We describe in detail the algorithm used by Kreuzer and Skarke in [[<span style="font-variant:small-caps;">Palp</span>]{}]{} [@KS04] to compute the normal form of a lattice polytope $P$ of maximum dimension $n$. Any such polytope can be represented by an $n\times n_v$ matrix $V$ whose columns correspond to the vertices of $P$. This matrix is unique up to permutation of columns and the action of ${\mathrm{GL}}_n({\mathbb{Z}})$; i.e. one can change the order of the vertices and the underlying basis for the lattice to obtain a different matrix $V'$.
The [[<span style="font-variant:small-caps;">Palp</span>]{}]{} normal form is a unique representation of the polytope $P$ such that if $Q$ is any other maximum dimensional lattice polytope, then $P$ and $Q$ are isomorphic if and only if their normal forms are equal. For any matrix $V$ with integer entries, and any $G\in{\mathrm{GL}}_n({\mathbb{Z}})$, the Hermite normal form of $G\cdot V$ is uniquely defined. The question is how to define a canonical order for the vertices, since permuting the vertices will lead to a different Hermite normal form.
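The Hermite normal form removes the ${\mathrm{GL}}_n({\mathbb{Z}})$ freedom. The following is a generic row-style sketch (our own, not a transcription of the [[<span style="font-variant:small-caps;">Palp</span>]{}]{} routine; conventions on triangularity and signs vary between implementations), using Euclidean row operations, which are exactly left-multiplications by elements of ${\mathrm{GL}}_n({\mathbb{Z}})$:

```python
def hermite_normal_form(M):
    """Row-style Hermite normal form of an integer matrix via unimodular
    row operations (swap, negate, add integer multiples of rows)."""
    M = [row[:] for row in M]
    m, n = len(M), len(M[0])
    r = 0
    for c in range(n):
        piv = next((i for i in range(r, m) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(r + 1, m):
            # Euclidean algorithm on column c clears the entry below the pivot
            while M[i][c] != 0:
                q = M[r][c] // M[i][c]
                M[r] = [a - q * b for a, b in zip(M[r], M[i])]
                M[r], M[i] = M[i], M[r]
        if M[r][c] < 0:                      # normalise the pivot sign
            M[r] = [-a for a in M[r]]
        for i in range(r):                   # reduce the entries above the pivot
            q = M[i][c] // M[r][c]
            M[i] = [a - q * b for a, b in zip(M[i], M[r])]
        r += 1
    return M

H = hermite_normal_form([[1, 2], [3, 4]])    # [[1, 0], [0, 2]]
```

Since only unimodular row operations are used, two matrices differing by a left ${\mathrm{GL}}_n({\mathbb{Z}})$ action have the same output, which is the property the algorithm relies on.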
In what follows, the line numbers refer to the [[<span style="font-variant:small-caps;">Palp</span>]{}]{} source file `Polynf.c`[^6]. We have chosen our notation to correspond as closely as possible to the source code. The algorithm will be described in eight stages:
1. The pairing matrix;
2. The maximal pairing matrix;
3. Constructing the first row;
4. Computing the restricted automorphism group, step I;
5. Constructing the $k$-th row;
6. Updating the set of permutations;
7. Computing the restricted automorphism group, step II;
8. Computing the normal form of the polytope.
The pairing matrix {#asubsec:pairing_matrix}
------------------
We start by constructing the pairing matrix ${P\!M}$.
Line: 197 (`Init_rVM_VPM`)\
Input: A list of vertices and a list of equations for the supporting hyperplanes.\
Output: The pairing matrix ${P\!M}$.
Let $\left\{v_i\right\}_{i=1}^{n_v}$ be the vertices of $P$, in some order, and $\sum_{j=1}^nw_{ij}x_j+c_i=0$, $i=1,\ldots,n_f$, be the equations of the supporting hyperplanes of $P$. Here $n_v$ is equal to the number of vertices of $P$, and $n_f$ is equal to the number of facets of $P$. The $w_i$ are the inward-pointing primitive facet normals, and the $c_i$ are necessarily integers. The pairing matrix ${P\!M}$ is the $n_f\times n_v$ matrix $${P\!M}_{ij}=\sum_{k=1}^nw_{ik}v_{jk}+c_i={\langle{w_{i}},{v_{j}}\rangle}+c_i$$ with integral coefficients.
The order of the columns of ${P\!M}$ corresponds to an order of the vertices of $P$, and the order of the rows of ${P\!M}$ corresponds to an order of the facets of $P$. Let $\rho=(r,c)\in S_{n_f}\times S_{n_v}$ act on ${P\!M}$ via $$(\rho{P\!M})_{ij}={P\!M}_{r(i)c(j)}.$$
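As an illustration (our own; the triangle and its facet inequalities, written as $w\cdot x+c\geq 0$ with inward primitive normals, are worked out by hand), here is the pairing matrix of the triangle ${\mathrm{NF}\mleft({P}\mright)}$ from Example \[eg:laurent\_normal\_form\]:

```python
def pairing_matrix(vertices, facets):
    """PM[i][j] = <w_i, v_j> + c_i, for facet inequalities w_i.x + c_i >= 0."""
    return [[sum(wk * vk for wk, vk in zip(w, v)) + c for v in vertices]
            for (w, c) in facets]

# Triangle with vertices (1,0), (0,1), (-1,-1) and facet inequalities
# -x - y + 1 >= 0,  2x - y + 1 >= 0,  -x + 2y + 1 >= 0:
verts = [(1, 0), (0, 1), (-1, -1)]
facets = [((-1, -1), 1), ((2, -1), 1), ((-1, 2), 1)]
PM = pairing_matrix(verts, facets)   # [[0, 0, 3], [3, 0, 0], [0, 3, 0]]
```

Note that the entries are non-negative, and ${P\!M}_{ij}=0$ exactly when vertex $v_j$ lies on facet $i$.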
The maximal pairing matrix {#asubsec:max_pairing_matrix}
--------------------------
Let ${{{P\!M}^\text{max}}}$ denote the maximal lexicographic matrix (when reading row by row) obtained from ${P\!M}$ by reordering rows and columns, so that $${{{P\!M}^\text{max}}}:=\max\left\{\rho{P\!M}\mid\rho\in S_{n_f}\times S_{n_v}\right\}.$$
It can happen that ${\mathrm{Aut}\mleft({{P\!M}}\mright)}\leq S_{n_f}\times S_{n_v}$ is non-trivial, say ${\left\vert{{\mathrm{Aut}\mleft({{P\!M}}\mright)}}\right\vert}=n_s$. Then we have $n_s$ permutations $\left\{\rho_i\right\}_{i=1}^{n_s}$ such that $\rho_i{P\!M}={{{P\!M}^\text{max}}}$, and $n_s$ corresponding orders for the vertices of the polytope. Our main task is to compute ${{{P\!M}^\text{max}}}$ and $\left\{\rho_i\right\}_{i=1}^{n_s}$ from ${P\!M}$. This will be done by induction on the rows of ${{{P\!M}^\text{max}}}$.
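On tiny examples ${{{P\!M}^\text{max}}}$ and $n_s$ can be cross-checked by exhaustive search (an exponential-time reference sketch of ours, not the inductive algorithm described below):

```python
from itertools import permutations

def pm_max_bruteforce(PM):
    """Maximise PM over all row/column permutations, comparing matrices
    lexicographically row by row; also count the permutation pairs
    attaining the maximum (n_s).  For cross-checking tiny cases only."""
    nf, nv = len(PM), len(PM[0])
    best, count = None, 0
    for r in permutations(range(nf)):
        for c in permutations(range(nv)):
            M = tuple(tuple(PM[i][j] for j in c) for i in r)
            if best is None or M > best:
                best, count = M, 1
            elif M == best:
                count += 1
    return best, count

PM = [[0, 0, 3], [3, 0, 0], [0, 3, 0]]   # pairing matrix of the triangle above
best, n_s = pm_max_bruteforce(PM)
# best == ((3, 0, 0), (0, 3, 0), (0, 0, 3)) and n_s == 6
```

Here each of the six column permutations determines a unique compatible row permutation, so $n_s=6$, matching the $S_3$ symmetry of the triangle.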
Constructing the first row {#asubsec:first_row}
--------------------------
We begin by constructing the first row of ${{{P\!M}^\text{max}}}$.
Line: 348 (`Aux_vNF_Line`)\
Input: The pairing matrix ${P\!M}$.\
Output: An array of permutations giving the first row of ${{{P\!M}^\text{max}}}$.
Set $n_s=1$ and maximise the first row of ${P\!M}$, i.e. find a permutation $c_1\in S_{n_v}$ such that ${P\!M}_{1c_1(i)}\leq{P\!M}_{1c_1(j)}$, $j\leq i$:
$n_s\gets 1$\
$(r_1,c_1)\gets (1_{S_{n_f}},1_{S_{n_v}})$\
**for** $j=1,\ldots,n_v$ **do**\
$\quad m\gets{\mathrm{IndexOfMax}\mleft\{{{P\!M}_{1i}\mid i\geq j}\mright\}}$\
$\quad c_1\gets c_1(j\, m+j-1)$\
$b\gets{P\!M}_1$
Suppose we have computed the first $k-1$ lines, $n_s$ of which could be chosen to be the first row of ${{{P\!M}^\text{max}}}$ (i.e. up to reordering of the facets they are maximal among other lines and equal to the reference line, denoted $b$). Then we have integers $1\leq k_i\leq k-1<n_f$ with corresponding permutations $\rho_i=(r_i,c_i)\in S_{n_f}\times S_{n_v}$, $i=1,\ldots,n_s$, and a reference line defined by $b:={P\!M}_{k_1}$ such that: $${P\!M}_{k_ic_i(j)}=b_{c_1(j)},\qquad i=1,\ldots,n_s, j=1,\ldots,n_v.$$ Set $r_i=(1\, k_i)$ to be the permutation which moves the line in question to the first row of ${P\!M}$. Now we consider the $k$-th row of ${P\!M}$. Find the maximal element $\max_j\{{P\!M}_{kj}\}$, say ${P\!M}_{km}$, and let $c_{n_s+1}=(1\, m)$. We compare this against the reference line. If ${P\!M}_{kc_{n_s+1}(1)}<b_{c_1(1)}$ then continue with the next line (or stop if we are at the last line), otherwise continue constructing $c_{n_s+1}$. If $\max_{j>1}\{{P\!M}_{kc_{n_s+1}(j)}\}={P\!M}_{kc_{n_s+1}(m)}$ then let $c_{n_s+1}\mapsto c_{n_s+1}\,(2\, m)$ and verify that ${P\!M}_{kc_{n_s+1}(2)}<b_{c_1(2)}$; if this inequality fails to hold then continue with the next element.
If the line $k$ is not less than the reference line $b$ then we set $r_{n_s+1}=(1\, k)$ and have two cases to consider:
1. If ${P\!M}_{kc_{n_s+1}(j)}=b_{c_1(j)}$, $j=1,\ldots,n_v$, then we have a new case of symmetry. We set $k_{n_s+1}:=k$ and increment the number of symmetries $n_s$.
2.  Otherwise we have found a (lexicographically) bigger row and so obtain a new reference line. We set $b:={P\!M}_k$, $k_1:=k$, and $\rho_1:=(r_{n_s+1},c_{n_s+1})$, and reset the number of symmetries to $n_s=1$.
$(r_{n_s+1},c_{n_s+1})\gets (1_{S_{n_f}},1_{S_{n_v}})$ $m\gets{\mathrm{IndexOfMax}\mleft\{{{P\!M}_{kc_{n_s+1}(j)}\mid j\geq 1}\mright\}}$ $c_{n_s+1}\gets c_{n_s+1}(1\, m)$ $d\gets{P\!M}_{kc_{n_s+1}(1)}-b_{c_1(1)}$ [**continue**]{} $m\gets{\mathrm{IndexOfMax}\mleft\{{{P\!M}_{kc_{n_s+1}(j)}\mid j\geq i}\mright\}}$ $c_{n_s+1}\gets c_{n_s+1}(i\, m+i-1)$ $d\gets{P\!M}_{kc_{n_s+1}(i)}-b_{c_1(i)}$ [**break**]{} [**continue**]{}$r_{n_s+1}\gets r_{n_s+1}(1\, k)$ $n_s\gets n_s+1$ $(r_1,c_1)\gets (r_{n_s+1},c_{n_s+1})$ $n_s\gets 1$ $b\gets{P\!M}_k$
Computing the restricted automorphism group, step I {#asubsec:aut_step_1}
---------------------------------------------------
Once the first row of ${{{P\!M}^\text{max}}}$ has been constructed, it imposes restrictions on any future column permutations: they must fix the first row.
Line: 376 (`Aux_vNF_Line`)\
Input: The first line of the maximal pairing matrix.\
Output: The array $S$ capturing its automorphism group.
Suppose that the row is equal to blocks of $a_i$’s, each of size $n_i$, $i=1,\ldots,k$, where $\sum_{i=1}^k n_i=n_v$: $$\left(\begin{array}{ccc|ccc|c|ccc}
a_1&\ldots&a_1&a_2&\ldots&a_2&\ldots&a_k&\ldots&a_k
\end{array}\right).$$
Given such a row, the only column permutations allowed in the construction of later rows are those factoring through $S_{n_1}\times S_{n_2}\times\ldots\times S_{n_k}$. The symmetry of this row is encoded in an array $S$ such that if $S(i)=j$ and $S(S(i))=S(j)=h$ then the index $i$ is in the block delimited by the indices $j$ and $h$ (whichever is greater). We represent $S$ as an array $$\begin{array}{r}
\left(\begin{array}{cccc|cccc|c}
n_1&1&\ldots&1&n_1+n_2&n_1+1&\ldots&n_1+1&\ldots\phantom{xxxxxxxx}\end{array}\right.\\
\left.\begin{array}{|cccc}
n_v&1+\sum_{i=1}^{k-1}n_i&\ldots&1+\sum_{i=1}^{k-1}n_i\end{array}\right)
\end{array}$$
\[aex:first\_S\] The symmetries of the row $\left(5\ 5\ 5\ 5\ 4\ 3\ 3\ 2\ 2\ 2\ 1\ 0\ 0\right)$ are encoded by the array $$S=\left(\begin{array}{cccc|c|cc|ccc|c|cc}
4&1&1&1&5&7&6&10&8&8&11&13&12
\end{array}\right).$$
When $S=\left(1\ 2\ \ldots\ n_v\right)$ the columns are fixed and we may only permute the rows. The computation of $S$ is summarised in the following pseudo-code:
$S\gets\left(1\ 2\ \ldots\ n_v\right)$\
**for** $i=2,\ldots,n_v$ **do**\
$\quad$**if** the $i$-th entry of the row equals the $(i-1)$-th **then** $S(i)\gets S(i-1)$, $S^2(i)\gets S(S(i))+1$\
$\quad$**else** $S(i)\gets i$
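A direct Python transcription of this pseudo-code (the function name is ours), checked against Example \[aex:first\_S\]. Since the procedure only compares adjacent entries, applying it to the tuples of column values of several rows appears to reproduce the restricted array used when later rows are added:

```python
def symmetry_array(keys):
    """The 1-based array S encoding the blocks of equal consecutive keys."""
    n = len(keys)
    S = list(range(1, n + 1))
    for i in range(1, n):            # 0-based index i is position i+1
        if keys[i] == keys[i - 1]:
            S[i] = S[i - 1]          # join the current block ...
            S[S[i] - 1] += 1         # ... and advance its delimiter: S(S(i)) += 1
        else:
            S[i] = i + 1             # start a new block
    return S

row1 = (5, 5, 5, 5, 4, 3, 3, 2, 2, 2, 1, 0, 0)
print(symmetry_array(row1))
# [4, 1, 1, 1, 5, 7, 6, 10, 8, 8, 11, 13, 12]

# Applied to column pairs of two rows, it refines the blocks accordingly:
row2 = (4, 3, 3, 3, 3, 2, 2, 2, 1, 0, 0, 0, 0)
print(symmetry_array(list(zip(row1, row2))))
# [1, 4, 2, 2, 5, 7, 6, 8, 9, 10, 11, 13, 12]
```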
Constructing the $k$-th row {#asubsec:kth_row}
---------------------------
Proceeding by induction on the rows, we construct the remaining rows of ${{{P\!M}^\text{max}}}$.
Line: 289 (`Aux_vNF_Line`)\
Input: ${P\!M}$, the permutations $\{\rho_i\}_{i=1}^{n_s}$, and the array $S$.\
Output: The $k$-th line of the maximal pairing matrix.
Assume we have computed the first $l-1<n_f-1$ rows of ${{{P\!M}^\text{max}}}$ and the associated symmetry array $S$ (notice that the last row of ${{{P\!M}^\text{max}}}$ need not be computed as it is completely determined), together with $n_s$ distinct permutations $\rho_i=(r_i,c_i)\in S_{n_f}\times S_{n_v}$ such that $${{P\!M}_{kj}^\text{max}}={P\!M}_{r_i(k)c_i(j)}\qquad\text{ for all }1\leq j\leq n_v, 1\leq k<l, 1\leq i\leq n_s.$$
We have to consider each configuration given by the permutations $\left\{\rho_i\right\}_{i=1}^{n_s}$. For each configuration we generally obtain $n_\rho$ ways to construct the line $l$; moreover, some constructions might give a smaller line, hence $n_s$ will have to be updated as we proceed. Let $\tilde{n}_{s}$ record the initial value of $n_s$.
First consider the case $k=\tilde{n}_{s}$. We will construct a candidate line for the $l$-th row of ${{{P\!M}^\text{max}}}$; this will be our reference line against which the other cases will be compared. If a greater candidate is found, all the preceding computations will have to be deleted and redone with the new candidate. If a given case leads to a smaller line than the reference, it will have to be deleted.
Initially set the local number of symmetries, $n_\rho$, to zero and initialise the permutation $\tilde{\rho}_{n_\rho}=\rho_k$. We start with the line $\tilde{r}_{n_\rho}(l)$ by finding the maximal element of the first symmetry block. Suppose that $$\max\left\{{P\!M}_{\tilde{r}_{n_\rho}(l)\tilde{c}_{n_\rho}(i)}\mid 1\leq i\leq S(1)\right\} = {P\!M}_{\tilde{r}_{n_\rho}(l)\tilde{c}_{n_\rho}(m)}.$$ Then we update $\tilde{c}_{n_\rho}$ to $\tilde{c}_{n_\rho}\,(1\, m)$. This maximal value is saved in the reference line, which we denote $l_r$ (if it were already constructed, $k<\tilde{n}_{s}$, we move straight to the tests below). We increment $n_\rho$ by one to reflect this new candidate, initialise $\tilde{\rho}_{n_\rho}=\rho_{k}$, and proceed to consider the maximal entries in the first symmetry block for other lines $r_{k}(s)$, $s=l+1,\ldots,n_f$.
Inductively, suppose we have considered $s-1$ lines where $n_\rho$ of them have a maximal element in the first symmetry block equal to the one of the reference line $l_r(1)$, and the others have smaller values. We also have $\tilde{r}_{n_\rho}=r_{k}$ from the initialisation. Consider the line $\tilde{r}_{n_\rho}(s)$ and find its maximal element in $1,\ldots,S(1)$ as above, updating $\tilde{c}_{n_\rho}$. Now if ${P\!M}_{\tilde{r}_{n_\rho}(s)\tilde{c}_{n_\rho}(1)}<l_r(1)$ then proceed to the case $s+1$, if possible. Otherwise $\tilde{r}_{n_\rho}\mapsto\tilde{r}_{n_\rho}\,(l\, s)$ and there are two possibilities: if ${P\!M}_{\tilde{r}_{n_\rho}(s)\tilde{c}_{n_\rho}(1)}=l_r(1)$ then increase the number of symmetries $n_\rho\mapsto n_\rho+1$ and move to $s+1$, after initialising the new permutation $\tilde{\rho}_{n_\rho}=\rho_{k}$; if ${P\!M}_{\tilde{r}_{n_\rho}(s)\tilde{c}_{n_\rho}(1)}>l_r(1)$ then redefine the first element of the reference line $l_r(1):={P\!M}_{\tilde{r}_{n_\rho}(s)\tilde{c}_{n_\rho}(1)}$, update the first permutation $\tilde{\rho}_0=\tilde{\rho}_{n_\rho}$, and reset $n_\rho=1$ ready for the next permutation $\tilde{\rho}_{n_\rho}=\rho_k$.
$c\gets 1$ $n_\rho\gets 0$ $ccf\gets cf$ $(\tilde{r}_{n_\rho},\tilde{c}_{n_\rho})\gets (r_k,c_k)$ $\tilde{c}_{n_\rho}\gets\tilde{c}_{n_\rho}(c\, j)$ $l_r(1)\gets{P\!M}_{\tilde{r}_{n_\rho}(s)\tilde{c}_{n_\rho}(1)}$ $\tilde{r}_{n_\rho}\gets\tilde{r}_{n_\rho}(l\, s)$ $n_\rho\gets n_\rho+1$ $ccf\gets 1$ $(\tilde{r}_{n_\rho},\tilde{c}_{n_\rho})\gets (r_k,c_k)$ $d\gets{P\!M}_{\tilde{r}_{n_\rho}(s)\tilde{c}_{n_\rho}(1)}-l_r(1)$ [**continue**]{} $\tilde{r}_{n_\rho}\gets\tilde{r}_{n_\rho}(l\, s)$ $n_\rho\gets n_\rho+1$ $(\tilde{r}_{n_\rho},\tilde{c}_{n_\rho})\gets (r_k,c_k)$ $l_r(1)\gets{P\!M}_{\tilde{r}_{n_\rho}(s)\tilde{c}_{n_\rho}(1)}$ $cf\gets 0$ $(\tilde{r}_1,\tilde{c}_1)\gets (\tilde{r}_{n_\rho},\tilde{c}_{n_\rho})$ $n_\rho\gets 1$ $(\tilde{r}_{n_\rho},\tilde{c}_{n_\rho})\gets (r_k,c_k)$ $n_s\gets k$ $\tilde{r}_{n_\rho}\gets \tilde{r}_{n_\rho}(l\, s)$
Note that the initial value of the *comparison flag* $cf$ is $0$. This indicates that the reference line has not been initialised; it is also reset to zero when a greater candidate is found. We will see later how $cf$ is updated.
We need to construct other elements of $l_r$. Inductively, suppose we are constructing the entry $i$ of $l_r$ and we have $n_\rho$ symmetries with corresponding permutations $\tilde{\rho}_{j}$, $j=0,\ldots,n_\rho-1$. If $n_\rho=0$ we move to the next configuration $k-1$ after having updated the symmetries accordingly, i.e. we do not save the current configuration. Otherwise, start with the last $j=n_\rho-1$. Determine where the corresponding block of symmetry ends for $i$ by looking at the maximum of $S(i)$ and $S^{2}(i)$, which we will call $h$. Then compute $$\max\left\{{P\!M}_{\tilde{r}_{j}(l)\tilde{c}_{j}(\lambda)}\mid i\leq\lambda\leq h\right\} ={P\!M}_{\tilde{r}_{j}(l)\tilde{c}_{j}(m)}$$ and update $\tilde{c}_{j}\mapsto\tilde{c}_{j}\,(i\, m)$ . This value is saved in the reference line $l_r(i)$. Then we consider (inductively) any cases of symmetry with $j<n_\rho-1$ and compute the $i$-th entry in the same manner as above: if ${P\!M}_{\tilde{r}_{j}(l)\tilde{c}_{j}(i)}=l_r(i)$ then continue with the next $j$; if ${P\!M}_{\tilde{r}_{j}(l)\tilde{c}_{j}(i)}<l_r(i)$ then the current case is removed and we update $n_\rho\mapsto n_\rho-1$; finally if ${P\!M}_{\tilde{r}_{j}(l)\tilde{c}_{j}(i)}>l_r(i)$ then all cases previously considered are irrelevant, so we let $n_\rho=j+1$ and the reference line is updated $l_r(i)={P\!M}_{\tilde{r}_{j}(l)\tilde{c}_{j}(i)}$.
$h\gets S(c)$ $ccf\gets cf$ $h\gets S(h)$ $s\gets n_\rho$ $s\gets s-1$ $\tilde{c}_s\gets \tilde{c}_s(cj)$ $l_r(c)\gets{P\!M}_{\tilde{r}_s(l)\tilde{c}_s(c)}$ $ccf\gets 1$ $d\gets{P\!M}_{\tilde{r}_s(l)\tilde{c}_s(c)}-l_r(c)$ $n_\rho\gets n_\rho-1$ $(\tilde{r}_s,\tilde{c}_s)\gets (\tilde{r}_{n_\rho},\tilde{c}_{n_\rho})$ $l_r(c)\gets{P\!M}_{\tilde{r}_s(l)\tilde{c}_s(c)}$ $cf\gets 0$ $n_\rho\gets s+1$ $n_s\gets k$
Updating the set of permutations {#asubsec:update_perms}
--------------------------------
The last step in the construction of the line $l$ is to organise the new symmetries for a given case $k$.
Line: 333 (`Aux_vNF_Line`)\
Input: The permutations $\{\rho_i\}_{i=1}^{n_s}$ and the newly computed $\{\tilde{\rho}_i\}_{i=0}^{n_\rho-1}$.\
Output: The updated set $\{\rho_i\}_{i=1}^{n_s}$.
Recall that $\tilde{n}_{s}$ denotes the number of symmetries we had before performing the computations for the line $l$ of ${{{P\!M}^\text{max}}}$, and $n_s\leq\tilde{n}_s$ represents the updated number of symmetries. Our current construction of the line $l$ may well introduce new symmetries, so-called *local symmetries*, of which there are $n_\rho$. We can have $n_\rho=0$, in which case all the configurations in the case $k$ lead to a smaller candidate for $l$. When $n_\rho>0$ the local symmetries are represented by the set $\{\tilde{\rho}_i\}_{i=0}^{n_\rho-1}$ of new permutations.
We now update the array of all permutations. If $n_s>k$ we set $\rho_k=\rho_{n_s}$; we want the set of permutations $\{\rho_i\}_{i=1}^{n_s}$ to be updated so that the only cases which need to be considered are those with index $i<k$. We append the $n_\rho$ new permutations at the end, at the indices $i\geq n_s$, so that $n_s\rightarrow n_s+n_\rho-1$. If $n_\rho=0$ then nothing is appended and $n_s$ decreases by one as required. Finally, we update the comparison flag $cf$ to reflect the current number of symmetries.
$n_s\gets n_s-1$ $(r_k,c_k)\gets (r_{n_s+1},c_{n_s+1})$ $cf\gets n_s+n_\rho$ $(r_{n_s+1},c_{n_s+1})\gets (\tilde{r}_s,\tilde{c}_s)$ $n_s\gets n_s+1$
Computing the restricted automorphism group, step II {#asubsec:aut_step_2}
----------------------------------------------------
Once a new row of ${{{P\!M}^\text{max}}}$ has been computed we need to update $S$ to reflect the symmetries of this row. This is done by restricting the blocks previously delimited by $S$ to reflect any additional constraints imposed by the row.
Continuing Example \[aex:first\_S\], suppose that the second row of the candidate ${{{P\!M}^\text{max}}}$ has been computed, and that the two rows are given by $$\left(\begin{array}{ccccccccccccc}
5&5&5&5&4&3&3&2&2&2&1&0&0\\
4&3&3&3&3&2&2&2&1&0&0&0&0
\end{array}\right).$$ The corresponding array $S$ is $$\left(\begin{array}{c|ccc|c|cc|c|c|c|c|cc}
1&4&2&2&5&7&6&8&9&10&11&13&12
\end{array}\right).$$
Line: 376 (`Aux_vNF_Line`)\
Input: The newly computed upper block of the maximal pairing matrix.\
Output: The updated array $S$ capturing the automorphism group of the matrix.
$c\gets 1$ $s\gets S(c)+1$ $S(c)\gets c$ $c\gets c+1$ $S(c)\gets S(c-1)$ $S^2(c)\gets S^2(c)+1$ $S(c)\gets c$ $c\gets c+1$
Computing the normal form of the polytope {#asubsec:normal_form}
-----------------------------------------
Inductively, we have obtained $n_s$ permutations $\left\{\rho_i=(r_i,c_i)\right\}_{i=1}^{n_s}$ such that $\rho_i{P\!M}={{{P\!M}^\text{max}}}$. We are really only interested in the permutations of the columns, since they correspond to permutations of the vertices of $P$. The [[<span style="font-variant:small-caps;">Palp</span>]{}]{} algorithm computes a new order for the columns of ${{{P\!M}^\text{max}}}$ based on the following: the maximum coefficient in the column; the sum of the coefficients in the column; and the relative position of the column in ${{{P\!M}^\text{max}}}$. Let $\rho_c\in S_{n_v}$ denote this column permutation.
Line: 216 (`New_pNF_Order`)\
Input: The maximal pairing matrix ${{{P\!M}^\text{max}}}$.\
Output: The column permutation $\rho_c\in S_{n_v}$.
${{{P\!M}^\text{max}}}\gets p_1{P\!M}$ $p_c\gets 1_{S_{n_v}}$ ${M^\text{max}}\gets\left\{\max_{1\leq i\leq n_f}\left\{{{P\!M}_{ij}^\text{max}}\right\}\mid 1\leq j\leq n_v\right\}$ ${S^\text{max}}\gets\left\{\sum_{1\leq i\leq n_f}{{P\!M}_{ij}^\text{max}}\mid 1\leq j\leq n_v\right\}$ $k\gets i$ $k\gets j$ ${M^\text{max}}\gets\mathrm{SwapRow}({M^\text{max}},i,k)$ ${S^\text{max}}\gets\mathrm{SwapRow}({S^\text{max}},i,k)$ $p_c\gets p_c(i\, k)$
Given the column permutations $\rho_c$ and $c_i$, $i=1,\ldots,n_s$, we obtain a permutation of the vertices of $P$, and hence of the columns of the vertex matrix $V$. We let $V_i$ denote this reordered vertex matrix. The remaining freedom – the action of ${\mathrm{GL}}_n({\mathbb{Z}})$ corresponding to the choice of lattice basis – is removed by computing the Hermite normal form $H(V_i)$.
Line: 134 (`GLZ_Make_Trian_NF`)\
Input: A matrix with integer coefficients.\
Output: The Hermite normal form of the matrix.
The [[<span style="font-variant:small-caps;">Palp</span>]{}]{} normal form is simply the minimum amongst the $H(V_i)$.
Line: 399 (`Aux_Make_Triang`)\
Input: The column permutations $\rho_c$ and $\{c_i\}_{i=1}^{n_s}$, and the vertex matrix $V$.\
Output: The normal form.
Calculating the maximum pairing matrix {#apx:matrix_isomorphism}
======================================
Let $M$ be an $n_r\times n_c$ matrix. Recall that we define an action of $\sigma=(\sigma_r,\sigma_c)\in S_{n_r}\times S_{n_c}$ on the rows and columns of $M$ via $(\sigma M)_{ij}:=M_{\sigma_r(i),\sigma_c(j)}$, and that we call two matrices $M$ and $M'$ isomorphic if there exists some permutation $\sigma\in S_{n_r}\times S_{n_c}$ such that $\sigma(M)=M'$. We begin by briefly describing one approach to determining when two matrices are isomorphic.
Given a matrix $M$, we associate a bipartite graph $G(M)$ with $n_r+n_c$ vertices, where the vertices $v_i$, $v_{n_r+j}$ are connected by an edge $E_{ij}$ for all $1\leq i\leq n_r$, $1\leq j\leq n_c$. Each edge $E_{ij}$ is labelled with the corresponding value $M_{ij}$. The vertices $v_i$, $1\leq i\leq n_r$, are labelled with one colour, whilst the vertices $v_{n_r+j}$, $1\leq j\leq n_c$, are labelled with a second colour. This distinguishes between vertices representing rows of $M$ and vertices representing columns of $M$. Clearly two matrices $M$ and $M'$ are isomorphic if and only if the graphs $G(M)$ and $G(M')$ are isomorphic. We note also that the automorphism group ${\mathrm{Aut}\mleft({M}\mright)}\leq S_{n_r}\times S_{n_c}$ is given by the automorphism group of $G(M)$.
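For small matrices the isomorphism test can be prototyped by brute force (our own illustration; practical implementations build the coloured bipartite graph above and use a tool such as [Nauty]{}):

```python
from itertools import permutations

def are_isomorphic(M, N):
    """Test whether N = sigma(M) for some sigma = (sigma_r, sigma_c),
    i.e. N[i][j] == M[sigma_r(i)][sigma_c(j)].  Brute force over all
    row/column permutation pairs -- for illustration only."""
    if len(M) != len(N) or len(M[0]) != len(N[0]):
        return False
    rows, cols = range(len(M)), range(len(M[0]))
    return any(any(all(M[r[i]][c[j]] == N[i][j] for i in rows for j in cols)
                   for c in permutations(cols))
               for r in permutations(rows))

M = [[0, 0, 3], [3, 0, 0], [0, 3, 0]]
assert are_isomorphic(M, [[3, 0, 0], [0, 3, 0], [0, 0, 3]])
assert not are_isomorphic(M, [[3, 0, 0], [0, 3, 0], [0, 3, 0]])
```

The second test fails immediately because the multiset of entries (the edge labels of $G(M)$) is not preserved.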
We now describe a recursive algorithm to compute ${{{P\!M}^\text{max}}}$ from ${P\!M}$. For readability, we shall split this algorithm into three parts, with a brief discussion preceding each part.
Input: A matrix ${P\!M}$.\
Output: The maximal matrix ${{{P\!M}^\text{max}}}$.
Throughout we set $n_r$ and $n_c$ equal to, respectively, the number of rows and the number of columns of the input matrix ${P\!M}$. A vector $s$ of length $n_c$ is used to represent the permitted permutations of the columns of ${P\!M}$. Initially $s$ is defined as $$s=(a,\ldots,a)\in{\mathbb{Z}}^{n_c},$$ where $a:=1+\max{P\!M}_{ij}$ is larger than any entry of the matrix ${P\!M}$. At each step of the recursion, the value of $n_c$ remains unchanged, but the value of $n_r$ will decrease by one as a row of ${P\!M}$ is removed from consideration. The vector $s$ will be modified to reflect the symmetries of the previous steps; two coefficients $s_j$ and $s_k$ are equal if and only if the columns $j$ and $k$ can be exchanged without affecting the computations so far. By construction $s$ will always satisfy:
1. either $s_j=s_{j+1}$ or $s_j=s_{j+1} + 1$, for each $1\leq j<n_c$;
2. $s_{n_c}= a$.
The first stage is to calculate the maximum possible row ${R^\text{max}}$ of ${P\!M}$, where each row is sorted in decreasing order. Once done, we update the vector $s$ to reflect the possible column permutations that will leave ${R^\text{max}}$ unchanged.
$\tilde{R}_i\gets\mathrm{Sort}_\geq\{(s_j,{P\!M}_{ij})\mid 1\leq j\leq n_c\}$ $R_i\gets (\tilde{R}_{ij2}\mid 1\leq j\leq n_c)$ ${R^\text{max}}\gets \max\left\{R_i\mid 1\leq i\leq n_r\right\}$ $s'\gets s$ $s'_k\gets s'_k+1$
Next we collect together all non-isomorphic ways of writing ${P\!M}$ with ${R^\text{max}}$ as the first row. These possibilities are recorded in the set $\mathcal{M}$.
$\mathcal{M}\gets\{\}$ $M\gets\mathrm{SwapRow}({P\!M},1,i)$ $T\gets\mathrm{Sort}_\geq\{(s_j,M_{1j},j)\mid 1\leq j\leq n_c\}$ $\tau\gets\text{permutation in $S_{n_c}$ sending $j$ to $T_{j3}$}$ $M\gets\tau(M)$ $\tilde{M}\gets M$ $\tilde{M}_1\gets s'$ $\mathcal{M}\gets \mathcal{M}\cup\{M\}$
When all possible symmetries of the columns have been exhausted, the vector $s'$ will be equal to the sequence $$(a+n_c-1,a+n_c-2,\ldots,a).$$ If this is the case, then ${{{P\!M}^\text{max}}}$ is the maximum matrix in $\mathcal{M}$, once the rows have been placed in decreasing order. If there remain symmetries to explore, then we recurse on each of the matrices in $\mathcal{M}$ using the new permutation vector $s'$; ${{{P\!M}^\text{max}}}$ is given by the largest resulting matrix.
${{{P\!M}^\text{max}}}\gets{R^\text{max}}$ ${{{P\!M}^\text{max}}}\gets\max\left\{\mathrm{SortRows}_\geq(M)\mid M\in\mathcal{M}\right\}$ $\mathcal{M'}\gets\{\}$ $M'\gets\mathrm{RemoveRow}(M,1)$ $M'\gets(\text{recurse with $PM\gets M'$ and $s\gets s'$})$ $\mathcal{M'}\gets\mathcal{M'}\cup\{M'\}$ ${{{P\!M}^\text{max}}}\gets\mathrm{VerticalJoin}({R^\text{max}},\max\mathcal{M'})$
Gavin Brown, Jaros[ł]{}aw Buczy[ń]{}ski, and Alexander M. Kasprzyk, *Convex polytopes and polyhedra*, Handbook of Magma Functions, Edition 2.16, November 2009, available online at [`http://magma.maths.usyd.edu.au/`](http://magma.maths.usyd.edu.au/magma/handbook/convex_polytopes_and_polyhedra).
Wieb Bosma, John Cannon, and Catherine Playoust, *The [M]{}agma algebra system. [I]{}. [T]{}he user language*, J. Symbolic Comput. **24** (1997), no. 3-4, 235–265, Computational algebra and number theory (London, 1993).
David Bremner, Mathieu Dutour Sikiri[ć]{}, and Achill Sch[ü]{}rmann, *Polyhedral representation conversion up to symmetries*, Polyhedral computation, CRM Proc. Lecture Notes, vol. 48, Amer. Math. Soc., Providence, RI, 2009, pp. 45–71.
J[ü]{}rgen Bokowski, G[ü]{}nter Ewald, and Peter Kleinschmidt, *On combinatorial and affine automorphisms of polytopes*, Israel J. Math. **47** (1984), no. 2-3, 123–130.
Gavin Brown and Alexander M Kasprzyk, *The [G]{}raded [R]{}ing [D]{}atabase*, online, access via [`http://grdb.lboro.ac.uk/`](http://grdb.lboro.ac.uk/).
David Bremner, Mathieu Dutour Sikiri[ć]{}, Dmitrii V. Pasechnik, Thomas Rehn, and Achill Sch[ü]{}rmann, *Computing symmetry groups of polyhedra*, [`arXiv:1210.0206 [math.CO]`](http://arxiv.org/abs/1210.0206).
Maximilian Kreuzer, *[PALP]{}++ project proposal*, unfinished draft, available online at [`http://hep.itp.tuwien.ac.at/~www/palp++.pdf`](http://hep.itp.tuwien.ac.at/~www/palp++.pdf), September 2010.
Maximilian Kreuzer and Harald Skarke, *Classification of reflexive polyhedra in three dimensions*, Adv. Theor. Math. Phys. **2** (1998), no. 4, 853–871.
Maximilian Kreuzer and Harald Skarke, *Complete classification of reflexive polyhedra in four dimensions*, Adv. Theor. Math. Phys. **4** (2000), no. 6, 1209–1230.
Volker Kaibel and Alexander Schwartz, *On the complexity of polytope isomorphism problems*, Graphs Combin. **19** (2003), no. 2, 215–230.
Maximilian Kreuzer and Harald Skarke, *[PALP]{}, a package for analyzing lattice polytopes with applications to toric geometry*, Computer Phys. Comm. **157** (2004), 87–106.
Brendan D. McKay, *The [Nauty]{} graph automorphism software*, online, access via [`http://cs.anu.edu.au/~bdm/nauty/`](http://cs.anu.edu.au/~bdm/nauty/).
Brendan D. McKay, *Practical graph isomorphism*, Proceedings of the [T]{}enth [M]{}anitoba [C]{}onference on [N]{}umerical [M]{}athematics and [C]{}omputing, [V]{}ol. [I]{} ([W]{}innipeg, [M]{}an., 1980), vol. 30, 1981, pp. 45–87.
C. Mears, M. Garcia de la Banda, and M. Wallace, *On implementing symmetry detection*, Constraints **14** (2009), no. 4, 443–477.
Mikkel [Ø]{}bro, *An algorithm for the classification of smooth [F]{}ano polytopes*, [`arXiv:0704.0049v1 [math.CO]`](http://arxiv.org/abs/0704.0049), classifications available from [`http://grdb.lboro.ac.uk/`](http://grdb.lboro.ac.uk/).
Jean-Francois Puget, *Automatic detection of variable and value symmetries*, Principles and Practice of Constraint Programming (Peter van Beek, ed.), vol. 3709, Springer, 2005, pp. 475–489.
W. A. Stein et al., *[S]{}age [M]{}athematics [S]{}oftware*, available online at [`http://www.sagemath.org/`](http://www.sagemath.org/).
[^1]: Users of [[[Magma]{}]{}]{} can freely view and edit the package code. The relevant files are contained in the subdirectory `package/Geometry/ToricGeom/polyhedron/`.
[^2]: Gonshaw’s implementation is available from [`http://trac.sagemath.org/sage_trac/ticket/13525`](http://trac.sagemath.org/sage_trac/ticket/13525).
[^3]: Smooth Fano polytope number $13$ in the Graded Ring Database [@GRDb].
[^4]: Smooth Fano polytope number $1930$ in the Graded Ring Database [@GRDb].
[^5]: Smooth Fano polytope number $1854$ in the Graded Ring Database [@GRDb].
[^6]: [[<span style="font-variant:small-caps;">Palp</span>]{}]{} $1.1$, updated November $2$, $2006$. [`http://hep.itp.tuwien.ac.at/~kreuzer/CY/palp/palp-1.1.tar.gz`](http://hep.itp.tuwien.ac.at/~kreuzer/CY/palp/palp-1.1.tar.gz)
---
abstract: 'The IceCube Observatory is a cubic-kilometer neutrino telescope under construction at the South Pole, planned for completion in early 2011. When completed it will consist of 5,160 Digital Optical Modules (DOMs), which detect Cherenkov radiation from the charged particles produced in neutrino interactions and in cosmic-ray-initiated atmospheric showers. IceCube construction is currently 90% complete. A selection of the most recent scientific results is shown here. The measurement of the anisotropy in the arrival direction of galactic cosmic rays is also presented and discussed.'
author:
- |
P. DESIATI$^*$ for the IceCube Collaboration$^\dag$\
IceCube Research Center, University of Wisconsin,\
Madison, WI 53703, U.S.A.\
$^*$E-mail: [desiati@icecube.wisc.edu]{}\
$^\dag$ [http://icecube.wisc.edu]{}
title: Neutrino Astrophysics and Galactic Cosmic Ray Anisotropy in IceCube
---
Introduction {#sec:intro}
============
The IceCube Observatory is a km$^3$ neutrino telescope designed to detect astrophysical neutrinos with energy above 100 GeV. IceCube observes the Cherenkov radiation from charged particles produced in neutrino interactions.
The quest for understanding the mechanisms that shape the high energy Universe is taking many paths. Gamma ray astronomy is providing a series of prolific experimental observations, such as the detection of TeV $\gamma$ rays from point-like and extended sources, along with their correlation to observations at other wavelengths. These observations hold clues about the origin of cosmic rays and the possible connection to shock acceleration in Supernova Remnants (SNR), Active Galactic Nuclei (AGN) or Gamma Ray Bursts (GRB). Supernov[æ]{} are believed to be the sources of galactic cosmic rays; nevertheless, the $\gamma$ ray observations from SNRs still do not provide definite and direct evidence of proton acceleration. The competing inverse Compton scattering of directly accelerated electrons may significantly contribute to the observed $\gamma$ ray fluxes, provided that the magnetic field in the acceleration region does not exceed 10 $\mu$G [@abdo].
Ultra High Energy Cosmic Ray (UHECR) astronomy has the potential to hold the key to a breakthrough in astroparticle physics. The identification of sources of cosmic rays would provide a unique opportunity to probe the hadronic acceleration models currently hypothesized. On the other hand, cosmic ray astronomy is only possible at energies in excess of 10$^{19}$ eV, where the cosmic rays are believed to be extragalactic and to point back to their sources. TeV $\gamma$ rays from those sources are likely absorbed during their propagation between the source and the observer: at $\sim$10 TeV, $\gamma$ rays have a propagation length of about 100 Mpc, while at $\sim$100 GeV they can propagate much deeper through the Universe.
If the extra-galactic sources of UHECR are the same as the sources of $\gamma$ rays, then hadronic acceleration is the underlying mechanism and high energy neutrinos are produced by charged pion decays as well. Neutrinos would provide unambiguous evidence for hadronic acceleration in both galactic and extragalactic sources, and they are the ideal cosmic messengers, since they propagate through the Universe undeflected and with practically no absorption. But the same properties that make neutrinos ideal messengers also make them difficult to detect.
The discovery of the anisotropy in the arrival direction of galactic cosmic rays has also attracted particular attention recently. The origin of the galactic cosmic ray anisotropy is still unknown. The structure of the local interstellar magnetic field within 1 pc is likely to play an important role in shaping the large angular scale features of the observed anisotropy. Nevertheless, it is possible to argue that the anisotropy might originate from a combination of astrophysical phenomena, such as the distribution of nearby recent supernova explosions [@erlykin]. The observation of the galactic cosmic ray anisotropy at different energies and angular scales has, therefore, the potential to reveal the connection between cosmic rays and shock acceleration in supernovae.
At the same time, there seems to be clear observational evidence for the existence of dark matter in the Universe, even if its nature remains unknown. A variety of models predict the existence of a class of non-relativistic particles called Weakly Interacting Massive Particles (WIMPs). These particles could be gravitationally condensed within dense regions of matter (such as the Sun or the galactic halo) and could provide a visible source for indirect detection via the neutrinos generated in their annihilations. Neutrino telescopes are powerful tools to indirectly test the spin-dependent WIMP-nucleon scattering cross section, provided models for the matter distribution and the WIMP annihilation rate are taken into account.
In §\[sec:ic\] the IceCube Observatory apparatus, its functionality and its calibration are described. Selected physics analysis results are summarized in §\[sec:phys\]: the determination of the atmospheric muon neutrino energy spectrum (§\[ssec:atm\]), the search for astrophysical neutrinos from diffuse and point sources and from Gamma Ray Bursts (§\[ssec:astro\]), the indirect search for dark matter (§\[ssec:dm\]), and the anisotropy in the cosmic ray arrival direction (§\[ssec:anyso\]).
The IceCube Observatory {#sec:ic}
=======================
The IceCube Observatory (see Fig. \[fig:icecube\]) currently consists of 4,740 DOMs deployed on 79 vertical strings (60 DOMs per string) between 1,450 m and 2,450 m depth below the Geographic South Pole. At the beginning of 2011 IceCube will be completed with 86 strings and 5,160 DOMs. The surface array IceTop, with 81 stations, each consisting of two tanks of frozen clean water instrumented with two DOMs each, will provide the measurement of the spectrum and mass composition of cosmic rays at the knee and up to about 10$^{18}$ eV. The Deep Core sub-array, consisting of 6 densely instrumented strings located at the bottom-center of IceCube, is capable of pushing the neutrino energy threshold down to about 10 GeV. The surrounding IceCube instrumented volume can be used to veto the background of cosmic ray induced through-going muon bundles and to enhance the detection of down-going neutrinos within the Deep Core volume. The veto rejection power can reach 10$^5$.
AMANDA was the first and largest neutrino telescope before the construction of IceCube. Using analog technology, it significantly contributed to the advance of neutrino astrophysics searches; it was decommissioned in May 2009.
The basic detection component of IceCube is the DOM: it hosts a 10-inch Hamamatsu photomultiplier tube (PMT) and its own data acquisition circuitry enclosed in a pressure-resistant glass sphere, making it an autonomous data collection unit. The DOMs detect, digitize and timestamp the signals from optical Cherenkov radiation photons. Their main board data acquisition (DAQ) is connected to the central DAQ in the IceCube Laboratory at the surface, where the global trigger is determined [@dom].
The detector calibration is one of the major efforts aimed at characterizing the detector response and at reducing systematic uncertainties at the physics analysis level. Each PMT is tested in order to characterize its response and to measure the voltage yielding a specific gain [@pmt]. In the operating neutrino telescope the gain is about 10$^7$ and the corresponding dark noise rate is about 500 Hz. Time calibration is maintained throughout the array by regular transmission to the DOMs of precisely timed analog signals, synchronized to a central GPS-disciplined clock. This procedure has a resolution of less than 2 nsec. The LEDs on the flasher boards installed in the DOMs are used to measure the photo-electron (p.e.) transit time in the PMT for the reception of large light pulses between neighboring DOMs. This delay time is given by the light travel time from the emitter to the receiver, by light scattering in the ice and by the electronics signal processing. The RMS of this delay is also less than 2 nsec. Waveform sampling amplitude and time binning calibrations are periodically performed in each DOM and used to extract the number of detected p.e. with an uncertainty of less than 10%. Higher level calibrations are meant to correlate the number of detected p.e. with the energy of the physics events that trigger the detector. Instrumented devices, such as the flasher boards, are used to illuminate the detector with 400 nm wavelength photons (the wavelength yielding the highest detection sensitivity), simulating a real electron-neutrino interaction, or cascade, inside the detector. A complete Monte Carlo simulation chain is used to relate the known number of injected photons to the energy scale of the artificial cascade. The energy resolution depends on the event topology (track-like versus cascade-like) and on its containment inside the instrumented volume.
Monte Carlo simulations provide the necessary fluctuations implied by the topology and containment of the physics events. The ice optical properties are the most fundamental calibration input: they determine how photons propagate through the ice and, therefore, how to relate the number of detected p.e. to the energy of the physics events. Due to the antarctic glaciological history, the optical properties depend on depth, and they have been measured in the past using AMANDA in-situ calibration lasers [@amanda] in the depth range between 1,400 m and 2,000 m. The optical properties down to 2,450 m, the depth of the IceCube instrumentation, are extrapolated from ice core observations at other locations of the antarctic continent, and a new campaign of extended in-situ measurements is currently being carried out.
Physics Results {#sec:phys}
===============
If the DOMs that detect Cherenkov photons satisfy specific trigger conditions, an event is formed and recorded by the surface DAQ. An on-line data filtering at the South Pole reduces the event volume to about 10% of the trigger rate, based on a series of filter algorithms aimed at selecting events by directionality, topology and energy. The filter allows us to transfer data via satellite to the northern hemisphere for prompt physics analyses.
Atmospheric neutrinos {#ssec:atm}
---------------------
99.999% of the events that trigger IceCube are muons produced by the impact of primary cosmic rays on the atmosphere. Only a small fraction of the detected events ($\sim$10$^{-5}$) are muon events produced by atmospheric neutrinos. In order to reject the downward-going muon bundle background, only upward-going events are generally selected, assuming specific event selections provide well reconstructed events. In the 40-string IceCube configuration (IceCube-40), about 30-40% of the upward-going events survive the selection, with a background contamination of less than 1% [@warren].
The energy resolution for these atmospheric $\nu_{\mu}$ induced events is of the order of 0.3 in the logarithm of the neutrino energy, and a regularized unfolding technique is used to determine the energy spectrum. Fig. \[fig:atmo\] shows the preliminary unfolded energy spectrum of the 17,682 atmospheric neutrinos detected by IceCube-40 between zenith angles of 97$^{\circ}$ and 180$^{\circ}$. The figure also shows measurements performed by other experiments, including AMANDA [@john; @kirsten]. IceCube has detected the highest energy atmospheric neutrinos (about 250 TeV, where a significant fraction of neutrinos is expected to arise from the decay of heavy mesons with charm quarks). IceCube has allowed us to extend the globally measured spectrum over 6 orders of magnitude in energy. For the first time, the precision of this measurement provides a powerful tool to test the high energy hadronic interaction models that govern our present knowledge of cosmic ray induced extensive air showers.
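The idea of "regularized unfolding" can be sketched in a few lines. The following is a toy illustration, not the IceCube analysis chain: the response matrix, binning, toy spectrum and regularization strength are all invented for the example; only the $\sim$0.3 smearing in log$_{10}$(E) is taken from the text.

```python
import numpy as np

# Toy response matrix: true log10(E) bin j smears into reconstructed bin i
# with a Gaussian resolution of ~0.3 in log10(E).
n_bins = 20
log_e = np.linspace(2.0, 8.0, n_bins)                  # log10(E/GeV) bin centers
A = np.exp(-0.5 * ((log_e[:, None] - log_e[None, :]) / 0.3) ** 2)
A /= A.sum(axis=0)                                     # each column sums to one

true_spec = 10.0 ** (-0.5 * (log_e - 2.0))             # toy falling spectrum
measured = A @ true_spec                               # forward-folded, noiseless

# Tikhonov-regularized unfolding: minimize |A x - b|^2 + lam |L x|^2,
# where L is a second-difference (curvature) operator.  lam is tiny here
# because the toy data carry no statistical noise.
lam = 1e-8
L = np.diff(np.eye(n_bins), n=2, axis=0)
x = np.linalg.solve(A.T @ A + lam * L.T @ L, A.T @ measured)
```

With real, statistically fluctuating data the regularization term is essential: it damps the noise amplification that a direct inversion of the smearing matrix would produce.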
Search for astrophysical neutrinos {#ssec:astro}
----------------------------------
Atmospheric neutrinos represent an irreducible background for the search for high energy astrophysical neutrinos. If hadronic acceleration is the underlying source of the high energy cosmic rays and of the $\gamma$ ray observations, we expect that unresolved sources of cosmic rays over cosmological times have also produced enough neutrinos to be detected as a diffuse flux. Since shock acceleration is expected to provide a $\sim$E$^{-2}$ energy spectrum, harder than the $\sim$E$^{-3.7}$ spectrum of the atmospheric neutrinos, the diffuse flux is expected to dominate at high energy.
Fig. \[fig:diff\] shows the AMANDA experimental upper limit and the preliminary IceCube-40 sensitivity for an E$^{-2}$ diffuse muon neutrino spectrum ($\nu_{\mu}+\bar{\nu}_{\mu}$). One year of IceCube-40 is about 5 times more sensitive than 3 years of AMANDA, and its sensitivity is below the Waxman-Bahcall neutrino bound. This means that IceCube is potentially approaching the discovery of the origin of cosmic rays.
In the Ultra High Energy range (i.e. above $\sim$10$^6$ GeV, or UHE), IceCube is placing upper limits that are still more than an order of magnitude above the predicted flux of neutrinos from UHECR interactions with the microwave photons (the GZK neutrinos [@gzk]). The complete IceCube Observatory might be able to reach the discovery level within the next 5-8 years.
If the observed $\gamma$ rays from galactic and extra-galactic point and extended sources are from neutral pion decays in hadronic acceleration sites or from cosmic ray interactions with molecular clouds, the accompanying charged pions could produce enough neutrinos to be detected by a km$^3$ neutrino telescope.
Fig. \[fig:point\] shows, on the left, the sensitivity (90% CL) of IceCube for the full-sky search for steady point sources of E$^{-2}$ muon neutrinos as a function of declination. The extension of the point source search to the southern hemisphere is made possible by rejecting background events by five orders of magnitude with a high energy event selection. The southern hemisphere is still dominated by high energy muon bundles, and the high energy selection yields a poorer neutrino detection sensitivity there. Nevertheless, this opens IceCube to a full-sky coverage and complements the coverage of the neutrino telescopes in the Mediterranean. On the right of Fig. \[fig:point\] is the sky-map of statistical significance from the full-sky search of IceCube-40. No significant localized excess is observed. The sensitivity is to be interpreted as the median upper limit we expect to observe from individual sources across the sky. If we test specific sources (see left panel of Fig. \[fig:pred\]), we see that the full IceCube (about twice as sensitive as IceCube-40) will be able to discover neutrinos from individual point sources in about 3-5 years, depending on the location in the sky.
The search for neutrinos from transient galactic and extra-galactic sources is also being pursued. In particular, the right panel in Fig. \[fig:pred\] shows the upper limits (90% CL) for the model-dependent search for prompt neutrinos from GRBs in the northern hemisphere with AMANDA [@grbama], IceCube-22 [@grb22] and IceCube-40. For each detector configuration, the list of GRBs detected during the corresponding physics runs was collected and the predicted neutrino flux calculated based on the $\gamma$ ray spectrum [@guetta]. The corresponding average neutrino spectrum was used to search for neutrinos detected within the so-called $T_{90}$ time window (i.e. the time in which 5% to 95% of the fluence is recorded). The right panel of Fig. \[fig:pred\] also shows the Waxman-Bahcall (WB) predicted average spectrum from GRBs [@wbgrb] and the average GRB spectrum corresponding to the 2008-2009 time period of the IceCube-40 physics runs. The preliminary IceCube-40 upper limit is below the WB spectrum, which indicates that IceCube is becoming very sensitive and could potentially discover neutrinos in coincidence with GRBs within the next few years.
Search for dark matter {#ssec:dm}
----------------------
Non-baryonic cold dark matter in the form of weakly interacting massive particles (WIMPs) is one of the most promising solutions to the dark matter problem [@rubin]. The minimal supersymmetric extension of the Standard Model (MSSM) provides a natural WIMP candidate in the lightest neutralino $\tilde{\chi}^0_1$ [@drees]. This particle interacts only weakly and, assuming R-parity conservation, is stable and can therefore survive today as a relic from the Big Bang. A wide range of neutralino masses, m$_{\tilde{\chi}^0_1}$, from 46 GeV [@amsler] to a few tens of TeV [@gilmore], is compatible with observations and accelerator-based measurements. Within these bounds it is possible to construct models where the neutralino provides the needed relic dark matter density.
Relic neutralinos in the galactic halo may be gravitationally attracted by the Sun and accumulate in its center, where they can annihilate each other and produce standard model particles, such as neutrinos. This provides an indirect detection channel for this type of dark matter, provided the WIMP density and velocity distribution and the neutralino annihilation rate models are taken into account. The left panel of Fig. \[fig:dm\] shows the upper limits (90% CL) on the muon flux for IceCube-22 [@ic22dm], merged with the AMANDA upper limit at the low energy end [@ackermann], along with other indirect observations. The limits on the annihilation rate can be converted into limits on the spin-dependent $\sigma^{SD}$ and spin-independent $\sigma^{SI}$ neutralino-proton cross-sections (as shown on the right panel of Fig. \[fig:dm\]). This conversion allows a comparison with the direct search experiments. Since capture in the Sun is dominated by $\sigma^{SD}$, indirect searches are expected to be competitive in setting limits on this quantity. Assuming equilibrium between the capture and annihilation rates in the Sun, the annihilation rate is directly proportional to the cross-section. Fig. \[fig:dm\] also shows the predicted sensitivity (90% CL) of IceCube-86 combined with the Deep Core dense instrumentation, which allows us to significantly lower the energy threshold and consequently increase the sensitivity to low neutralino masses. In indirect searches, WIMPs would accumulate in the Sun over a long period and therefore sample dark matter densities in a large volume of the galactic halo. This progressive gravitational accumulation is sensitive to low WIMP velocities, while direct detection recoil experiments are more sensitive to higher velocities, making indirect searches a good complement to the direct ones.
Neutrino telescopes can also test the dark matter self-annihilation cross section $\langle\sigma_A v\rangle$ (averaged over the dark matter velocity distribution), making them complementary to $\gamma$ ray measurements. If the lepton excess observed by Fermi [@fermi2], H.E.S.S. [@hess] and PAMELA [@pamela] is interpreted as a dark matter self-annihilation signal in the galactic dark matter halo [@meade], leptophilic dark matter in the TeV mass range provides the most compatible model. Since the dark matter halo column density is larger toward the direction of the Galactic Center (GC), neutrinos from WIMP self-annihilation in the halo are expected to show a large angular scale anisotropy with an excess in the direction of the GC (see left panel of Fig. \[fig:halo\]). The dark matter density distribution in the Milky Way has different shapes depending on the model [@carsten]. The expected neutrino flux from dark matter self-annihilation is proportional to the square of the dark matter density integrated along the line of sight for a given angular distance from the Galactic Center. The differential neutrino flux for a WIMP of mass m$_{\chi}$ depends on the halo density profile, the neutrino production multiplicity, and the self-annihilation cross section $\langle\sigma_A v\rangle$ [@yuksel]. The search for an excess of neutrinos in the direction of the GC for different annihilation channels (assuming a 100% branching ratio for each of them) allows us to probe the allowed range of $\langle\sigma_A v\rangle$ for the corresponding channels in a model-independent manner (see right panel of Fig. \[fig:halo\]), and provides a direct comparison with similar results from $\gamma$ ray experiments.
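The statement that the flux scales with the squared density integrated along the line of sight can be made concrete with a small numerical sketch. This is illustrative only: the NFW profile choice, the scale radius, the Sun-GC distance and the integration range are assumptions for the example, and all physical normalizations are dropped.

```python
import numpy as np

R_SUN = 8.5   # kpc, Sun-GC distance (illustrative value)
R_S = 20.0    # kpc, NFW scale radius (illustrative value)

def rho_nfw(r):
    """NFW density profile with the overall normalization dropped."""
    x = r / R_S
    return 1.0 / (x * (1.0 + x) ** 2)

def j_factor(psi, s_max=100.0, n=4000):
    """Sum rho^2 along the line of sight at angle psi (rad) from the GC."""
    s = np.linspace(0.0, s_max, n)                       # kpc along the LOS
    # galactocentric radius of each point on the line of sight
    r = np.sqrt(R_SUN**2 + s**2 - 2.0 * R_SUN * s * np.cos(psi))
    return np.sum(rho_nfw(r) ** 2) * (s[1] - s[0])
```

A line of sight a few degrees from the GC passes through the dense cusp, so its integral exceeds the anti-center one by a large factor; this is the expected large angular scale excess toward the GC.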
IceCube’s reach can be significantly improved by directly looking at the GC (visible from the southern hemisphere). This search was performed with the IceCube 40-string dataset, using downward-going neutrinos interacting inside the detector volume, and the preliminary upper limits (90% CL) on the self-annihilation cross-section are shown on the right of Fig. \[fig:halo\]. While such an analysis is already able to set significantly better constraints, a large scale anisotropy would provide a more distinct discovery signal.
Cosmic ray anisotropy {#ssec:anyso}
---------------------
Galactic cosmic rays are found to have an energy dependent large angular scale anisotropy in their arrival direction distribution, with an amplitude of about $10^{-4}-10^{-3}$. The first comprehensive observation of such an anisotropy was provided by a network of muon telescopes sensitive to sub-TeV cosmic ray energies and located at different latitudes [@nagashima; @hall]. More recently, an anisotropy was also observed in the multi-TeV range by the Tibet AS$\gamma$ array [@amenomori], ARGO-YBJ [@argo], Super-K [@guillian], and MILAGRO [@abdo2]. The first observation in the southern hemisphere has been reported by IceCube [@anisotropy] for a median cosmic ray energy of about 20 TeV.
The left panel of Fig. \[fig:anisotropy\] shows the relative intensity in the arrival direction of the cosmic rays, obtained by normalizing each $\sim$3$^{\circ}$ declination band independently. On the top is the relative intensity map obtained from the 4.6 billion events collected by IceCube-22 [@anisotropy], and on the bottom the preliminary map obtained from the 12 billion events collected by IceCube-40. The two maps show the same anisotropy features, and they both appear to be a continuation of the modulation observed in the northern hemisphere. The right panel of Fig. \[fig:anisotropy\] shows the preliminary modulation of the relative intensity in the arrival direction of the cosmic rays projected onto right ascension for IceCube-40 (black symbols). In order to verify whether the observed sidereal anisotropy (i.e. in equatorial coordinates) has, in one full year, some spurious modulation derived from the interference between possible yearly-modulated daily variations, the same analysis was performed in the anti-sidereal time frame (a non-physical time defined by switching the sign of the transformation from universal to sidereal time) [@farley]. A real feature in sidereal time is expected to be scrambled in anti-sidereal time. The anti-sidereal modulation (shown on the right of Fig. \[fig:anisotropy\] in red symbols) appears to be relatively flat, with an amplitude of the same order of magnitude as the statistical errors, suggesting that no significant spurious effect is present. If the relative intensity is measured, over one full year, as a function of the angular distance from the Sun in right ascension, we expect an excess in the direction of motion of the Earth around the Sun (at $\sim$270$^{\circ}$) and a minimum in the opposite direction. This is what is observed (see the green symbols on the right of Fig. \[fig:anisotropy\]).
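The sidereal/anti-sidereal check can be illustrated with a toy Monte Carlo. This is a sketch, not the IceCube analysis: the event sample, the injected dipole amplitude and the first-harmonic estimator are invented for the example; only the definition of anti-sidereal time as the sign-flipped sidereal offset follows the text.

```python
import numpy as np

# In cycles per solar day, the sidereal frequency is 1 + 1/365.25; the
# non-physical anti-sidereal frequency flips the sign of that offset.
F_SID = 1.0 + 1.0 / 365.25
F_ANTI = 1.0 - 1.0 / 365.25

rng = np.random.default_rng(1)
t = rng.uniform(0.0, 365.25, 2_000_000)          # event times over one year (days)

# Inject a weak dipole fixed in the sidereal frame by thinning a uniform sample.
keep = rng.uniform(size=t.size) < 0.5 * (1.0 + 0.02 * np.cos(2 * np.pi * F_SID * t))
t = t[keep]

def dipole_amplitude(times, f):
    """First-harmonic amplitude of the event rate folded at frequency f."""
    ph = 2 * np.pi * f * times
    return 2.0 * np.hypot(np.cos(ph).mean(), np.sin(ph).mean())

a_sid = dipole_amplitude(t, F_SID)     # recovers roughly the injected 2e-2
a_anti = dipole_amplitude(t, F_ANTI)   # consistent with the statistical noise floor
```

Over a full year the sidereal and anti-sidereal harmonics decouple, so a genuine sidereal signal leaves the anti-sidereal amplitude at the noise level, which is the logic of the systematic check described above.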
With the same techniques used in $\gamma$ ray detection, it is possible to estimate the event intensity sky-map (the background) with averaging on a pre-defined angular scale, and to determine the residual by subtracting the background from the actual map. With this method MILAGRO, in an attempt to estimate the background without $\gamma$-hadron separation, discovered two significant localized regions of cosmic ray excess [@abdo3], also observed by ARGO-YBJ [@vernetto]. The same medium-scale anisotropy measurement was performed with IceCube for the first time in the southern hemisphere, and the combined MILAGRO-IceCube-40 significance sky-map is shown in Fig. \[fig:residual\], where only the anisotropy features with angular scale smaller than $\sim$30$^{\circ}$ are visible. The different event statistics of MILAGRO (220 billion events with a median energy of $\sim$1 TeV) and IceCube-40 (12 billion events with a median energy of $\sim$20 TeV) do not allow a comparison of the two hemispheres on a statistical basis. Nevertheless, there seems to be some indication that the small scale features observed in the two hemispheres might be part of a larger scale structure.
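The background-subtraction idea (smooth on a large angular scale, then subtract) can be sketched in one dimension. The map, dipole and localized excess below are invented toy inputs; a real analysis works on a two-dimensional sky map with proper event statistics.

```python
import numpy as np

deg = np.arange(360.0)                                   # right ascension, degrees
skymap = (1.0
          + 1e-3 * np.cos(np.radians(deg))               # large-scale dipole
          + 5e-4 * np.exp(-0.5 * ((deg - 70) / 5.0) ** 2))  # narrow localized excess

# Background: boxcar average on a ~30 degree scale with wrap-around padding,
# so that only structure wider than the window survives in the background.
window = 30
kernel = np.ones(window) / window
padded = np.r_[skymap[-window:], skymap, skymap[:window]]
background = np.convolve(padded, kernel, mode="same")[window:-window]

# Residual: the dipole is (mostly) absorbed into the background and removed,
# while the narrow excess near 70 degrees survives.
residual = skymap - background
```

This is why only features with angular scale smaller than the smoothing window remain visible in the residual significance map.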
The origin of the galactic cosmic ray anisotropy is still unknown. However, there might be multiple superimposed causes, depending on the cosmic ray energy and the angular scale of the anisotropy. The large scale structure in the 10-100 TeV range might be a local fluctuation caused by a nearby supernova (within 1,000 pc) that exploded within the last 100,000 years or so [@erlykin]. On the other hand, the structure of the local Inter-Stellar Medium (ISM) magnetic field well within 1 pc likely plays an important role as well [@battaner]. The strongest and most localized of the MILAGRO excess regions has triggered astrophysical interpretations, invoking the Geminga pulsar as a possible source [@astro1; @astro2]. A strong anisotropy of the magneto-hydrodynamic turbulence in the ISM could cause a superposition of the large scale anisotropy (perhaps generated by a nearby SNR) with a beam of cosmic rays focused along the local magnetic field direction, depending on the turbulence scale [@astro3]. However, the localized nature of the hottest MILAGRO excess region suggests a local origin. Neutron monitor data also seem to indicate that the broad excess toward the direction of the heliotail[^1] (the so-called tail-in excess), which includes the MILAGRO localized excess regions, is likely generated within the heliotail itself [@neutron]. In particular, the tail-in excess and its small angular scale structure are suggestive of acceleration via magnetic reconnection in the solar magnetotail. Reconnection is generated by magnetic polarity reversals, due to the 11-year solar cycles, compressed by the solar wind in the magnetotail. The maximum energy of protons that can be accelerated through this process is estimated to be about 10 TeV. Up to this energy scale a localized excess might be observable in the direction of the acceleration sites [@helio]. This is the energy at which MILAGRO observes a cut-off for the localized regions.
Acknowledgements
================
[We acknowledge the support from the following agencies: U.S. National Science Foundation-Office of Polar Programs, U.S. National Science Foundation-Physics Division, University of Wisconsin Alumni Research Foundation, U.S. Department of Energy, and National Energy Research Scientific Computing Center, the Louisiana Optical Network Initiative (LONI) grid computing resources; Swedish Research Council, Swedish Polar Research Secretariat, Swedish National Infrastructure for Computing (SNIC), and Knut and Alice Wallenberg Foundation, Sweden; German Ministry for Education and Research (BMBF), Deutsche Forschungsgemeinschaft (DFG), Research Department of Plasmas with Complex Interactions (Bochum), Germany; Fund for Scientific Research (FNRS-FWO), FWO Odysseus programme, Flanders Institute to encourage scientific and technological research in industry (IWT), Belgian Federal Science Policy Office (Belspo); Marsden Fund, New Zealand; Japan Society for Promotion of Science (JSPS); the Swiss National Science Foundation (SNSF), Switzerland; A. Kappes and A. Groß acknowledge support by the EU Marie Curie OIF Program; J. P. Rodrigues acknowledges support by the Capes Foundation, Ministry of Education of Brazil.]{}
[99]{} Abdo A.A., et al., [*Astrophys. J.*]{} [**658**]{}, L33 (2007) Erlykin A.D. & Wolfendale A.W., [*Astropart. Phys.*]{} [**25**]{}, 183 (2006) Bertone G., Hooper D. & Silk J., [*Phys. Rept.*]{} [**405**]{}, 279 (2005) Abbasi R., et al., [*Phys. Rev. Lett.*]{} [**102**]{}, 201302 (2009) Abbasi R., et al., [*Phys. Rev. D*]{} [**81**]{}, 057101 (2010) Abbasi R., et al., [*Nucl. Instrum. and Methods A*]{} [**601**]{}, 294 (2009) Abbasi R., et al., [*Nucl. Instrum. and Methods A*]{} [**618**]{}, 139 (2010) Andrés E., et al., [*J. Geophys. Res.*]{} [**111**]{}, D13203 (2006) Abbasi R., et al., to be submitted Daum K., et al., [*Zeitschrift für Physik C*]{} [**66**]{}, 417 (1995) Gonzalez-Garcia C., Maltoni M. & Rojo J., [*J. High Energy Phys.*]{} [**10**]{}, 75 (2006) Abbasi R., et al., [*Phys. Rev. D*]{} [**79**]{}, 102005 (2009) Abbasi R., et al., [*Astropart. Phys.*]{} [**34**]{}, 48 (2010) Gaisser T.K., et al., [*Phys. Rev. D*]{} [**70**]{}, 023006 (2004) Fiorentini G., Naumov V.A. & Villante F.L., [*Phys. Lett. B*]{} [**510**]{}, 173 (2001) Honda M., et al., [*Phys. Rev. D*]{} [**75**]{}, 043006 (2007) Enberg R., Reno M.H. & Sarcevic I., [*Phys. Rev. D*]{} [**78**]{}, 043005 (2008) Achterberg A., et al., [*Phys. Rev. D*]{} [**76**]{}, 042008 (2007) Waxman E. & Bahcall J.N., [*Phys. Rev. D*]{} [**59**]{}, 023002 (1998) Stecker F.W., [*Phys. Rev. D*]{} [**72**]{}, 107301 (2005) Becker J.K., Biermann P.L. & Rhode W., [*Astropart. Phys.*]{} [**23**]{}, 355 (2005) Becker J.K., et al., [*Astropart. Phys.*]{} [**28**]{}, 98 (2007) Razzaque S., Mészáros P. & Waxman E., [*Phys. Rev. D*]{} [**68**]{}, 083001 (2003) Mucke A., et al., [*Astropart. Phys.*]{} [**18**]{}, 593 (2003) Anchordoqui L.A., et al., [*Phys. Rev. D*]{} [**76**]{}, 123008 (2007) Abbasi R., et al., [*Astrophys. J.*]{} [**701**]{}, L47 (2009) Abbasi R., et al., [*Phys. Rev. Lett.*]{} [**103**]{}, 221102 (2009) Morlino, et al., [*Astropart. Phys.*]{} [**31**]{}, 376 (2009) Halzen F., et al., arXiv:0902.1176 Reimer, et al., in [*AIP Conf. Proc.*]{} [**1085**]{}, 427 (2009) Achterberg A., et al., [*Astrophys. J.*]{} [**674**]{}, 357 (2008) Abbasi R., et al., [*Astrophys. J.*]{} [**710**]{}, 346 (2010) Guetta D., et al., [*Astropart. Phys.*]{} [**20**]{}, 429 (2004) Waxman E. & Bahcall J.N., [*Phys. Rev. Lett.*]{} [**78**]{}, 2292 (1997) Rubin V. & Ford W.K., [*Astrophys. J.*]{} [**159**]{}, 379 (1970) Drees M. & Nojiri M.M., [*Phys. Rev. D*]{} [**47**]{}, 376 (1993) Amsler C., et al., [*Phys. Lett. B*]{} [**667**]{}, 1 (2008) Gilmore R.C., [*Phys. Rev. D*]{} [**76**]{}, 043520 (2007) Ahmed Z., et al., [*Phys. Rev. Lett.*]{} [**102**]{}, 011301 (2009) Angle J., et al., [*Phys. Rev. Lett.*]{} [**100**]{}, 021303 (2008) Ambrosio M., et al., [*Phys. Rev. D*]{} [**60**]{}, 082002 (1999) Desai S., et al., [*Phys. Rev. D*]{} [**70**]{}, 083523 (2004) Ackermann M., et al., [*Astropart. Phys.*]{} [**24**]{}, 459 (2006) Abbasi R., et al., [*Phys. Rev. Lett.*]{} [**102**]{}, 201302 (2009) Behnke E., et al., [*Science*]{} [**319**]{}, 933 (2008) Lee H.S., et al., [*Phys. Rev. Lett.*]{} [**99**]{}, 091301 (2007) Abdo A.A., et al., [*Phys. Rev. Lett.*]{} [**102**]{}, 181101 (2009) Aharonian C.F., [*Astron. Astrophys.*]{} [**508**]{}, 561 (2009) Adriani O., et al., [*Nature*]{} [**458**]{}, 607 (2009) Meade P., Papucci M., Strumia A. & Volansky T., [*Nucl. Phys. B*]{} [**831**]{}, 178 (2010) Rott C., et al., proceedings of the CCAPP Symposium, OSU (Columbus, OH, 2009), arXiv:0912.5183 Yuksel H., et al., [*Phys. Rev. D*]{} [**76**]{}, 123506 (2007) Nagashima K., Fujimoto K. & Jacklyn R.M., [*J. Geophys. Res.*]{} [**103**]{}, 17429 (1998) Hall D.L., et al., [*J. Geophys. Res.*]{} [**104**]{}, 6737 (1999) Amenomori A., et al., [*Science*]{} [**314**]{}, 439 (2006) Zhang J.L., et al., proceedings of the 31$^{st}$ ICRC, Łódź (Poland, 2009) Guillian G., et al., [*Phys. Rev. D*]{} [**75**]{}, 062003 (2007) Abdo A.A., et al., [*Astrophys. J.*]{} [**698**]{}, 2121 (2009) Abbasi R., et al., [*Astrophys. J.*]{} [**718**]{}, L194 (2010) Farley F., et al., [*Proc. Phys. Soc. A*]{} [**67**]{}, 996 (1954) Abdo A.A., et al., [*Phys. Rev. Lett.*]{} [**101**]{}, 221101 (2008) Vernetto S., et al., proceedings of the 31$^{st}$ ICRC, Łódź (Poland, 2009) Battaner E., et al., [*Astrophys. J.*]{} [**703**]{}, L90 (2009) Salvati M. & Sacco B., [*Astron. & Astrophys.*]{} [**485**]{}, 527 (2008) Drury L.O’C. & Aharonian F.A., [*Astropart. Phys.*]{} [**29**]{}, 420 (2008) Malkov M.A., et al., [*Astrophys. J.*]{} [**721**]{}, 750 (2010) Karapetyan G.G., [*Astropart. Phys.*]{} [**33**]{}, 146 (2010) Lazarian A. & Desiati P., [*Astrophys. J.*]{} [**722**]{}, 188 (2010)
[^1]: The part of the heliosphere opposite to the direction of the interstellar wind
|
---
abstract: |
In the beginning, synchrotron radiation (SR) was studied by classical methods using the Liénard-Wiechert potentials of electric currents. Subsequently, quantum corrections to the obtained classical formulas were studied by considering the emission of photons arising from electronic transitions between spectral levels, described in terms of the Dirac equation. In this paper, we consider an intermediate approach, in which the electric currents generating the radiation are treated classically, whereas the quantum nature of the radiation is taken into account exactly. Such an approximate approach may be helpful in some cases: it allows one to study one-photon and multi-photon radiation without the complicated calculations that involve the corresponding solutions of the Dirac equation. We construct exact quantum states of the electromagnetic field interacting with classical currents and study their properties. With their help, we calculate the probability of photon emission by classical currents and obtain relatively simple formulas for one-photon and multi-photon radiation. Using a specific circular electric current, we calculate the corresponding SR. We discuss the relation of the obtained results to ones known before, for example, to the Schott formula, to the Schwinger calculations, to the one-photon radiation of scalar particles due to transitions between Landau levels, and to some previous results on the two-photon SR.
*Keywords*: Synchrotron radiation, multiphoton radiation
author:
- |
V. G. Bagrov$^{1,2}$[^1], D. M. Gitman$^{3,4}%
$[^2], A. A. Shishmarev$^{2}$[^3] and A. J. D. Farias Jr$^{4}$[^4]\
$^{1}$ *Department of Physics, Tomsk State University, Lenin Prospekt 36, 634050, Tomsk, Russia;*\
$^{2}$ *Institute of High Current Electronics, SB RAS, Akademichesky Ave. 4, 634055, Tomsk, Russia;*\
$^{3}$ *P.N. Lebedev Physical Institute, 53 Leninskiy prospect, 119991 Moscow, Russia;*\
$^{4}$ *Institute of Physics, University of São Paulo,*\
*Rua do Matão, 1371, CEP 05508-090, São Paulo, SP, Brazil*
title: 'Quantum states of electromagnetic field interacting with a classical current and their applications to radiation problems'
---
Introduction\[S1\]
==================
As a rule, the motion of charged particles in external electromagnetic fields is accompanied by electromagnetic radiation. The most important examples, which are also related to the present work, are the synchrotron (SR) and cyclotron (CR) radiation of charged particles in a magnetic field. The phenomenon of SR was discovered approximately 70 years ago [@Elder]. A large number of works have been devoted to the theoretical description of these phenomena, both within the framework of classical and quantum theory. In both cases, various approximate methods and limiting cases were considered. In classical electrodynamics the electromagnetic field created by an arbitrary electric four-current is described by the Liénard-Wiechert (LW) potentials [@Landau; @Jackson]. It turns out that SR can be described sufficiently precisely in the framework of the classical theory (using LW potentials). Schott was the first to obtain a successful formula for the angular distribution of the power emitted in SR by a particle moving in a circular orbit [@Schott]. An alternative derivation of the classical formulas describing the properties of SR, together with their deep analysis, especially for high-energy relativistic electrons, was given by Schwinger [@Schwi49]. Nevertheless, quantum effects may play an important role in SR and CR. In particular, backreaction effects related to photon radiation, the discrete structure of the energy levels of electrons in the magnetic field, and the spin properties of charged particles are ignored by the classical theory. The essence of quantum corrections to the classical results was first pointed out in Ref. [@Schwi54]. In quantum theory, the rate of energy radiated by a charged particle in the course of quantum transitions was calculated using exact solutions of the Schrödinger (nonrelativistic case), Klein-Gordon (spinless case) or Dirac (relativistic case) equations with a magnetic field [@SokolovTernov]. 
Using his source theory [@Schwi], Schwinger presented an original derivation of similar results [@Schwi73]. Besides, the quantum treatment revealed a completely new effect: the self-polarization of electrons and positrons moving in a uniform and constant magnetic field [@SokTerSelfPol]. We note that in the latter works only one-photon radiation in the course of quantum transitions was taken into account. However, there is evidence that multi-photon emission can contribute significantly to the SR, see for example [@MultiPhot; @DualPhot]. For electromagnetic fields exceeding the critical Schwinger field $H_{0}=m^{2}c^{3}/e\hbar$, nonlinear phenomena of quantum electrodynamics begin to play a prominent role. Moreover, at fields comparable with the critical field one can observe nonlinear quantum effects caused by ultrarelativistic particles with high enough momenta. Some examples of such effects (of the orders of $\alpha$ and $\alpha^{2}$, where $\alpha$ is the fine-structure constant, in the interaction with the radiation field) are the one-photon emission by electrons ($e\rightarrow e\gamma$, $\alpha$), the pair production by photons ($\gamma\rightarrow e^{+}e^{-}$, $\alpha$), electron scattering accompanied by pair production ($e\rightarrow ee^{+}e^{-}$, $\alpha^{2}$), the two-photon emission process ($e\rightarrow e2\gamma$, $\alpha^{2}$), etc. If an incident particle has a momentum $p\sim\left( H_{0}/H\right) m$, then the probability of these processes becomes appreciable.
One should note the significant complexity of calculating even the one-photon radiation using solutions of the above-mentioned quantum equations. There is an opportunity to simplify these calculations, treating the multi-photon radiation in the same relatively simple manner, by taking the quantum nature of the radiated field into account exactly while treating the particle current classically. This means that we neglect the backreaction of the radiation on the current that generates this radiation. Such an approximation may be justified in some cases, for example, for high-density electron beams. From the technical point of view, this means that, when calculating electromagnetic radiation induced by classical electric currents, we have to work with exact quantum states of the electromagnetic field interacting with classical currents. Such an approach is considered in the present work. For these purposes, we first construct exact quantum states of the electromagnetic field interacting with classical currents and study their properties. Then, with their help, we calculate the probability of photon emission by a classical current from the vacuum initial state (i.e., from the state without initial photons). We then obtain relatively simple formulas for one-photon and multi-photon radiation. Using a specific circular electric current, we calculate the corresponding SR. We discuss the relation of the obtained results to ones known before, for example, to the Schott formula, to the Schwinger calculations, to the one-photon radiation of scalar particles due to transitions between Landau levels, and to some known results on the two-photon SR. Less important technical details are placed in the Appendix.
Quantum states of the radiation field interacting with a classical current\[S2\]
================================================================================
Here we consider the quantized electromagnetic field interacting with a classical current $J_{\mu}\left( x\right) $, see Refs. [@Heitl36; @Schweber; @48; @AkhiBer81; @GitTy90]. In the Coulomb gauge this system is described by a Hamiltonian $\hat{H}$ which consists of two terms, a Hamiltonian of free transversal photons $\hat{H}_{\mathrm{\gamma}}$ and an interaction Hamiltonian $\hat{H}_{\mathrm{int}}$: $$\begin{aligned}
& \hat{H}=\hat{H}_{\mathrm{\gamma}}+\hat{H}_{\mathrm{int}},\ \hat
{H}_{\mathrm{\gamma}}=c\hbar\sum_{\lambda=1,2}\int d\mathbf{k}k_{0}\hat
{c}_{\mathbf{k}\lambda}^{\dag}\hat{c}_{\mathbf{k}\lambda}\ ,\nonumber\\
& \hat{H}_{\mathrm{int}}=\frac{1}{c}\int\left[ J_{i}\left( x\right)
\hat{A}^{i}\left( \mathbf{r}\right) +\frac{1}{2}J_{0}\left( x\right)
A^{0}\left( x\right) \right] d\mathbf{r}\ . \label{2.1}%\end{aligned}$$ Here $\hat{A}^{i}\left( \mathbf{r}\right) $ are operators (in the Schrödinger representation) of vector potentials of the transversal electromagnetic field,$$\begin{aligned}
& \hat{A}^{i}\left( \mathbf{r}\right) =\sqrt{4\pi c\hbar}\sum
\limits_{\lambda=1}^{2}\int d\mathbf{k}\left[ \hat{c}_{\mathbf{k}\lambda
}f_{\mathbf{k}\lambda}^{i}\left( \mathbf{r}\right) +\hat{c}_{\mathbf{k}%
\lambda}^{\dag}f_{\mathbf{k}\lambda}^{i\ast}\left( \mathbf{r}\right)
\right] ,\ i=1,2,3\ ,\label{2.2}\\
& f_{\mathbf{k}\lambda}^{i}\left( \mathbf{r}\right) =\frac{\exp\left(
i\mathbf{kr}\right) }{\sqrt{2k_{0}\left( 2\pi\right) ^{3}}}\epsilon
_{\mathbf{k}\lambda}^{i},\ k_{0}=\left\vert \mathbf{k}\right\vert ,
\label{2.3}%\end{aligned}$$ where $\epsilon_{\mathbf{k}\lambda}^{i}$ are the polarization vectors of the photon with wave vector $\mathbf{k}$ and polarization $\lambda=1,2$. These vectors possess the properties $$\mathbf{\epsilon}_{\mathbf{k}\lambda}\mathbf{\epsilon}_{\mathbf{k}\sigma
}^{\ast}=\delta_{\lambda\sigma},\ \mathbf{\epsilon}_{\mathbf{k}\lambda
}\mathbf{k}=0,\ \sum_{\lambda=1}^{2}\epsilon_{\mathbf{k}\lambda}^{i}%
\epsilon_{\mathbf{k}\lambda}^{j\ast}=\delta^{ij}-\frac{k^{i}k^{j}}{\left\vert
\mathbf{k}\right\vert ^{2}}\ . \label{2.5}%$$ Operators $\hat{c}_{\mathbf{k}\lambda}$ and $\hat{c}_{\mathbf{k}\lambda}%
^{\dag}$ are the annihilation and creation operators of photons with a wave vector $\mathbf{k}$ and polarization $\lambda$. These operators satisfy the commutation relations: $$\left[ \hat{c}_{\mathbf{k}\lambda},\hat{c}_{\mathbf{k}^{\prime}%
\lambda^{\prime}}^{\dag}\right] =\delta_{\lambda\lambda^{\prime}}%
\delta\left( \mathbf{k-k}^{\prime}\right) ,\ \left[ \hat{c}_{\mathbf{k}%
\lambda},\hat{c}_{\mathbf{k}^{\prime}\lambda^{\prime}}\right] =\left[
\hat{c}_{\mathbf{k}\lambda}^{\dag},\hat{c}_{\mathbf{k}^{\prime}\lambda
^{\prime}}^{\dag}\right] =0\ . \label{2.6}%$$ Using Eqs. (\[2.2\])-(\[2.5\]) one can verify that the operator $\hat
{A}^{i}\left( \mathbf{r}\right) $ satisfies the condition $\operatorname{div}%
\mathbf{\hat{A}}\left( \mathbf{r}\right) =0.$ We note that in the Coulomb gauge $A^{0}\left( x\right) $ is a $c$-number scalar function which satisfies the following equations:$$A^{0}\left( x\right) =\int d\mathbf{r}^{\prime}\frac{J_{0}\left(
\mathbf{r}^{\prime},t\right) }{\left\vert \mathbf{r}-\mathbf{r}^{\prime
}\right\vert },\ \Delta A^{0}\left( x\right) =-4\pi J_{0}\left( x\right) .
\label{2.8}%$$ Then the term $J_{0}\left( x\right) A^{0}\left( x\right) /2$ can be represented as $-2\pi J_{0}\left( x\right) \Delta^{-1}J_{0}\left( x\right)
$, and, in the general case, is time-dependent.
The evolution of state vectors $\left\vert \Psi\left( t\right) \right\rangle
$ of the quantized electromagnetic field is governed by the Schrödinger equation$$i\hbar\partial_{t}\left\vert \Psi\left( t\right) \right\rangle =\hat
{H}\left\vert \Psi\left( t\right) \right\rangle . \label{2.9}%$$ The general solution of Eq. (\[2.9\]) can be written in the following form[^5], $$\begin{aligned}
& \left\vert \Psi\left( t\right) \right\rangle =U\left( t\right)
\left\vert \Psi\left( 0\right) \right\rangle ,\label{2.10}\\
& U\left( t\right) =\exp\left[ -i\hbar^{-1}\hat{H}_{\mathrm{\gamma}%
}t\right] \exp\left[ -i\hbar^{-1}\hat{B}\left( t\right) \right]
,\label{2.11}\\
& \hat{B}\left( t\right) =\frac{1}{c}\int_{0}^{t}dt^{\prime}\int\left\{
J_{i}\left( x^{\prime}\right) \left[ \hat{A}^{i}\left( x^{\prime}\right)
+\frac{1}{2}\tilde{A}^{i}\left( x^{\prime}\right) \right] +\frac{1}{2}%
J_{0}\left( x^{\prime}\right) A^{0}\left( x^{\prime}\right) \right\}
d\mathbf{r}^{\prime},\ \nonumber\\
& \tilde{A}^{i}\left( x\right) =\frac{1}{\hbar c}\int_{0}^{t}dt^{\prime
}\int D_{0}\left( x-x^{\prime}\right) \delta_{\bot}^{ik}J^{k}\left(
x^{\prime}\right) d\mathbf{r}^{\prime},\nonumber\\
& \hat{A}^{i}\left( x\right) =\sqrt{4\pi c\hbar}\sum\limits_{\lambda=1}%
^{2}\int d\mathbf{k}\left[ \hat{c}_{\mathbf{k}\lambda}f_{\mathbf{k}\lambda
}^{i}\left( x\right) +\hat{c}_{\mathbf{k}\lambda}^{\dag}f_{\mathbf{k}%
\lambda}^{i\ast}\left( x\right) \right] ,\ f_{\mathbf{k}\lambda}^{i}\left(
x\right) =f_{\mathbf{k}\lambda}^{i}\left( \mathbf{r}\right) e^{-ik_{0}%
ct},\nonumber\end{aligned}$$ where $U\left( t\right) $ is an evolution operator, and $\left\vert
\Psi\left( 0\right) \right\rangle $ is an initial state of the quantized electromagnetic field at the time instant $t=0$.
The singular function $D_{0}\left( x-x^{\prime}\right) $ can be obtained from the Pauli-Jordan permutation function at $m=0$, see, for example, [@BogolubovShirkov],$$\ \square D_{0}\left( x-x^{\prime}\right) =0,\ D_{0}\left( x-x^{\prime
}\right) =4\pi c\hbar\frac{i}{\left( 2\pi\right) ^{3}}\int\frac
{d\mathbf{k}}{2k_{0}}\left[ e^{-ik\left( x-x^{\prime}\right) }-e^{ik\left(
x-x^{\prime}\right) }\right] . \label{2.12}%$$ It defines nonequal-time commutation relations for the operators $\hat{A}%
^{i}\left( x\right) $,$$\left[ \hat{A}^{i}\left( x\right) ,\hat{A}^{j}\left( x^{\prime}\right)
\right] =-i\delta_{\bot}^{ij}D_{0}\left( x-x^{\prime}\right) ,\ \delta
_{\bot}^{ij}=\delta^{ij}-\Delta^{-1}\partial^{i}\partial^{j}, \label{2.13}%$$ and is related to the retarded $D^{\mathrm{ret}}\left( x-x^{\prime}\right) $ and advanced $D^{\mathrm{adv}}\left( x-x^{\prime}\right) $ Green’s functions of the D’Alembert equations, $$\begin{aligned}
& \int_{0}^{t}dt^{\prime}D_{0}\left( x-x^{\prime}\right) =\int_{0}^{\infty
}dt^{\prime}D^{\mathrm{ret}}\left( x-x^{\prime}\right) ,\ D_{0}\left(
x-x^{\prime}\right) =D^{\mathrm{ret}}\left( x-x^{\prime}\right)
-D^{\mathrm{adv}}\left( x-x^{\prime}\right) ,\nonumber\\
& D^{\mathrm{ret}}\left( x-x^{\prime}\right) =\theta\left( t-t^{\prime
}\right) D_{0}\left( x-x^{\prime}\right) ,\ D^{\mathrm{adv}}\left(
x-x^{\prime}\right) =\theta\left( t^{\prime}-t\right) D_{0}\left(
x-x^{\prime}\right) ,\nonumber\\
& \square D^{\mathrm{ret}}\left( x-x^{\prime}\right) =\square
D^{\mathrm{adv}}\left( x-x^{\prime}\right) =\delta\left( x-x^{\prime
}\right) . \label{2.14}%\end{aligned}$$ Taking into account Eqs. (\[2.14\]), one can see that the functions $\tilde{A}^{i}\left( x\right) $ represent retarded potentials created by a classical current (see, e.g., [@Landau; @Galtsov]).
Let us verify directly that the state vector (\[2.10\]) satisfies equation (\[2.9\]). First, since the operator $\hat{H}_{\mathrm{\gamma}}$ is time-independent, we have$$i\hbar\partial_{t}\left[ \exp\left( -i\hbar^{-1}\hat{H}_{\mathrm{\gamma}%
}t\right) \right] =\hat{H}_{\mathrm{\gamma}}\exp\left( -i\hbar^{-1}\hat
{H}_{\mathrm{\gamma}}t\right) . \label{2.15}%$$ However, the derivative $\partial_{t}\hat{A}^{i}\left( x\right) $ does not commute with the operators $\hat{A}^{i}\left( x^{\prime}\right)
$, so when calculating the derivative $i\hbar\partial_{t}$ of the second exponent in the RHS of Eq. (\[2.11\]), one has to use Feynman's method of disentangling operators [@Fey3]. Calculating the derivative $i\hbar\partial_{t}$ in such a way, we find$$\begin{aligned}
& i\hbar\partial_{t}\exp\left[ -i\hbar^{-1}\hat{B}\left( t\right) \right]
=\hat{K}\left( t\right) \exp\left[ -i\hbar^{-1}\hat{B}\left( t\right) \right]
,\ \hat{K}\left( t\right) =\int_{0}^{1}dse^{-is\hbar^{-1}\hat{B}\left(
t\right) }\left[ \partial_{t}\hat{B}\left( t\right) \right]
e^{is\hbar^{-1}\hat{B}\left( t\right) },\nonumber\\
& \ \partial_{t}\hat{B}\left( t\right) =\frac{1}{c}\int\left\{
J_{i}\left( x\right) \left[ \hat{A}^{i}\left( x\right) +\frac{1}{2}%
\tilde{A}^{i}\left( x\right) \right] +\frac{1}{2}J_{0}\left( x\right)
A^{0}\left( x\right) \right\} d\mathbf{r}. \label{2.17}%\end{aligned}$$ Using the operator relation$$e^{\hat{A}}\hat{M}e^{-\hat{A}}=\hat{M}+\left[ \hat{A},\hat{M}\right]
+\frac{1}{2!}\left[ \hat{A},\left[ \hat{A},\hat{M}\right] \right]
+\ \ldots\ ,$$ we represent the integrand in the RHS of $\hat{K}\left( t\right) $ as follows:$$\begin{aligned}
& e^{-is\hbar^{-1}\hat{B}\left( t\right) }\left[ \partial_{t}\hat
{B}\left( t\right) \right] e^{is\hbar^{-1}\hat{B}\left( t\right)
}=\partial_{t}\hat{B}\left( t\right) +\left[ -is\hbar^{-1}\hat{B}\left(
t\right) ,\partial_{t}\hat{B}\left( t\right) \right] \nonumber\\
& +\frac{1}{2!}\left[ -is\hbar^{-1}\hat{B}\left( t\right) ,\left[
-is\hbar^{-1}\hat{B}\left( t\right) ,\partial_{t}\hat{B}\left( t\right)
\right] \right] +\ \ldots\ . \label{2.19}%\end{aligned}$$ Calculating the first commutator in this series, we obtain:$$\left[ \hat{B}\left( t\right) ,\partial_{t}\hat{B}\left( t\right)
\right] =\frac{1}{c^{2}}\int_{0}^{t}dt^{\prime}\int\int\left\{ J_{i}\left(
x^{\prime}\right) \left[ \hat{A}^{i}\left( x^{\prime}\right) ,\hat{A}%
^{j}\left( x\right) \right] J_{j}\left( x\right) \right\} d\mathbf{r}%
d\mathbf{r}^{\prime}. \label{2.20}%$$ The nonequal-time commutation relations for the operators $\hat{A}^{i}\left(
x\right) $ are given by Eq. (\[2.13\]). Then (\[2.20\]) takes the form$$\left[ \hat{B}\left( t\right) ,\partial_{t}\hat{B}\left( t\right)
\right] =-\frac{i}{c^{2}}\int_{0}^{t}dt^{\prime}\int d\mathbf{r}J_{j}\left(
x\right) \int d\mathbf{r}^{\prime}J_{i}\left( x^{\prime}\right)
\delta_{\bot}^{ij}D_{0}\left( x-x^{\prime}\right) . \label{2.21a}%$$ We suppose, as usual, that currents under consideration vanish at spatial infinities. In this case,$$\int d\mathbf{r}^{\prime}J_{i}\left( x^{\prime}\right) \delta_{\bot}%
^{ij}D_{0}\left( x-x^{\prime}\right) =\int d\mathbf{r}^{\prime}D_{0}\left(
x-x^{\prime}\right) \delta_{\bot}^{ij}J_{i}\left( x^{\prime}\right) .
\label{2.22a}%$$ Then, recalling the definition of $\tilde{A}^{i}\left( x\right) $ from the evolution operator (\[2.11\]), we obtain $$\left[ \hat{B}\left( t\right) ,\partial_{t}\hat{B}\left( t\right)
\right] =-\frac{i}{c}\int J_{i}\left( x\right) \tilde{A}^{i}\left(
x\right) d\mathbf{r}. \label{2.23}%$$ Since the right-hand side of Eq. (\[2.23\]) is not an operator, only the first commutator in the RHS of Eq. (\[2.19\]) survives. Substituting Eqs. (\[2.19\]) and (\[2.23\]) into Eq. (\[2.17\]) and then integrating over $s$, we find:$$\hat{K}\left( t\right) =\frac{1}{c}\int\left[ J_{i}\left( x\right)
\hat{A}^{i}\left( x\right) +\frac{1}{2}J_{0}\left( x\right) A^{0}\left(
x\right) \right] d\mathbf{r}. \label{2.24}%$$ Using the fact that in the Coulomb gauge $$\exp\left[ -i\hbar^{-1}\hat{H}_{\mathrm{\gamma}}t\right] \hat{K}\left(
t\right) =\frac{1}{c}\int\left[ J_{i}\left( x\right) \hat{A}^{i}\left(
\mathbf{r}\right) +\frac{1}{2}J_{0}\left( x\right) A^{0}\left( x\right)
\right] d\mathbf{r}\exp\left[ -i\hbar^{-1}\hat{H}_{\mathrm{\gamma}}t\right]
, \label{2.25}%$$ and taking into account Eq. (\[2.15\]), we make sure that state vector (\[2.10\]) does satisfy equation (\[2.9\]).
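The operator identity $e^{\hat{A}}\hat{M}e^{-\hat{A}}=\hat{M}+[\hat{A},\hat{M}]+\frac{1}{2!}[\hat{A},[\hat{A},\hat{M}]]+\ldots$ used in this verification can be checked numerically on finite-dimensional matrices; a minimal sketch (the matrix size, random seed and truncation order of the series are arbitrary choices, not part of the text):

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
A = 0.1 * rng.standard_normal((4, 4))  # small norm: the commutator series converges fast
M = rng.standard_normal((4, 4))

def comm(X, Y):
    return X @ Y - Y @ X

lhs = expm(A) @ M @ expm(-A)           # exact similarity transform

rhs = np.zeros_like(M)
term = M.copy()                        # after n steps, term equals ad_A^n(M)/n!
for n in range(12):
    rhs += term
    term = comm(A, term) / (n + 1)

err = np.max(np.abs(lhs - rhs))        # truncation + roundoff error
```

Both sides agree to machine precision once enough terms of the adjoint series are kept.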
It is useful to represent the evolution operator $U\left( t\right) $ as:$$\begin{aligned}
& U\left( t\right) =\exp\left[ i\phi\left( t\right) \right] \exp\left[
-i\hbar^{-1}\hat{H}_{\mathrm{\gamma}}t\right] \mathcal{D}\left( y\right)
,\label{2.26}\\
& \mathcal{D}\left( y\right) =\exp\left\{ \sum_{\lambda=1}^{2}\int
d\mathbf{k}\left[ y_{\mathbf{k}\lambda}\left( t\right) \hat{c}%
_{\mathbf{k}\lambda}^{\dag}-y_{\mathbf{k}\lambda}^{\ast}\left( t\right)
\hat{c}_{\mathbf{k}\lambda}\right] \right\} ,\label{2.27}\\
& \phi\left( t\right) =-\frac{1}{2c}\int_{0}^{t}dt^{\prime}\int\left[
J_{i}\left( x^{\prime}\right) \tilde{A}^{i}\left( x^{\prime}\right)
+J_{0}\left( x^{\prime}\right) A^{0}\left( x^{\prime}\right) \right]
d\mathbf{r}^{\prime},\nonumber\\
& y_{\mathbf{k}\lambda}\left( t\right) =-i\sqrt{\frac{4\pi}{\hbar c}}%
\int_{0}^{t}dt^{\prime}\int J_{i}\left( x^{\prime}\right) f_{\mathbf{k}%
\lambda}^{i\ast}\left( x^{\prime}\right) d\mathbf{r}^{\prime}. \label{2.27a}%\end{aligned}$$ In what follows we omit the argument $\left( t\right) $ in functions $y_{\mathbf{k}\lambda}\left( t\right) $ to make formulas more compact.
We recall some basic relations for the displacement operator $\mathcal{D}%
(\alpha)$ in the Coulomb gauge,$$\begin{aligned}
& \mathcal{D}^{\dag}(\alpha)=\mathcal{D}^{-1}(\alpha),\ |\alpha
\rangle=\mathcal{D}(\alpha)|0\rangle,\ \hat{c}_{\mathbf{k}\lambda}%
|\alpha\rangle=\alpha_{\mathbf{k}\lambda}|\alpha\rangle,\nonumber\\
& \mathcal{D}^{\dag}(\alpha)\hat{c}_{\mathbf{k}\lambda}\mathcal{D}%
(\alpha)=\hat{c}_{\mathbf{k}\lambda}+\alpha_{\mathbf{k}\lambda},\ \mathcal{D}%
^{\dag}(\alpha)\hat{c}_{\mathbf{k}\lambda}^{\dag}\mathcal{D}(\alpha)=\hat
{c}_{\mathbf{k}\lambda}^{\dag}+\alpha_{\mathbf{k}\lambda}^{\ast}. \label{2.28}%\end{aligned}$$ With their help, we obtain: $$\mathcal{D}(y)\left\vert 0\right\rangle =\exp\left( -\frac{1}{2}\sum
_{\lambda=1}^{2}\int d\mathbf{k\ }\left\vert y_{\mathbf{k}\lambda}\right\vert
^{2}\right) \exp\left( \sum_{\lambda=1}^{2}\int d\mathbf{k}\ y_{\mathbf{k}%
\lambda}c_{\mathbf{k}\lambda}^{\dag}\right) \left\vert 0\right\rangle .
\label{2.28a}%$$
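The relations (\[2.28\]) and (\[2.28a\]) can be illustrated numerically for a single mode in a truncated Fock space; a sketch (the truncation dimension and the value of $\alpha$ are arbitrary sample choices):

```python
import numpy as np
from scipy.linalg import expm

N = 40                                         # Fock-space truncation, single mode
a = np.diag(np.sqrt(np.arange(1, N)), k=1)     # annihilation operator: a|n> = sqrt(n)|n-1>
adag = a.conj().T

alpha = 0.7 + 0.3j
D = expm(alpha * adag - np.conjugate(alpha) * a)   # displacement operator D(alpha)

vac = np.zeros(N, dtype=complex)
vac[0] = 1.0
coh = D @ vac                                  # coherent state |alpha> = D(alpha)|0>

# |alpha> is an eigenvector of a with eigenvalue alpha (up to truncation error)
resid = np.linalg.norm(a @ coh - alpha * coh)

# vacuum overlap: |<0|alpha>|^2 = exp(-|alpha|^2), cf. Eq. (2.28a)
p0 = abs(vac.conj() @ coh) ** 2
```

For moderate $|\alpha|$ the truncation at $N=40$ is far beyond the occupied levels, so both properties hold to high accuracy.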
Electromagnetic radiation induced by a classical current\[S3\]
==============================================================
One can use the constructed state vector (\[2.10\]) to study electromagnetic radiation induced by a classical current. For simplicity, we choose the vacuum $\left\vert 0\right\rangle $ as the initial state $\left\vert \Psi\left(
0\right) \right\rangle $ at $t=0$ in Eq. (\[2.10\]). The time evolution of this initial state follows from the latter equation:$$\left\vert \Psi\left( t\right) \right\rangle =\exp\left[ i\phi\left(
t\right) \right] \exp\left[ -i\hbar^{-1}\hat{H}_{\mathrm{\gamma}}t\right]
\mathcal{D}\left( y\right) \left\vert 0\right\rangle . \label{3.1}%$$ Using Eq. (\[3.1\]), we can calculate the probability of photon emission.
When operating in a continuous Fock space, see [@Schweber], a state with $N$ photons is formed by the repeated action of the photon creation operators on the vacuum $\left\vert 0\right\rangle $, and has the form:$$\left\vert \left\{ N\right\} \right\rangle =\left( N!\right) ^{-1/2}%
\prod_{i=1}^{N}\hat{c}_{\mathbf{k}_{i}\lambda_{i}}^{\dag}\left\vert
0\right\rangle , \label{8.0}%$$ where $\hat{c}_{\mathbf{k}_{i}\lambda_{i}}^{\dag}$ are creation operators of photons with wave vector $\mathbf{k}_{i}$ and polarizations $\lambda_{i}$, $\left\{ N\right\} =\left( \mathbf{k}_{1}\lambda_{1},\mathbf{k}_{2}%
\lambda_{2},\ldots,\mathbf{k}_{N}\lambda_{N}\right) $.
A probability amplitude $R\left( \left\{ N\right\} ,t\right) $ of the transition from the vacuum state $\left\vert 0\right\rangle $ to the state (\[8.0\]) for the time interval $t$ reads: $$R\left( \left\{ N\right\} ,t\right) =\exp\left[ i\phi\left( t\right)
\right] \left\langle 0\right\vert \left( N!\right) ^{-1/2}\left(
\prod_{i=1}^{N}\hat{c}_{\mathbf{k}_{i}\lambda_{i}}\right) \exp\left[
-i\hat{H}_{\mathrm{\gamma}}t\right] \mathcal{D}\left( y\right) \left\vert
0\right\rangle . \label{8.1}%$$ Using properties (\[2.28\]) and (\[2.28a\]) of the displacement operator $\mathcal{D}\left( y\right) $, and commutation relations (\[2.6\]), one can represent amplitude (\[8.1\]) as follows: $$\begin{aligned}
& R\left( \left\{ N\right\} ,t\right) =R\left( 0,t\right) \left(
N!\right) ^{-1/2}\prod_{i=1}^{N}\exp\left[ -i\left\vert \mathbf{k}%
_{i}\right\vert ct\right] y_{\mathbf{k}_{i}\lambda_{i}},\nonumber\\
& R\left( 0,t\right) =\langle0\left\vert \Psi\left( t\right)
\right\rangle =\exp\left[ i\phi\left( t\right) \right] \exp\left(
-\frac{1}{2}\sum_{\lambda=1}^{2}\int d\mathbf{k}\left\vert y_{\mathbf{k}%
\lambda}\right\vert ^{2}\right) . \label{8.2}%\end{aligned}$$ Then the corresponding differential probability $P\left( \left\{ N\right\}
,t\right) $ of such a transition (which we interpret as differential probability of the photon emission) has the form:$$\begin{aligned}
& P\left( \left\{ N\right\} ,t\right) =\left\vert R\left( \left\{
N\right\} ,t\right) \right\vert ^{2}=p\left( \left\{ N\right\} ,t\right)
P\left( 0,t\right) ,\nonumber\\
& p\left( \left\{ N\right\} ,t\right) =\left( N!\right) ^{-1}%
\prod_{i=1}^{N}\left\vert y_{\mathbf{k}_{i}\lambda_{i}}\right\vert
^{2},\nonumber\\
& P\left( 0,t\right) =|R\left( 0,t\right) |^{2}=\exp\left(
-\sum_{\lambda=1}^{2}\int d\mathbf{k}\left\vert y_{\mathbf{k}\lambda
}\right\vert ^{2}\right) , \label{8.2a}%\end{aligned}$$ where $P\left( 0,t\right) $ is the vacuum-to-vacuum transition probability, i.e., the probability of a transition without any photon emission. Thus, $p\left( \left\{ N\right\} ,t\right) $ is the relative probability of a process in which $N$ photons with quantum numbers $\mathbf{k}_{i}\lambda_{i}$ are emitted (the relative differential probability).
One can obtain the total probability $P\left( N,t\right) $ of transition from the vacuum state $\left\vert 0\right\rangle $ to the state with $N$ arbitrary photons, summing the quantity $p\left( \left\{ N\right\}
,t\right) $ over the sets $\left\{ N\right\} $. Thus, we get[^6]: $$\begin{aligned}
& P\left( N,t\right) =\sum_{\left\{ N\right\} }P\left( \left\{
N\right\} ,t\right) =P\left( 0,t\right) p\left( N,t\right)
,\ \sum_{\left\{ N\right\} }=\prod_{i=1}^{N}\left( \sum_{\lambda_{i}}\int
d\mathbf{k}_{i}\right) ,\nonumber\\
& p\left( N,t\right) =\left( N!\right) ^{-1}\prod_{i=1}^{N}\left(
\sum_{\lambda_{i}}\int d\mathbf{k}_{i}\left\vert y_{\mathbf{k}_{i}\lambda_{i}%
}\right\vert ^{2}\right) . \label{8.10}%\end{aligned}$$
Introducing a total probability $P\left( t\right) $ of the photon emission for the time interval $t$ as follows $$P\left( t\right) =\sum_{N=1}^{\infty}P\left( N,t\right) =P\left(
0,t\right) \sum_{N=1}^{\infty}\left( N!\right) ^{-1}\prod_{i=1}^{N}\left(
\sum_{\lambda_{i}}\int d\mathbf{k}_{i}\left\vert y_{\mathbf{k}_{i}\lambda_{i}%
}\right\vert ^{2}\right) , \label{8.3}%$$ one can easily verify that the relation $P\left( 0,t\right) +P\left(
t\right) =1$ holds true.
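Eqs. (\[8.10\]) and (\[8.3\]) state that the number of emitted photons follows a Poisson distribution with mean $\mu=\sum_{\lambda}\int d\mathbf{k}\left\vert y_{\mathbf{k}\lambda}\right\vert ^{2}$, so that $P\left( 0,t\right) +P\left( t\right) =1$ is the usual Poisson normalization. A quick numerical check with a sample value of $\mu$ (the value itself is a placeholder):

```python
import math

mu = 2.3  # sample value of the mean photon number, sum_lambda ∫ dk |y_k,lambda|^2

P0 = math.exp(-mu)                                 # vacuum-persistence probability P(0,t)
P_N = [P0 * mu**N / math.factorial(N) for N in range(1, 60)]

total = P0 + sum(P_N)                              # P(0,t) + P(t): should equal 1
mean_N = sum(N * p for N, p in zip(range(1, 60), P_N))   # mean photon number: should equal mu
```

Truncating the sum at $N=59$ leaves a negligible tail for such $\mu$, so both identities are reproduced to machine precision.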
The electromagnetic energy of $\left\{ N\right\} $ photons with given quantum numbers $\left\{ \mathbf{k}\lambda\right\} =\left( \mathbf{k}%
_{i}\lambda_{i},i=1,2,\ldots,N\right) $ depends only on their momenta $\left\{ \mathbf{k}\right\} =\left( \mathbf{k}_{i},i=1,2,\ldots,N\right) $ and does not depend on their polarizations; it is equal to$$W\left( \left\{ N\right\} \right) =\hbar c\left[ \sum_{i=1}^{N}\left\vert
\mathbf{k}_{i}\right\vert \right] . \label{8.11}%$$ Then the total electromagnetic energy $W\left( N,t\right) $ of $N$ emitted photons reads:$$W\left( N,t\right) =\sum_{\left\{ N\right\} }W\left( \left\{ N\right\}
\right) p\left( \left\{ N\right\} ,t\right) =\hbar c\left( N!\right)
^{-1}\sum_{\lambda_{1}=1}^{2}\sum_{\lambda_{2}=1}^{2}\ldots\sum_{\lambda
_{N}=1}^{2}\int d\mathbf{k}_{1}d\mathbf{k}_{2}\ldots d\mathbf{k}_{N}\left[
\sum_{j=1}^{N}\left\vert \mathbf{k}_{j}\right\vert \right] \prod_{i=1}%
^{N}\left\vert y_{\mathbf{k}_{i}\lambda_{i}}\right\vert ^{2}. \label{8.12}%$$ It is easy to demonstrate (see Appendix) that $W\left( N,t\right) $ can be represented as$$\begin{aligned}
& W\left( N,t\right) =\frac{A}{\left( N-1\right) !}\left( \sum
_{\lambda=1}^{2}\int d\mathbf{k}\left\vert y_{\mathbf{k}\lambda}\right\vert
^{2}\right) ^{N-1},\nonumber\\
& A=\hbar c\sum_{\lambda=1}^{2}\int d\mathbf{k}k_{0}\left\vert y_{\mathbf{k}%
\lambda}\right\vert ^{2},\ k_{0}=\left\vert \mathbf{k}\right\vert .
\label{8.13}%\end{aligned}$$
Finally, we calculate the total energy $W\left( t\right) $ of emitted photons:$$W\left( t\right) =\sum_{N=1}^{\infty}W\left( N,t\right) . \label{8.14}%$$ The sum (\[8.14\]) can be calculated exactly, taking into account Eq. (\[8.13\]), $$W\left( t\right) =A\exp\sum_{\lambda=1}^{2}\int d\mathbf{k}\left\vert
y_{\mathbf{k}\lambda}\right\vert ^{2}. \label{8.15}%$$
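The summation (\[8.14\]) of the terms (\[8.13\]) is just the exponential series, which can be confirmed numerically for sample values of $A$ and $\mu=\sum_{\lambda}\int d\mathbf{k}\left\vert y_{\mathbf{k}\lambda}\right\vert ^{2}$ (both values below are placeholders):

```python
import math

mu = 1.7   # sample value of sum_lambda ∫ dk |y|^2
A = 0.42   # sample value of A = ħc sum_lambda ∫ dk k0 |y|^2

# Eq. (8.13): W(N,t) = A * mu^(N-1) / (N-1)!, summed over N as in Eq. (8.14)
W_series = sum(A * mu**(N - 1) / math.factorial(N - 1) for N in range(1, 50))

W_closed = A * math.exp(mu)   # closed form, Eq. (8.15)
```

The truncated series matches the closed form $W\left( t\right) =Ae^{\mu}$ to machine precision.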
One-photon radiation by a circular current\[S3.1\]
==================================================
Here we study one-photon radiation from the vacuum induced by a specific circular current; that is, we discuss the probability of the appearance of one photon with given quantum numbers $\mathbf{k}$ and $\lambda=1,2$. Thus, we consider the transition amplitude from the state (\[3.1\]) to the final state of the form (\[8.0\]) with $N=1$. Using (\[8.12\]), we write the energy of one-photon emission as$$W\left( 1,t\right) =\hbar c\sum_{\lambda=1}^{2}\int d\mathbf{k}%
k_{0}\left\vert y_{\mathbf{k}\lambda}\right\vert ^{2},\ k_{0}=\left\vert
\mathbf{k}\right\vert . \label{3.12}%$$
Let us consider a circular current formed by electrons moving perpendicularly to an external uniform and constant magnetic field $\mathbf{H}=\left(
0,0,H\right) $ with the velocity $\mathbf{v}$ along a circular trajectory of the radius $R$. Such a current has the following form [@SokolovTernov]:$$\begin{aligned}
& J_{0}\left( x\right) =q\delta^{\left( 3\right) }\left( \mathbf{r}%
-\mathbf{r}\left( t\right) \right) ,\ \mathbf{J}\left( x\right)
=q\mathbf{\dot{r}}\left( t\right) \delta^{\left( 3\right) }\left(
\mathbf{r}-\mathbf{r}\left( t\right) \right) ,\nonumber\\
& \mathbf{r}\left( t\right) =\left( R\cos\omega t,R\sin\omega t,0\right)
,\ \mathbf{v}\left( t\right) =\mathbf{\dot{r}}\left( t\right) =\omega
R\left( -\sin\omega t,\cos\omega t,0\right) , \label{4.1}%\end{aligned}$$ where $q=-e$, $e>0$ is the electron charge, $\omega=eH/mc$ is the cyclotron frequency. We disregard the backreaction of the radiation, i.e., we suppose that the current is maintained in its original form during the time interval $\Delta t=t$.
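For orientation, the cyclotron frequency $\omega=eH/mc$ of Eq. (\[4.1\]) is written in Gaussian units; its SI counterpart is $\omega=eB/m_{e}$. A quick numerical estimate for a sample laboratory field of 1 T (the field value is an arbitrary choice for illustration):

```python
import math
from scipy.constants import e, m_e  # elementary charge (C) and electron mass (kg)

B = 1.0                                # sample magnetic field, tesla
omega = e * B / m_e                    # SI form of omega = eH/mc, in rad/s
f_GHz = omega / (2 * math.pi) / 1e9    # about 28 GHz for B = 1 T
```

This sets the frequency scale of the CR fundamental for nonrelativistic electrons; the SR harmonics discussed below are multiples of $\omega$.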
The functions $y_{\mathbf{k}\lambda}$ (\[2.27a\]) for the current (\[4.1\]) have the form:$$\begin{aligned}
& y_{\mathbf{k}\lambda}=iq%
{\displaystyle\int\limits_{0}^{t}}
dt^{\prime}\frac{\mathbf{v}\left( t^{\prime}\right) \mathbf{\epsilon
}_{\mathbf{k}\lambda}^{\ast}}{\sqrt{\hbar ck_{0}\left( 2\pi\right) ^{2}}%
}\exp\left\{ i\left[ k_{0}ct^{\prime}-\mathbf{kr}\left( t^{\prime}\right)
\right] \right\} ,\label{4.4}\\
& \mathbf{k}=\left( k_{\perp}\cos\varphi,k_{\perp}\sin\varphi,k_{\Vert
}\right) ,\ k_{\perp}=k_{0}\sin\theta,\ k_{\Vert}=k_{0}\cos\theta.
\label{4.5}%\end{aligned}$$ Here $\varphi$ is the angle between the $x$ axis and the projection of the vector $\mathbf{k}$ onto the $xy$ plane, and $\theta$ is the angle between the $z$ axis and $\mathbf{k}$. Thus,$$W\left( 1,t\right) =\frac{q^{2}}{\left( 2\pi\right) ^{2}}\sum_{\lambda
=1}^{2}\int d\mathbf{k}\left\vert \int_{0}^{t}dt^{\prime}\mathbf{v}\left(
t^{\prime}\right) \mathbf{\epsilon}_{\mathbf{k}\lambda}^{\ast}\exp\left\{
i\left[ k_{0}ct^{\prime}-\mathbf{kr}\left( t^{\prime}\right) \right]
\right\} \right\vert ^{2}. \label{9.4}%$$ Then $$\begin{aligned}
& \exp\left[ -i\mathbf{kr}\left( t^{\prime}\right) \right] =\exp\left[
-ik_{\perp}R\sin\tau\right] ,\nonumber\\
& \exp\left( ik_{0}ct^{\prime}\right) =\exp\left[ ick_{0}\omega
^{-1}\left( \varphi-\pi/2\right) \right] \exp\left( ick_{0}\omega^{-1}%
\tau\right) ,\nonumber\\
& \mathbf{v}\left( \tau\right) =\omega R\left[ \cos\left( \tau
+\varphi\right) ,\sin\left( \tau+\varphi\right) ,0\right] ,\nonumber\\
& \tau=\tau_{\mathrm{i}}+\omega t^{\prime},\ \tau_{\mathrm{i}}=\pi
/2-\varphi,\
{\displaystyle\int\limits_{0}^{t}}
dt^{\prime}\rightarrow%
{\displaystyle\int\limits_{\tau_{\mathrm{i}}}^{\tau_{\mathrm{i}}+\omega t}}
\omega^{-1}d\tau\ . \label{4.8}%\end{aligned}$$
In the case under consideration, we choose linear polarization vectors $\mathbf{\epsilon}_{\mathbf{k}\lambda}$ as: $$\begin{aligned}
& \mathbf{\epsilon}_{\mathbf{k}1}=\left( \cos\varphi\cos\theta,\sin
\varphi\cos\theta,-\sin\theta\right) ,\ \mathbf{\epsilon}_{\mathbf{k}%
2}=\left( -\sin\varphi,\cos\varphi,0\right) ,\nonumber\\
& \mathbf{\epsilon}_{\mathbf{k}1}\mathbf{\epsilon}_{\mathbf{k}1}%
=\mathbf{\epsilon}_{\mathbf{k}2}\mathbf{\epsilon}_{\mathbf{k}2}%
=1,\ \mathbf{\epsilon}_{\mathbf{k}1}\mathbf{\epsilon}_{\mathbf{k}%
2}=\mathbf{\epsilon}_{\mathbf{k}1}\mathbf{k}=\mathbf{\epsilon}_{\mathbf{k}%
2}\mathbf{k}=0. \label{4.11}%\end{aligned}$$ One can easily verify that the following relations hold:$$\mathbf{v}\left( t^{\prime}\right) \mathbf{\epsilon}_{\mathbf{k}1}^{\ast
}=\omega R\cos\theta\cos\tau,\ \mathbf{v}\left( t^{\prime}\right)
\mathbf{\epsilon}_{\mathbf{k}2}^{\ast}=\omega R\sin\tau. \label{4.12}%$$ Now it follows from Eqs. (\[4.4\]) that $$\begin{aligned}
& y_{\mathbf{k}1}=\frac{iqR\cos\theta}{\sqrt{k_{0}\left( 2\pi\right)
^{2}\hbar c}}Y_{\mathbf{k}}\left( \varphi\right)
{\displaystyle\int\limits_{\tau_{\mathrm{i}}}^{\tau_{\mathrm{i}}+\omega t}}
d\tau\exp\left( ick_{0}\omega^{-1}\tau\right) \cos\tau\exp\left(
-ik_{\perp}R\sin\tau\right) ,\nonumber\\
& y_{\mathbf{k}2}=\frac{iqR}{\sqrt{k_{0}\left( 2\pi\right) ^{2}\hbar c}%
}Y_{\mathbf{k}}\left( \varphi\right)
{\displaystyle\int\limits_{\tau_{\mathrm{i}}}^{\tau_{\mathrm{i}}+\omega t}}
d\tau\exp\left( ick_{0}\omega^{-1}\tau\right) \sin\tau\exp\left(
-ik_{\perp}R\sin\tau\right) ,\nonumber\\
& Y_{\mathbf{k}}\left( \varphi\right) =\exp\left[ ick_{0}\omega
^{-1}\left( \varphi-\pi/2\right) \right] . \label{4.13}%\end{aligned}$$
At this stage, we utilize the well-known expansion of a plane wave in terms of the Bessel functions $j_{n}\left( x\right) $ (see, e.g., [@SokolovTernov]), $$\begin{aligned}
& \exp\left( -ik_{\perp}R\sin\tau\right) =\sum_{n=-\infty}^{+\infty}%
j_{n}\left( k_{\perp}R\right) \exp\left( -in\tau\right) ,\nonumber\\
& \sin\tau\exp\left( -ik_{\perp}R\sin\tau\right) =i\sum_{n=-\infty
}^{+\infty}j_{n}^{\prime}\left( k_{\perp}R\right) \exp\left( -in\tau
\right) ,\nonumber\\
& \cos\tau\exp\left( -ik_{\perp}R\sin\tau\right) =\sum_{n=-\infty}%
^{+\infty}\frac{n}{k_{\perp}R}j_{n}\left( k_{\perp}R\right) \exp\left(
-in\tau\right) . \label{4.14}%\end{aligned}$$ Using (\[4.14\]) in Eqs. (\[4.13\]), we obtain:$$\begin{aligned}
& y_{\mathbf{k}1}=i\frac{qR\cos\theta}{\sqrt{k_{0}\left( 2\pi\right)
^{2}\hbar c}}Y_{\mathbf{k}}\left( \varphi\right) \sum_{n=-\infty}^{+\infty
}\frac{nj_{n}\left( k_{\perp}R\right) }{k_{\perp}R}F_{\mathbf{k}}^{n}\left(
\varphi,t\right) ,\nonumber\\
& y_{\mathbf{k}2}=-\frac{qR}{\sqrt{k_{0}\left( 2\pi\right) ^{2}\hbar c}%
}Y_{\mathbf{k}}\left( \varphi\right) \sum_{n=-\infty}^{+\infty}j_{n}%
^{\prime}\left( k_{\perp}R\right) F_{\mathbf{k}}^{n}\left( \varphi
,t\right) ,\nonumber\\
& F_{\mathbf{k}}^{n}\left( \varphi,t\right) =%
{\displaystyle\int\limits_{\tau_{\mathrm{i}}}^{\tau_{\mathrm{i}}+\omega t}}
d\tau\exp\left[ i\left( ck_{0}\omega^{-1}-n\right) \tau\right] ,
\label{4.15}%\end{aligned}$$ Since $k_{\perp}=k_{0}\sin\theta$, we can rewrite (\[4.15\]) as follows:$$\begin{aligned}
& y_{\mathbf{k}1}=\frac{iq\cot\theta}{\sqrt{k_{0}^{3}\left( 2\pi\right)
^{2}\hbar c}}Y_{\mathbf{k}}\left( \varphi\right) \sum_{n=-\infty}^{+\infty
}nj_{n}\left( k_{\perp}R\right) F_{\mathbf{k}}^{n}\left( \varphi,t\right)
,\nonumber\\
& y_{\mathbf{k}2}=-\frac{qR}{\sqrt{k_{0}\left( 2\pi\right) ^{2}\hbar c}%
}Y_{\mathbf{k}}\left( \varphi\right) \sum_{n=-\infty}^{+\infty}j_{n}%
^{\prime}\left( k_{\perp}R\right) F_{\mathbf{k}}^{n}\left( \varphi
,t\right) . \label{4.17}%\end{aligned}$$ Now, we can calculate the corresponding probabilities $\left\vert
y_{\mathbf{k}\lambda}\right\vert ^{2}$, $$\begin{aligned}
& \ \left\vert y_{\mathbf{k}1}\right\vert ^{2}=\frac{q^{2}}{\hbar c}%
\frac{\cot^{2}\theta}{k_{0}^{3}\left( 2\pi\right) ^{2}}\left\vert
\sum_{n=-\infty}^{+\infty}nj_{n}\left( k_{\perp}R\right) F_{\mathbf{k}}%
^{n}\left( \varphi,t\right) \right\vert ^{2},\nonumber\\
& \ \left\vert y_{\mathbf{k}2}\right\vert ^{2}=\frac{q^{2}}{\hbar c}%
\frac{R^{2}}{k_{0}\left( 2\pi\right) ^{2}}\left\vert \sum_{n=-\infty
}^{+\infty}j_{n}^{\prime}\left( k_{\perp}R\right) F_{\mathbf{k}}^{n}\left(
\varphi,t\right) \right\vert ^{2}. \label{4.18}%\end{aligned}$$
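The expansions (\[4.14\]) are the standard Jacobi–Anger formulas, and they are easy to verify numerically. The following pure-Python sketch (an illustration added for the reader, not part of the derivation; the evaluation point $x=1.3$, $\tau=0.7$ is arbitrary) computes $j_{n}$ from its integral representation and compares both sides of the first two lines of (\[4.14\]):

```python
import math, cmath

def besselJ(n, x, K=400):
    # J_n(x) = (1/pi) * integral_0^pi cos(n*t - x*sin(t)) dt, midpoint rule.
    # The integrand extends to a smooth 2*pi-periodic function, so the
    # midpoint rule converges spectrally fast.
    if n < 0:
        return (-1) ** (-n) * besselJ(-n, x, K)
    h = math.pi / K
    s = sum(math.cos(n * (k + 0.5) * h - x * math.sin((k + 0.5) * h))
            for k in range(K))
    return s * h / math.pi

def besselJp(n, x):
    # J_n'(x) = (J_{n-1}(x) - J_{n+1}(x)) / 2
    return 0.5 * (besselJ(n - 1, x) - besselJ(n + 1, x))

x, tau, N = 1.3, 0.7, 20
# First line of (4.14): exp(-i x sin tau) = sum_n J_n(x) exp(-i n tau)
lhs1 = cmath.exp(-1j * x * math.sin(tau))
rhs1 = sum(besselJ(n, x) * cmath.exp(-1j * n * tau) for n in range(-N, N + 1))
# Second line: sin(tau) exp(-i x sin tau) = i sum_n J_n'(x) exp(-i n tau)
lhs2 = math.sin(tau) * lhs1
rhs2 = 1j * sum(besselJp(n, x) * cmath.exp(-1j * n * tau)
                for n in range(-N, N + 1))
print(abs(lhs1 - rhs1), abs(lhs2 - rhs2))
```

Both residuals come out at round-off level, since $j_{n}(1.3)$ is completely negligible beyond $|n|=20$.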
The radiated energy (\[3.12\]) has to be calculated in the following manner:$$\begin{aligned}
& W\left( 1,t\right) =W_{1}\left( 1,t\right) +W_{2}\left( 1,t\right)
,\nonumber\\
& W_{1}\left( 1,t\right) =\hbar c\int d\mathbf{k}k_{0}\left\vert
y_{\mathbf{k}1}\right\vert ^{2}=\int d\mathbf{k}\frac{q^{2}\cot^{2}\theta
}{k_{0}^{2}\left( 2\pi\right) ^{2}}\left\vert \sum_{n=-\infty}^{+\infty
}nj_{n}\left( k_{\perp}R\right) F_{\mathbf{k}}^{n}\left( \varphi,t\right)
\right\vert ^{2},\nonumber\\
& W_{2}\left( 1,t\right) =\hbar c\int d\mathbf{k}k_{0}\left\vert
y_{\mathbf{k}2}\right\vert ^{2}=\int d\mathbf{k}\frac{q^{2}R^{2}}{\left(
2\pi\right) ^{2}}\left\vert \sum_{n=-\infty}^{+\infty}j_{n}^{\prime}\left(
k_{\perp}R\right) F_{\mathbf{k}}^{n}\left( \varphi,t\right) \right\vert
^{2}. \label{4.20}%\end{aligned}$$ Note that the functions $F_{\mathbf{k}}^{n}\left( \varphi,t\right) $ can be represented as: $$F_{\mathbf{k}}^{n}\left( \varphi,t\right) =\omega\exp\left[ -i\left(
ck_{0}\omega^{-1}-n\right) \varphi\right] \exp\left[ i\frac{\pi}{2}\left(
ck_{0}\omega^{-1}-n\right) \right]
{\displaystyle\int\limits_{0}^{t}}
dt^{\prime}\exp\left[ i\left( ck_{0}-n\omega\right) t^{\prime}\right] .
\label{4.21}%$$ Using the well-known integral representation of the Kronecker delta, $$\oint d\varphi\exp\left[ i\left( n-n^{\prime}\right) \varphi\right]
=2\pi\delta_{nn^{\prime}}, \label{4.22}%$$ we can transform the quantities $W_{1}\left( 1,t\right) $ and $W_{2}\left(
1,t\right) $ as follows:$$\begin{aligned}
& W_{1}\left( 1,t\right) =q^{2}\omega^{2}\sum_{n=-\infty}^{+\infty}\int
_{0}^{\infty}\frac{dk_{0}}{2\pi}\int_{0}^{\pi}\sin\theta d\theta\ \cot
^{2}\theta\ n^{2}j_{n}^{2}\left( k_{\perp}R\right) \left\vert \int_{0}%
^{t}dt^{\prime}\ \exp\left[ i\left( ck_{0}-n\omega\right) t^{\prime
}\right] \right\vert ^{2},\nonumber\\
& W_{2}\left( 1,t\right) =q^{2}\omega^{2}R^{2}\sum_{n=-\infty}^{+\infty
}\int_{0}^{\infty}\frac{dk_{0}}{2\pi}\int_{0}^{\pi}\sin\theta d\theta
\ k_{0}^{2}\ j_{n}^{\prime2}\left( k_{\perp}R\right) \left\vert \int_{0}%
^{t}dt^{\prime}\ \exp\left[ i\left( ck_{0}-n\omega\right) t^{\prime
}\right] \right\vert ^{2}. \label{4.23}%\end{aligned}$$ Then the energy $W\left( 1,t\right) $ reads:$$W\left( 1,t\right) =\frac{q^{2}\omega^{2}}{2\pi}\sum_{n=-\infty}^{+\infty
}\int_{0}^{\infty}\!dk_{0}\!\int_{0}^{\pi}\sin\theta d\theta\left[
n^{2}\ j_{n}^{2}\left( k_{\perp}R\right) \cot^{2}\theta+k_{0}^{2}%
\ R^{2}\ j_{n}^{\prime2}\left( k_{\perp}R\right) \right] \left\vert
\int_{0}^{t}dt^{\prime} \exp\left[ i\left( ck_{0}-n\omega\right) t^{\prime
}\right] \right\vert ^{2}\!\!. \label{4.24}%$$
Derivation of the Schott formula
--------------------------------
Let us study the time behavior of the energy $W\left( 1,t\right) $ of the one-photon emission (\[4.24\]). One can see that this quantity is not well defined in the limit $t\rightarrow\infty$. However, the rate $w\left( t\right) $ of the energy emission, which is the time derivative of $W\left( 1,t\right) $, does have a real physical meaning:$$\begin{aligned}
& w\left( t\right) =\partial_{t}W\left( 1,t\right) =\frac{q^{2}\omega
^{2}}{2\pi}\sum_{n=-\infty}^{+\infty}K\left( t\right) \int_{0}^{\infty
}dk_{0}\int_{0}^{\pi}\sin\theta\left[ n^{2}j_{n}^{2}\left( k_{\perp
}R\right) \cot^{2}\theta+k_{0}^{2}R^{2}j_{n}^{\prime2}\left( k_{\perp
}R\right) \right] d\theta,\nonumber\\
& K\left( t\right) =\frac{\partial}{\partial t}\left\vert \int_{0}%
^{t}dt^{\prime}\ \exp\left[ i\left( ck_{0}-n\omega\right) t^{\prime
}\right] \right\vert ^{2}. \label{4.32}%\end{aligned}$$ To compare with the Schott result, we have to consider $w\left( t\right) $ as $t\rightarrow\infty.$ In fact, the problem reduces to calculating $\lim_{t\rightarrow\infty}K\left( t\right) $, which is easily done:$$\lim_{t\rightarrow\infty}K\left( t\right) =\lim_{t\rightarrow\infty}%
\frac{2\sin\left( ck_{0}-n\omega\right) t}{ck_{0}-n\omega}=2\pi\delta\left(
ck_{0}-n\omega\right) , \label{4.34}%$$ see, e.g., [@SokolovTernov]. Taking Eq. (\[4.34\]) into account and the fact that the delta-function in the RHS of Eq. (\[4.34\]) vanishes for negative $n$, we obtain: $$\lim_{t\rightarrow\infty}w\left( t\right) =\frac{q^{2}\omega^{2}}{c}%
\sum_{n=1}^{+\infty}n^{2}\int_{0}^{\pi}\sin\theta\left[ j_{n}^{2}\left(
\frac{n\omega R}{c}\sin\theta\right) \cot^{2}\theta+\frac{\omega^{2}R^{2}%
}{c^{2}}j_{n}^{\prime2}\left( \frac{n\omega R}{c}\sin\theta\right) \right]
d\theta. \label{4.35}%$$ The result (\[4.35\]) literally reproduces the Schott formula for the rate of energy radiation by a classical current.
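As an independent check of (\[4.35\]), one can evaluate the harmonic sum numerically and compare it with the relativistic Larmor power for circular motion, $w=\tfrac{2}{3}\left( q^{2}c/R^{2}\right) \beta^{4}\gamma^{4}$, $\beta=\omega R/c$, which the Schott formula must reproduce after summation over $n$. The pure-Python sketch below (illustrative parameters; $j_{n}$ evaluated from its integral representation; not part of the derivation) works in units of $q^{2}c/R^{2}$:

```python
import math

def besselJ(n, x, K=200):
    # J_n(x) via its integral representation (midpoint rule)
    h = math.pi / K
    return sum(math.cos(n * (k + 0.5) * h - x * math.sin((k + 0.5) * h))
               for k in range(K)) * h / math.pi

def besselJp(n, x):
    # J_n'(x) = (J_{n-1}(x) - J_{n+1}(x)) / 2
    return 0.5 * (besselJ(n - 1, x) - besselJ(n + 1, x))

def schott_rate(beta, nmax=20, K=200):
    # beta^2 * sum_n n^2 * (theta-integral in Eq. (4.35)),
    # i.e. the radiated power in units of q^2 c / R^2
    h = math.pi / K
    total = 0.0
    for n in range(1, nmax + 1):
        acc = 0.0
        for k in range(K):
            th = (k + 0.5) * h                 # midpoint avoids th = 0, pi
            x = n * beta * math.sin(th)
            cot2 = (math.cos(th) / math.sin(th)) ** 2
            acc += math.sin(th) * (besselJ(n, x) ** 2 * cot2
                                   + beta ** 2 * besselJp(n, x) ** 2)
        total += n * n * acc * h
    return beta ** 2 * total

beta = 0.5                                     # illustrative velocity
larmor = (2.0 / 3.0) * beta ** 4 / (1.0 - beta ** 2) ** 2
val = schott_rate(beta)
print(val, larmor)                             # nearly equal
```

For $\beta=0.5$ the harmonic sum converges after a few tens of terms and reproduces the Larmor value to the quadrature accuracy.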
Schwinger calculations of the one-photon radiation
--------------------------------------------------
In his work [@Schwi49], Schwinger considered classical SR using a method based on an examination of the energy transfer rate from the electron to the electromagnetic field. Later, in Ref. [@Schwi54], he calculated the first-order quantum corrections in $\hbar$ to the classical formula, taking into account the quantum nature of the radiating particle but neglecting its spin properties. In 1973 he reexamined the problem, utilizing the source theory to obtain the quantum expression for the spectral distribution of the radiated power [@Schwi73].
In [@Schwi49], he presented several different distributions of the instantaneous power. Among them is the expression for the power radiated into a unit solid angle about the direction $\mathbf{n}=\left( \cos\varphi\cos
\theta,\sin\varphi\cos\theta,\sin\theta\right) $ and contained in a unit angular frequency interval about the frequency $ck_{0}$, $$\begin{aligned}
& P\left( \mathbf{n},k_{0}\right) =\sum_{n=1}^{\infty}\delta\left(
ck_{0}-n\omega\right) P_{n}\left( \mathbf{n}\right) ,\nonumber\\
& P_{n}\left( \mathbf{n}\right) =\frac{\omega^{2}R}{c^{2}}\frac{q^{2}}%
{2\pi}n^{2}\left[ \frac{\omega^{2}R^{2}}{c^{2}}j_{n}^{\prime2}\left(
\frac{n\omega R}{c}\cos\theta\right) +\frac{\sin^{2}\theta}{\cos^{2}\theta
}j_{n}^{2}\left( \frac{n\omega R}{c}\cos\theta\right) \right] . \label{6.1}%\end{aligned}$$ The total radiated power can be calculated as$$P=\int_{0}^{\infty}cdk_{0}\int P\left( \mathbf{n},k_{0}\right) d\Omega.
\label{6.2}%$$ Considering the high-frequency radiation,$$1-\frac{\omega^{2}R^{2}}{c^{2}}\ll1,\ \theta\ll1,\ n\gg1, \label{6.3}%$$ and using the connection between the Airy and Bessel functions, Schwinger obtained an alternative representation for his result in the form$$\begin{aligned}
& P_{n}\left( \mathbf{n}\right) =\frac{q^{2}\omega}{6\pi^{2}R}n^{2}\left(
1-\omega^{2}R^{2}/c^{2}+\theta^{2}\right) ^{2}\left[ K_{2/3}^{2}\left(
\zeta\right) +\frac{\theta^{2}K_{1/3}^{2}\left( \zeta\right) }{1-\omega
^{2}R^{2}/c^{2}+\theta^{2}}\right] ,\nonumber\\
& \zeta=\frac{n}{n_{c}}\left( \frac{1-\omega^{2}R^{2}/c^{2}+\theta^{2}%
}{1-\omega^{2}R^{2}/c^{2}}\right) ^{3/2}, \label{6.4}%\end{aligned}$$ and $n_{c}$ is a critical harmonic number [@Schwi49]. Note that formal difference in angular distribution between (\[6.1\]) and (\[4.35\]) appear due to different notation and does not lead to any differences in final values.
In Ref. [@Schwi54] he considered the first-order quantum corrections in $\hbar$ to the classical formula, taking into account the quantum nature of the radiating electron. He neglected the spin properties, since at this level of accuracy the spin degrees of freedom play no role for unpolarized particles. The first-order in $\hbar$ correction to the classical formula (\[6.2\]) can be obtained from the classical expression for the differential radiation probability $\left( ck_{0}\right)
^{-1}P\left( \mathbf{n},ck_{0}\right) $ [@Schwi54] by making the substitution $$ck_{0}\rightarrow ck_{0}\left( 1+\frac{\hbar ck_{0}}{E}\right) .
\label{6.12}%$$ The total radiated power with the first order quantum corrections obtained by Schwinger reads $$w=\frac{2}{3}\omega\frac{q^{2}}{R}\left( \frac{E}{mc^{2}}\right) ^{4}\left[
1-\sqrt{3}\frac{55}{16}\frac{\hbar}{mcR}\left( \frac{E}{mc^{2}}\right)
^{2}+O\left( \hbar^{2}\right) \right] . \label{6.13}%$$
In Ref. [@Schwi73] Schwinger considered the radiation of a spinless charged particle in the homogeneous magnetic field, and obtained the spectral distribution of the radiated power $w\left( k_{0}\right) $ (here $c=\hbar
=1$) in the form$$w\left( k_{0}\right) =\frac{ck_{0}q^{2}}{\pi m}\frac{m^{2}}{E^{2}}\left\{
\int_{0}^{\infty}\frac{dx}{x}\left( 1+2x^{2}\right) \sin\left[ \frac
{ck_{0}}{\omega}\left( \frac{m}{E}\right) ^{3}\left( x-\frac{x^{3}}%
{3}\right) \right] -\frac{1}{2}\pi\right\} ,\ x=\frac{1}{2}\omega t\frac
{E}{m}. \label{6.14}%$$ According to the author, Eq. (\[6.14\]) in the classical limit reproduces the Schott formula.
Note that the formulas (\[7.1\]) and (\[6.14\]) include both the corrections due to electron recoil and the effects of quantization of the electromagnetic field. As for the comparison with our result, the angular distributions coincide with the Schott formula and are not affected by quantum corrections.
One-photon radiation of scalar particles due to transitions between Landau levels
---------------------------------------------------------------------------------
When presenting the results obtained by other authors, we use the same system of units that was utilized in the cited articles.
A different approach to the calculation of the radiation of a spinless charged particle, due to one-photon transitions between the energy levels, is presented in Ref. [@scalar]. These calculations are based on the exact solutions of the Klein-Gordon equation in the uniform magnetic field (the Furry picture approach). The spectral angular distribution of the radiated power in this approach has the form$$\begin{aligned}
& w=\frac{27}{16\pi^{2}}w_{0}\xi^{2}\varepsilon_{0}^{-5/2}\int_{0}^{\infty
}dy\int_{0}^{\pi}\frac{\sin\theta d\theta}{\left( 1+\xi y\right) ^{3}}%
y^{2}\left[ \varepsilon^{2}K_{2/3}^{2}\left( z_{0}\right) +\varepsilon
\cos\theta K_{1/3}^{2}\left( z_{0}\right) \right] ,\nonumber\\
& w_{0}=\frac{8}{27}\frac{q^{2}m^{2}c^{2}}{\hbar^{2}},\ \xi=\frac{3}{2}%
\frac{e\hbar H}{m^{2}c^{3}}\frac{E}{mc^{2}},\ \varepsilon_{0}=\left(
\frac{mc^{2}}{E}\right) ^{2},\nonumber\\
& z_{0}=\frac{y}{2}\left( \frac{\varepsilon}{\varepsilon_{0}}\right)
^{3/2},\ \varepsilon=1-\frac{\omega^{2}R^{2}}{c^{2}}\sin^{2}\theta
,\ E=\frac{mc^{2}}{\sqrt{1-\omega^{2}R^{2}/c^{2}}}, \label{7.1}%\end{aligned}$$ where $K_{n}\left( z_{0}\right) $ are Airy functions, and $E$ is the electron energy. Unfortunately, no representation of (\[7.1\]) in terms of the Bessel functions is given by the authors; however, it is claimed that Eq. (\[7.1\]) in the limit $\hbar\rightarrow0$ reproduces the classical result.
Two-photon radiation
====================
The probability $p\left( 2,t\right) $ and the energy $W\left( 2,t\right) $ of the two-photon radiation for a circular current (\[4.1\]) have the form:$$\begin{aligned}
& p\left( 2,t\right) =\frac{\alpha^{2}}{\left( 2\pi\right) ^{2}}\left\{
\int\frac{d\mathbf{k}}{2k_{0}}\left[ k_{0}^{-2}F_{1}\left( \mathbf{k}%
,t\right) \cot^{2}\theta+R^{2}F_{2}\left( \mathbf{k},t\right) \right]
\right\} ^{2},\nonumber\\
& W\left( 2,t\right) =\frac{\alpha^{2}\hbar c}{\left( 2\pi\right) ^{2}%
}\left\{ \int d\mathbf{k}\left[ k_{0}^{-2}F_{1}\left( \mathbf{k},t\right)
\cot^{2}\theta+R^{2}F_{2}\left( \mathbf{k},t\right) \right] \right\}
\nonumber\\
& \times\left\{ \int\frac{d\mathbf{k}^{\prime}}{k_{0}^{\prime}}\left[
k_{0}^{\prime-2}F_{1}\left( \mathbf{k}^{\prime},t\right) \cot^{2}%
\theta^{\prime}+R^{2}F_{2}\left( \mathbf{k}^{\prime},t\right) \right]
\right\} , \label{9.1}%\end{aligned}$$ where$$F_{1}(\mathbf{k},t)=\left\vert \sum_{n=-\infty}^{+\infty}nj_{n}\left(
k_{\perp}R\right) F_{\mathbf{k}}^{n}\left( \varphi,t\right) \right\vert
^{2},\ F_{2}(\mathbf{k},t)=\left\vert \sum_{n=-\infty}^{+\infty}j_{n}^{\prime
}\left( k_{\perp}R\right) F_{\mathbf{k}}^{n}\left( \varphi,t\right)
\right\vert ^{2}. \label{9.3}%$$
It is useful to compare our results with the calculations of two-photon radiation presented in other works. Ref. [@Zhuk76] considered the bremsstrahlung of relativistic electrons in the so-called soft-photon approximation (the total energy of the emitted photons is much less than the energy of a relativistic electron). Our initial assumption that the classical current $J(x)$ remains unchanged despite the radiation losses matches this approximation. The authors of Ref. [@Zhuk76] used the expression for the instantaneous spectral distribution of the radiation energy of an electron obtained from the Liénard-Wiechert potentials. In such a way they obtained the total electromagnetic energy of the one-photon radiation. If the electric current in the latter quantity is taken in the form (\[4.1\]), it coincides with our result $W\left( 1,t\right) $ given by Eq. (\[4.24\]). Then the probability of emitting a photon is defined by the authors as $p\left( \left\{ 1\right\} ,t\right) =W\left( \{1\},t\right)
/\left( \hbar ck_{0}\right) $ \[here $W\left( \{1\},t\right) $ is the integrand of $W\left( 1,t\right) $\] and the probability $p\left( \left\{
N\right\} ,t\right) $ of emitting $\left\{ N\right\} $ soft photons in a narrow range of angles along the electron motion direction reads: $$p\left( \left\{ N\right\} ,t\right) =\prod_{i=1}^{N}p\left(
1_{\mathbf{k}_{i}\lambda_{i}},t\right) =\prod_{i=1}^{N}\left\vert
y_{\mathbf{k}_{i}\lambda_{i}}\right\vert ^{2}. \label{9.5}%$$ According to the authors, “when integrating in a finite interval of frequencies and directions, one must introduce a factor $\left( N!\right)
^{-1}$ that takes into account the identity of the photons”. Thus, they arrive at our result (\[8.2a\]), which contains such a factor for any momenta $\mathbf{k}$ without heuristic prescriptions. It is easy to verify that, using the same approximation of a small difference between the angles $\varphi_{1}$ and $\varphi_{2}$ of the emitted photons, $\Delta\varphi=\left(
\varphi_{1}-\varphi_{2}\right) \ll1$, we obtain from Eq. (\[8.2a\]) for the probability of the two-photon radiation the following result:$$p\left( 2,t\right) =\frac{25}{24}\alpha^{2}\omega\gamma\Delta\varphi
,\ \gamma=\left( 1-\omega^{2}R^{2}/c^{2}\right) ^{-1/2}. \label{9.6}%$$ It coincides with the one of the work [@Zhuk76].
It should be noted that in Refs. [@DualPhot] and [@MultiPhot], the authors calculated two-photon synchrotron emission, considering electron transitions between Landau levels with the help of the corresponding solutions of the Dirac equation. In the approximation accepted in the work [@Zhuk76], they derived corrections to Eq. (\[9.6\]) of the order $\hbar$ due to the quantum nature of the electron and due to its spin.
Concluding remarks\[S5\]
========================
As was said in the Introduction, in the beginning, the SR was studied by classical methods using the Liénard-Wiechert potentials of electric currents. Subsequently, it became clear that in some cases quantum corrections to classical results may be important. These corrections were studied by considering the emission of photons arising from electronic transitions between spectral levels, described in terms of the Dirac equation. In this paper, we have considered an intermediate approach, in which the electric currents generating the radiation are treated classically, whereas the quantum nature of the radiation is taken into account exactly. Such an approximate approach allows one to study the one-photon and multi-photon radiation without complicating the calculations with the corresponding solutions of the Dirac equation. We have constructed exact quantum states (\[2.10\]) of the electromagnetic field interacting with classical currents and studied their properties. With their help, we have calculated the probability of photon emission by classical currents from the vacuum initial state and obtained relatively simple general formulas for the one-photon and multi-photon radiation. Using the specific circular electric current, we have calculated the corresponding one-photon and two-photon SR. It was demonstrated that the emitted single-photon power per unit time in the limit $t\rightarrow\infty$ coincides with the classical expression obtained by Schott. This is not strange, since Schott’s result was already semi-classical: he treated the electromagnetic field in terms of the Maxwell equations. It is well known (see, e.g., [@AkhiBer81]) that the Maxwell equations can in fact be interpreted as the Schrödinger equation for a single photon; the absence of the Planck constant $\hbar$ in these equations, as well as in the Schott formula, is associated with the masslessness of the photon.
The consideration of the electromagnetic radiation in a semiclassical manner, using Maxwell’s equations, often allows one to study quantum effects of the radiation [@comparison]. Schwinger’s calculations of SR contain $\hbar$ since he used elements of QFT that take into account the quantum character of the electron motion and in the limit $\hbar\rightarrow0$ lead to the Schott result. The same situation takes place with the calculations of the SR of a spinless charged particle due to transitions between energy levels with one-photon emission presented in Ref. [@scalar]. The proposed approach provides an opportunity to separate the effects of radiation associated with the quantum nature of the electromagnetic field from the effects caused by the quantum nature of the electron. The calculation of multiphoton corrections is significantly simplified compared, for example, with the approach described in [@MultiPhot; @DualPhot; @Zhuk76], where a two-photon correction to the radiation of an electron moving in a circular orbit in a constant uniform magnetic field is calculated within the framework of the Furry picture. Finally, it becomes possible to study initial states of the system other than the vacuum initial state (the state without initial photons). Using these state vectors, the probabilities $p\left( N,t\right) $ (\[8.10\]) and the energy $W\left( N,t\right) $ (\[8.13\]) of the $N$-photon radiation induced by classical currents are derived. The latter quantity can be summed exactly, yielding the total energy $W\left( t\right) $ (\[8.15\]) of the emitted photons. The obtained results can be used for a systematic study of the multiphoton SR.
Acknowledgements
================
Bagrov acknowledges support from Tomsk State University Competitiveness Improvement Program. Gitman is supported by the Grant No. 2016/03319-6, Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP), and permanently by Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq). The work of Shishmarev was supported by the Russian Foundation for Basic Research (RFBR), project number 19-32-60010.
Appendix {#appendix .unnumbered}
========
Here we show that the sum (\[8.14\]) can be calculated analytically with the help of representation (\[8.12\]). We start from the definition of $W\left(
N,t\right) $ from Eq. (\[8.12\]),$$W\left( N,t\right) =\hbar c\left( N!\right) ^{-1}\sum_{\lambda_{1}=1}^{2}\sum_{\lambda_{2}=1}^{2}\ldots\sum_{\lambda_{N}=1}^{2}\int d\mathbf{k}_{1}d\mathbf{k}_{2}\ldots d\mathbf{k}_{N}\left[ \sum_{j=1}^{N}\left\vert
\mathbf{k}_{j}\right\vert \right] \prod_{i=1}^{N}\left\vert y_{\mathbf{k}_{i}\lambda_{i}}\right\vert ^{2}. \label{ap3.1}$$ We first consider the term with $j=1$. In the entire integrand (\[ap3.1\]), only the factor $\left\vert \mathbf{k}_{1}\right\vert
\left\vert y_{\mathbf{k}_{1}\lambda_{1}}\right\vert ^{2}$ depends on $\lambda_{1}$ and $\mathbf{k}_{1}$. Therefore, everything except the factor $\left\vert \mathbf{k}_{1}\right\vert \left\vert y_{\mathbf{k}_{1}\lambda_{1}}\right\vert ^{2}$ can be taken out of the sum over $\lambda_{1}$ and the integral over $d\mathbf{k}_{1}$. Since the indices $i$ are dummy (the limits of all summations and integrations are the same), we can cyclically shift their numbering ($i\rightarrow i-1$, i.e., $2\rightarrow
1$, $3\rightarrow2$, …, $N\rightarrow N-1$, $1\rightarrow N$). We do the same with each term of the sum $j=2,3,4,\ldots,N-1$. Now it is obvious that the sum over $j$ in (\[ap3.1\]) degenerates into a factor $N$, and the quantity $W\left( N,t\right) $ takes the form:$$W\left( N,t\right) =\frac{\hbar c}{\left( N-1\right) !}\sum_{\lambda
_{1}=1}^{2}\sum_{\lambda_{2}=1}^{2}\ldots\sum_{\lambda_{N}=1}^{2}\int
d\mathbf{k}_{1}d\mathbf{k}_{2}\ldots d\mathbf{k}_{N}\left\vert \mathbf{k}_{N}\right\vert \prod_{i=1}^{N}\left\vert y_{\mathbf{k}_{i}\lambda_{i}}\right\vert ^{2}. \label{ap3.4}$$ It is easy to see that Eq. (\[ap3.4\]) can be written as: $$W\left( N,t\right) =\frac{\hbar c}{\left( N-1\right) !}\sum_{\lambda
_{N}=1}^{2}\int d\mathbf{k}_{N}\left\vert \mathbf{k}_{N}\right\vert \left\vert
y_{\mathbf{k}_{N}\lambda_{N}}\right\vert ^{2}\prod_{i=2}^{N}\left[
\sum_{\lambda_{i}=1}^{2}\int d\mathbf{k}_{i}\left\vert y_{\mathbf{k}_{i}\lambda_{i}}\right\vert ^{2}\right] . \label{ap3.5}$$ Finally, getting rid of dummy indices, we obtain:$$\begin{aligned}
& W\left( N,t\right) =\frac{\hbar cA}{\left( N-1\right) !}\left[
\sum_{\lambda=1}^{2}\int d\mathbf{k}\left\vert y_{\mathbf{k}\lambda
}\right\vert ^{2}\right] ^{N-1},\nonumber\\
& A=\sum_{\lambda=1}^{2}\int d\mathbf{k}k_{0}\left\vert y_{\mathbf{k}\lambda
}\right\vert ^{2},\ k_{0}=\left\vert \mathbf{k}\right\vert . \label{ap3.6}\end{aligned}$$ The total energy $W\left( t\right) $ reads:$$W\left( t\right) =\sum_{N=1}^{\infty}W\left( N,t\right) =\hbar
cA\sum_{N=1}^{\infty}\left[ \left( N-1\right) !\right] ^{-1}\left[
\sum_{\lambda=1}^{2}\int d\mathbf{k}\left\vert y_{\mathbf{k}\lambda
}\right\vert ^{2}\right] ^{N-1}. \label{ap3.7}$$ The sum over $N$ reduces to an exponential after the change of summation index $M=N-1$, since $\sum_{M=0}^{\infty}x^{M}/M!=e^{x}$. Thus, we justify Eq. (\[8.15\]).
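The resummation leading from (\[ap3.6\]) to Eq. (\[8.15\]) is elementary and can be illustrated with toy numbers: treating $p\equiv\sum_{\lambda}\int d\mathbf{k}\left\vert y_{\mathbf{k}\lambda}\right\vert ^{2}$ and $A$ as plain numbers (the values below are arbitrary illustrations, with $\hbar c=1$), the partial sums of $W\left( N,t\right) =A\,p^{N-1}/\left( N-1\right) !$ converge to $Ae^{p}$:

```python
import math

p, A = 0.37, 2.1    # toy values for the mode integrals (illustrative only)
# W(N) = A * p**(N-1) / (N-1)!  -- Eq. (ap3.6) with hbar*c = 1
partial = sum(A * p ** (N - 1) / math.factorial(N - 1) for N in range(1, 40))
print(partial, A * math.exp(p))   # the partial sum matches A*exp(p)
```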
[99]{}
F. R. Elder, A. M. Gurevitch, R. V. Langmuir and A. C. Pollock, Phys. Rev. **71**, 829 (1947).
L. D. Landau and E. M. Lifshitz, *The Classical Theory of Fields* (Pergamon Press, Oxford, 1971).
J. D. Jackson, *Classical Electrodynamics*, 3rd Edition (J. Wiley & Sons, New York, 1998).
G. A. Schott, Phil. Mag. **13**, 657 (1907); Ann. Phys. **329**, 635 (1907); *Electromagnetic Radiation* (Cambridge University Press, Cambridge, 1912).
J. Schwinger, Phys. Rev. **75** (12), 1912 (1949).
J. Schwinger, Proc. Nat. Acad. Sci. U. S. **40**, 132 (1954).
A. A. Sokolov, I. M. Ternov, Sov. Phys. JETP **4**, 396 (1957); *Synchrotron Radiation* (Akademie-Verlag, Berlin, 1968); *Radiation from Relativistic Electrons* (American Institute of Physics, New York, 1986).
J. Schwinger, *Particles, Sources, and Fields*, Vols. 1–2 (Addison-Wesley, 1970, 1973).
J. Schwinger, Phys. Rev. D **7** (6), 1696 (1973).
A. A. Sokolov, I. M. Ternov, *Proc. Int. Conf. on High Energy Accelerators,* 21 (1963); Dokl. Akad. Nauk USSR **153**, 1053 (1963).
A. A. Sokolov, A. M. Voloshenko, V. Ch. Zhukovskii and Yu. G. Pavlenko, Sov. Phys. Journ. **9**, 46 (1976).
A. A. Sokolov, A. M. Voloshenko, V. Ch. Zhukovskii and Yu. G. Pavlenko, Russ. Phys. Journ. **19**, 1139 (1976).
W. Heitler, *The Quantum Theory of Radiation,* (Oxford Univ. Press, London, 1936).
S. Schweber, *An Introduction to Relativistic Quantum Field Theory* (Harper & Row, New York, 1961).
N. N. Bogoliubov and D. V. Shirkov, *Introduction to the Theory of Quantized Fields*, 3rd ed. (John Wiley & Sons, New York, 1980).
A. I. Akhiezer and V. B. Berestetskii, *Quantum Electrodynamics* (Nauka, Moscow, 1981).
D. M. Gitman and I. V. Tyutin, *Canonical quantization of fields with constraints* (Nauka, Moscow, 1986); *Quantization of Fields with Constraints* (Springer-Verlag, Berlin, 1990).
V. G. Bagrov, D. M. Gitman, V. A. Kuchin, *External field in quantum electrodynamics and coherent states*, in *Actual problems of Theoretical physics*, M. V. Lomonosov Moscow State University, Moscow, Russia, pp. 334-342 (1976); Sov. Phys. Journ. **4**, 152 (1974).
V. G. Bagrov, D. M. Gitman, A. D. Levin, J. Russ. Laser. Res. **32**, 317 (2011).
N. N. Bogoliubov, D. V. Shirkov, *Quantum Fields* (Nauka, Moscow 1980).
D. V. Galtsov, Yu. V. Gratz and V. Ch. Zhukovsky, *Classical fields* (Moscow State University Press, Moscow, 1991).
R. P. Feynman, Phys. Rev. **84**, 108 (1951).
R. J. Glauber, Phys. Rev. **84**, 1 (1951).
*Radiation Theory of Relativistic Particles*, Editor: V. A. Bordovitsyn, (Fizmatlit, Moscow, 2002); V. G. Bagrov, Izv. VUZov. Fizika **5**, 121 (1965).
A. M. Voloshchenko, V. Ch. Zhukovskii, and Yu. G. Pavlenko, Moscow University Physics Bulletin **31**, 42 (1976).
E. T. Jaynes, F. W. Cummings, Proc. IEEE **51**, 89 (1963).
[^1]: bagrov@phys.tsu.ru
[^2]: gitman@if.usp.br
[^3]: a.a.shishmarev@mail.ru
[^4]: a.jorgedantas@gmail.com
[^5]: See Refs. [@GitBagKuch; @BagGitLev].
[^6]: It should be noted that Glauber [@glauber] derived the total probability $P\left( N,t\right) $ by his own method; however, he did not consider its application to the radiation problem.
---
abstract: 'We study the collective excitations in a relativistic fluid with an anomalous $U(1)$ current. In $3+1$ dimensions at zero chemical potential, in addition to the ordinary sound modes, we find two propagating modes in the presence of an external magnetic field. The first one, a transverse degenerate mode, propagates with a velocity proportional to the coefficient of the gravitational anomaly; this is in fact the Chiral Alfvén wave (CAW) recently found in [@Yamamoto:2015ria]. The other one is a wave of density perturbation, namely a chiral magnetic wave (CMW). The dependence of the CMW velocity on the chiral anomaly coefficient is well known; we compute its dependence on the coefficient of the gravitational anomaly as well. We also show that dissipation splits the degeneracy of the CAW. At finite chiral charge density we show that in general there may exist five chiral hydrodynamic waves. Of these five waves, one is the CMW while the other four are mixed Modified Sound-Alfvén waves. It turns out that for propagation transverse to the magnetic field no anomaly effects appear, while for propagation parallel to the magnetic field the sound waves become dispersive due to the anomaly.'
author:
- Navid Abbasi
- Ali Davody
- Kasra Hejazi
- Zahra Rezaei
title: Hydrodynamic Waves in an Anomalous Charged Fluid
---
Introduction {#1}
============
The study of fluids with broken parity symmetry has attracted much attention recently. Parity may be broken in the system due to the presence of either an external magnetic field or a rotation in the fluid. Currents along the direction of an external magnetic field, discussed earlier in [@Vilenkin:1980fu], have recently been argued to be realized in heavy-ion collisions [@Kharzeev:2007jp; @Fukushima:2008xe]. This is termed the chiral magnetic effect (CME). Analogously, the chiral vortical effect (CVE) is related to currents in the direction of a rotation axis [@Vilenkin:1979ui]. The vorticity term in the fluid constitutive current, which is responsible for this effect, was discovered in the context of gauge-gravity duality [@Erdmenger:2008rm; @Banerjee:2008th].
The long-missed vorticity term seems to be in contradiction with the existence of an entropy current with positive divergence. However, because parity-violating terms like vorticity violate time reversal as well, one may expect their associated transport coefficients to be non-dissipative. Considering the latter fact, Son and Surowka showed that the vorticity term is not only allowed by symmetries, but is also required by the triangle anomalies and the second law of thermodynamics [@Son:2009tf]. They computed the coefficients of both the CME and CVE terms in terms of the anomaly coefficient at non-zero chemical potential ($\mu \ne 0$). These so-called anomalous transport coefficients vanish at zero chemical potential. The non-vanishing contribution to anomalous transport at $\mu=0$ was first observed in [@Bhattacharya:2011tra] and then computed in [@Neiman:2010zi; @Landsteiner:2011cp] by considering the mixed gravitational anomaly in $3+1$ dimensions.
The subject has also been developed through other approaches[^1]. For example, a new kinetic theory containing such effects has been derived from the underlying quantum field theory [@Son:2012wh; @Stephanov:2012ki]. It has been shown that the Berry monopole is responsible for the CME and CVE [@Stephanov:2012ki; @Chen:2012ca]. The chiral magnetic effect has also been studied in the context of lattice field theory [@Buividovich:2009wi; @Buividovich:2010tn].
The non-dissipative character of anomalous transport has been discussed by several authors. Apart from explanations based on symmetry [@Son:2009tf; @Kharzeev:2011ds], it has recently been illustrated with an example in the context of gauge-gravity duality. Computing the drag force exerted on a heavy quark moving in a general parity-violating fluid [^2], the authors of [@Rajagopal:2015roa] found a particular setting in which the CME- or CVE-induced current flows past the heavy quark without exerting any drag force on it.
On the other hand, a standard way to study transport phenomena is to investigate the long-wavelength fluctuations around the equilibrium state of the fluid. Accordingly, the non-dissipative nature of the anomalous transport coefficients may be better understood by studying the hydrodynamic excitations in a chiral fluid. In this paper we therefore consider a fluid of chiral particles, i.e., of single right-handed fermions, and compute the spectrum of its collective excitations to first order in the derivative expansion.
Let us recall that in a parity-preserving fluid in $3+1$ dimensions the only collective modes are the two ordinary sound modes. However, when taking into account the effect of dissipation, one finds four hydrodynamic modes in a charged fluid at zero chemical potential [@Kovtun:2012rj]. Of these modes, two are the dissipating sound modes while the other two are pure shear modes. In [@Abbasi:2015nka] we showed that in the presence of an external magnetic field, one of the latter shear modes splits into two new shear modes. As a result, one finds that dissipation, when accompanied by a magnetic field, excites all five possible hydrodynamic modes corresponding to the five microscopic conserved charges in the system.
In the current paper we study the hydrodynamic excitations in a parity-violating fluid in the presence of a background magnetic field. Our study consists of two parts. First, we consider a system at zero chemical potential in $3+1$ dimensions and compute the hydrodynamic modes in the absence of dissipation. We find four distinct modes: two longitudinal sound modes and two chiral modes. The appearance of the chiral modes is due to the presence of the chiral anomaly as well as the gravitational anomaly. Of these two, the chiral wave with a velocity proportional to the gravitational anomaly coefficient is the Chiral Alfvén wave recently found by Yamamoto [@Yamamoto:2015ria]; this mode is a wave of momentum fluctuations. The other chiral wave that we obtain is nothing but a CMW. The dependence of the CMW velocity on the chiral anomaly is well known; we find its dependence on the gravitational anomaly coefficient as well. To do so, we use the anomalous transport coefficients in the Landau frame, including the effects of both the gravitational and the chiral anomaly.
One may expect that in a dissipative chiral fluid one of the above-mentioned four modes splits into two dissipative waves. We show that this actually happens for the Alfvén wave. In summary, in $3+1$ dimensions, five distinct hydrodynamic modes may be excited in a dissipative chiral fluid: two dissipating sound modes and three dissipating chiral waves.
The other part of our results concerns the hydrodynamic waves in a chiral fluid at finite density. In reality, such a fluid might exist above the electroweak phase transition, where the $SU(2) \times U_Y(1)$ symmetry is not broken. Since the hypermagnetic field associated with the $U_Y(1)$ couples differently to right- and left-handed electrons, the high-temperature plasma there is chiral [@Giovannini:1997eg]. We will explore hydrodynamic fluctuations in such a plasma. We will show that in this regime the sound modes, in particular, are modified significantly: they become a mixture of longitudinal and transverse waves, which one may refer to as modified sound waves. Depending on the relative orientation of the magnetic field and the wave vector, we carefully compute the hydrodynamic modes in the different cases.
As is well known, the chiral anomaly is present in even space-time dimensions. In $1+1$ dimensions, anomalous transport has been discussed in the context of effective field theory. Using the second law of thermodynamics, the authors of [@Dubovsky:2011sk] derived a formula for the only anomalous transport coefficient in $1+1$ dimensions in terms of thermodynamic functions. The same relation has also been obtained in [@Jain:2012rh] from the partition function.
To complete our discussion, we also compute the hydrodynamic fluctuations of a non-dissipative chiral fluid in $1+1$ dimensions. In addition to the two ordinary sound waves, we find a new propagating wave, the so-called “one-and-a-halfth sound”, which was previously found in [@Dubovsky:2011sk] by the effective field theory method. Compared to the earlier results, we explicitly compute the velocity of this mode in terms of the anomaly coefficient[^3].
The paper is organized as follows: In section \[sec2\] we give a brief review of parity-odd fluid dynamics in $3+1$ dimensions. We continue by studying a neutral chiral fluid in \[sec3\]: we first compute the hydrodynamic modes and their amplitudes and then interpret them physically. In \[sec4\], we first study a charged fluid with no anomaly; then, for different relative orientations of the wave vector and the magnetic field, we compute the hydrodynamic modes of a chiral charged fluid. In \[sec5\], we study the effect of the anomaly on the collective excitations of a $1+1$-dimensional chiral fluid. After mentioning some follow-up questions in \[App\], we give some comments on the collective excitations in a parity-violating fluid in $2+1$ dimensions.
Parity violating fluid in $3+1$ dimensions {#sec2}
===========================================
Let us recall that the parity-violating terms in $3+1$ dimensions have been shown to be associated with the triangle anomaly of chiral currents. In the presence of a background gauge field $A_{\mu}$, the equations of hydrodynamics for a normal fluid with one conserved charge, with a $U(1)^3$ anomaly, take the form: $$\begin{split}
\partial_{\mu}T^{\mu \nu}=&\,F^{\nu \lambda} J_{\lambda}\\
\partial_{\mu} J^{\mu}=&\, \mathcal{C} E_{\mu} B^{\mu}
\end{split}$$ where we have defined the magnetic and electric fields in the rest frame of the fluid as $B^{\mu}=\frac{1}{2}\epsilon^{\mu\nu\alpha\beta}u_{\nu}F_{\alpha \beta}$ and $E^{\mu}=\,F^{\mu \nu}u_{\nu}$, respectively [@Son:2009tf].
The energy-momentum tensor and the chiral current are $$\label{TJ}
\begin{split}
T^{\mu \nu}=& \,(\epsilon+p) u^{\mu} u^{\nu}+ p \,\eta^{\mu \nu} +\tau^{\mu \nu}\\
J^{\mu}=& \,n u^{\mu} +\nu^{\mu}.
\end{split}$$ Here the thermodynamic parameters $\epsilon(\mu,T)$, $p(\mu,T)$ and $n(\mu,T)$ are the values of energy density, pressure and charge density respectively in an equilibrium state. The equilibrium state is specified with $$u^{\mu}=(1,0,0,0),\,\,T=\text{Const.},\,\,\mu=\text{Const.},\,\,\boldsymbol{B}=\boldsymbol{0}$$ with the pressure $\bar{p}=\bar{p}(\mu, T)$ satisfying: $$\begin{aligned}
d\bar{p}&= \bar{s} dT+ \bar{n} d\mu\\
\bar{\epsilon}+\bar{p}&=\bar{s} T+\bar{n} \mu.\,\,\,\,
\end{aligned}$$
In the Landau-Lifshitz frame where $u_{\mu}\tau^{\mu \nu}=0$ and $u_{\mu} \nu^{\mu}=0$ [@Landau; @Bhattacharya:2011tra] we may write $$\begin{aligned}
\label{disspart}
\tau^{\mu \nu}=-\eta P^{\mu\alpha}P^{\nu\beta}\left(\partial_{\alpha}u_{\beta}+\partial_{\beta}u_{\alpha} \right)-\left(\zeta-\frac{2}{3}\eta\right) P^{\mu \nu} \partial.u\\
\nu^{\mu}= - \sigma T P^{\mu \nu} \partial_{\nu}\left(\frac{\mu}{T}\right)+\sigma E^{\mu}+\xi \omega^{\mu}+\,\xi_B B^{\mu}\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,
\end{aligned}$$ Here $\omega^{\mu}=\frac{1}{2}\epsilon^{\mu\nu\alpha\beta}u_{\nu}\partial_{\alpha}u_{\beta}$ is the vorticity, and $\xi$ and $\xi_{B}$ are the anomalous transport coefficients corresponding to the CVE and the CME, respectively [@Son:2009tf; @Neiman:2010zi] $$\begin{aligned}
\label{xi}
\xi=\mathcal{C}\mu^2\left(1-\frac{2}{3} \frac{\bar{n} \mu}{\bar{\epsilon}+\bar{p}}\right)+\mathcal{D} T^2\left(1- \frac{2\bar{n} \mu}{\bar{\epsilon}+\bar{p}}\right)\\\label{xiB}
\xi_B=\mathcal{C}\mu\left(1-\frac{1}{2} \frac{\bar{n} \mu}{\bar{\epsilon}+\bar{p}}\right)-\frac{\mathcal{D}}{2} \frac{\bar{n} T^2}{\bar{\epsilon}+\bar{p}}\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,
\end{aligned}$$ where $\mathcal{C}$ and $\mathcal{D}$ are the coefficients of the chiral anomaly and the gravitational anomaly, respectively [@Gao:2012ix; @Golkar:2012kb; @Jensen:2012kj]: $$\mathcal{C}=\frac{1}{4 \pi^2},\,\,\,\,\,\,\,\mathcal{D}=\frac{1}{12}.$$ Let us note that Son and Surowka first computed the chiral anomaly contributions to $\xi$ and $\xi_{B}$ in [@Son:2009tf], namely the terms proportional to $\mathcal{C}$; the authors of [@Landsteiner:2011cp] then generalized [@Son:2009tf] by computing the contribution of the gravitational anomaly, namely the $\mathcal{D}$ terms.
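As a quick numerical illustration (not part of the original derivation), the expressions (\[xi\]) and (\[xiB\]) can be coded directly. The sketch below takes illustrative values for $\mu$, $T$, $\bar{n}$ and the enthalpy density $\bar{w}=\bar{\epsilon}+\bar{p}$ and checks that at $\mu=0$ (and hence $\bar{n}=0$) the CVE coefficient reduces to $\mathcal{D}T^2$ while the CME coefficient vanishes, as used later in the zero-chemical-potential analysis.

```python
import math

# Anomaly coefficients for a single right-handed Weyl fermion
C = 1.0 / (4.0 * math.pi**2)   # chiral anomaly coefficient
D = 1.0 / 12.0                 # gravitational anomaly coefficient

def xi(mu, T, n, w):
    # CVE coefficient, eq. (xi); w = epsilon + p is the enthalpy density
    return C * mu**2 * (1.0 - (2.0 / 3.0) * n * mu / w) \
         + D * T**2 * (1.0 - 2.0 * n * mu / w)

def xi_B(mu, T, n, w):
    # CME coefficient, eq. (xiB)
    return C * mu * (1.0 - 0.5 * n * mu / w) - 0.5 * D * n * T**2 / w

# At mu = 0 (hence n = 0) only the gravitational anomaly survives in xi,
# while xi_B vanishes identically.
print(xi(0.0, 1.0, 0.0, 1.0))    # D * T^2 = 1/12
print(xi_B(0.0, 1.0, 0.0, 1.0))  # 0.0
```

At leading order in $\mu$ one also recovers $\xi_B\simeq\mathcal{C}\mu$, the familiar CME coefficient.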
Before ending this section, let us briefly discuss how one can treat the background electromagnetic field consistently within the hydrodynamic expansion. We take the strength of $A_{\mu}$ to be of the same order as the temperature and the chemical potential, so $A_{\mu}\sim O(\partial^0)$ and $F^{\mu \nu}\sim O(\partial)$.
Let us recall that we are interested in how perturbations of different wave-lengths behave in the presence of an external magnetic field. It is natural to assume that this magnetic field is constant. However, the assumption becomes problematic for wave-lengths that are not of the same order as the length-scale $\ell_{B}$ associated with the magnetic field. The reason is that for wave-lengths much larger than $\ell_{B}$, additional not-necessarily-small contributions would arise which are not captured by the hydrodynamic expansion; in other words, the assumption that $F^{\mu\nu}$ is of first order is no longer valid when it is constant. Therefore, our study is restricted to wave-lengths of the same scale as $\ell_{B}$.
Our strategy is to study the problem following the method used in [@Buchbinder:2008dc]. We consider a restricted interval of wave-lengths, containing those of the same order as $\ell_{B}$, and study only the wave-lengths inside this window. We then require the magnetic field to approach zero as the wave-vector tends to zero. To proceed, one may consider the following relation between the magnetic field and the wave-vector [@Buchbinder:2008dc]: $$\boldsymbol{B} = \tilde{\alpha} \,\boldsymbol{k}.$$ However, in contrast to [@Buchbinder:2008dc], $\tilde{\alpha}$ is a matrix here, and thus the magnetic field and the wave-vector are in general not parallel.
In the next subsection, we compute the hydrodynamic modes around the equilibrium state and show how the simultaneous presence of dissipative effects and the anomalies can excite three dissipating chiral waves in the medium.
Zero Chemical Potential {#sec3}
=======================
In order to study the hydrodynamic fluctuations, we take the hydrodynamic equations and linearize them around the equilibrium state. Instead of the five usual hydrodynamic variables, namely the temperature field $T(x)$, the chemical potential field $\mu(x)$ and the three components of the velocity field $u^{\mu}(x)$, we take the following set of variables as the hydrodynamic variables: $\phi_a=\left(T^{00}(x), T^{0i}(x),J^0(x)\right)$. The importance of this choice is that to each of these hydrodynamic variables a quantum operator corresponds. In the two following subsections, we first compute the hydrodynamic modes around the equilibrium state at zero chemical potential and then physically interpret our results.
Hydrodynamic Modes
------------------
To first order in the fluctuations, the super field $\phi_a$ may be written as $$\label{variables}
\phi_a=\,\big(\phi_0,\phi_i,\phi_5\big)=\,\big(\bar{\epsilon}+\delta \epsilon,\,\pi_i,\, n\big)$$ where $\pi_i=(\bar{\epsilon}+\bar{p})\,v_i$ is the momentum density and $i=1,2,3$. In terms of the spatially Fourier transformed field $\phi_a(t, \boldsymbol{k})$, the linearized hydrodynamic equations are written as $$\label{linearized3+1}
\begin{split}
& \partial_{t} \delta \epsilon +\, i k^{j} \pi_{j}=0\\
& \partial_{t}\pi_{i}+\,i k_{i} v_s^2 \delta \epsilon + \mathcal{M}_{i j } \pi_{j}=\,-i D F_{i m} k^{m} n \,+\\
&\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,F^{im}\left(\frac{\sigma}{\bar{w}} F_{mj}+i\frac{ \xi}{2\bar{w}} \epsilon_{m l j}k^l\right)\pi^j\\
& \partial_{t} n+ \,\left(k^2 D-\frac{i}{2}\left(\frac{\partial \xi_B}{\partial n}\right)_{\epsilon}\epsilon^{ijm}F_{ij}k_m\right) n +\frac{i \sigma}{\bar{w}} k_{j} F^{j k} \pi_{k}=0\\
\end{split}$$ Here $\mathcal{M}_{i j }=\gamma_{\eta}(\boldsymbol{k}^2 \delta_{i j}- k_{i} k_{j})+ \gamma_{s} k_{i}k_{j}$. Note that in the above equations the anomalous transport coefficients have to be evaluated at zero chemical potential. While $\xi_B$ itself vanishes at $\mu=0$, its fluctuation does not vanish in the same limit: $$\begin{aligned}
\label{put}
\xi &=&\,\mathcal{D}T^2\\
\left( \frac{\partial\xi_{B}}{\partial n} \right)_{\epsilon}&=&\,\frac{\mathcal{C}}{\chi}-\frac{\mathcal{D}}{2}\frac{T^2}{\bar{w}}.
\end{aligned}$$ In what follows we first compute the hydrodynamic modes in terms of $\xi$ and $\frac{\partial \xi_B}{\partial n}$ and then re-express them in terms of the anomaly coefficients.
In order to find hydrodynamic modes we use the super field notation $\phi_a$ and rewrite the linearized equations (\[linearized3+1\]) as $$\label{eqsuperfield}
\partial_{t} \phi_{a}(t, \boldsymbol{k})+\,M_{a b}(\boldsymbol{k})\, \phi_{b}(t, \boldsymbol{k})=0$$ by introducing
$$M_{a b}=
\left( {\begin{array}{ccc}
0 & i k_j & 0\\
i k^{i} v_{s}^2 & \mathcal{M}^{i}_{ j}-\frac{\sigma}{\bar{w}}\big(B_j B^i- \boldsymbol{B}^2\delta_j^i\big)+\frac{ i \xi}{2\bar{w}}\big(\boldsymbol{B}.\boldsymbol{k}\delta_j^i-B_j k^i \big) & -i D \epsilon^{inm}B_m k_n \\
0 & -\frac{i \sigma}{\bar{w}}\epsilon_{jnm}B^n k^{m} & \boldsymbol{k}^2 D+ i \left(\frac{\partial \xi_B}{\partial n}\right)_{\epsilon} \boldsymbol{B}.\boldsymbol{k}
\end{array} } \right).$$
Hydrodynamic modes, as the poles of the response functions, may be found by solving the following equation: $$\det\left(-i \omega\delta_{ab}+ M_{ab}(\boldsymbol{k})\right)=0.$$ Doing so, we find five hydrodynamic modes (with four distinct dispersion relations) in the absence of dissipative effects $$\begin{aligned}
\omega_{1,2}^{(0)}(\boldsymbol{k})&=&\,\pm v_{s} k\\
\label{first alfven}
\omega_{3,4}^{(0)}(\boldsymbol{k})&=&-\frac{\xi}{2\bar{w}} \boldsymbol{B}.\boldsymbol{k}\\
\label{first CMW}
\omega_{5}^{(0)}(\boldsymbol{k})&=& \left(\frac{\partial \xi_B}{\partial n}\right)_{\epsilon} \boldsymbol{B}.\boldsymbol{k}.\end{aligned}$$ So far, we have only computed the dispersion relations of the hydrodynamic modes to lowest order in the derivative expansion, namely $\omega^{(0)}$. Just as in a non-chiral fluid, there exist two ordinary sound modes here. However, due to the effect of the anomalies, two other hydrodynamic modes may propagate in a chiral fluid. The first one, which is itself a degenerate mode, namely $\omega_{3,4}$, was recently found by Yamamoto as well. This mode, referred to as the Chiral Alfvén wave in [@Yamamoto:2015ria], propagates with a velocity proportional to the coefficient of the gravitational anomaly (see table(\[tabelmodes3+1\])). As was noted in [@Yamamoto:2015ria], in an incompressible fluid the Chiral Alfvén wave would be a transverse wave (we discuss this point in the next subsection). The last mode, namely $\omega_{5}$, is the well-known Chiral Magnetic Wave. In [@Kharzeev:2010gd], Kharzeev and Yee showed that the coupling between the density waves of the electric and chiral charges leads to the existence of a new type of gapless excitation in a plasma of chiral fermions, which they called the Chiral Magnetic Wave. As they emphasized in [@Kharzeev:2010gd], the CMW also exists in a fluid of single right-handed fermions; the mode $\omega_{5}$ given above is exactly the CMW they pointed out (see the next subsection).
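The root structure above can be cross-checked symbolically. The sketch below (an illustrative verification, not part of the paper) builds the non-dissipative limit of the matrix $M_{ab}$ with the wave vector along $z$, writing $\alpha=\xi/2\bar{w}$ and $a=(\partial\xi_B/\partial n)_{\epsilon}$ as free parameters (taken positive for simplicity), and solves $\det(-i\omega\delta_{ab}+M_{ab})=0$; the overall signs of the chiral roots depend on the Fourier conventions.

```python
import sympy as sp

# Free parameters: k along z, magnetic field B = (0, B_y, B_z), sound speed
# v_s, alpha = xi/(2*wbar), a = (d xi_B / d n)_epsilon; all taken positive
k, By, Bz, vs, alpha, a = sp.symbols('k B_y B_z v_s alpha a', positive=True)
I = sp.I

# Non-dissipative limit (sigma = eta = zeta = D = 0) of M_ab,
# in the variable ordering (delta_eps, pi_x, pi_y, pi_z, n)
M = sp.Matrix([
    [0,          0,             0,              I*k, 0],
    [0,          I*alpha*Bz*k,  0,              0,   0],
    [0,          0,             I*alpha*Bz*k,   0,   0],
    [I*k*vs**2,  0,             -I*alpha*By*k,  0,   0],
    [0,          0,             0,              0,   I*a*Bz*k],
])

# Poles of the response functions: det(-i*omega*1 + M) = 0
omega = sp.symbols('omega')
modes = sp.solve(sp.det(-I*omega*sp.eye(5) + M), omega)
print(sorted(modes, key=str))
# Two sound roots proportional to v_s*k, one doubly degenerate Alfven-type
# root proportional to alpha*(B.k), and one CMW-type root proportional to
# a*(B.k); note that B_y drops out of the determinant.
```

The computation confirms that the chiral roots involve only $\boldsymbol{B}.\boldsymbol{k}=B_z k$, so both chiral waves are controlled by the component of the magnetic field along the wave vector.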
Mode Analysis
-------------
Using relations \[put\], we have re-expressed the modes found in the previous subsection in table(\[tabelmodes3+1\]). In addition, we have written the normalized amplitudes of the fluctuations in the last column of the table. Note that $\delta \phi_{n}$ is the amplitude corresponding to the mode $\omega_{n}(k)$; in position space it may be written as $$\delta \phi_{n}(k,\omega_{n}) e^{i \boldsymbol{k.x}- i \omega_{n}(k)t}.$$ The sound mode is the propagation of energy and momentum fluctuations. Since the vectorial propagating component of the sound amplitudes $\delta \phi_{1,2}$ is in the direction of the wave vector $\boldsymbol{k}$, the sound mode is longitudinal, as expected. The situation is different for the Chiral Alfvén waves. Both $\delta \phi_{3}$ and $\delta \phi_{4}$ are purely vector-type fluctuations, i.e., waves of momentum fluctuations. In contrast to the sound modes, both of them are transverse [^4]: $$\begin{split}
\boldsymbol{k.}\big(\hat{\boldsymbol{B}} \times \hat{\boldsymbol{k}}\big)&=0\\
\boldsymbol{k.}\big(\hat{\boldsymbol{B}}-(\hat{\boldsymbol{B}}.\hat{\boldsymbol{k}}) \hat{\boldsymbol{k}}\big)&=0.
\end{split}$$
The fifth mode in table(\[tabelmodes3+1\]) is a wave of scalar fluctuations, i.e., a density wave; this is why we refer to this mode as the Chiral Magnetic Wave. It is worth mentioning that the dependence of the CMW velocity on the chiral anomaly coefficient is the same for both single right-handed (our case) and mixed-chirality ([@Kharzeev:2010gd] case) plasmas. Our result shows that the velocity of the CMW may also depend on the gravitational anomaly coefficient, once the effect of the gravitational anomaly is taken into account in computing the anomalous transport coefficients.
[|l|c|c|]{} Type of mode & Dispersion relation & Amplitude\
sound & $\omega_{1,2}^{(0)}(\boldsymbol{k})=\,\pm v_{s} k$ & $\delta \phi_{1,2}(k,\omega_{1,2}) = \left( \mp \frac{1}{\sqrt{\beta _1}},\hat{\boldsymbol{k}},0 \right)
$\
& &\
&&\
Chiral Alfvén &$\omega_{3,4}^{(0)}(\boldsymbol{k})=\,-\frac{\mathcal{D}}{2}\frac{T^2}{\bar{w}} \boldsymbol{B}.\boldsymbol{k}$ & $\delta \phi_{3}(k,\omega_3) = \left( 0,\hat{\boldsymbol{B}} \times \hat{\boldsymbol{k}},0 \right)$\
& & $\delta \phi_{4}(k,\omega_4) = \left( 0,\hat{\boldsymbol{B}}-(\hat{\boldsymbol{B}}.\hat{\boldsymbol{k}}) \hat{\boldsymbol{k}},0 \right)$\
& &\
CMW & $ \omega_{5}^{(0)}(\boldsymbol{k})=\,\left(\frac{\mathcal{C}}{\chi}-\frac{\mathcal{D}}{2}\frac{T^2}{\bar{w}}\right)\boldsymbol{B}.\boldsymbol{k}$ & $\delta \phi_{5}(k,\omega_{5}) = \left( 0,\boldsymbol{0},1 \right)$\
In table (\[tabelmodes3+1diss\]) we have listed the modified dispersion relations of the above modes when dissipative effects are also taken into account. In analogy with what was found in [@Abbasi:2015nka] for a neutral fluid, dissipation splits the degeneracy of the CAWs here. Interestingly, the chiral Alfvén wave is split into two chiral waves: one is a dissipating chiral Alfvén wave and the other is a dissipating mixed CMW/Alfvén wave. It should also be noted that the CMW turns into a mixed CMW/Alfvén wave due to dissipation.
[|l|c|c|]{} Type of mode & Dispersion relation & $\sigma=\eta=\zeta=0$\
& &\
sound & $\omega_{1,2}(\boldsymbol{k})=\,\pm v_{s} k-\,\frac{i}{2}\left(\boldsymbol{k}^2\gamma_{s}+\,\frac{\sigma}{ \bar{w}}\boldsymbol{B}^2 \sin^2 \theta \right)$&$\omega_{1,2}^{(0)}(\boldsymbol{k})$\
& &\
& &\
Alfvén & $\omega_{3}(\boldsymbol{k})=-\frac{\mathcal{D}}{2}\frac{T^2}{\bar{w}} \boldsymbol{B}.\boldsymbol{k}-i\left(\boldsymbol{k}^2 \gamma_{\eta}+\frac{\sigma}{ \bar{w}}\boldsymbol{B}^2 \cos^2 \theta
\right)$ & $\omega_{3}^{(0)}(\boldsymbol{k})$\
& &\
mixed Alfvén/CMW & $\omega_{4,5}(\boldsymbol{k})=
\left(\frac{\mathcal{C}}{2\chi}-\frac{\mathcal{D}}{2}\frac{T^2}{\bar{w}}\right)\boldsymbol{B}.\boldsymbol{k}-\frac{i}{2}\left( \boldsymbol{k}^2 (D+\gamma_{\eta})+ \frac{\sigma}{\bar{w}}\,\boldsymbol{B}^2 \right) $ & $\omega_{4}^{(0)}(\boldsymbol{k})$\
& $\pm\frac{1}{2}\sqrt{\left(i \boldsymbol{k}^2(D-\gamma_{\eta})- i \frac{\sigma}{\bar{w}}\,\boldsymbol{B}^2- \frac{\mathcal{C}}{\chi}\boldsymbol{B}.\boldsymbol{k}\right)^2- \frac{4 D \sigma}{\bar{w}}\boldsymbol{B}^2\,\boldsymbol{k}^2\sin^2 \theta}$ & $\omega_{5}^{(0)}(\boldsymbol{k})$\
Physical interpretation
-----------------------
In [@Yamamoto:2015ria], it has been discussed how an external magnetic field provides the restoring force needed for the propagation of Chiral Alfvén waves in an anomalous charged fluid. Here we give some more detail on how the Lorentz force can cause dissipation at the same time that it plays the role of a restoring force on the chiral current. We also discuss why, in this set-up, only $\delta \phi_4$ propagates.
Let us consider a chiral Alfvén wave in an incompressible fluid. As can be seen in table \[tabelmodes3+1diss\], the magnetic field contributes through two terms in the expression for $\omega_3$: the first term leads to the propagation of a chiral wave, while the last term makes the propagating chiral wave dissipate.
In order to understand how the magnetic field plays the role of a restoring force at the same time that it forces the wave to dissipate, we consider the following set-up: let us take the magnetic field in the positive $z$-direction and consider a perturbation of the fluid momentum[^5] in the positive $y$-direction, $\boldsymbol{\pi}=\pi_y(z)\hat{\boldsymbol{y}}$ [@Yamamoto:2015ria]. As one expects, the momentum perturbation induces an Ohmic current $\boldsymbol{J}_{\sigma}=\frac{\sigma}{\bar{w}} \boldsymbol{\pi} \boldsymbol{\times} \boldsymbol{B}$. In addition, since the fluid is chiral, the local vortical current $\boldsymbol{J}_{\omega}=\mathcal{D} T^2 \boldsymbol{\omega}$ is induced too. In the presence of a magnetic field, these currents experience the Lorentz forces $\boldsymbol{F}_{\sigma}$ and $\boldsymbol{F}_{\omega}$, respectively.
In Fig.\[fig:CAW\] we have illustrated the Lorentz forces exerted on an element of the fluid at the origin in two different situations. In the left panel we assume $\pi_y > 0$ and $\partial_z \pi_y > 0$. We also take the fluid to be incompressible, i.e., $\boldsymbol{\nabla}.\boldsymbol{v}=0$, so the wave-vector $\boldsymbol{k}$ points along the negative $z$-direction (see $\omega^{\text{nd}}_3(\boldsymbol{k})$). For this local fluid momentum, the vorticity $\boldsymbol{\omega}=\boldsymbol{\nabla}\times \boldsymbol{v}$ points along the negative $x$-direction. The resultant currents $\boldsymbol{J}_{\omega}$ and $\boldsymbol{J}_{\sigma}$ are shown in the figure. At the bottom of the figure, we have illustrated the fluid element as a point oscillating on the $y$-axis near the origin, with the Lorentz forces exerted on it. In the situation considered above, the momentum of the element is increasing, due to both the shape of the momentum profile and the direction of wave propagation; at this moment the fluid element behaves like an oscillator approaching the center of oscillation from the left-hand side. As can be clearly seen in the figure, $\boldsymbol{F}_{\omega}$ plays the role of the restoring force while $\boldsymbol{F}_{\sigma}$ makes the momentum of the element dissipate.
Let us consider another situation in which $\boldsymbol{F}_{\omega}$ and $\boldsymbol{F}_{\sigma}$ act on the element in the same direction. To this end, in the right panel of Fig.\[fig:CAW\] we take $\pi_y > 0$ and $\partial_z \pi_y <0$. Compared to the left-panel case, the direction of the vorticity is reversed here; as a result, $\boldsymbol{J}_{\omega}$ and $\boldsymbol{F}_{\omega}$ are reversed too. Since the momentum of the element is now decreasing, it behaves like an oscillator moving away from the center. Although both $\boldsymbol{F}_{\omega}$ and $\boldsymbol{F}_{\sigma}$ act in the same direction, the former is a restoring force pointing towards the center while the latter is a friction-like force pointing opposite to the velocity.
![Configuration of the vectors ${\bm B}$, ${\bm \pi}$, ${\bm \omega}$, ${\bm J}_{\sigma}$, ${\bm J}_{\omega}$, ${\bm F}_{\sigma}$ and ${\bm F}_{\omega}$ for the propagation of $\delta\phi_4$[]{data-label="fig:CAW"}](figs.pdf)
In the set-up considered above, we took the wave vector along the direction of the magnetic field. From table (\[tabelmodes3+1\]), it is obvious that the CAW with amplitude $\delta \phi_3$ is not excited in this situation ($\hat{\boldsymbol{B}} \times \hat{\boldsymbol{k}}=0$). However, the amplitude $\delta \phi_4$ does propagate. The reason is as follows: let $\theta$ be the angle between the wave vector and the direction of the magnetic field. In order to study the $\hat{\boldsymbol{B}}\parallel \hat{\boldsymbol{k}}$ case in the expression for $\delta \phi_4$, we keep the wave vector fixed in space and rotate the magnetic field around it so that $\theta \rightarrow 0$. Having $$\lim_{\theta \rightarrow 0} \big(\hat{\boldsymbol{B}}-(\hat{\boldsymbol{B}}.\hat{\boldsymbol{k}}) \hat{\boldsymbol{k}}\big)=\,\lim_{\theta \rightarrow 0}\big(\boldsymbol{\mathcal{R}(\theta)}\hat{\boldsymbol{k}}- \boldsymbol{1} \cos \theta \hat{\boldsymbol{k}}\big)=\,\boldsymbol{1}$$ with $\mathcal{R}(\theta)$ the rotation matrix, simply confirms why the amplitude $\delta \phi_4$ propagates when $\hat{\boldsymbol{B}}\parallel \hat{\boldsymbol{k}}$.
An interesting point about $\delta \phi_3$ is that even if the magnetic field were transverse to the wave vector, this mode could not propagate; the reason is that in this case no restoring force exists to make $\delta \phi_3$ propagate. Mathematically, it is obvious that in this limit ($\boldsymbol{B}.\boldsymbol{k}\rightarrow 0$) $\omega_3$ vanishes. This means that $\delta \phi_3$ may propagate for any orientation of the magnetic field except parallel or transverse to the wave vector.
In summary, while only two propagating waves (the sound modes) can exist in a normal fluid, a neutral chiral fluid may support five hydrodynamic waves. To excite the three new waves, an external magnetic field is needed. The external magnetic field affects an anomalous fluid in two ways: 1) it provides the restoring force for the propagation of chiral waves, and 2) it causes dissipation by inducing Ohmic currents.
Non-vanishing Chemical Potential {#sec4}
================================
Now, let us consider a fluid of single right-handed fermions at finite density, namely at finite chiral chemical potential. Up to first order in the derivative expansion, the linearized equations are at most second order in derivatives and take the following form: $$\begin{aligned}
\label{eqmotionchemic}
&\partial_t \delta \epsilon + \partial_i \pi_i = 0, \nonumber\\
&\partial_t \pi_i +\beta_1 \partial_i \delta\epsilon +\beta_2 \partial_i \delta n - \frac{\bar{n}}{\bar{w}} \, \epsilon^{ijl} \pi^j B^l - \frac{\xi}{2\bar{w}}\left( B^l\partial_l \pi^i - B^l \partial^i \pi_l \right) = 0,\\
&\partial_t \delta n +\frac{\xi_B}{\bar{w}} B_i \partial_t \pi_i + \frac{\bar{n}}{\bar{w}} \; \partial_i \pi_i + B_i \;\left[ \left(\frac{\partial \xi_B}{\partial \epsilon}\right)_n \partial_i \,\delta \epsilon+ \left(\frac{\partial \xi_B}{\partial n}\right)_\epsilon \partial_i \, \delta n \right] = 0, \nonumber\end{aligned}$$ where $\bar{w}=\bar{\epsilon}+\bar{p}$ is the value of equilibrium enthalpy density. We have also used: $$\begin{aligned}
\delta p &= \beta_1 \delta\epsilon+ \beta_2 \delta n, \\
\delta \xi_B &= \left(\frac{\partial \xi_B}{\partial \epsilon}\right)_n \delta \epsilon+ \left(\frac{\partial \xi_B}{\partial n}\right)_\epsilon\delta n.
\end{aligned}$$ In momentum space equations \[eqmotionchemic\] may be written as: $$\begin{gathered}
- i\omega \delta \epsilon + i k_i \pi_i = 0, \nonumber\\
+\beta_1 ik_i \delta\epsilon -i\omega \pi_i - \frac{\bar{n}}{\bar{w}} \, \epsilon^{ijl} \pi_j B^l - \frac{\xi}{2\bar{w}}\left( B^l ik_l \pi_i - B^l ik_i \pi_l \right) +\beta_2 ik_i \delta n = 0,\\
B_i \; \left(\frac{\partial \xi_B}{\partial \epsilon}\right)_n ik_i \,\delta \epsilon + \frac{\bar{n}}{\bar{w}} \; ik_i \pi_i -\frac{\xi_B}{\bar{w}} B_i i\omega \pi_i - i\omega \delta n + B_i \left(\frac{\partial \xi_B}{\partial n}\right)_\epsilon ik_i \, \delta n = 0. \nonumber\end{gathered}$$ Due to the presence of a term containing $\omega\pi$, it is not possible to recast the above equations in a matrix form analogous to \[eqsuperfield\]. Instead, we may express them as: $$\label{eq:lin_mat}
M_{ab}(\boldsymbol{k} , \omega) \delta \phi_a (\boldsymbol{k} , \omega) = 0,$$ with $M_{ab}$ given as:
$$M_{ab} =
\left( {\begin{array}{ccc}
- i\omega & i k_j & 0 \\
\beta_1 ik^i & -i\omega \delta^i_j -i \frac{\xi}{2\bar{w}} \left(\boldsymbol{B} \cdot \boldsymbol{k} \delta^i_j - B_j k^i \right) - \frac{\bar{n}}{\bar{w}} \epsilon^i \,_{jl}B^l & \beta_2 ik_i \\
\left(\frac{\partial \xi_B}{\partial \epsilon}\right)_n i \boldsymbol{B} \cdot \boldsymbol{k} & \frac{\bar{n}}{\bar{w}} i k_j - \frac{\xi_B}{\bar{w}} i\omega B_j & -i\omega + \left(\frac{\partial \xi_B}{\partial n}\right)_\epsilon i \boldsymbol{B} \cdot \boldsymbol{k}
\end{array} } \right).$$
For a non-trivial solution to exist, it is necessary to require $$\label{eq:det_M}
\det M_{ab}=0.$$ This equation gives the dispersion relations of the hydrodynamic excitations. In the next subsections, we first study the hydrodynamic modes of a charged fluid at finite chemical potential in the presence of a background magnetic field and in the absence of anomalies. We then turn on the anomalies and study their effect on the hydrodynamic regime of a charged fluid at finite chiral chemical potential.
The hydrodynamic modes in the absence of anomalies
--------------------------------------------------
The equations that yield the hydrodynamic modes and their associated amplitudes in this regime become: $$\label{eq:no_anomaly_det_M}
\begin{aligned}
\left( M_{ab}(\boldsymbol{k} , \omega)\big|_{\xi=0,\,\xi_B=0} \right) \delta \phi_a (\boldsymbol{k} , \omega) = 0,\\
\det\left( M_{ab}\big|_{\xi=0,\,\xi_B=0}\right)=0.
\end{aligned}$$ In what follows we take $B_{\perp}$ and $B_{\parallel}$ to be the components of the magnetic field orthogonal and parallel to the wave vector $\hat{k}$, respectively. We divide the study of the collective motions into three cases:
- $B_{\parallel} \neq 0 \, , \, B_{\perp} = 0$ Note that the only objects we have in this case are two parallel vectors, so we are not able to present the amplitudes in a covariant way. We freely take the wave-vector and the magnetic field both along the $z$-axis. With this choice, the dispersion relations of the modes and their amplitudes are as given in table (\[tabelmodesChemicNoAnomaly\]). (We have defined $\bar{\beta}= \beta _2 \bar{n}+\beta _1 \bar{w}$.)
[|l|c|]{} Mode & Eigen Vector\
$\omega_{1}^{(0)}=0$ & $\delta \phi_1(k,\omega_1) = \left(-\frac{\beta_2}{\beta_1},\,0,\,0,\,0,\,1 \right)$\
&\
&\
$\omega_{2,3}^{(0)} = \pm \frac{\bar{n} B_{\parallel}}{\bar{w}} = \pm \frac{\bar{n} B}{\bar{w}}$ &$
\delta \phi_{2,3}(k,\omega_{2,3}) = \left( 0,\,\pm i,\,1,\,0,\,0 \right),\nonumber$\
&\
&\
$\omega_{4,5}^{(0)} = \pm k \sqrt{\frac{\bar{\beta}}{\bar{w}}}$ & $ \delta \phi_{4,5}(k,\omega_{4,5}) = \left( 1,\,0,\,0,\,\pm\sqrt{\frac{\bar{\beta}}{\bar{w}}},\, \frac{\bar{n}}{\bar{w}}\right)$\
&\
Among the non-zero modes given in the table, let us first interpret $\omega_{2,3}$. The frequency of these modes is obviously independent of the wave vector, so they are non-propagating modes. Since $\hat{\boldsymbol{k}}.\delta \phi_{2,3}=0$, they simply represent two circularly polarized standing waves of the transverse momenta. The presence of such vortex-like rotating modes is the consequence of the Lorentz force acting on the transverse momenta. This is a specific feature of the charged fluid and cannot be observed in a neutral fluid, even in the presence of a magnetic field. In the next subsection we show that when $\bar{n}$ specifies the density of a chiral charge, these standing modes may both propagate due to the effect of the anomalies. In fact, the $\omega_{2,3}$ found in the present case are nothing but the gap of the chiral Alfvén waves in a charged fluid.
Now let us consider the modes $\omega_{4,5}$. These are simply the longitudinal sound modes ($\hat{\boldsymbol{k}} \parallel \delta \phi_{4,5} $) whose velocity differs from the sound velocity in a neutral fluid ($v_s$): $$\label{speadsoundchemic}
\omega_{4,5}=\pm k \sqrt{\frac{\bar{\beta}}{\bar{w}}}=\pm k \sqrt{\beta_1+\frac{\beta_2 \bar{n}}{\bar{w}}}=\pm k \sqrt{v_s^2+\frac{\beta_2 \bar{n}}{\bar{w}}}$$ As can be clearly seen, the speed of sound in this case has nothing to do with the parallel magnetic field; the difference relative to $v_s$ is only due to the presence of a non-vanishing charge in the fluid. This can also be simply understood by considering the fluctuation of the pressure: $$\delta p=\beta_1 \delta \epsilon+\beta_2 \delta n=\left(\frac{\partial p}{\partial \epsilon}\right)_n \delta \epsilon+\left(\frac{\partial p}{\partial n}\right)_{\epsilon} \delta n.$$ In the limit $\bar{n}=0$ the second term vanishes and only the energy perturbation contributes to the propagation of sound. However, at non-vanishing charge, the second term leads to the appearance of a new contribution under the square root in (\[speadsoundchemic\]). In other words, charge density perturbations may produce a pressure gradient, just as energy density perturbations do.
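The chain of equalities in (\[speadsoundchemic\]) is elementary but easy to check mechanically. The following sketch (using sympy; all equilibrium quantities are treated as free positive symbols, and $v_s^2=\beta_1$ as implied above) verifies it together with the neutral limit $\bar{n}\to 0$:

```python
import sympy as sp

# positive symbols for the equilibrium data; beta1 plays the role of v_s^2
b1, b2, nbar, wbar = sp.symbols('beta1 beta2 nbar wbar', positive=True)
beta_bar = b2 * nbar + b1 * wbar          # \bar{beta} = beta2*nbar + beta1*wbar

lhs = beta_bar / wbar                     # (omega_{4,5}/k)^2 from the table
rhs = b1 + b2 * nbar / wbar               # v_s^2 + beta2*nbar/wbar

assert sp.simplify(lhs - rhs) == 0        # the two forms coincide
assert sp.limit(lhs, nbar, 0) == b1       # neutral limit: ordinary sound speed v_s^2
```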
------------------------------------------------------------------------
- $B_{\parallel} = 0 \, , \, B_{\perp} \neq 0$ To evaluate the amplitudes of the fluctuations in this case, we first take the wave vector along the $z$-axis and the magnetic field along the $y$-axis. In this special frame we obtain: $$\begin{gathered}
\label{ampBtrans}
\delta \phi_a(k,\omega_1) = \left(-\frac{\beta _2}{\beta _1},0,0,0,1 \right),\\
\delta \phi_a(k,\omega_2) = \left( 0,0,1,0,0 \right),\nonumber\\
\delta \phi_a(k,\omega_3) = \left(-\frac{i \bar{n} }{\beta _1 \bar{w} },\frac{k}{B},0,0,0 \right), \nonumber\\
\delta \phi_a(k,\omega_{4,5}) = \left(\bar{w} ,- i \bar{n} \frac{B}{k},0,\pm \frac{1}{k} \sqrt{\bar{n}^2 B^2+\bar{w} k^2 \bar{\beta}},\bar{n} \right). \nonumber\end{gathered}$$ As shown in table (\[tabelmodesChemicNoAnomalyBtrans\]), the only non-zero modes in this case are $\omega_{4,5}$. With two elliptic polarizations in the $x$-$z$ plane, these modes propagate in the $z$-direction. As a result, they are neither transverse nor longitudinal. Similar to the sound wave, the propagation of the $\pi_z$ perturbations along $z$ is due to the pressure gradient. However, the magnetic field (directed along the $y$-direction) exerts a Lorentz force on the $\pi_{x}$ perturbation, which plays the role of an extra pressure gradient in the $z$-direction. To show this increase in the sound speed mathematically, let us rewrite the last two dispersion relations with the substitution $|\boldsymbol{B}| = \alpha_{\perp} k$: $$\begin{gathered}
\omega_{4,5} = \pm k \frac{1}{\bar{w}}\sqrt{\bar{n}^2 \alpha_{\perp}^2+\bar{w}\bar{\beta} }. \nonumber\end{gathered}$$ Obviously, the velocity of these modes is greater than the speed of sound: $$\label{sound Chemic trans}
v=\sqrt{v_s^2+\frac{\beta_{2}\bar{n}}{\bar{w}}+\frac{\bar{n}^2 \alpha_{\perp}^2}{\bar{w}^2}}.$$ In addition to the pure contribution of the finite density, namely the term $\beta_2 \bar{n}/\bar{w}$ under the square root, there exists another contribution. The origin of the term $\bar{n}^2\alpha_{\perp}^2/\bar{w}^2$ is the Lorentz force, discussed in the previous paragraph, exerted on the fluid.
[|l|c|]{} Mode & Eigen Vector\
& $\delta \phi_1(k,\omega_1) = \left(-\frac{\beta _2}{\beta _1},\boldsymbol{0},1 \right)$\
$\omega_{1,2,3}^{(0)}=0$ & $\delta \phi_2(k,\omega_2) = \left( 0,\frac{\boldsymbol{B}}{B},0 \right)$\
& $\delta \phi_3(k,\omega_3) = \left(-\frac{i \bar{n}}{\beta _1 \bar{w} },\frac{\boldsymbol{B} \times \boldsymbol{k}}{B^2},0 \right)$\
$\omega_{4,5}^{(0)} = \pm \frac{1}{\bar{w}}\sqrt{\bar{n}^2 B^2+\bar{w}\bar{\beta} k^2 } $ & $ \delta \phi_{4,5}(k,\omega_{4,5}) = \left(\bar{w} , - i \bar{n} \ \frac{\boldsymbol{B} \times \boldsymbol{k}}{k^2} \ \pm \ \frac{\boldsymbol{k}}{k^2} \sqrt{\bar{n}^2 B_{\perp}^2+\bar{w} k^2 \bar{\beta}},\bar{n} \right)$\
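Likewise, the velocity in (\[sound Chemic trans\]) can be checked against $\omega_{4,5}/k$ after the substitution $|\boldsymbol{B}|=\alpha_\perp k$. A short symbolic sketch (comparing squared speeds, since both are positive; the equilibrium values are free positive symbols and $v_s^2=\beta_1$):

```python
import sympy as sp

b1, b2, nbar, wbar, alpha = sp.symbols('beta1 beta2 nbar wbar alpha_perp',
                                       positive=True)
beta_bar = b2 * nbar + b1 * wbar          # \bar{beta} as defined above

# squared mode velocity (omega_{4,5}/k)^2 with |B| = alpha_perp * k
v2_mode = (nbar**2 * alpha**2 + wbar * beta_bar) / wbar**2
# squared velocity claimed in the text, with v_s^2 = beta1
v2_claim = b1 + b2 * nbar / wbar + nbar**2 * alpha**2 / wbar**2

assert sp.simplify(v2_mode - v2_claim) == 0
```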
------------------------------------------------------------------------
- $B_{\parallel} \neq 0 \, , \, B_{\perp} \neq 0$ Let us take the wave vector along the $z$-axis, while the magnetic field has the two components $B_\perp$ and $B_\parallel$ along the $y$- and $z$-axes, respectively. The corresponding amplitudes take the following form in this frame: $$\begin{gathered}
\delta \phi_a(k,\omega_1) = \left(-\frac{\beta _2}{\beta _1},0,0,0,1 \right),\\
\delta \phi_a(k,\omega_i) = \left( \bar{w},\frac{-i \bar{n} \ B_{y} \, \omega_i^2}{k \left( \omega_i^2 - \bar{n}^2B_{z}^2 /\bar{w}^2\right)},\frac{-2\, \bar{n}^2 \, B_{y} B_{z} \, \omega_i/\bar{w}}{k \left( \omega_i^2 - \bar{n}^2B_{z}^2/\bar{w}^2 \right)},\frac{ \omega_i\bar{w}}{ k}, \bar{n} \right) , \qquad i =2,3,4,5. \nonumber\end{gathered}$$ With the following definitions, we list the dispersion relations of the modes together with their covariant amplitudes in table (\[tabelmodesChemicNoanomalyboth B\]). $$\begin{gathered}
\label{eq:delta_a}
a = \bar{n}^2 B^2 + \bar{w} k^2 \bar{\beta},\\
\Delta =a^2-4 \bar{n}^2 \bar{w} (\boldsymbol{B}\cdot \boldsymbol{k})^2 \bar{\beta} \nonumber\end{gathered}$$ Obviously, each of the non-zero modes is a mixture of longitudinal and transverse propagation. However, it is clear that, in general, none of these modes is a pure sound mode. All of them are modified sound modes which propagate with elliptic polarization in a plane neither parallel nor transverse to the wave vector. Only in the special case $B_{\perp}\rightarrow 0$ do two of these modes become the ordinary longitudinal sound waves. It is also worth mentioning that, for any direction of the magnetic field, one of the five possible hydrodynamic modes will never be excited in the fluid. In the next subsection we show that anomaly effects may excite this fifth mode.
[|l|c|]{} Mode & Eigen Vector\
$\omega_{1}^{(0)}=0$ & $\delta \phi_1(k,\omega_1) = \left(-\frac{\beta _2}{\beta _1},0,0,0,1 \right) $\
$ \omega_{i}^{(0)} = \pm \frac{\sqrt{a\pm\sqrt{\Delta}}}{\sqrt{2} \bar{w}} $ & $\delta \phi_{i}(k,\omega_i) = \left( \bar{w}\, , \, \frac{-i \bar{n} \ \boldsymbol{k} \times \boldsymbol{B}\, \omega_i^2 - 2\frac{\bar{n}^2}{\bar{w}} \left( \boldsymbol{B} \cdot \boldsymbol{k} \right) \left( \boldsymbol{B} - (\boldsymbol{B} \cdot \boldsymbol{k}) \frac{\boldsymbol{k}}{k^2} \right) \omega_i }{\left( \omega_i^2 k^2 - \bar{n}^2 \left(\boldsymbol{B} \cdot \boldsymbol{k}\right)^2 /\bar{w}^2\right)} + \frac{ \omega_i\bar{w}}{ k^2} \boldsymbol{k} \, , \, \bar{n} \right)$,\
$\qquad i =2,3,4,5$ & $\qquad i =2,3,4,5$\
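The reduction of these general-direction modes to the previous cases can be verified numerically. With $\boldsymbol{B}\parallel\boldsymbol{k}$ one has $\sqrt{\Delta}=|\bar{n}^2B^2-\bar{w}\bar{\beta} k^2|$, so the two positive branches $\sqrt{a\pm\sqrt{\Delta}}/(\sqrt{2}\,\bar{w})$ collapse onto the CAW gap $\bar{n}B/\bar{w}$ and the sound frequency $k\sqrt{\bar{\beta}/\bar{w}}$ of table (\[tabelmodesChemicNoAnomaly\]). A sketch with arbitrary (purely illustrative) equilibrium numbers:

```python
import numpy as np

# illustrative equilibrium values (not tied to any particular fluid)
nbar, wbar, beta_bar, k, B = 0.7, 2.0, 1.3, 0.5, 0.9

# B parallel to k, so (B.k)^2 = B^2 k^2
a = nbar**2 * B**2 + wbar * k**2 * beta_bar
Delta = a**2 - 4 * nbar**2 * wbar * (B * k)**2 * beta_bar

w_fast = np.sqrt((a + np.sqrt(Delta)) / 2) / wbar
w_slow = np.sqrt((a - np.sqrt(Delta)) / 2) / wbar

caw_gap = nbar * B / wbar              # chiral Alfven wave gap
sound = k * np.sqrt(beta_bar / wbar)   # ordinary sound frequency

assert np.isclose(max(w_fast, w_slow), max(caw_gap, sound))
assert np.isclose(min(w_fast, w_slow), min(caw_gap, sound))
```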
------------------------------------------------------------------------
Anomalous fluid with finite chiral chemical potential
-----------------------------------------------------
In this subsection we take the effect of anomalies into account. It is important to note that the effect of anomalies enters via the anomalous transport coefficients. These coefficients appear at first order in the hydrodynamic derivative expansion, so everything we compute in this subsection amounts to derivative-type corrections to the results of the previous subsection. We limit our study to computing the derivative corrections to the dispersion relations. In analogy with the case of the charged fluid with no anomaly, we divide the study into three different cases.
------------------------------------------------------------------------
- $B_{\parallel} \neq 0 \, , \, B_{\perp} = 0$
Using the modes given in table (\[tabelmodesChemicNoAnomaly\]) as the zero-order solution to equation (\[eq:det\_M\]), we find the collective excitations to first order as follows: $$\label{anomalymodeparallel}
\begin{split}
\omega_1 =& \frac{\bar{w}}{\bar{\beta}}\left(\beta _1 \left(\frac{\partial \xi_B}{\partial n}\right)_\epsilon - \beta _2 \left(\frac{\partial \xi_B}{\partial \epsilon}\right)_n\right) B k \\
\omega_{2,3} =&\pm \frac{\bar{n} }{\bar{w}}B - \frac{\xi }{2 \bar{w}}B k\\
\omega_{4,5} = &\pm \frac{1}{\bar{w}}\sqrt{\bar{n}^2 B^2+\bar{w}\bar{\beta} k^2 } +\frac{\beta _2}{2 \bar{\beta}}\left(\bar{w} \left(\frac{\partial \xi_B}{\partial \epsilon}\right)_n + \bar{n} \left(\frac{\partial \xi_B}{\partial n}\right)_\epsilon - \frac{\xi _B}{\bar{w}} \bar{\beta}\right) B k
\end{split}$$ Let us recall that $B, k \sim O(\partial)$ and that, to first order in the derivative expansion of the hydrodynamic constitutive relations, the dispersion relations would normally include terms with at most two derivatives.
The first mode, $\omega_1$, denotes the CMW in a chiral charged fluid. Compared to the CMW in a neutral chiral fluid given in (\[first CMW\]), we find a new contribution: $$-\frac{\bar{w}}{\bar{\beta}}\,\beta_2 \left(\frac{\partial \xi_B}{\partial \epsilon}\right)_n\,B k=\,-\frac{\beta_2 \bar{n}}{2 \bar{\beta} \bar{w}}\left(\mathcal{C}\mu^2+\mathcal{D}T^2\right) B k.$$ The next two modes, namely $\omega_{2,3}$, are nothing but CAWs. As we explained in the previous subsection, the net chiral charge of the fluid makes the CAW gapped. The most interesting feature of the results might be related to the last two sound modes. We recall from the case of a neutral chiral fluid that the sound modes are not affected by the anomaly. However, when the fluid is chirally charged, the last line of (\[anomalymodeparallel\]) shows that the sound modes become dispersive. This is seen more clearly by writing the modes with the substitution $|\boldsymbol{B}| = \alpha_{\parallel} k$: $$\begin{gathered}
\omega_1 =\frac{\bar{w}}{\bar{\beta}} \left(\beta _1 \left(\frac{\partial \xi_B}{\partial n}\right)_\epsilon - \beta _2 \left(\frac{\partial \xi_B}{\partial \epsilon}\right)_n\right) \alpha_{\parallel} k^2 ,\\
\omega_{2,3} = \pm \frac{\bar{n}}{\bar{w}} \alpha_\parallel k - \frac{\xi }{2 \bar{w}} \alpha_\parallel k^2,\nonumber\\
\omega_{4,5} = \pm \sqrt{\frac{\bar{\beta}}{\bar{w}}} k + \frac{\beta _2}{2 \bar{\beta}} \left(\bar{w} \left(\frac{\partial \xi_B}{\partial \epsilon}\right)_n + \bar{n} \left(\frac{\partial \xi_B}{\partial n}\right)_\epsilon - \frac{\xi _B}{\bar{w}} \bar{\beta}\right) \alpha_\parallel k^2. \nonumber\end{gathered}$$ The dispersive part of the sound only exists when the chiral density is finite.
------------------------------------------------------------------------
- $B_{\parallel} = 0 \, , \, B_{\perp} \neq 0$
The values of the frequencies do not differ from those of a non-anomalous fluid in this case (see table (\[tabelmodesChemicNoAnomalyBtrans\])). This means that anomaly effects cannot be detected in directions transverse to the magnetic field, even if the fluid is chirally charged. In other words, no first-order parity-odd correction contributes to the collective excitations in the direction transverse to the magnetic field.
------------------------------------------------------------------------
- $B_{\parallel} \neq 0 \, , \, B_{\perp} \neq 0$
In this part we give the results corresponding to the propagation of hydrodynamic waves in an arbitrary direction with respect to an external magnetic field. At zero order in the derivative expansion, there exist two modified sound excitations ($\omega_{4,5}$) in addition to two mixed transverse-longitudinal waves ($\omega_{2,3}$): $$\begin{gathered}
\omega^{(0)}_1 = 0,\\
\omega^{(0)}_{2,3} = \pm \frac{\sqrt{-\sqrt{\Delta}+a}}{\sqrt{2} \bar{w}}, \nonumber\\
\omega^{(0)}_{4,5} = \pm \frac{\sqrt{\sqrt{\Delta}+a}}{\sqrt{2} \bar{w}}, \nonumber \end{gathered}$$ where $\Delta$ and $a$ are defined in (\[eq:delta\_a\]). The fully covariant first-order corrections are given in table (\[tabelmodesGeneral\]).
[|l|c|]{} Zero Order & First Order\
$\omega^{(0)}_1 = 0$ & $\omega^{(1)}_1 = \frac{\bar{w}}{\bar{\beta}} \left(\boldsymbol{B} \cdot \boldsymbol{k}\right) \left(\beta _1 \left(\frac{\partial \xi_B}{\partial n}\right)_\epsilon - \beta _2 \left(\frac{\partial \xi_B}{\partial \epsilon}\right)_n\right)$\
$\omega^{(0)}_{2,3} = \pm \frac{\sqrt{a-\sqrt{\Delta}}}{\sqrt{2} \bar{w}}$ & $\omega^{(1)}_{2,3}= \frac{\left(\boldsymbol{B} \cdot \boldsymbol{k}\right)}{4 \sqrt{\Delta } \bar{w}^2 } \left( - \left(2 \bar{w}^2 k^2 -4 \bar{n}^2 \bar{w}^2 \frac{\left( \boldsymbol{B} \cdot \boldsymbol{k} \right)^2}{a-\sqrt{\Delta}}\right) \left(\beta _2 \left(-\bar{\beta} \xi _B+\bar{w}^2 \left(\frac{\partial \xi_B}{\partial \epsilon}\right)_n + \bar{n} \bar{w} \left(\frac{\partial \xi_B}{\partial n}\right)_\epsilon\right)+\bar{\beta} \xi \right)-2 \sqrt{\Delta } \xi \bar{w} \right)$\
$\omega^{(0)}_{4,5} =\pm \frac{\sqrt{a+\sqrt{\Delta}}}{\sqrt{2} \bar{w}}$ & $\omega^{(1)}_{4,5} = \frac{ \left(\boldsymbol{B} \cdot \boldsymbol{k}\right)}{4 \sqrt{\Delta } \bar{w}^2} \left( \left(2 \bar{w}^2 k^2 -4 \bar{n}^2 \bar{w}^2 \frac{\left( \boldsymbol{B} \cdot \boldsymbol{k} \right)^2}{a+\sqrt{\Delta}}\right) \left(\beta _2 \left(-\bar{\beta} \xi _B+\bar{w}^2 \left(\frac{\partial \xi_B}{\partial \epsilon}\right)_n + \bar{n} \bar{w} \left(\frac{\partial \xi_B}{\partial n}\right)_\epsilon\right)+\bar{\beta} \xi \right)-2 \sqrt{\Delta } \xi \bar{w} \right)$\
The presence of a factor $\boldsymbol{B} \cdot \boldsymbol{k}$ in front of all the $\omega^{(1)}$s is in agreement with our argument in part ($B_{\parallel} = 0 \, , \, B_{\perp} \neq 0$) of the current subsection. Apart from the CMW $\omega_1$, there exist four mixed dispersive modes. These four modes are, in general, mixed modified sound-Alfvén waves. In the special case $\boldsymbol{B} \parallel \boldsymbol{k}$, two of these modes become sound waves; the other two are CAWs appearing only at first order. That two of these modes vanish at zero order can be simply understood by noting that when $\boldsymbol{B} \parallel \boldsymbol{k}$, depending on whether $\bar{n}^2 B^2$ or $\bar{w} \bar{\beta} k^2$ is greater, one of the expressions $a+\sqrt{\Delta}$ and $a-\sqrt{\Delta}$ vanishes (see eqs. (\[eq:delta\_a\])), while all the $\omega^{(1)}$s remain non-vanishing.
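A quick consistency check: from the definitions (\[eq:delta\_a\]) one can show $0\le\Delta\le a^2$ for any orientation (using $(\boldsymbol{B}\cdot\boldsymbol{k})^2\le B^2k^2$ and the AM-GM inequality), so all four zero-order frequencies in the table are real. A numerical scan over random, purely illustrative parameters confirms this:

```python
import numpy as np

rng = np.random.default_rng(1)

for _ in range(1000):
    # random positive equilibrium data and a random B-k angle (illustrative)
    nbar, wbar, beta_bar, k, B = rng.uniform(0.1, 5.0, 5)
    cos_th = rng.uniform(-1.0, 1.0)
    Bk = B * k * cos_th                          # B . k

    a = nbar**2 * B**2 + wbar * k**2 * beta_bar
    Delta = a**2 - 4 * nbar**2 * wbar * Bk**2 * beta_bar

    # 0 <= Delta <= a^2, so sqrt(a ± sqrt(Delta)) is real in both branches
    assert Delta >= -1e-12
    assert a - np.sqrt(max(Delta, 0.0)) >= -1e-12
```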
------------------------------------------------------------------------
Parity violating fluid in $1+1$ dimensions {#sec5}
==========================================
Everything we have done so far was related to chiral fluids in 3+1 dimensions. As is well known, the chiral anomaly is also present in other even space-time dimensions. Knowing this fact, chiral fluids have also been studied in 1+1 dimensions in the literature. There are well-known results concerning anomalous transport in 1+1 dimensions found from both effective field theory [@Dubovsky:2011sk] and partition function [@Jain:2012rh] methods.
Specifically, the authors of [@Dubovsky:2011sk] have considered a Wess-Zumino-like term to account for the effect of anomalies. Interestingly, they have shown that in the spectrum of collective excitations of a chiral fluid in 1+1 dimensions, in addition to the two ordinary sound modes, there exists a new propagating mode: a right- or left-moving wave with a propagation speed that goes to zero with the anomaly coefficient.
Analogous to $3+1$ dimensions, the hydrodynamic spectrum can be found directly from the linearized hydrodynamic equations in $1+1$ dimensions. In what follows we study the linearized equations of chiral hydrodynamics in $1+1$ dimensions and show that there exist exactly three hydrodynamic modes, as found in [@Dubovsky:2011sk]. Furthermore, we rewrite the dispersion relation of each hydrodynamic mode as an explicit expression in the thermodynamic variables and the anomaly coefficient in the Landau-Lifshitz frame. Let us note that in [@Dubovsky:2011sk], the dispersion relations of the hydrodynamic modes have been given in the limit $c\rightarrow 0$ and in the entropy frame ($s^{\mu}=\bar{s} u^{\mu}$). The hydrodynamic equations for a chiral fluid in $1+1$ dimensional flat space-time in the presence of an external long-wavelength gauge field read $$\begin{split}
\partial_{\mu}T^{\mu \nu}=&\,F^{\nu \lambda} J_{\lambda}\\
\partial_{\mu} J^{\mu}=&\,c\, \epsilon_{\mu \nu}F^{\mu \nu}
\end{split}$$ with the anomaly coefficient $c$. The constitutive relations at zero order in derivative expansion are $$\label{T J 1+1}
\begin{split}
T^{\mu \nu}=& \,(\epsilon+p) u^{\mu} u^{\nu}+ p \,\eta^{\mu \nu} \\
J^{\mu}=& \,n u^{\mu} +\xi \tilde{u}^{\mu}.
\end{split}$$ where $\tilde{u}^{\mu}=\epsilon^{\mu \nu}u_{\nu}$ and the coefficient $\xi$ appearing in front of the parity violating term is an anomalous transport coefficient, given by [@Dubovsky:2011sk; @Jain:2012rh] $$\xi=\,c\left(\frac{\bar{n} \mu^2}{\bar{\epsilon}+\bar{p}}- 2 \mu\right)-d\,\frac{\bar{n}T^2}{\bar{\epsilon}+\bar{p}}.$$ Let us recall that in $3+1$ dimensions the anomaly effects arise from the first order in the derivative expansion; that in $1+1$ dimensions one anomalous coefficient exists even at zero order is simply due to the rank of the Levi-Civita tensor in $1+1$ dimensions.
To find the spectrum of the fluid, we have to specify the state of equilibrium. Since no magnetic field exists in $1+1$ dimensions, we take the state of equilibrium as $$u^{\mu}=(1,0),\,\,\,T=\text{Const.},\,\,\,\mu=0.$$ So the linearized hydrodynamic equations around the above state are $$\label{linearized1+1}
\begin{split}
& \partial_{t} \delta \epsilon +\, i k \pi=0\\
& \partial_{t}\pi+\,i k v_s^2 \delta \epsilon=0\\
& \partial_{t} n- \frac{\xi}{\bar{w}}\partial_t \pi+ i \left(\frac{\partial \xi}{\partial n}\right)_{\epsilon} k\, n\,=0.\\
\end{split}$$ In analogy with the case of a chiral fluid in $3+1$ dimensions, we take the super-field $\phi_a=\big(\delta \epsilon, \pi, n \big)$ and rewrite the above linearized equations in the form $ \partial_{t} \phi_{a}(t, \boldsymbol{k})+\,M_{a b}(\boldsymbol{k})\, \phi_{b}(t, \boldsymbol{k})=0$ with $$-i \omega \delta_{a b}+ M_{a b}(\boldsymbol{k})=
\left( {\begin{array}{ccc}
- i \omega & i k & 0\\
i k v_{s}^2 & - i \omega & 0 \\
0 & i \frac{\xi}{\bar{w}} \omega & - i \omega + i \left(\frac{\partial \xi}{\partial n}\right)_{\epsilon} k
\end{array} } \right).$$ Setting det($-i \omega \delta_{a b}+ M_{a b}(\boldsymbol{k})$) to zero, we find three hydrodynamic modes in an ideal chiral fluid in $1+1$ dimensions: $$\begin{split}
\omega_{1,2}(\boldsymbol{k})&=\,\pm v_s k=\,\pm\sqrt{\left(\frac{\partial p}{\partial \epsilon}\right)}\, k\\
\omega_{3}(\boldsymbol{k})&=\,\left(\frac{\partial \xi}{\partial n}\right)_{\epsilon} k=\,\left(-\frac{2 c }{\chi}-\frac{d T^2}{\bar{w}}\right) k.
\end{split}$$ Obviously, the modes $\omega_{1,2}$ are the sound modes. The third mode, $\omega_3$, must be the one-and-a-halfth sound mode found earlier through the effective field theory approach.
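The three roots can be confirmed directly from the determinant of the $3\times3$ matrix displayed above. A symbolic sketch (using sympy; `lam` stands for the equilibrium value of $(\partial\xi/\partial n)_\epsilon$):

```python
import sympy as sp

w, k, vs, xi, wbar, lam = sp.symbols('omega k v_s xi wbar lam')

# the matrix -i*omega*delta_ab + M_ab(k) from the text
M = sp.Matrix([
    [-sp.I * w,         sp.I * k,               0],
    [sp.I * k * vs**2, -sp.I * w,               0],
    [0,                 sp.I * xi / wbar * w,  -sp.I * w + sp.I * lam * k],
])
det = M.det()

# the three hydrodynamic modes: omega = ±v_s*k (sound) and omega = lam*k
for root in (vs * k, -vs * k, lam * k):
    assert sp.simplify(det.subs(w, root)) == 0

# the determinant is cubic in omega, so these exhaust the spectrum
assert sp.degree(sp.expand(det), w) == 3
```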
In summary, a chiral fluid in $1+1$ dimensions has three propagating modes: two ordinary sound waves and one chiral wave. Compared to [@Dubovsky:2011sk], we have computed the velocity of the chiral wave in terms of the anomaly coefficient and the values of the thermodynamic variables in equilibrium. An undetermined integration constant, namely $d$, is also present in the dispersion relation of this mode.
Summary and Outlook
===================
In this paper we computed the spectrum of hydrodynamic fluctuations for a chiral fluid in the presence of an external magnetic field. As one naturally expects, five distinct hydrodynamic modes may propagate in a fluid with one $U(1)$ global symmetry in 3+1 dimensions. When the current is conserved, only two of the five modes may be excited by perturbing the fluid. These are nothing but the sound modes. We have shown that when the $U(1)$ current is anomalous, an external magnetic field is able to turn on all five possible hydrodynamic excitations. In the limit of vanishing net chiral charge, the three new modes are as follows: a degenerate pair of Chiral Alfvén Waves and a Chiral Magnetic Wave. We have also shown that the degeneracy between the CAWs may be removed if the effects of dissipation are considered.
When the fluid is chirally charged, similarly to the previous case, five hydrodynamic modes may propagate. However, the character of the propagation is somewhat different from what happens in an uncharged chiral fluid. Here, the sound waves combine with the Chiral Alfvén waves into four mixed waves. The new mixed waves are neither transverse nor longitudinal. The only unchanged mode is the Chiral Magnetic Wave, in the sense that it remains a wave of scalar (density) perturbations, although its speed of propagation changes compared to the uncharged chiral fluid case.
While the main outcome of our computations is that the anomaly effects may macroscopically appear through hydrodynamic waves in the magnetic field, it is worth mentioning that these waves may propagate in any direction except the one perpendicular to the magnetic field.
What we have done in this work may be simply generalized to the case of a chiral fluid with both axial and vector currents. Whether the Chiral Alfvén wave propagates in the presence of two currents is an interesting question which might be important in quark-gluon plasma physics. We leave further investigation of this issue to our future work.
In another direction, very recently the author of [@Chernodub:2015gxa] has shown that in a charged fluid at zero chemical potential a new type of chiral wave, different from the chiral Alfvén wave, may propagate. Analogous to the chiral Alfvén wave, this new wave, namely the chiral heat wave, is associated with anomalous effects, with the difference that for the latter the necessary condition of propagation is the presence of a background vorticity $\boldsymbol{\omega}$ in the fluid. It would be interesting to investigate how dissipative processes affect the chiral heat waves. It would also be interesting to study the mixing of chiral Alfvén/heat waves in a fluid when considering both an external magnetic field and a constant vorticity in the fluid.
Another interesting problem is to study the Chiral Alfvén waves in holography. As is well known, to every long-wavelength perturbation of the Einstein equations in AdS$_5$ space there corresponds a fluid dynamical flow on the boundary of AdS. The latter statement is the main subject of the fluid/gravity duality [@Bhattacharyya:2008jc]. So it might be possible to determine to which gravity setup in the AdS bulk the CAW on the boundary of AdS corresponds.
Acknowledgements
================
We would like to thank Naoki Yamamoto for exchanging two emails. We would also like to thank Massimo Giovannini for discussion. K.H. wishes to thank the Institute for Research in Fundamental Sciences, School of Particles and Accelerators, for hospitality and partial financial support, and also acknowledges Hessamadin Arfaei for introducing him to the research group of A.D. and N.A.
Appendix- Comment on Parity violating fluid in $2+1$ dimensions {#App .unnumbered}
===============================================================
As we discussed in the text, the chiral anomaly is present only in even space-time dimensions. So the presence of parity violating terms in the hydrodynamic currents in odd space-time dimensions cannot be related to the anomaly. However, it is instructive to investigate how these terms affect the hydrodynamic transport in odd space-time dimensions.
In [@Jensen:2011xb], all transport coefficients of first order hydrodynamics in a parity broken system have been classified in $2+1$ dimensions. Furthermore, in the same paper the second law of thermodynamics, time reversal symmetry and properties of response functions have all been used to constrain the transport coefficients.
Due to the absence of anomalies in $2+1$ dimensions, we do not expect the parity violating terms introduced in [@Jensen:2011xb] to produce new propagating modes in the fluid. However, it is instructive to study the effect of these terms on the hydrodynamic modes. To proceed, we repeat the computations of the previous sections for a non-dissipative parity violating fluid in $2+1$ dimensions.
In $2+1$ dimensions, the first order corrections in (\[TJ\]) are given by: $$\label{E:T1J1L}
\begin{split}
& \tau^{\mu\nu} = \left( - \zeta \nabla_\alpha u^\alpha - \tilde{\chi}_B B - \tilde{\chi}_\omega \omega\right) \Delta^{\mu\nu}
-\eta \sigma^{\mu\nu} - \tilde{\eta} \tilde{\sigma}^{\mu\nu} \,, \\
& \nu^\mu = \sigma V^{\mu} + \tilde{\sigma} \tilde{V}^{\mu} + \tilde{\chi}_E \tilde{E}^{\mu} + \tilde{\chi}_T \epsilon^{\mu\nu\rho}u_{\nu} \nabla_{\rho} T \,.
\end{split}$$ with
\[E:defs\] $$\begin{aligned}
\label{E:OandB}
& \omega = -\epsilon^{\mu\nu\rho}u_{\mu} \nabla_{\nu} u_{\rho}, \,\,\,\,\,\,\,\,\,
B = -\frac{1}{2} \epsilon^{\mu\nu\rho}u_{\mu} F_{\nu\rho}, \\
& E^{\mu} = F^{\mu\nu}u_{\nu},\,\,\,\,\,\,\,\,\,
V^{\mu} = E^{\mu} - T P^{\mu\nu}\nabla_{\nu} \frac{\mu}{T}, \\
\label{E:sigmaDef}
& P^{\mu\nu} = u^{\mu}u^{\nu} + g^{\mu\nu}, \,\,\,\,\,\,\,\,\,\\
&
\sigma^{\mu\nu} = P^{\mu\alpha} P^{\nu\beta} \left(\nabla_{\alpha}u_{\beta} + \nabla_{\beta} u_{\alpha} - g_{\alpha\beta} \nabla_{\lambda} u^{\lambda} \right) \,,
\intertext{and}
&\tilde{E}^{\mu} = \epsilon^{\mu\nu\rho}u_{\nu}E_{\rho}\,,\,\,\,\,\,\,\,\,\,
\tilde{V}^{\mu} = \epsilon^{\mu\nu\rho}u_{\nu} V_{\rho}\,, \\
&\tilde{\sigma}^{\mu\nu} = \frac{1}{2} \left( \epsilon^{\mu\alpha\rho} u_{\alpha} \sigma_{\rho}^{\phantom{\rho}\nu} + \epsilon^{\nu\alpha\rho} u_{\alpha} \sigma_{\rho}^{\phantom{\rho}\mu} \right)\,.&
\end{aligned}$$
As before, the thermodynamic parameters $\bar{p}(\mu,T)$, $\bar{\epsilon}(\mu,T)$ and $\bar{n}(\mu,T)$ are the values of the pressure, energy density and charge density respectively in an equilibrium configuration in which $B=\omega=0$, where $B$ is the rest-frame magnetic field and $\omega$ the vorticity [^6].
Linearizing the hydrodynamic equations around the state $$u^{\mu}=(1,0,0),\,\,\,T=\text{Const.},\,\,\,\mu=0,$$ we find four hydrodynamic modes, two of which are zero modes that become the shear and heat modes once dissipative effects are accounted for. The other two modes are the sound modes $$\omega_{1,2 }(\boldsymbol{k})=\,\pm v_{s} k\left(1-\frac{T B}{2 \bar{w}}\left(\tilde{\chi}_T+ c_v\frac{\partial \tilde{\chi}_B}{\partial \epsilon}\right)\right).$$
The parameters $\tilde{\chi}_B$ and $\tilde{\chi}_T$ are not independent; indeed at $\mu=0$ they are specified in terms of a thermodynamic function $$\label{E:MMO}
\mathcal{M}_{B} = \frac{\partial P}{\partial B}$$ and its derivatives with respect to $T$ and $\mu$ [@Jensen:2011xb]. Our above result shows that the parity violating terms in $2+1$ dimensions can only affect the sound propagation speed in a magnetized fluid [@Abbasi:2015nka]. As a result, no new propagating mode appears due to parity breaking in $2+1$ dimensions. This result illustrates the importance of the relation between anomalies and parity violating terms in even space-time dimensions discussed earlier.
[10]{}
N. Yamamoto, “Chiral Alfvén Wave in Anomalous Hydrodynamics,” Phys. Rev. Lett. [**115**]{}, no. 14, 141601 (2015) \[arXiv:1505.05444 \[hep-th\]\].
D. E. Kharzeev, L. D. McLerran, and H. J. Warringa, Nucl. Phys. A [**803**]{}, 227 (2008).
K. Fukushima, D. E. Kharzeev, and H. J. Warringa, Phys. Rev. D [**78**]{}, 074033 (2008).
A. Vilenkin, Phys. Rev. D [**22**]{}, 3080 (1980).
A. Y. Alekseev, V. V. Cheianov, and J. Frohlich, Phys. Rev. Lett. [**81**]{}, 3503 (1998).
A. Vilenkin, Phys. Rev. D [**20**]{}, 1807 (1979).
J. Erdmenger, M. Haack, M. Kaminski, and A. Yarom, JHEP [**0901**]{}, 055 (2009).
N. Banerjee, J. Bhattacharya, S. Bhattacharyya, S. Dutta, R. Loganayagam, and P. Surowka, JHEP [**1101**]{}, 094 (2011).
D. T. Son and P. Surowka, Phys. Rev. Lett. [**103**]{} (2009) 191601 \[arXiv:0906.5044 \[hep-th\]\].
J. Bhattacharya, S. Bhattacharyya, S. Minwalla and A. Yarom, JHEP [**1405**]{} (2014) 147 \[arXiv:1105.3733 \[hep-th\]\].
Y. Neiman and Y. Oz, JHEP [**1103**]{}, 023 (2011).
K. Landsteiner, E. Megias, and F. Pena-Benitez, Phys. Rev. Lett. [**107**]{}, 021601 (2011); Lect. Notes Phys. [**871**]{}, 433 (2013).
D. T. Son and N. Yamamoto, Phys. Rev. Lett. [**109**]{}, 181602 (2012); Phys. Rev. D [**87**]{}, 085016 (2013).
M. A. Stephanov and Y. Yin, Phys. Rev. Lett. [**109**]{}, 162001 (2012).
J. -W. Chen, S. Pu, Q. Wang, and X. -N. Wang, Phys. Rev. Lett. [**110**]{}, 262301 (2013).
P. V. Buividovich, M. N. Chernodub, E. V. Luschevskaya and M. I. Polikarpov, Phys. Rev. D [**80**]{}, 054503 (2009) \[arXiv:0907.0494 \[hep-lat\]\].
P. V. Buividovich, M. N. Chernodub, D. E. Kharzeev, T. Kalaydzhyan, E. V. Luschevskaya and M. I. Polikarpov, Phys. Rev. Lett. [**105**]{}, 132001 (2010) \[arXiv:1003.2180 \[hep-lat\]\].
D. E. Kharzeev and H. U. Yee, Phys. Rev. D [**84**]{} (2011) 045025 \[arXiv:1105.6360 \[hep-th\]\].
K. Rajagopal and A. V. Sadofyev, arXiv:1505.07379 \[hep-th\].
P. Kovtun, J. Phys. A [**45**]{} (2012) 473001 \[arXiv:1205.5040 \[hep-th\]\].
N. Abbasi and A. Davody, JHEP [**1206**]{} (2012) 065 \[arXiv:1202.2737 \[hep-th\]\].
N. Abbasi and A. Davody, JHEP [**1312**]{} (2013) 026 \[arXiv:1310.4105 \[hep-th\]\].
N. Abbasi and A. Davody, arXiv:1508.06879 \[hep-th\].
M. Giovannini, Phys. Rev. D [**88**]{}, 063536 (2013).
M. Giovannini and M. E. Shaposhnikov, Phys. Rev. D [**57**]{}, 2186 (1998) doi:10.1103/PhysRevD.57.2186 \[hep-ph/9710234\].
S. Dubovsky, L. Hui and A. Nicolis, Phys. Rev. D [**89**]{}, no. 4, 045016 (2014) \[arXiv:1107.0732 \[hep-th\]\].
S. Jain and T. Sharma, JHEP [**1301**]{}, 039 (2013) \[arXiv:1203.5308 \[hep-th\]\].
K. Jensen, M. Kaminski, P. Kovtun, R. Meyer, A. Ritz and A. Yarom, JHEP [**1205**]{} (2012) 102 \[arXiv:1112.4498 \[hep-th\]\].
L. D. Landau and E. M. Lifshitz, Fluid Mechanics. Pergamon, 1987.
J. H. Gao, Z. T. Liang, S. Pu, Q. Wang, and X. N. Wang, Phys. Rev. Lett. [**109**]{}, 232301 (2012).
S. Golkar and D. T. Son, JHEP [**1502**]{}, 169 (2015).
K. Jensen, R. Loganayagam, and A. Yarom, JHEP [**1302**]{}, 088 (2013).
E. I. Buchbinder, S. E. Vazquez and A. Buchel, JHEP [**0812**]{}, 090 (2008) doi:10.1088/1126-6708/2008/12/090 \[arXiv:0810.4094 \[hep-th\]\].
D. E. Kharzeev and H. U. Yee, Phys. Rev. D [**83**]{}, 085007 (2011) doi:10.1103/PhysRevD.83.085007 \[arXiv:1012.6026 \[hep-th\]\].
M. N. Chernodub, arXiv:1509.01245 \[hep-th\].
C. Hoyos, B. S. Kim and Y. Oz, JHEP [**1403**]{} (2014) 029 \[arXiv:1309.6794 \[hep-th\]\].
D. Roychowdhury, arXiv:1508.02002 \[hep-th\].
P. Kovtun, G. D. Moore and P. Romatschke, JHEP [**1407**]{} (2014) 123 \[arXiv:1405.3967 \[hep-ph\]\].
D. Roychowdhury, JHEP [**1509**]{}, 145 (2015) \[arXiv:1508.02002 \[hep-th\]\].
J. R. David, M. Mahato, S. Thakur and S. R. Wadia, JHEP [**1101**]{}, 014 (2011) \[arXiv:1008.4350 \[hep-th\]\].
S. D. Chowdhury and J. R. David, arXiv:1508.01608 \[hep-th\].
S. Bhattacharyya, V. E. Hubeny, S. Minwalla and M. Rangamani, JHEP [**0802**]{}, 045 (2008) doi:10.1088/1126-6708/2008/02/045 \[arXiv:0712.2456 \[hep-th\]\].
[^1]: The Chiral effects have been also studied in Lifshitz hydrodynamics [@Roychowdhury:2015jha].
[^2]: In the context of gauge-gravity duality, the gradient corrections to the drag force has been computed for the first time in [@Abbasi:2012qz; @Abbasi:2013mwa].
[^3]: Two dimensional chiral transport has been also studied at weak coupling in [@David:2010qc; @Chowdhury:2015pba].
[^4]: In the special case of an incompressible fluid, Yamamoto showed earlier that the CAW is a transverse mode [@Yamamoto:2015ria].
[^5]: As was denoted earlier, the Chiral Alfvén wave may propagate due to momentum fluctuations.
[^6]: In $2+1$ dimensions, both magnetic field and the vorticity are scalar quantities.
---
abstract: 'We study synchronization of random one-dimensional linear maps for which the Lyapunov exponent can be calculated exactly. Certain aspects of the dynamics of these maps are explained using their relation with a random walk. We confirm that the Lyapunov exponent changes sign at the complete synchronization transition. We also consider partial synchronization of nonidentical systems. It turns out that the way partial synchronization manifests depends on the type of differences (in Lyapunov exponent or in contraction points) between the systems. The crossover from partial synchronization to complete synchronization is also examined.'
author:
- Adam Lipowski
- Ioana Bena
- Michel Droz
- 'Antonio L. Ferreira'
title: Synchronization of Random Linear Maps
---
Introduction
============
Synchronization of chaotic systems is a subject of current intensive study [@PREP]. To a large extent this is due to its various applications, ranging from laser dynamics [@EXP] to electronic circuits [@EXTENDED], chemical and biological systems [@BIOL], secure communications [@SECURE], etc. But there is also a purely theoretical interest in this phenomenon, related perhaps to its counterintuitive nature: how it is possible that chaotic (i.e., by definition unpredictable) systems can be synchronized and thus brought under some ‘control’. And, even more puzzling, noise can play the role of the synchronizing factor. Indeed, early reports [@MARITAN] that sufficiently strong noise can completely synchronize two identical chaotic systems were initially met with scepticism and attributed to finite precision of computations [@PIKO] or to biased noise [@HERZEL]. However, more recent examples show this effect even for unbiased noise [@TORAL].
Since real systems are typically nonidentical, complete synchronization is difficult to achieve. It is an interesting problem to examine whether noise can induce some sort of ‘weaker synchronization’ in nonidentical but relatively similar systems. Recent works do show the existence of such partial synchronization [@ROSENBLUM; @ZKURTHS].
One important problem of the theoretical and numerical studies of synchronization is how to detect it. This problem is essentially solved for the complete synchronization of two identical systems that are described by variables $x$ and $x'$, respectively. In this case, for the synchronized state the difference $|x-x'|$ equals zero, while it remains positive in the unsynchronized state. Moreover, the transition between these two states is accompanied by the change of the sign of the largest Lyapunov exponent, that becomes negative in the synchronized state.
However, for the partial synchronization this problem is much more subtle. In this case, the two systems are not identical and the difference $|x-x'|$ always remains positive. It has already been noted for some models with continuous dynamics that partial synchronization manifests through changes in the probability distribution of the ‘phase difference’ [@ZKURTHS]. In addition to that, in the partially synchronized phase the so-called zero Lyapunov exponent becomes negative [@ROSENBLUM; @ZKURTHS].
Studies of synchronization rely, to a large extent, on numerical calculations. Precise estimations of Lyapunov exponents or probability distributions (invariant measures) constitute very often demanding computational problems. To further test the already accumulated knowledge on synchronization, it would be desirable to find models where at least some of these properties could be computed analytically.
In the present paper we examine synchronization of random one-dimensional linear maps. For such maps one can easily find the exact Lyapunov exponent and locate the point where it changes sign. Numerical calculations for two identical systems confirm that this is also the point where a complete synchronization transition takes place. We briefly report on a correspondence between such maps and a random walk process, that allows for a simple interpretation of the initial stages of the evolution of the maps.
We also examine the partial synchronization of nonidentical maps. It is seen that the way partial synchronization manifests depends on the type of difference between the nonidentical systems. In a certain case, the difference in the location of the contraction points of the maps is imprinted in the probability distribution at the partial synchronization transition. When the difference $\delta$ between the two subsystems tends to zero, partial synchronization approaches complete synchronization. Due to the exact knowledge of the complete synchronization point, one can examine some details of this crossover. In particular, it is shown that for $\delta\rightarrow 0$, the vanishing of the synchronization error is very slow, $\sim (-1/{\rm log}_{10}\;\delta)$.
Random 2-maps
=============
First, let us consider the simplest example of a random linear map $$x_{n+1}=f_i(x_n), \ i=0,1\ {\rm and} \ n=0,1,\ldots
\label{e1}$$ where at each time step $n$ one of the maps $f_0$ or $f_1$ is applied with a probability $p$ and $(1-p)$, respectively. The maps are defined as $f_0(x)=a\,x\;{\rm mod}(1)$, and $f_1(x)=b\,x$, where $0<x<1$ and $a>1,\ 0<b<1$. Related models have already been examined in the context of on-off intermittency [@HEAGY; @YANG], advection of particles by chaotic flows [@ROMEIRAS], and others [@IRVIN; @VULPIANI]. Some aspects of synchronization were also studied for piecewise linear random maps, but both the Lyapunov exponent and the location of the synchronization transition were determined only numerically [@KOCAREV].
For the map (\[e1\]) it is elementary to calculate exactly its Lyapunov exponent $\lambda$ defined as $$\lambda=\underset{N\rightarrow\infty}{{\rm lim}}\frac{1}{N}
\sum_{n=0}^{N-1} {\rm log}_{10}\Big\arrowvert\frac{dx_{n+1}}{dx_n}\Big\arrowvert\;.
\label{e2}$$ Indeed, since both $f_0$ and $f_1$ have constant derivatives, one immediately obtains $$\lambda=p\;{\rm log}_{10}\,a+(1-p)\;{\rm log}_{10}\,b\;.
\label{e3}$$ It follows that $\lambda$ changes sign at $$p=p_c=\frac{-{\rm log}_{10}\,b}{{\rm log}_{10}\,(a/b)}\;.
\label{e4}$$ It is easy to understand this result. For $p>p_c$ the expanding (chaotic) map $f_0$ prevails over the contracting map $f_1$ and the overall behaviour is chaotic with $\lambda>0$. The opposite situation takes place for $p<p_c$ and the map (\[e1\]) contracts to $x=0$. At $p=p_c$ and in its close vicinity the map (\[e1\]) exhibits intermittent bursts of activity [@HEAGY; @YANG], but we will not focus on such a behaviour in the present paper.
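Since both maps have constant slopes, Eqs. (\[e3\])-(\[e4\]) are easy to verify numerically. The following Python sketch (the function names are ours, not from the paper) compares the exact Lyapunov exponent with a direct trajectory average; because $f_0$ and $f_1$ have constant derivatives $a$ and $b$, only the random choice of map matters:

```python
import math
import random

def lyapunov_exact(p, a, b):
    """Exact Lyapunov exponent: lambda = p*log10(a) + (1-p)*log10(b)."""
    return p * math.log10(a) + (1 - p) * math.log10(b)

def p_critical(a, b):
    """Probability at which the Lyapunov exponent changes sign."""
    return -math.log10(b) / math.log10(a / b)

def lyapunov_numeric(p, a, b, n_steps=100_000, seed=1):
    """Trajectory average of log10|dx_{n+1}/dx_n| along one noise realization."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_steps):
        # f0 (slope a) is applied with probability p, f1 (slope b) otherwise
        total += math.log10(a) if rng.random() < p else math.log10(b)
    return total / n_steps
```

For $a=1/b=3/2$ this reproduces $p_c=1/2$, the value used throughout the text.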
To study synchronization one can make two runs $\{x_n\}$ and $\{x_n'\}$ of iterations of map (\[e1\]) starting each time from (slightly) different initial conditions $x_0$ and $x_0'$ but with the same realization of noise, i.e., with the same sequence of maps $f_i$. Then one measures the synchronization error $w_n$ defined as $$w_n=\langle|x_n-x_n'|\rangle \,,
\label{e6}$$ where $\langle...\rangle$ represents the mean over the realizations of the noise. Moreover, we introduce the steady-state average $w= \mbox{lim}_{n\rightarrow \infty}w_n$. In the synchronized state $w=0$ while it is positive in the unsynchronized state. Our numerical results for $a=1/b=3/2$ (not presented here) show that $w$ vanishes at $p=1/2$, i.e., the point where the Lyapunov exponent changes sign.
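The two-run procedure just described can be sketched as follows; this is our own illustrative implementation (parameter values and names are assumptions), in which both copies of the map are driven by the same random sequence:

```python
import random

def sync_error(p, a, b, n_steps, n_real, seed=0):
    """Synchronization error w_n = <|x_n - x_n'|>: two trajectories started
    from independent initial conditions but subject to the same noise."""
    rng = random.Random(seed)
    w = [0.0] * (n_steps + 1)
    for _ in range(n_real):
        x, xp = rng.random(), rng.random()
        w[0] += abs(x - xp)
        for n in range(1, n_steps + 1):
            if rng.random() < p:
                # f0 applied to both copies
                x, xp = (a * x) % 1.0, (a * xp) % 1.0
            else:
                # f1 applied to both copies
                x, xp = b * x, b * xp
            w[n] += abs(x - xp)
    return [wi / n_real for wi in w]
```

For $p$ well below $p_c$ the tail of the returned list drops to (numerically) zero, while for $p>p_c$ it saturates at a positive value, as described in the text.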
However, because it contracts to $x=0$ for $p<p_c$, the map (\[e1\]) is not quite suitable for studying synchronization of chaotic systems. For such a purpose it would be desirable to have a map with a more complex behaviour in the regime with a negative Lyapunov exponent.
Nevertheless, the simplicity of the map (\[e1\]) allows us to get some additional insight into its dynamics. Let us fix the initial points of our maps as, e.g., $x_0<x_0' \ll 1$. As we shall see, such a choice results in a certain transient regime that can be understood using an analogy with a random walk process.
Numerical evaluation of $w_n$ shows that it exhibits three types of behavior as a function of time $n$ (Fig. \[fig1\]).\
(a) For $p>p_c$ (positive Lyapunov exponent) after an initial exponential increase, $w_n$ saturates and acquires a constant nonzero value.\
(b) For $p\lesssim p_c$, $w_n$ decreases after an initial exponential increase. The asymptotic exponential decay is consistent with the (negative) Lyapunov exponent (\[e3\]).\
(c) For even smaller values of $p$ the synchronization error $w_n$ decreases exponentially already from the beginning.
To understand the initial behaviour of $w_n$, let us recall that the iteration of $x_n$ and $x_n'$ starts from very small values and for a certain number of iterations the $\mbox{mod}(1)$ part of the map (\[e1\]) does not play any role. Consequently, one has $$w_n\,=\,\langle\alpha\rangle^{n}\,w_0\,,
\label{initial}$$ where $$\langle\alpha\rangle=p\,a+(1-p)\,b\,.$$ Correspondingly, the initial decrease or increase of $w_n$ depends on whether $p$ is smaller or greater than $(1-b)/(a-b)$.
After a certain time, the ${\rm mod}(1)$ part of the map comes into play and the initial behaviour of $w_n$ (\[initial\]) is replaced by a different one. Namely, for $p>p_c$, $w_n$ saturates at a nonzero value and it decays exponentially for $p<p_c$. We shall return to this point at the end of this section.
To estimate the time scale $\tau$ when the initial behaviour (\[initial\]) changes, we relate our map to a random walk process. For simplicity, we consider the case $b=1/a$. In this case there is a one-to-one correspondence between the stochastic variable $x_n$ and $y_n=-\mbox{log}_{a}\,(x_n)$. Multiplication of $x_n$ by $a$ or $1/a$ corresponds to the decrease or increase of $y_n$ by unity. Consequently, $y_n$ is nothing else but the position of a random walker on a lattice of unit spacing with transition probabilities $p$ to the left and $(1-p)$ to the right. The correspondence with the random walk holds as long as the mod(1) part of the map is not applied, i.e., the walker does not cross ${\rm log}_a(1)=0$. The above random-walker problem has two characteristic time scales connected to the first-passage process [@FELLER] from its initial position to 0, and one can expect that one of them is related to $\tau$ \[which is, recall, the time scale on which the initial behavior of the map (\[initial\]) is altered by the $\mbox{mod}(1)$ part of the map\]. (i) First, there is the mean first passage time $\tau_M$. But for $p\leq p_c$, $\tau_M$ is infinite, contrary to $\tau$, and therefore one cannot use $\tau_M$ as a measure for $\tau$. (ii) Second, there is the time moment $\tau_P$ when the probability that the walker hits $0$ for the first time is maximal. The probability distribution of the first hit is known to be [@FELLER]: $$\begin{aligned}
P(n;p,y_0)&=&\displaystyle\frac{y_0}{n}\,
\left(
\begin{array}{c}
n \\
\displaystyle\frac{n-y_0}{2}
\end{array}
\right)
\displaystyle\,p^{\frac{(n+y_0)}{2}}
(1-p)^{\frac{(n-y_0)}{2}}\;,\nonumber\\
&&
\label{prob}\end{aligned}$$ where $y_0$ is the initial position of the walker and $n$ is the number of steps; the binomial coefficient is to be interpreted as zero if $({n-y_0})/{2}$ is not an integer in the interval $[0,\,n]$. Our numerical simulations show (see Fig. \[fig1\]) that the value of $n=\tau_P$ for which $P(n;p,y_0)$ in Eq. (\[prob\]) attains its maximum offers a reasonable estimate of $\tau$. One cannot expect to get a more precise estimate of $\tau$, since the change of the initial behavior (\[initial\]) of the map towards the asymptotic one is a gradual process that involves all the trajectories of the equivalent random walker that hit 0 (for the first time) before as well as after $\tau_P$.
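When evaluating Eq. (\[prob\]) numerically for large $n$, the binomial coefficient overflows floating-point arithmetic, so it is convenient to work in log-space. A possible sketch (the helper names are ours) that also locates the maximizing step count $\tau_P$:

```python
from math import exp, lgamma, log

def first_passage_prob(n, p, y0):
    """Probability that a walker starting at y0 > 0 (step left with
    probability p, right with probability 1-p) first reaches 0 at step n.
    The binomial coefficient is evaluated via lgamma to avoid overflow."""
    if n < y0 or (n - y0) % 2 != 0:
        return 0.0
    k = (n - y0) // 2  # number of steps taken away from the origin
    log_binom = lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)
    # (n - k) = (n + y0)/2 steps toward the origin, each with probability p
    return (y0 / n) * exp(log_binom + (n - k) * log(p) + k * log(1 - p))

def tau_p(p, y0, n_max):
    """Step count n <= n_max at which the first-hit probability is maximal."""
    return max(range(y0, n_max + 1), key=lambda n: first_passage_prob(n, p, y0))
```

For $p>1/2$ (bias toward the origin) the walker is absorbed with probability one, so the distribution sums to unity; this is a convenient sanity check.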
Let us notice that the bias of the random walk is related to the sign of the Lyapunov exponent and therefore to the asymptotic behaviour of our map. Indeed, for $p<p_c=1/2$ the random walk is biased toward $+\infty$, which translates into an exponential decay of $w_n$. Our numerical simulations for the longer time regime suggest that this decay is governed by the Lyapunov exponent $\lambda$ (\[e3\]). Recall that $\lambda$ is known to govern the evolution of the typical value of $|x_n-x_n'|$ (see, e.g., [@VULPIANI]). Thus in the long-time regime the mean $w_n$ and the typical value of $|x_n-x_n'|$ behave identically. On the other hand, they are clearly different in the early-time regime. It means that in this regime $w_n$ is strongly influenced by rare events, i.e., unlikely excursions of the random walker against the bias. For $p>p_c=1/2$ the random walk is biased toward 0 and, since the map is bounded, $w_n$ saturates at a positive value.
Random 3-maps
=============
As already mentioned, the map (\[e1\]) has a trivial behaviour for $p<p_c$ and is not suitable to study synchronization of chaotic systems. In this context the following 3-map version is more interesting: $$x_{n+1}=f_i(x_n), \ i=0,1,2,
\label{e5}$$ where $f_0(x)=a\,x\;{\rm mod}(1)$, $f_1(x)=b\,x$, and $f_2(x)=b\,x+(1-b)$. The maps $f_0$, $f_1$, and $f_2$ are applied at random with probabilities $p$, ${(1-p)}/{2}$, and ${(1-p)}/{2}$, respectively. It is easy to show that for such a random map Eqs. (\[e3\])-(\[e4\]) still hold.
Numerical evaluation of the synchronization error $w=\mbox{lim}_{n\rightarrow
\infty} w_n$ for the map (\[e5\]) with $a=1/b=3/2$, based on $N=10^8$ iterations, is shown in Fig. \[steadysame\]. For $p>p_c=1/2$ the Lyapunov exponent $\lambda$ is positive and the system is not synchronized ($w>0$). For $p<1/2$ we have $\lambda<0$ and the system synchronizes ($w=0$). But this time the behaviour for $p<1/2$ is much more complex. The two maps $f_1$ and $f_2$ are still contracting ones, but to two different points (0 and 1). Since they are applied randomly, the system, while remaining synchronized, irregularly wanders throughout the whole interval $(0,1)$. The probability density $P(x)$ of visiting a given point $x$, that is shown in the inset of Fig. \[steadysame\], confirms such a behaviour.
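A minimal simulation of the 3-map (\[e5\]), with our own (illustrative) choice of burn-in and sample size, shows this wandering directly; histogramming the returned samples approximates the density $P(x)$ plotted in the inset:

```python
import random

def iterate_3map(p, a, b, n_samples, x0=0.3, seed=2, burn=1000):
    """Sample the invariant density of the random 3-map: f0 = a*x mod 1
    with probability p, and the two contractions f1 = b*x and
    f2 = b*x + (1 - b), each applied with probability (1 - p)/2."""
    rng = random.Random(seed)
    x = x0
    samples = []
    for n in range(n_samples + burn):
        r = rng.random()
        if r < p:
            x = (a * x) % 1.0
        elif r < p + (1 - p) / 2:
            x = b * x            # contraction toward 0
        else:
            x = b * x + (1 - b)  # contraction toward 1
        if n >= burn:
            samples.append(x)
    return samples
```

Even for $p<p_c$, where the Lyapunov exponent is negative, the samples cover essentially the whole interval $(0,1)$, because the two contraction points compete at random.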
Random linear maps can also be used to study [*partial synchronization*]{} that might occur for two dynamical systems that are not identical, although relatively similar. Generally, one considers the following pair of maps $$x_{n+1}=f_i(x_n),\ x_{n+1}'=f_i'(x_n'),\ i=0,1,2\,,
\label{twomaps}$$ for which $\{f_0,\,f_0'\}$, $\{f_1,\,f_1'\}$, and $\{f_2,\,f_2'\}$ are applied at random with probabilities $p$, ${(1-p)}/{2}$, and ${(1-p)}/{2}$, respectively. To complete the definition we have to specify the functions $f_i$ and $f_i'$, $i=0,1,2$. We present below the results for two particular choices, which correspond, respectively, to a perturbation of the Lyapunov spectrum \[case (A)\] and of the attractor \[case (B)\].\
Case (A): We choose $$f_0'(x)=a'\,x\;{\rm mod}(1),\ f_1'(x)=b'\,x,\ f_2'(x)=b'\,x+(1-b')
\label{e7}$$ where $a'=3/2+\delta$, $b'=1/a'$ and $\delta=10^{-3}$. Functions $f_0$, $f_1$, and $f_2$ are defined as for the map (\[e5\]) with $a=1/b=3/2$. For such a choice, the Lyapunov exponent of the map in Eq. (\[e7\]) is a linear function of $p$ that also changes sign at $p=p_c=1/2$, albeit with a different slope. Assuming that the partial synchronization transition is also governed by the Lyapunov exponent, we might expect that such a transition, if it exists, takes place at $p=1/2$. For the pair of maps (\[twomaps\]), $w_n$ is defined as in Eq. (\[e6\]), but $x_n$ and $x_n'$ evolve according to Eq. (\[twomaps\]). Numerical calculation of $w={\rm lim}_{n\rightarrow\infty}w_n$ for $N=10^8$ iterations shows (Fig. \[steadyslope\]) that it remains positive and smooth around this value. However, there is a more subtle change in the system at or around $p=1/2$. Indeed, the probability distribution $P(|x-x'|)$ shows a pronounced peak in the $p<1/2$ domain (inset of Fig. \[steadyslope\]). This is yet another example showing that partial synchronization manifests through a change in the probability distribution [@ZKURTHS]. However, $P(|x-x'|)$ was calculated for $p=0.4,\ 0.5$, and $0.6$, i.e., values that are relatively far from each other. For smaller differences between the values of $p$ the corresponding curves become increasingly similar, and it is not clear to us whether there is a qualitative change in the probability distribution $P(|x-x'|)$ that could locate precisely the partial synchronization transition. If not, it could mean that in this case either there is no well-defined partial synchronization transition, or we are not looking at the right quantity to detect it.
For further comparison, we also calculated the probability $s$ that the difference $|x-x'|$ remains smaller than a given value $\varepsilon=10^{-4}$ (provided it is initially so) during the iterations of the system (\[twomaps\]). One finds that $s$ is a monotonic function of $p$ (Fig. \[steadyslope\]).\
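Both statistics, the steady-state error $w$ and the probability $s$, can be approximated by time averages over a single long run of the pair (\[twomaps\]) with the case-(A) maps. The sketch below is our own operationalization (in particular, replacing $s$ by the fraction of time spent below $\varepsilon$ is an assumption), not the authors' code:

```python
import random

def pair_stats(p, delta, n_steps=200_000, eps=1e-4, seed=3, burn=1000):
    """Case (A) pair: system 1 uses a = 3/2, b = 2/3; system 2 uses
    a' = 3/2 + delta, b' = 1/a'.  The same map index is drawn for both
    systems at every step.  Returns (w, s_hat): the time-averaged
    difference and the fraction of time |x - x'| < eps."""
    rng = random.Random(seed)
    a, b = 1.5, 2.0 / 3.0
    ap = 1.5 + delta
    bp = 1.0 / ap
    x = xp = 0.4
    w_sum, close = 0.0, 0
    for n in range(n_steps + burn):
        r = rng.random()
        if r < p:                          # f0 / f0'
            x, xp = (a * x) % 1.0, (ap * xp) % 1.0
        elif r < p + (1 - p) / 2:          # f1 / f1'
            x, xp = b * x, bp * xp
        else:                              # f2 / f2'
            x, xp = b * x + (1 - b), bp * xp + (1 - bp)
        if n >= burn:
            d = abs(x - xp)
            w_sum += d
            close += d < eps
    return w_sum / n_steps, close / n_steps
```

For $\delta=0$ the two systems are identical and, driven by common noise from a common initial condition, never separate; for $\delta>0$ the error $w$ stays positive, as in Fig. \[steadyslope\].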
Case (B): We choose $$\begin{aligned}
&&f_0'(x)=a'\,x\;{\rm mod}(1),\; f_1'(x)=b'\,x,\nonumber\\
&& f_2'(x)=b'\,x+(1-b')-b''\;,
\label{e8}\end{aligned}$$ where $a'=3/2$, $b'=2/3$ and $b''=10^{-3}$. Functions $f_0$, $f_1$, and $f_2$ are defined as in case (A). For such a choice the Lyapunov exponents of both systems are the same and they change sign at $p=p_c=1/2$. The only difference is that the function $f_2(x)$ has a contracting point at $1$, while $f_2'(x')$ has one at the slightly smaller value $x'=0.997$.
In this case the synchronization error $w$ behaves similarly to case (A) (Fig. \[steady\]). However, the probability distribution $P(|x-x'|)$ has a slightly different shape. In particular, even at $p=1/2$ it has a certain peak which increases with decreasing $p$. The maximum of the peak at $p=1/2$ is most likely located at $|x-x'|=0.003$ (Fig. \[dist05\]), and this is related to the difference of the contracting points of the functions $f_2$ ($x=1$) and $f_2'$ ($x'=0.997$). We checked that away from the $p=1/2$ point the maximum shifts away from $|x-x'|=0.003$. Moreover, the probability $s$ that the difference $|x-x'|$ is smaller than $\varepsilon=10^{-4}$ shows a maximum at $p=1/2$ (Fig. \[steady\]).
Comparison of partial synchronization in cases (A) and (B) reveals important differences between them. In case (A), where we perturbed the Lyapunov exponent, the probability distribution $P(|x-x'|)$ develops a peak around/at $|x-x'|=0$ in the partially synchronized state that presumably exists for $p<1/2$. Such a feature is similar to those already reported in the literature [@ZKURTHS]. However, it is not clear to us whether in this case such a change can be characterized more quantitatively so that a well-defined transition point exists. A different behaviour takes place in case (B), with perturbed contracting points. In this case the peak of the probability distribution is shifted by a value that, at the partial synchronization transition $p=1/2$ (as deduced from the vanishing of the Lyapunov exponent), curiously matches the shift of the contracting points. In addition to that, the probability of being in a state with a small difference $|x-x'|$ has a maximum at $p=1/2$, in drastic contrast to case (A).
For simple maps like those examined in the present paper one has complete knowledge of the Lyapunov exponents and fixed points. This is usually not the case for more complicated dynamical systems like the Lorenz or Rössler equations. Perturbing some parameters of these equations one usually modifies both the Lyapunov spectrum and the attractors. It is therefore possible that partial synchronization in such systems combines some features of both cases (A) and (B) we discussed.
When the difference between nonidentical systems vanishes, partial synchronization is replaced by complete synchronization. We shall now briefly examine such a situation. In particular we consider two nonidentical systems as those described in the case (A) above, with varying difference $\delta$. We calculated the synchronization error $w$ as a function of $\delta$ for $p=0.4$ and $p=1/2$. In both cases we expect that $w$ vanishes for $\delta\rightarrow 0$. Figure \[diff\] confirms such an expectation, but it also reveals that for $p=1/2$ the convergence to zero is much slower than for $p=0.4$. While for $p=0.4$ the synchronization error seems to vanish as $w\sim \delta$, the inset in Fig. \[diff\] suggests that at the synchronization transition the vanishing is most likely logarithmic, $w\sim -1/{\rm log}_{10}\,\delta$.
Conclusions
===========
In the present paper we examined synchronization of one-dimensional random maps. Our results confirm that the complete synchronization transition coincides with the change of sign of the Lyapunov exponent. We also showed that the way partial synchronization manifests depends on the type of difference between the two nonidentical systems. It would be desirable to explain the origin of the logarithmically slow crossover from partial synchronization to complete synchronization. It would also be interesting to explain why the location of the maximum of the probability distribution at the partial synchronization transition discussed in case (B) above matches exactly the shift of the contracting points.
For a recent survey see S. Boccaletti, J. Kurths, G. Osipov, D. L. Valladares, and C. S. Zhou, Phys. Rep. [**366**]{}, 1 (2002).
R. Roy and K. S. Thornburg, Phys. Rev. Lett. [**72**]{}, 2009 (1994).
J. F. Heagy, T. L. Carroll, and L. M. Pecora, Phys. Rev. A [**50**]{}, 1874 (1994).
I. Schreiber and M. Marek, Physica D [**5**]{}, 258 (1982); S. K. Han, C. Kurrer, and Y. Kuramoto, Phys. Rev. Lett. [**75**]{}, 3190 (1995).
L. Kocarev and U. Parlitz, Phys. Rev. Lett. [**74**]{}, 5028 (1995).
A. Maritan and J. R. Banavar, Phys. Rev. Lett. [**72**]{}, 1451 (1994).
A. Pikovsky, Phys. Rev. Lett. [**73**]{}, 2931 (1994).
H. Herzel and J. Freund, Phys. Rev. E [**52**]{}, 3238 (1995).
C. H. Lai and C. S. Zhou, Europhys. Lett. [**43**]{}, 376 (1998); R. Toral [*et al.*]{}, Chaos [**11**]{}, 665 (2001).
M. G. Rosenblum, A. S. Pikovsky, and J. Kurths, Phys. Rev. Lett. [**78**]{}, 4193 (1997).
C. Zhou and J. Kurths, Phys. Rev. Lett. [**88**]{}, 230602 (2002).
J. F. Heagy, N. Platt, and S. M. Hammel, Phys. Rev. E [**49**]{}, 1140 (1994).
H. L. Yang and E. J. Ding, Phys. Rev. E [**50**]{}, R3295 (1994).
F. J. Romeiras, C. Grebogi, and E. Ott, Phys. Rev. A [**41**]{}, 784 (1990).
A. J. Irwin, S. J. Fraser, and R. Kapral, Phys. Rev. Lett. [**64**]{}, 2343 (1990).
V. Loreto, G. Paladin, M. Pasquini, and A. Vulpiani, Physica A [**232**]{}, 189 (1996); V. Loreto, G. Paladin, and A. Vulpiani, Phys. Rev. E [**53**]{}, 2087 (1996).
L. Kocarev and Z. Tasev, Phys. Rev. E [**65**]{}, 046215 (2002).
See, e.g., W. Feller, [*An Introduction to Probability Theory and Its Applications*]{} (Wiley, New York, 1957).
---
abstract: 'We derive the star formation histories of eight dwarf spheroidal (dSph) Milky Way satellite galaxies from their alpha element abundance patterns. Nearly 3000 stars from our previously published catalog (@kir10b) comprise our data set. The average \[$\alpha$/Fe\] ratios for all dSphs follow roughly the same path with increasing \[Fe/H\]. We do not observe the predicted knees in the \[$\alpha$/Fe\] vs. \[Fe/H\] diagram, corresponding to the metallicity at which Type Ia supernovae begin to explode. Instead, we find that Type Ia supernova ejecta contribute to the abundances of all but the most metal-poor (${{\rm [Fe/H]}}< -2.5$) stars. We have also developed a chemical evolution model that tracks the star formation rate, Types II and Ia supernova explosions, and supernova feedback. Without metal enhancement in the supernova blowout, massive amounts of gas loss define the history of all dSphs except Fornax, the most luminous in our sample. All six of the best-fit model parameters correlate with dSph luminosity but not with velocity dispersion, half-light radius, or Galactocentric distance.'
author:
- 'Evan N. Kirby, Judith G. Cohen, Graeme H. Smith, Steven R. Majewski, Sangmo Tony Sohn, Puragra Guhathakurta'
title: |
Multi-Element Abundance Measurements from Medium-Resolution Spectra.\
IV. Alpha Element Distributions in Milky Way Dwarf Satellite Galaxies
---
Introduction {#sec:intro}
============
Understanding the origins of galaxies requires understanding the histories of their dark matter growth, gas flows, and star formation. Of these, the dark matter growth is the most straightforward to model [e.g., @die07; @spr08]. The gas flow history presents more difficult obstacles, such as collisional dissipation, gas cooling, stellar feedback, and conversion into stars. Despite the challenges, some models—built on top of dark matter simulations—track all of these processes over cosmic time [e.g., @gov07]. The results of these models have observational consequences for the properties of the present stellar populations of galaxies.
Methods for Determining Star Formation Histories
------------------------------------------------
The star formation histories (SFHs) of galaxies may be deduced from the colors and magnitudes of the population and from the spectra of the stars and gas, if present. Distant, unresolved galaxies display only a single, composite spectral energy distribution, which may be examined through calibrations of spectrophotometric indices [e.g., @gra08] or, in some cases, spectral synthesis [@mcw08; @col09]. Nearer stellar systems may be resolved both photometrically and spectroscopically. The [*Hubble Space Telescope*]{} (HST) has enabled the characterization of the SFHs of many nearby galaxies [@wei08; @dal09; @ber09], including most of the dwarf galaxies in the Local Group [@hol06; @orb08].
Photometrically derived SFHs are most sensitive to young stars and metal-rich stars because the separation between isochrones increases with decreasing age and increasing metallicity. Elemental abundances obtained from spectroscopy do not give absolute ages, but they can provide finer relative time resolution for old, metal-poor populations. @gil91 showed that star formation bursts of varying duration and frequency in dwarf galaxies engrave signatures on the ratio of oxygen to iron as a function of metallicity. Because oxygen-rich Type II supernovae (SNe) explode within tens of Myr of a starburst, the oxygen content of stars forming soon after the burst will be high. Within hundreds of Myr, iron-rich Type Ia SNe begin to explode. The injection of iron into the interstellar medium (ISM) depresses the oxygen-to-iron ratio of subsequently forming stars. These processes are generalizable to other elements. The abundances of the next several elements with even atomic number beyond oxygen—the alpha elements (Ne, Mg, Si, S, Ar, Ca, and Ti)—roughly scale with oxygen abundance. The abundances of iron-peak elements (V, Cr, Mn, Co, and Ni) roughly scale with iron abundance. The trend of the alpha-to-iron-peak ratio with iron-peak abundance, a proxy for elapsed time or integrated star formation, reveals the relative star formation history with a resolution of about 10 Myr, the approximate timescale for a Type II SN.
Chemical Evolution Models
-------------------------
A glance at a diagram of \[Mg/Fe\] vs. \[Fe/H\] gives a qualitative sense of a galaxy’s star formation history. Converting quantitative abundances into a quantitative SFH requires a chemical evolution model. @pag97 described in detail how to create such a model, and @tol09 reviewed recent progress on modeling the SFHs of Local Group dwarf galaxies. @mat08 described the levels of approximation that the models assume. In general, more sophisticated and presumably more accurate models reduce the number of approximations. The most basic assumptions are instantaneous recycling and instantaneous mixing. Consideration of stellar lifetimes and SN delay times removes the first approximation. Three-dimensional hydrodynamical simulations remove the second approximation.
A chemical evolution model reflects the history not only of star formation but also of gas flow. A complete explanation of metallicity and alpha element distributions requires both inflows and outflows. The metallicity distribution functions (MDFs) of nearby Galactic G dwarfs cannot be explained with a closed box model [@van62; @sch63]. @pag97 discussed some of the proposed solutions to the G dwarf problem, including variable nucleosynthesis yields, bimodal star formation, and pre-enrichment. One of the most promising solutions is infalling matter [@lar72]. Gas also undoubtedly flows out of the galaxy, either in SN winds [@mat71; @lar74] or through stripping by external or host galaxies [@tin79; @lin83]. For example, interactions with the Milky Way could remove gas from the satellite galaxies discussed here. Both inflows and outflows affect the star formation rate (SFR) throughout the history of the galaxy. Therefore, they shape the MDF and the trend of \[$\alpha$/Fe\] with \[Fe/H\].
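To make the level of approximation concrete: a one-zone model with instantaneous recycling and instantaneous mixing can be integrated in a few lines. The sketch below uses purely illustrative parameter values (not fitted to any galaxy, and not the model of this paper); with the inflow switched off it reproduces the closed-box relation $Z = p\,\ln(1/\mu)$, where $\mu$ is the gas fraction and $p$ the effective yield:

```python
def one_zone_mdf(n_steps=20_000, dt=1e-4, yield_p=0.01, sfr_eff=1.0, infall=0.0):
    """Minimal one-zone chemical-evolution sketch: instantaneous recycling
    and mixing, star formation rate psi = sfr_eff * M_gas, and optional
    primordial (Z = 0) inflow at a constant rate.  Returns the metallicity
    and mass of each stellar generation, from which an MDF can be built."""
    m_gas, z = 1.0, 0.0
    star_z, star_m = [], []
    for _ in range(n_steps):
        psi = sfr_eff * m_gas               # star formation rate
        star_z.append(z)
        star_m.append(psi * dt)             # mass locked into this generation
        # dZ/dt = (yield * psi - Z * inflow) / M_gas  (inflow dilutes metals)
        z += (yield_p * psi - z * infall) * dt / m_gas
        m_gas += (infall - psi) * dt        # gas consumed minus gas accreted
        if m_gas <= 0.0:
            break                           # gas exhausted
    return star_z, star_m
```

Histogramming `star_z` weighted by `star_m` gives the model MDF; comparing the closed-box run with an `infall > 0` run illustrates how accretion of pristine gas narrows the MDF, the classic resolution of the G dwarf problem mentioned above.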
Chemical evolution models suffer from uncertainties in the initial mass function of stars and stellar lifetimes [@rom05], nucleosynthesis yields [@rom10], and the delay time distribution (DTD) for Type Ia SNe [@mat09]. However, these limitations have not prevented the models from providing good fits to abundance data. Even models with some of the first theoretical SN yields [@woo93] successfully reproduced the observed metallicity distribution and abundance patterns in the Galaxy [@pag95]. Models with newer SN yields also match the solar neighborhood abundance distributions very well [e.g., @rom10]. Nonetheless, uncertainties in the model assumptions do complicate the interpretation of the model results. For example, changing the Type Ia DTD, particularly the turn-on time, affects the derived timescale for star formation. The best way to circumvent these uncertainties is to apply the same model consistently to several systems and compare them differentially. Although the absolute ages or SFRs may be affected by systematic errors in the model, the relative quantities between different galaxies will be meaningful.
Local Group dwarf galaxies make good subjects for chemical evolution models. First, the Local Group contains many resolved dwarf galaxies [@mat98; @tol09] with stars bright enough for medium- or high-resolution spectroscopy. Second, dwarf galaxies span a wide range of properties, including velocity dispersion and luminosity. The populations of the lowest luminosity galaxies enable the study of star formation on small scales [@martin08a; @nor08]. The changes in populations for more luminous or more massive galaxies show how star formation responds to galaxy size [@mat98; @kir10a]. Third, dwarf galaxies host some of the most metal-poor stars known [@kir08b; @kir09; @geh09; @coh09; @coh10; @fre10a; @fre10b; @sim10; @nor10a; @nor10b; @sta10; @taf10; @sim10b]. These stars retain the chemical imprint of the ISM when the Universe was less than 1 Gyr old. Therefore, dwarf galaxies permit the study of star formation not only on small scales but also at early times. Finally, dwarf galaxies may be the primary building blocks for the Milky Way (MW) halo [@sea78; @whi78]. The stellar populations of the surviving dwarf galaxies may reflect the stellar populations of the dissolved building blocks, and they may show how the surviving satellites evolved since the time of rapid accretion onto the MW.
In a series of articles, @lan03 [@lan04; @lan07; @lan10] and @lan06 [@lan08] presented numerical models that tracked the evolution of several elements in dSphs. The models plausibly explained the MDFs and the available multi-element abundance measurements in dSphs. However, large samples of published abundance measurements in any individual dSph have been sparse until recently [@she09; @kir09; @kir10b; @let10]. Other chemical evolution models of dSphs have examined the effects of reionization [@fen06] and star formation stochasticity [@car08]. @rec01 constructed one of the first hydrodynamical models of dwarf galaxy evolution. In particular, they simulated a galaxy similar to IZw18. @mar06 [@mar08] published hydrodynamical simulations of an isolated, Draco-like dSph. Their models relaxed the assumption of instantaneous mixing and allowed inhomogeneous chemical enrichment. Some of the newest hydrodynamical models [@rev09; @saw10] tracked both the kinematics and abundances of the stars as they form. They attempted to explain not only chemical abundance patterns but also dynamical properties of dSphs, such as the seemingly universal dynamical mass measured within their optical radii [@mat98; @str08] and out to the edge of their light distributions [@gil07].
History of Chemical Analysis of Milky Way Satellites
----------------------------------------------------
The earliest indications of heavy-element abundance spreads among red giants in the Draco, Ursa Minor, Sculptor, and Fornax dSph systems came from the multichannel scanner observations of @zin78 [@zin81], initial efforts at spectroscopy [@nor78; @kin80; @kin81; @ste84; @smi84; @leh92], and both broad- and narrow-band photometry [@dem79; @smi83]. The globular clusters of the Fornax system proved to differ in their metallicities [@zinper81]. The presence of carbon stars [@aar80; @aar82; @aarhod83; @azz85] and so-called anomalous Cepheids [@dem75; @nor75; @hir80; @smi86] further indicated the potential complexity of the stellar populations in dSphs. Carbon stars are exceedingly rare in globular clusters, while the period-luminosity relations of the anomalous Cepheids implied that they are more massive than typical cluster Cepheids [@zin76]. As a consequence, by the mid-1980s, circumstantial evidence was building to suggest that dSphs had more complex and possibly more extensive star formation and chemical evolution histories than globular clusters.
Since that time, the application of ground-based CCD and HST imaging has led to greatly improved color-magnitude diagrams (CMDs) that have clearly shown the presence of significant internal age spreads within [*some*]{} of the Milky Way’s retinue of dSphs, such as Carina, Fornax, Leo I, and Sextans [e.g., @mig90; @mig97; @sme96; @hur98; @buo99; @gal99a; @gal99b; @sav00; @lee09]. Spectroscopy with large ground-based telescopes has demonstrated the presence of abundance inhomogeneities in the majority of these systems [e.g., @sun93; @sme99; @she01b; @she03; @tol01; @tol03; @tol04; @win03; @pon04; @gei05; @mcw05a; @mcw05b; @bat06; @koc06; @bos07; @sbo07; @gul09; @coh09; @coh10; @kir09].
Chemical Evolution Models for the New Catalog
---------------------------------------------
In this article, we interpret the multi-element abundance distributions in eight dSphs with our own chemical evolution model. The data set is our catalog of abundances based on spectral synthesis of medium-resolution spectra from the DEIMOS spectrograph on the Keck II telescope [@kir10b Paper II]. The catalog contains 2961 stars with abundance measurements. The number of stars in each dSph ranges from 141 (Sextans) to 827 (Leo I). It is the largest homogeneous chemical abundance data set in dwarf galaxies. The typical areal coverage is about $300~{\rm arcmin}^2$ at or near the center of each dSph. The median uncertainty on \[Fe/H\] is 0.12 dex. The fraction of the sample with \[Mg/Fe\] uncertainties less than 0.2 (0.3) dex is 42% (53%). That fraction increases to 54% (69%) for \[Ti/Fe\], which is easier to measure than \[Mg/Fe\]. For $\langle[\alpha/\rm{Fe}]\rangle$ (the average of \[Mg/Fe\], \[Si/Fe\], \[Ca/Fe\], and \[Ti/Fe\]), the fraction increases to 71% (88%).
Our one-zone model is simple, but it incorporates some of the newest SN yields and the most recently measured DTD for Type Ia SNe. The biggest advantage of our data set is that it is homogeneous. All of the spectra were obtained with the same spectrograph configuration, and all of the abundances were measured with the same spectral synthesis code. Thus, the derived star formation and gas flow histories from our model—despite its simplicity—will be easy to interpret differentially. In other words, the absolute ages and star formation rates may be affected by model uncertainties, but the trends with galaxy properties, such as luminosity, should reflect the true SFHs.
We begin by describing our model (Sec. \[sec:model\]). Then, we apply the model to the eight dSphs by finding the solution that best matches the abundances. We discuss how our results compare to previous photometric and spectroscopic studies (Sec. \[sec:dsphs\]). Next, we change some of the model variables to estimate the systematic errors in the derived SFHs (Sec. \[sec:exploration\]). Then, we explore how the abundance distributions, SFHs, and gas flow histories change with galaxy properties such as luminosity and velocity dispersion (Sec. \[sec:trends\]). Finally, we enumerate our conclusions (Sec. \[sec:conclusions\]).
Chemical Evolution Model {#sec:model}
========================
[lll]{} $t$ & Time since start of simulation & Gyr\
$M$ & Mass of a single star & $M_{\sun}$\
$\xi_j(t)$ & Gas mass in element $j$ & $M_{\sun}$\
$X_j(t)$ & Mass fraction in element $j$ & dimensionless\
$Y$ & Primordial helium mass fraction ($X_{\rm He}(0)$) & dimensionless\
$M_{\rm gas}(t)$ & Total gas mass & $M_{\sun}$\
$Z(t)$ & Metal fraction (all elements heavier than He) & dimensionless\
$\dot{\xi}_j(t)$ & Time derivative of $\xi_j$ & $M_{\sun}~{\rm Gyr}^{-1}$\
$\dot{\xi}_{j,*}(t)$ & Star formation rate, or rate of gas loss in element $j$ due to star formation & $M_{\sun}~{\rm Gyr}^{-1}$\
$\dot{\xi}_{j,{\rm II}}(t)$ & Type II SN or HN yield rate for element $j$ & $M_{\sun}~{\rm Gyr}^{-1}$\
$\epsilon_{\rm HN}$ & Fraction of HNe among stars with $M \ge 20~M_{\sun}$ & dimensionless\
$\zeta_{j,{\rm II}}(M,Z)$ & Mass of element $j$ ejected by one Type II SN & $M_{\sun}$\
$\dot{\xi}_{j,{\rm Ia}}(t)$ & Type Ia SN yield rate for element $j$ & $M_{\sun}~{\rm Gyr}^{-1}$\
$t_{\rm delay}$ & Type Ia SN delay time & Gyr\
$\Psi_{\rm Ia}(t_{\rm delay})$ & Type Ia SN delay time distribution & ${\rm SN}~{\rm Gyr}^{-1}~{M_{\sun}}^{-1}$\
$\zeta_{j,{\rm Ia}}$ & Mass of element $j$ ejected by one Type Ia SN & $M_{\sun}$\
$\dot{\xi}_{j,{\rm AGB}}(t)$ & AGB yield rate for element $j$ & $M_{\sun}~{\rm Gyr}^{-1}$\
$\zeta_{j,{\rm AGB}}(M,Z)$ & Mass of element $j$ ejected by one AGB star & $M_{\sun}$\
$A_*$ & Normalization of star formation rate law (free parameter) & $M_{\sun}~{\rm Gyr}^{-1}$\
$\alpha$ & SFR exponent of $M_{\rm gas}$ (free parameter) & dimensionless\
$A_{\rm in}$ & Normalization of gas infall rate (free parameter) & $M_{\sun}~{\rm Gyr}^{-1}$\
$\tau_{\rm in}$ & Gas infall time constant (free parameter) & Gyr\
$A_{\rm out}$ & Gas lost per SN (free parameter) & $M_{\sun}~{\rm SN}^{-1}$\
$M_{\rm gas}(0)$ & Initial gas mass (free parameter) & $M_{\sun}$\
In order to provide a rough interpretation of the abundance trends in @kir10b’s catalog, we have developed a rudimentary model of chemical evolution. Table \[tab:gcevars\] defines the symbol for each variable or constant in the model. The model supposes that a dwarf galaxy at any instant is a chemically homogeneous system that can accrete or lose gas. The ejecta of Type II SNe enrich the gas according to the total lifetime of massive ($10 < M/M_{\sun} < 100$) stars, while the Type Ia SNe follow the observed DTD [@mao10 see below]. Stars form according to the @kro93 initial mass function (IMF, $dN/dM = 0.31 M^{-2.2}$ for $0.5 < M/M_{\sun} < 1$ and $dN/dM = 0.31 M^{-2.7}$ for $M > 1~M_{\sun}$).
The calculation tracks the mass of H, He, Mg, Si, Ca, Ti, and Fe at each time step ($\Delta t = 1$ Myr). The calculation is terminated when the system reaches zero gas mass.
We define $\xi_j(t)$ as the galaxy’s gas mass of element $j$ at time $t$. The galaxy’s total gas mass at time $t$ is
$$\begin{aligned}
M_{\rm gas}(t) &=& \sum_j \xi_j(t) \label{eq:mgasexact} \\
&\approx& \xi_{\rm H}(t) + \xi_{\rm He}(t) + 20.4[\xi_{\rm Mg}(t) + \xi_{\rm Si}(t) + \nonumber \\
& & \xi_{\rm Ca}(t) + \xi_{\rm Ti}(t)] + 1.07\xi_{\rm Fe}(t) \label{eq:mgasapprox}\end{aligned}$$
The summation in Equation \[eq:mgasexact\] is over all elements in the periodic table. However, our model tracks only seven elements. Therefore, we assume the ratio of the sum of all elements from Li to Ti, inclusive, to the sum of Mg, Si, Ca, and Ti is the same as in the Sun. This ratio is 20.4 [@and89]. Similarly, we assume the solar ratio for the sum of all elements V through Ge compared to Fe: 1.07. Elements beyond Ge are neglected. Equation \[eq:mgasapprox\] reflects these approximations. For convenience, we define the metallicity of the gas as follows:
$$Z = \frac{M_{\rm gas}(t) - \xi_{\rm H}(t) - \xi_{\rm He}(t)}{M_{\rm gas}(t)} \label{eq:z}$$
We also define the gas-phase mass fraction in an element $j$:
$$X_j(t) = \frac{\xi_j(t)}{M_{\rm gas}(t)}$$
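These definitions can be sketched in a few lines of Python (a minimal illustration, not the authors' code; the element-mass dictionary `xi`, in $M_{\sun}$, is a hypothetical example):

```python
# Bookkeeping sketch for the one-zone model: tracked elements only.
ELEMENTS = ("H", "He", "Mg", "Si", "Ca", "Ti", "Fe")

def gas_mass(xi):
    """Total gas mass (Eq. [eq:mgasapprox]): untracked elements enter via the
    solar ratios 20.4 (Li through Ti vs. Mg+Si+Ca+Ti) and 1.07 (V through Ge vs. Fe)."""
    return (xi["H"] + xi["He"]
            + 20.4 * (xi["Mg"] + xi["Si"] + xi["Ca"] + xi["Ti"])
            + 1.07 * xi["Fe"])

def metallicity(xi):
    """Z (Eq. [eq:z]): mass fraction of everything heavier than He."""
    m = gas_mass(xi)
    return (m - xi["H"] - xi["He"]) / m

def mass_fraction(xi, j):
    """X_j: gas-phase mass fraction of element j."""
    return xi[j] / gas_mass(xi)
```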
The following subsections explain the components of the models. Each component is expressed as the time change in $\xi_j(t)$, where $\dot{\xi}_j \equiv d\xi_j(t)/dt$.
Star Formation Rate {#sec:sfr}
-------------------
For simplicity, we assume that the star formation rate is a power law in the gas mass of the galaxy. With this assumption,
$$\dot{\xi}_{j,*} = A_* X_j(t) \left(\frac{M_{\rm gas}(t)}{10^6~M_{\sun}}\right)^{\alpha} \label{eq:sfr}$$
The variables $A_*$ and $\alpha$ are free parameters in the model. In the complete chemical evolution equation (Eq. \[eq:gce\]), the sign of $\dot{\xi}_{j,*}$ is negative because $\xi_j$ represents the gas mass, which is depleted due to star formation.
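Summed over elements, Equation \[eq:sfr\] reduces to a single power law in the gas mass; a one-line sketch (an illustration, not the authors' code) makes the role of the two free parameters explicit:

```python
def sfr_total(M_gas, A_star, alpha):
    """Total star formation rate in M_sun/Gyr (Eq. [eq:sfr] summed over elements).
    The gas mass is normalized to 1e6 M_sun as in the text; the per-element
    depletion rate is X_j(t) times this total."""
    return A_star * (M_gas / 1e6) ** alpha
```

$A_*$ sets the rate at a fiducial gas mass of $10^6~M_{\sun}$, and $\alpha$ controls how steeply the rate responds to gas depletion.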
Equation \[eq:sfr\] is a generalization of the Kennicutt-Schmidt law [@sch59; @ken98], which connects the SFR to the gas surface density, $\Sigma_{\rm gas}$. Surface density is more appropriate for disks than for spheroids. Because we want a quantity better suited to a three-dimensional system, we use the gas mass, $M_{\rm gas}$, instead of $\Sigma_{\rm gas}$. The volume density, $\rho_{\rm gas}$, would be a better description, but the difference between $M_{\rm gas}$ and $\rho_{\rm gas}$ is simply a constant because our model is one-zone.
Type II Supernovae {#sec:II}
------------------
In our model, stars more massive than $10~M_{\sun}$ and less massive than $100~M_{\sun}$ explode according to their total lifetimes [@pad93; @kod97]:
$$\tau_*(M) = \left(1.2 \left(M/M_{\sun}\right)^{-1.85} + 0.003\right)~{\rm Gyr} \label{eq:lifetime_massive}$$
This formula is valid for stars more massive than $6.6~M_{\sun}$ (which includes our entire mass range for Type II SNe). @mae89 give slightly different formulas for stars less massive than $60~M_{\sun}$, but the differences do not affect the chemical evolution model appreciably.
Stars more massive than $100~M_{\sun}$ do not form in this model. The Type II SN ejecta are mixed homogeneously and instantaneously into the interstellar medium (ISM) of the entire dSph.
We adopt the Type II SN nucleosynthetic yields of @nom06. The symbol $\zeta_{j,{\rm II}}(M,Z)$ represents the mass in element $j$ ejected from the Type II SN explosion of a star with an initial mass $M$. It is a function of both initial stellar mass and metallicity. @nom06 tabulated the yields for seven initial masses ranging from $13~M_{\sun}$ to $40~M_{\sun}$ and four metallicities from $Z=0$ to $Z=0.02$. The total mass of the ejecta is always less than the birth mass of the star because the star loses some mass during its lifetime and because some mass is locked up forever in a SN remnant.
@nom06 modeled both normal core-collapse SNe and very energetic hypernovae (HNe). The lowest mass HN they model is $20~M_{\sun}$. The fraction of stars at least this massive that explode as HNe is $\epsilon_{\rm HN}$. @nom06 adopted $\epsilon_{\rm HN} = 0.5$ for their own model of the solar neighborhood. @rom10 explored the cases of $\epsilon_{\rm HN}
= 0$ and 1. In our own experimentation, we have found that $\epsilon_{\rm HN} = 0$ produces good matches to the dSph abundance patterns at the lowest values of \[Fe/H\], and we adopt this value for the model. In Sec. \[sec:epshn05\], we explore the effect of increasing $\epsilon_{\rm HN}$ on the model.
The following integral gives the instantaneous change in gas mass from the ejecta of Type II SNe ($M_{\sun}~{\rm Gyr}^{-1}$):
$$\begin{aligned}
\dot{\xi}_{j,\rm{II}} &=& 0.31~M_{\sun}^{0.7}\:\int_{10~M_{\sun}}^{100~M_{\sun}} \zeta_{j,{\rm II}}(M,Z(t-\tau_*(M)))\, \nonumber \\
& & \; \times \; \dot{\xi}_*(t-\tau_*(M)) \, M^{-2.7} \, dM \label{eq:SNII}\end{aligned}$$
The coefficient $0.31~M_{\sun}^{0.7}$ is the normalization from the IMF. This integral depends on the SN yields ($\zeta_{j,{\rm II}}$), the recent star formation history ($\dot{\xi}_*$), and the high-mass IMF slope ($M^{-2.7}$). In practice, this integral is performed numerically with Newton-Cotes integration over an array of 100 logarithmically spaced masses between $10~M_{\sun}$ and $100~M_{\sun}$. The values of $\zeta_{j,{\rm II}}$ and $\dot{\xi}_*$ are interpolated onto this array. The metallicity used to look up the appropriate SN yields is consistent with the metallicity of the gas at the time the exploding star formed. (In other words, at any given time step, the metallicities of the lower mass SNe are less than the metallicities of higher mass SNe from more recently formed stars.)
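The quadrature described above can be sketched as follows (a simplified illustration, not the authors' code: the trapezoidal rule stands in for the Newton-Cotes family, the $M_{\sun}^{0.7}$ units of the 0.31 normalization are suppressed, and the three callables `zeta`, `sfr_past`, and `lifetime` are assumed interfaces for the yield table, the SF history, and Eq. \[eq:lifetime\_massive\]):

```python
import numpy as np

def snii_yield_rate(zeta, sfr_past, lifetime, t, n=100):
    """Sketch of Eq. [eq:SNII]: trapezoidal quadrature over n logarithmically
    spaced progenitor masses between 10 and 100 Msun.
    zeta(M, t_form): ejected mass of the element per SN of initial mass M,
                     looked up at the metallicity of the birth epoch t_form
    sfr_past(t')   : star formation rate at earlier time t' (zero for t' < 0)
    lifetime(M)    : massive-star lifetime in Gyr"""
    M = np.logspace(np.log10(10.0), np.log10(100.0), n)
    t_form = t - lifetime(M)                      # birth time of each progenitor
    f = zeta(M, t_form) * sfr_past(t_form) * M**-2.7
    # closed Newton-Cotes (trapezoid) on the nonuniform grid:
    return 0.31 * 0.5 * np.sum((f[1:] + f[:-1]) * np.diff(M))
```

With unit yields, a constant unit SFR, and zero lifetimes, the result reduces to $0.31 \int_{10}^{100} M^{-2.7}\,dM \approx 3.57 \times 10^{-3}$, a useful sanity check on the quadrature.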
The instantaneous Type II SN rate (SN Gyr$^{-1}$) is given by a related integral:
$$\dot{N}_{\rm{II}} = 0.31~M_{\sun}^{0.7}\:\int_{10~M_{\sun}}^{100~M_{\sun}} \dot{\xi}_*(t-\tau_*(M))\,M^{-2.7}\,dM \label{eq:N_SNII}$$
This integral is performed over the same array of massive star lifetimes as a function of mass as for Eq. \[eq:SNII\]. The value will be used to determine the mass lost from SN winds (Sec. \[sec:winds\]).
Type Ia Supernovae {#sec:Ia}
------------------
![Type Ia supernova delay time distribution, as measured by @mao10. The data come from a variety of star formation environments, given in the figure legend. Equation \[eq:dtd\] gives the expression for this function. Compare this figure to @mao10’s Fig. 2.\[fig:dtd\]](SNIa_DTD.eps){width="\linewidth"}
We adopt the Type Ia SN yields of @iwa99. The mass of element $j$ ejected per Type Ia SN is $\zeta_{j,{\rm Ia}}$. The SNe explode according to a function that approximates the delay time distribution observed by @mao10 [see Fig. \[fig:dtd\]]. The following equation describes the adopted delay time distribution.
$$\begin{aligned}
\Psi_{\rm Ia} &=& \left\{\begin{array}{lcr}
0 &~~~& t_{\rm delay} < 0.1~{\rm Gyr} \\
\begin{array}{l} (1 \times 10^{-3}~{\rm SN~Gyr}^{-1}~M_{\sun}^{-1}) \\ \:\:\:\:\: \times \; \left(\frac{t_{\rm delay}} {\rm{Gyr}}\right)^{-1.1}\end{array} &~~~& t_{\rm delay} \ge 0.1~{\rm Gyr} \\
\end{array} \right. \label{eq:dtd}\end{aligned}$$
The variable $t_{\rm delay}$ is used instead of $t$ to indicate that the DTD will be integrated from time $t$ into the past.
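Equation \[eq:dtd\] is simple enough to transcribe directly (a sketch, not the authors' code):

```python
import numpy as np

def psi_ia(t_delay):
    """Type Ia SN delay time distribution (Eq. [eq:dtd]), in SN Gyr^-1 Msun^-1:
    zero before the 0.1 Gyr turn-on, then 1e-3 * (t_delay/Gyr)^-1.1."""
    t = np.asarray(t_delay, dtype=float)
    safe = np.where(t > 0, t, 1.0)          # avoid 0**-1.1 warnings
    return np.where(t >= 0.1, 1e-3 * safe**-1.1, 0.0)
```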
Unfortunately, the abundance distributions derived from the chemical evolution model depend sensitively on the normalization and turn-on time of $\Psi_{\rm Ia}$. Both of these quantities—particularly the turn-on time—have large uncertainties. The normalization affects \[Fe/H\] and the slope of \[$\alpha$/Fe\] with \[Fe/H\]. We have chosen $1
\times 10^{-3}~{\rm SN~Gyr}^{-1}~M_{\sun}^{-1}$ for the normalization because that is the value that @mao10 reported. Even though the data in Fig. \[fig:dtd\] are easily consistent with half that value, the larger value better reproduces the slope of \[$\alpha$/Fe\] with \[Fe/H\] for many of the dSphs. The turn-on time determines the time or \[Fe/H\] at which \[$\alpha$/Fe\] begins to drop. We have chosen 0.1 Gyr because that is approximately the maximum value acceptable for the DTD data (Fig. \[fig:dtd\]). See Sec. \[sec:tIa3\] for a discussion of the effect of increasing this minimum delay time to 0.3 Gyr.
The instantaneous Type Ia SN rate is given by combining $\Psi_{\rm
Ia}$ with the past star formation history:
$$\dot{N}_{\rm{Ia}} = \int_0^t \dot{\xi}_{*}(t_{\rm delay}) \, \Psi_{\rm Ia}(t-t_{\rm delay}) \, dt_{\rm delay} \: . \label{eq:SNIa}$$
The mass returned to the ISM is the product of the SN Ia yields ($\zeta_{j,{\rm Ia}}$) and the Ia rate:
$$\dot{\xi}_{j,\rm{Ia}} = \zeta_{j,{\rm Ia}} \dot{N}_{\rm{Ia}} \label{eq:N_SNIa}$$
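On a uniform time grid, the convolution in Eq. \[eq:SNIa\] becomes a weighted sum over the past SFR (a discrete sketch, not the authors' code; `psi` is any DTD callable):

```python
import numpy as np

def ia_rate(sfr_history, dt, psi):
    """Discrete form of Eq. [eq:SNIa]: convolve the past SFR with the DTD.
    sfr_history[k] is the SFR at time k*dt, so a star formed at step k
    has delay (n-1-k)*dt today. Returns the current SN Ia rate."""
    n = len(sfr_history)
    t_delay = dt * np.arange(n)[::-1]       # delay of each past time step
    return np.sum(np.asarray(sfr_history) * psi(t_delay)) * dt
```

The Fe-peak mass return rate of Eq. \[eq:N\_SNIa\] is then just $\zeta_{j,{\rm Ia}}$ times this rate.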
Asymptotic Giant Branch Stars
-----------------------------
Winds from low- and intermediate-mass stars return a small but significant amount of mass to the ISM. The stars lose less than 1% of this mass before reaching the asymptotic giant branch [AGB, @van97]. Therefore, we consider mass loss on the AGB only.
We adopt the AGB yields of @kar10, who tracked all of the elements we consider here except Ca and Ti. (We assume that the fraction of Ca and Ti in AGB ejecta is the same as in the material that formed the star.) We assume all of the mass is ejected in the final time step of the star’s lifetime. This assumption is appropriate because an AGB star’s thermal pulsation period, during which it loses most of its mass, lasts on the order of 1 Myr [@mar07], which is the length of one time step in our model. Equation \[eq:lifetime\_massive\] gives the lifetimes of stars more massive than $6.6~M_{\sun}$. Less massive stars obey @pad93’s ([-@pad93]) and @kod97’s ([-@kod97]) equation:
$$\tau_*(M) = 10^{\frac{0.334-\sqrt{1.790-0.2232[7.764-\log (M/M_{\sun})]}}{0.1116}}~{\rm Gyr} \label{eq:lifetime_lowmass}$$
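The two lifetime formulas (Eqs. \[eq:lifetime\_massive\] and \[eq:lifetime\_lowmass\]) can be combined into one function and checked numerically; they agree to about 1% at the $6.6~M_{\sun}$ switchover, and the low-mass branch returns 13.6 Gyr at $0.865~M_{\sun}$, consistent with the text (a sketch, not the authors' code):

```python
import numpy as np

def lifetime(M):
    """Stellar lifetime in Gyr: Eq. [eq:lifetime_massive] above 6.6 Msun,
    Eq. [eq:lifetime_lowmass] below. Valid for 0.865 <= M/Msun <= 100."""
    M = np.asarray(M, dtype=float)
    high = 1.2 * M**-1.85 + 0.003
    low = 10 ** ((0.334 - np.sqrt(1.790 - 0.2232 * (7.764 - np.log10(M)))) / 0.1116)
    return np.where(M >= 6.6, high, low)
```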
Each AGB star ejects $\zeta_{j,{\rm AGB}}$ solar masses of element $j$. Stars lighter than $10~M_{\sun}$ undergo AGB mass loss, whereas stars heavier than $10~M_{\sun}$ explode as Type II SNe (Sec. \[sec:II\]). The lower mass limit we consider for AGB stars is $0.865~M_{\sun}$, the mass whose stellar lifetime equals the age of the Universe, 13.6 Gyr, according to Eq. \[eq:lifetime\_lowmass\]. The AGB mass return rate in $M_{\sun}~{\rm Gyr}^{-1}$ is given by
$$\begin{aligned}
\dot{\xi}_{j,\rm{AGB}} &=& 0.31~M_{\sun}^{0.2}\:\int_{0.865~M_{\sun}}^{1~M_{\sun}} \zeta_{j,{\rm AGB}}(M,Z(t-\tau_*(M)))\, \nonumber \\
& & \; \times \; \dot{\xi}_*(t-\tau_*(M)) \, M^{-2.2} \, dM \nonumber \\
& & \; + \; 0.31~M_{\sun}^{0.7}\:\int_{1~M_{\sun}}^{10~M_{\sun}} \zeta_{j,{\rm AGB}}(M,Z(t-\tau_*(M)))\, \nonumber \\
& & \; \times \; \dot{\xi}_*(t-\tau_*(M)) \, M^{-2.7} \, dM \label{eq:AGB}\end{aligned}$$
Compared to SN ejecta, AGB ejecta affect the chemical evolution of the elements considered here to a small degree. AGB ejecta are more important for other elements, such as C, N, and O.
Gas Infall {#sec:infall}
----------
Infall of gas during the star formation lifetime of a dSph is required to explain its MDF [@kir10a Paper III]. Therefore, our model allows pristine gas to fall into the dSph. The gas has a helium fraction of $Y = X_{\rm He}(0) = 0.2486$, which is the value obtained when the WMAP7 [@lar10] baryon-to-photon ratio is applied to the formula of @ste07. The rest of the infalling gas is hydrogen.
The MDFs of the dSphs are generally more peaked than a closed box model predicts. One scenario that explains such a distribution is gas infall that first increases and then decreases [@lyn75; @pag97]. We find that an infall rate that rises quickly and then declines more slowly reproduces the data well. We parametrize the gas infall rate as follows.
$$\dot{\xi}_{j,\rm{in}} = A_{\rm in} \, X_j(t=0) \, \left(\frac{t}{\rm{Gyr}}\right) \, e^{-t/\tau_{\rm in}} \label{eq:infall}$$
The term $X_j(t=0)$ means that the infalling gas is primordial (metal-free). The variables $A_{\rm in}$ and $\tau_{\rm in}$ are free parameters in the model.
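Summed over elements, Eq. \[eq:infall\] is a linear rise times an exponential decay, peaking at $t = \tau_{\rm in}$ (a sketch, not the authors' code):

```python
import numpy as np

def infall_rate(t, A_in, tau_in):
    """Total pristine-gas infall rate (Eq. [eq:infall] summed over elements),
    with t and tau_in in Gyr. Per-element rates are X_j(0) times this total,
    i.e., nonzero only for H and primordial He."""
    return A_in * t * np.exp(-t / tau_in)
```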
Supernova Winds {#sec:winds}
---------------
The MDFs of dSphs require gas outflow. If that were not the case, the metallicities would approach the supernova yields, which are much larger than observed in even the most metal-rich star in any dSph. Gas may be lost through supernova winds, stellar winds, or gas stripping from an external source. All of these sources undoubtedly occur over a dSph’s lifetime, but supernova winds are the most straightforward to include in a chemical evolution model. We ignore other sources of gas loss.
Our computation of gas loss is fairly simple. The galaxy loses a fixed amount of gas for every supernova that explodes. The blown-out gas mass does not vary with SN type because the explosion energies for Types II and Ia SNe are similar. See @rec01, @rom06, and @mar08 for examples of chemical evolution models that treated the energy input from the two SNe types differently. The rate of gas loss is
$$\dot{\xi}_{j,\rm{out}} = A_{\rm out} \, X_j \, (\dot{N}_{\rm{II}} + \dot{N}_{\rm{Ia}}) \label{eq:winds}$$
The parameter $A_{\rm out}$ is a free parameter in the model. An energy argument shows that the ejected gas mass is of the order of $10^{4}~M_{\sun}~{\rm SN}^{-1}$. One supernova explodes with a typical energy of $10^{51}$ erg [@woo95]. In the late stages of expansion, the kinetic energy of the ejecta is $E_{\rm ej} \sim 8.5
\times 10^{49}$ erg [@tho98]. A typical line-of-sight velocity dispersion for a dwarf galaxy is $\sigma_{\rm los} \sim 10~{\rm
km~s}^{-1}$. Given the virial theorem ($GM/R = 3\sigma_{\rm
los}^2$) and the escape velocity ($v_{\rm esc}^2 = 2GM/R$), then the gas mass ejected as a result of SN blowout is $M_{\rm ej} = E_{\rm
ej}/v_{\rm esc}^2 = E_{\rm ej}/(6\sigma_{\rm los}^2) \sim 7 \times
10^3~M_{\sun}~{\rm SN}^{-1}$.
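The order-of-magnitude estimate above is easy to verify numerically (values as quoted in the text):

```python
# Ejected gas mass per SN: M_ej = E_ej / v_esc^2 = E_ej / (6 sigma_los^2),
# using v_esc^2 = 2GM/R and the virial relation GM/R = 3 sigma_los^2.
E_ej = 8.5e49            # late-stage kinetic energy of SN ejecta, erg
sigma_los = 10.0e5       # line-of-sight velocity dispersion, 10 km/s in cm/s
M_sun = 1.989e33         # solar mass in grams
M_ej = E_ej / (6.0 * sigma_los**2) / M_sun
print(f"{M_ej:.0f}")     # ~7e3 M_sun per SN, as quoted
```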
A metal-enhanced wind can prevent the galaxy from becoming too metal-rich without such a large gas loss [@vad86]. For simplicity, we assume that the SN winds have the same chemical content as the gas remaining in the galaxy. See Sec. \[sec:Zwind\] for a further discussion of including metal-enhanced winds in the model.
Complete Chemical Evolution Equation
------------------------------------
The complete equation that describes the chemical evolution of the galaxy’s gas is
$$\begin{aligned}
\xi_j(t) &=& X_j(0)\,M_{\rm gas}(0) + \int_0^{t} (-\dot{\xi}_{j,*} + \dot{\xi}_{j,{\rm II}} + \dot{\xi}_{j,{\rm Ia}} + \label{eq:gce} \\
& & \dot{\xi}_{j,{\rm AGB}} + \dot{\xi}_{j,{\rm in}} - \dot{\xi}_{j,{\rm out}})\, dt \nonumber\end{aligned}$$
The initial gas mass, $M_{\rm gas}(0)$, is a free parameter. A non-zero initial gas mass may seem inconsistent with Eq. \[eq:sfr\] because the gas should form stars as it falls into the galaxy. However, the galaxy could acquire gas available for star formation—via gravitational collapse or cooling, for example—on a timescale faster than the star formation timescale. We will show that the non-zero initial gas mass is more important for the more luminous dSphs.
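Numerically, Eq. \[eq:gce\] is marched forward in time; a minimal explicit-Euler sketch of that loop (an illustration of the scheme, not the authors' code; `net_rates` is an assumed callable bundling all six source terms) looks like this:

```python
import numpy as np

def evolve(net_rates, xi0, dt=1e-3, t_max=14.0):
    """Explicit-Euler sketch of Eq. [eq:gce]: step the element gas masses
    forward with the net source term (-SF + SNII + SNIa + AGB + infall - outflow),
    stopping when the gas is exhausted or t_max is reached.
    net_rates(t, xi) returns d(xi_j)/dt for all tracked elements;
    dt = 1e-3 Gyr = 1 Myr, as in the text."""
    xi = np.array(xi0, dtype=float)
    t = 0.0
    history = [(t, xi.copy())]
    while t < t_max and xi.sum() > 0.0:
        xi = np.maximum(xi + net_rates(t, xi) * dt, 0.0)   # no negative gas masses
        t += dt
        history.append((t, xi.copy()))
    return history
```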
Shortcomings of the Model {#sec:shortcomings}
-------------------------
Our model incorporates realistic conditions in dwarf galaxies. We model chemical evolution using an observed Type Ia SN DTD [@mao10]. We also take into account the lifetimes of Type II SN progenitors, rather than assuming instantaneous recycling. The delay helps to shape the metal-poor abundance distributions because it affects the rapid rise in metallicity after the onset of star formation.
However, our model is not as sophisticated as some other chemical evolution models of dwarf galaxies [e.g., @mar08; @rev09; @saw10]. In the next section, we show the best model fits to eight different MW satellite galaxies. The simplicity of our model reduces the computational demand of finding the best solution. Nonetheless, we enumerate some shortcomings which affect the interpretation of the abundance distributions.
1. The turn-on time for Type Ia SNe is poorly constrained. @mao10 showed that it is almost certainly 0.1 Gyr or less (at least in the Magellanic Clouds and higher redshift elliptical galaxies), but the DTD slope ($t_{\rm delay}^{-1.1}$) is divergent as $t_{\rm delay}$ approaches zero. Therefore, the number of Type Ia SN that explode shortly after their progenitors form depends sensitively on the turn-on time. The uncertainty in the turn-on time translates to a large uncertainty in the Fe abundance distribution. With all other model parameters held fixed, an earlier turn-on time would cause the metallicity of the MDF peak to increase and \[$\alpha$/Fe\] at a given metallicity to decrease. See @mat09 for a detailed discussion of the effect of adjusting the ratio of prompt to delayed Type Ia SNe.
2. The SN yields are imperfect. As we mention in Sec. \[sec:dsphs\], we needed to increase the \[Mg/H\] output of the model by 0.2 dex [see @fra04]. Furthermore, Ti is severely underproduced in our model. Therefore, we do not consider Ti abundances at all.
3. Our model assumes instantaneous mixing. Relaxing this approximation would require multiple zones, which we do not consider for the sake of computational simplicity. See @mor02, @mar06 [@mar08], @rev09, and @saw10 for three-dimensional chemical models of dwarf galaxies.
4. We also assume instantaneous gas cooling. The cooling time for gas to become available for star formation (after accretion or ejection from SNe and AGB stars) may be longer than the model time step, $\Delta t = 1~{\rm Myr}$. A proper treatment of the cooling time, such as in a hydrodynamical model, might result in slightly longer SF durations than we derive with instantaneous cooling.
5. On a related note, we also ignore dynamical processes. Our adoption of a single value of $A_{\rm out}$, the gas ejected from the galaxy in the wind of one supernova, implicitly assumes that the potential of the galaxy is homogeneous and static. This assumption is inconsistent with our allowance of gas to flow into the galaxy. Although dark matter dominates the dynamical mass of dSphs, they undoubtedly change their dark matter masses during their star formation lifetimes [@rob05; @bul05; @joh08]. Furthermore, baryonic (adiabatic) contraction can affect star formation and feedback in the dense centers of the dSphs [@nap10].
6. We consider only one parametrization of the gas infall rate. Because the star formation rate is proportional to the gas mass, the gas infall rate essentially shapes the differential MDF. Differently shaped gas infall histories might better reproduce the dSph MDFs. External influences on the gas flow (or alternatively, availability of gas cool enough to form stars) that we do not consider include reionization [@bul00] and tidal and ram pressure stripping [@lin83].
7. We model only one episode of star formation. CMDs have revealed extended and possibly bursty SFHs in several dSphs in our sample (Fornax and Leo I and II). These bursts will not be included in our model. In these cases, we defer to the photometrically derived SFHs. In fact, we suggest for future study a more sophisticated analysis that models both the CMD and abundance distributions.
8. The infalling gas is assumed to be metal-free at all times. In reality, the metallicity may have increased over time because the source of the new gas may have been blowout from prior SF episodes in the galaxy in question or other galaxies. This gas would have been enriched by SNe and other nucleosynthetic sources.
9. The modeling result for a given galaxy represents only part of that galaxy’s stellar population. Our spectroscopic samples were centrally concentrated to maximize the number of member stars on a DEIMOS slitmask, but most dSphs have radial population gradients [e.g., Sculptor, @bat08]. As a result, we preferentially probe the younger, more metal-rich populations. MW satellite galaxies also shed stars as they interact with the Galaxy. @maj00b identified stars from the Carina dSph beyond Carina’s tidal radius. @maj02 and @mun06 discussed the implications for Carina’s present stellar population. In particular, the remaining stars are on average younger and more metal-rich than the lost stars. Consequently, the spectroscopic sample favors the younger, more metal-rich stars.
Some of these shortcomings are observational or theoretical uncertainties (1–2), which can only be resolved with a more thorough investigation of SN rates or yields. Others are simplifications (3–8), which can be resolved with more sophisticated models. The last shortcoming (9) could be resolved by an intensive, wide-field campaign with the intent to recover spectra for a magnitude-limited sample of red giants in a dSph. This project would require a great deal of telescope time, but it could be accomplished in principle for one or two dSphs. Foreground contamination could be minimized by selecting a dSph at high Galactic latitude or photometrically pre-selecting likely members [e.g., @maj00a].
Gas Flow and Star Formation Histories {#sec:dsphs}
=====================================
We apply our chemical evolution model to eight dSphs: Fornax, Leo I, Sculptor, Leo II, Sextans, Draco, Canes Venatici I, and Ursa Minor. We use the abundance measurements from @kir10b. For each galaxy, we attempt to match simultaneously the distribution of \[Fe/H\] and the trends of \[Mg/Fe\], \[Si/Fe\], and \[Ca/Fe\] with \[Fe/H\] by adjusting the six free parameters listed at the bottom of Table \[tab:gcevars\].
Unfortunately, some elemental abundances could not be matched for any combination of parameter values. In particular, the model underpredicts \[Mg/H\] and \[Ti/H\]. @fra04 constructed a chemical evolution model for the Milky Way and also encountered trouble in reproducing the yields. They concluded that the SN yields should be modified. They specifically singled out Mg for being underproduced by both Type Ia SNe and low-mass Type II SNe. We feel comfortable modifying the model results for \[Mg/H\] because chemical evolution models by different authors over a wide range of galaxy masses and ages indicate that such modification is necessary. We add 0.2 dex to \[Mg/H\] to bring the model into better agreement with the data. However, the @nom06 Type II SN yield for \[Ti/Fe\] is about $-0.1$ dex, which is far below the value observed for metal-poor stars in dSphs or in the MW halo. Rather than attempting to correct such a large deficit, we ignore the model result for Ti. @nom06 also ignore their Ti yields in their own chemical evolution model of the solar neighborhood.
In @kir10a, we found the best-fit analytical chemical evolution models for the same eight dSphs based on their MDFs alone. We repeat the process here for our more sophisticated model. As in @kir10a, we use maximum likelihood estimation to find the best-fit model parameters.
The likelihood that a particular model matches the data is the product of probability distributions. Each star is represented by a probability distribution in a four-dimensional space. The four dimensions are \[Fe/H\], \[Mg/Fe\], \[Si/Fe\], and \[Ca/Fe\]. We denote these quantities as $\epsilon_{i,j}$, where $i$ represents the $i^{\rm th}$ star and $j$ identifies one of the four element ratios. The Gaussian is centered on the star’s observed values. The width in each axis is the estimate of measurement uncertainty ($\delta \epsilon_{i,j}$) in that quantity. Stars with larger uncertainties have less weight in the likelihood calculation than stars with smaller uncertainties. (Although Figs. 2–9 show only stars with uncertainties less than 0.3 dex, there is no error cut in the likelihood calculation. Instead, we downweight stars with large uncertainties.) The chemical evolution model traces a path $\epsilon_j(t)$ in the four-dimensional space. The probability that a star formed at a point $t$ on the path is $dP/dt = \dot{M}_*(t)/M_*$, where $M_*$ is the galaxy’s final stellar mass. The likelihood that one star conforms to the model is the line integral of $dP/dt$ along the path $\epsilon_j(t)$. The total likelihood $L$ is the product of the individual likelihoods of the $N$ stars:
$$\begin{aligned}
L &=& \prod_{i=1}^{N} \int_0^t \left(\prod_j \frac{1}{\sqrt{2\pi}\,\delta\epsilon_{i,j}} \exp \frac{-(\epsilon_{i,j}-\epsilon_j(t))^2}{2(\delta\epsilon_{i,j})^2}\right) \frac{\dot{M}_*(t)}{M_*} \, dt \nonumber \\
& & \times \; \bigg(\frac{1}{\sqrt{2\pi}\,\delta M_{*,{\rm obs}}} \exp \frac{-(M_{*,{\rm obs}}-M_{*,{\rm model}})^2}{2(\delta M_{*,{\rm obs}})^2} \nonumber \\
& & \times \; \frac{1}{\sqrt{2\pi}\,\delta M_{\rm{gas},{\rm obs}}} \exp \frac{-(M_{{\rm gas},{\rm obs}}-M_{{\rm gas},{\rm model}})^2}{2(\delta M_{{\rm gas},{\rm obs}})^2}\bigg)^{0.1N} \label{eq:lprod}\end{aligned}$$
The second line of the equation requires that the final stellar mass of the model ($M_{*,{\rm model}}$) matches the observed stellar mass ($M_{*,{\rm obs}}$) within the observational uncertainties. We adopt the stellar masses of @woo08. They did not study Canes Venatici I. We assume that galaxy has about the same stellar mass as Ursa Minor because it has the same luminosity within the observational uncertainties. The third line of the equation assures that the dSph ends up gas free. We fairly arbitrarily assume an uncertainty of $\delta M_{{\rm gas},{\rm obs}} = 10^3~M_{\sun}$ because even lower values of $\delta M_{{\rm gas},{\rm obs}}$ cause the chemical evolution model to converge on spurious solutions. The exponent $0.1N$ sets the relative influence of the final stellar and gas mass compared to the abundance distributions. This value was chosen so that these quantities did not dominate the likelihood but also so that the modeled galaxies ended up gas-free and with about the correct stellar mass.
For computational simplicity, we minimize the quantity ${\hat L} = -\ln L$:
$$\begin{aligned}
{\hat L} &=& -\sum_{i=1}^{N} \ln \int_0^{t_{\rm f}} \left(\prod_j \frac{1}{\sqrt{2\pi}\,\delta\epsilon_{i,j}} \exp \frac{-(\epsilon_{i,j}-\epsilon_j(t))^2}{2(\delta\epsilon_{i,j})^2}\right) \frac{\dot{M}_*(t)}{M_*} \, dt \nonumber \\
& & + \; 0.1N \bigg(\frac{(M_{*,{\rm obs}}-M_{*,{\rm model}})^2}{2(\delta M_{*,{\rm obs}})^2} \nonumber \\
& & + \; \frac{(M_{{\rm gas},{\rm model}})^2}{2(\delta M_{{\rm gas},{\rm obs}})^2} + \ln (2\pi) + \ln (\delta M_{*,{\rm obs}}) + \ln (\delta M_{{\rm gas},{\rm obs}})\bigg) \label{eq:lsum}\end{aligned}$$
We find the values of the six parameters that minimize ${\hat L}$ using Powell’s method. We calculate uncertainties on the model parameters via a Monte Carlo Markov chain. We perform at least $10^4$ trials for each dSph after a burn-in period of $10^3$ trials. The dSphs with shorter SF durations require less computation time, and we were able to perform up to $5 \times 10^4$ trials for some of the dSphs. As in @kir10a, the model uncertainties are the two-sided 68.3% confidence intervals. These uncertainties incorporate only observational uncertainty and not systematic model errors. Table \[tab:gcepars\] lists the solutions for each dSph in order of decreasing luminosity.
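The two-step procedure above can be sketched as follows. The quadratic `lhat` is a hypothetical two-parameter stand-in for the full six-parameter ${\hat L}$, and the Metropolis random walk is one simple realization of the MCMC sampling (the actual sampler is not specified here); both are for illustration only:

```python
import numpy as np
from scipy.optimize import minimize

# Toy stand-in for the negative log-likelihood Lhat(theta); the real model
# has six free parameters, but two suffice to illustrate the procedure.
def lhat(theta):
    a, b = theta
    return (a - 1.0) ** 2 / 0.02 + (b + 2.0) ** 2 / 0.5

# Step 1: Powell's method (derivative-free) finds the best-fit parameters.
best = minimize(lhat, x0=[0.0, 0.0], method="Powell")

# Step 2: a Metropolis random walk around the optimum yields posterior
# samples from which confidence intervals on the parameters are read off.
def metropolis(fun, theta0, step, n_trials, burn_in, seed=0):
    rng = np.random.default_rng(seed)
    step = np.asarray(step, float)
    theta, f = np.asarray(theta0, float), fun(theta0)
    chain = []
    for i in range(burn_in + n_trials):
        prop = theta + step * rng.standard_normal(theta.size)
        f_prop = fun(prop)
        # Accept with probability exp(-(Lhat_prop - Lhat_current)).
        if np.log(rng.random()) < f - f_prop:
            theta, f = prop, f_prop
        if i >= burn_in:
            chain.append(theta.copy())
    return np.array(chain)

chain = metropolis(lhat, best.x, step=[0.1, 0.5],
                   n_trials=10_000, burn_in=1_000)
lo, mid, hi = np.percentile(chain, [15.85, 50.0, 84.15], axis=0)
```

The two-sided 68.3% confidence intervals correspond to the 15.85th and 84.15th percentiles of the post-burn-in chain.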
Table \[tab:duration\] lists the total star formation durations for the most likely models. The duration is not a free parameter but a result of the model. The table also lists some timescales derived from HST CMDs [@dol05; @orb08]. It is not possible to measure photometrically the total star formation duration for predominantly ancient stellar populations because 10 Gyr isochrones are extremely similar to 13 Gyr isochrones. Therefore, we have quoted $f_{10G}$, the fraction of stars formed more recently than 10 Gyr. For small or zero values of $f_{10G}$, the CMD shows that the population is ancient, but there is no time resolution. We also show the stellar mass-weighted mean age $\tau$ [@orb08]. For the three dSphs with intermediate-aged populations (Fornax and Leo I and II), $\tau$ combined with $f_{10G}$ gives some idea of the star formation duration. For example, Fornax formed $1 - f_{10G} = 27\%$ of its stars beyond 10 Gyr ago, but the mean age is just 7.4 Gyr. Half of Fornax’s stars formed over at least 2.6 Gyr, and the other half formed even more recently. Our abundance-derived duration of 1.3 Gyr is inconsistent with this photometric star formation duration. For Fornax and Leo I and II, we defer to the photometrically derived SFHs (see item 7 of Sec. \[sec:shortcomings\]). They are more realistic because they permit an arbitrary number of SF episodes. For the galaxies whose CMDs identify them to be ancient, our abundance distributions are far more sensitive probes of the SF duration than the CMD.
In the following sections, we discuss the derived star formation and gas flow histories for each dSph and compare them to previous photometrically and spectroscopically derived SFHs.
Fornax
------
We begin our discussion with the most luminous of the mostly intact MW dSph satellites, Fornax. Its \[$\alpha$/Fe\] distribution (Fig. \[fig:for\]) shows the least evidence of correlation with \[Fe/H\] of all eight dSphs studied here. In the range $-1.3 \la
{{\rm [Fe/H]}}\la -0.5$, the four \[$\alpha$/Fe\] element ratios span almost 1 dex at a fixed metallicity with no evidence of a slope with \[Fe/H\]. The rarer stars more metal poor than ${{\rm [Fe/H]}}\approx -1.3$ have higher average \[$\alpha$/Fe\].
The large range of \[$\alpha$/Fe\] and the lack of correlation with \[Fe/H\] each suggest bursty or inhomogeneous star formation. A bursty SFH would cause spikes and depressions in \[$\alpha$/Fe\] as \[Fe/H\] increases monotonically [e.g., @gil91], even if the star formation were well-mixed over the whole galaxy at all times. Measurement uncertainties might blur the division between the \[$\alpha$/Fe\] spikes in different bursts. Alternatively, if the SN nucleosynthetic products were not well-mixed, the \[$\alpha$/Fe\] value of a star would reflect the particular SFH of its birth site rather than the galaxy as a whole. Consequently, the abundance distribution would be a composite of several different SFHs. Coupled with measurement uncertainties, the composite distribution may look like an uncorrelated scatter of points, such as the distribution in Fig. \[fig:for\]. Burstiness and inhomogeneity are not mutually exclusive. Both processes might have affected Fornax’s SFH.
Based on HST/Wide Field Planetary Camera 2 (WFPC2) photometry, @buo99 surmised that the field (not globular cluster) population of Fornax endured three major bursts of star formation separated by about 3 Gyr. @sav00, @bat06, @gul07, and @col08 provided additional photometric and spectroscopic evidence of multiple discrete populations, including a burst 4 Gyr ago. @gre99, @bat06, and @col08 additionally showed that the younger, more metal-rich populations are more centrally concentrated. Thus, it seems that star formation in Fornax was both bursty and inhomogeneous.
Our chemical evolution model is incompatible with Fornax’s complex SFH. First, we model the SFR as a smooth function, not a bursty one. Second, the model has only one zone and does not account for spatially segregated star formation. Consequently, the SFH derived from our model should be viewed with skepticism. Most notably, we derive a total star formation duration of 1.3 Gyr (the time at which star formation and SN winds exhausted the gas supply, thereby truncating star formation), whereas every photometric study shows that star formation in Fornax lasted for most of the age of the Universe. In addition, the model does not match the observed flatness of the \[$\alpha$/Fe\] distribution for the bulk of the stars. However, the model does share one important quality with photometrically derived SFHs: The initial metal enrichment is very rapid. The metallicity in our model reaches ${{\rm [Fe/H]}}= -1$ at 0.3 Gyr after the commencement of star formation. @pon04 deduced that Fornax reached ${{\rm [Fe/H]}}=
-1$ within a few Gyr. One advantage of a spectroscopically derived SFH is that it is sensitive to relative ages, whereas a photometrically derived SFH is sensitive to absolute ages but has poor age resolution for old populations. @let10 measured multi-element abundances from higher resolution spectra of 81 Fornax members. We showed in @kir10b that our abundance measurements match theirs very well. They pointed out that centrally selected stars in Fornax will preferentially sample the young, metal-rich component. In fact, the most metal-poor star known in Fornax [${{\rm [Fe/H]}}= -3.66$, @taf10] is very far ($43'$) from the center of the dSph. The discovery emphasizes that selecting stars in the center of the dSph biases the age and metallicity distribution.
Leo I
-----
Leo I is the second most massive dwarf galaxy in our sample. The \[$\alpha$/Fe\] distribution of Leo I shows a moderate correlation with \[Fe/H\]. In particular, the lower metallicity stars (\[Fe/H\]$ < -1.5$) show on average higher \[$\alpha$/Fe\] (except for Ti) than the more metal-rich stars.
@lee93 obtained the first CCD-based CMD of Leo I, and they found hints of a young (3 Gyr) population. @cap99 and @gal99a conducted the first comprehensive studies of Leo I’s SFH using CMDs obtained with HST/WFPC2. Because these CMDs reached the main-sequence turnoff of the oldest ($>10$ Gyr) populations, they were able to study the multiple stellar populations and complex SFH. Leo I was thought to be unique among the MW satellite dSphs for lacking a conspicuous horizontal branch (HB) until a $12\arcmin \times
12\arcmin$ ground-based survey of Leo I by @hel00 revealed a HB structure in its CMD. The existence of both an extended blue HB and RR Lyrae stars [@hel01] suggested that Leo I is in fact similar to other local dSph galaxies in having a $> 10$ Gyr population, but the majority of stars were still believed to have formed later than 7 Gyr ago. However, a recent CMD obtained with HST/Advanced Camera for Surveys/Wide Field Camera [@sme09] reached far deeper than the earlier ones and showed that at least half of the stars were in fact formed more than 9 Gyr ago, which is consistent with the abundant RR Lyrae stars found by @hel01. In addition, @sme09 combined their CMD with the spectroscopic MDF of @bos07 to find that Leo I experienced two episodes of star formation around 2 and 5 Gyr ago.
Because our chemical evolution models halt when the gas mass drops to zero, we are unable to recover the later phases of SFH (i.e., the two bursts at 2 and 5 Gyr ago). Nonetheless, our model provides insights into the early phase with better time resolution. Overall, our model matches the observed trend of \[$\alpha$/Fe\] with \[Fe/H\] fairly well, but the model MDF slightly overpredicts the frequency of metal-rich stars. The observed MDF also shows a more pronounced peak at \[Fe/H\] $= -1.4$ than the model. The initial starburst that likely led to the formation of Leo I lasted for about 1.4 Gyr. This is much shorter than the star formation duration of $\sim 5$ Gyr derived by photometric studies. As with other galaxies in our sample, adding burstiness to our model would help resolve these discrepancies. @lan10 suggested that Leo I is characterized by a low SFR and intense galactic wind. The main difference between their model and ours is that we start with a much higher gas mass (by a factor of $\sim 400$). Also, our model requires a highly efficient SFR to match the observed MDF. The discrepancies with @lan10 partly result from our choice to use unenhanced galactic winds. Metal-enhanced winds would reduce the amount of gas required to be blown out. As for Fornax, our model is qualitatively consistent with previously derived SFHs in the sense that the overall metallicity increases quickly at early times.
Leo I’s orbital dynamics, as studied by @soh07 and @mateo08, indicate close passes to the center of the MW. The dSph almost certainly lost stars in tidal interactions near its perigalacticon. The prevalence of an intermediate-aged (rather than old) population in Leo I may be a consequence of this tidal stripping. Because the stripped stars do not fall in our spectroscopic sample, our model does not represent some stars that formed early in Leo I’s history (see Sec. \[sec:shortcomings\], item 9).
Sculptor
--------
Our chemical evolution model for Sculptor produces one of the best fits to the abundance distributions (Fig. \[fig:scl\]) out of all of the dSphs, particularly for the asymmetrical MDF. In @kir10a, we could not reproduce the width of Sculptor’s MDF with an analytical model of chemical evolution. Our more sophisticated model, which more properly treats Fe as a secondary nucleosynthetic product with multiple origins (Types II and Ia SNe), yields a broad, well-matched MDF for the appropriate choice of parameters. The combination of a low SFR normalization ($A_*$) and low initial gas mass maintains a lower rate of star formation than Fornax or Leo I. Consequently, the metal enrichment is less rapid and the SN-induced gas blowout is less severe. The resulting MDF has both metal-poor and metal-rich stars and is less-peaked than for the more luminous dSphs.
@nor78 first drew attention to the possibility that Sculptor was chemically inhomogeneous. @dac84 found that the bulk of Sculptor’s stars are slightly younger than the oldest globular clusters (GCs) but older than Fornax. With HST/WFPC2 photometry, @mon99 found that Sculptor is just as old as the GCs. Neither study could determine whether the bluer stars were a younger population or blue stragglers from the older population. @map09 presented evidence that the blue stars are true blue stragglers, meaning that Sculptor has only an old population. However, old does not necessarily mean single-aged. In fact, @maj99 found that Sculptor undoubtedly contains multiple stellar populations based on its HB and red giant branch (RGB) morphologies. The existence of a metallicity spread, the depression of \[$\alpha$/Fe\] with increasing metallicity, and the radial change in HB morphology means that star formation lasted for at least as long as the lifetime of a Type Ia SN and possibly for a few Gyr [@tol03; @bab05].
Our chemical evolution model conforms to the photometric description of Sculptor’s SFH. According to our model, Sculptor formed stars for 1.1 Gyr. In fact, one of the major advantages of an abundance-derived SFH is that it can resolve ages of old populations much more finely than a photometrically-derived SFH. As a result, we believe our estimate of the star formation duration to be the most precise presently available for Sculptor.
@lan04 also found a chemical evolution model to match the five stars with then-available multi-element abundance measurements [@she03]. Their model showed a sharp kink or knee at the time when Type Ia SNe ejecta began to dilute the \[$\alpha$/Fe\] ratio with large amounts of Fe. Our model shows a less pronounced knee that occurs at lower \[Fe/H\] and higher \[$\alpha$/Fe\] primarily due to our different treatments of the Type Ia SN DTD. @rev09 modeled unpublished abundance measurements by the Dwarf Abundances and Radial Velocities Team (DART) for Sculptor with a sophisticated hydrodynamical model. They found that nearly all of the stars formed between 10 and 14 Gyr ago, with nearly half of the stars forming at least 13 Gyr ago. The model supposed that the stars formed in about five bursts. It is possible that adding burstiness to our model would help to reconcile the model with the observed data, such as the peak in the MDF at ${{\rm [Fe/H]}}= -1.3$ and the discrepancy in \[Ca/Fe\] at high metallicity. However, @rev09’s model predicted many more stars at ${{\rm [Fe/H]}}< -3$ than we or DART (who sample a wider area) observe. A less intense initial burst (crudely approximated by the 0.3 Gyr SFR rise time in Fig. \[fig:scl\]) better matches the low-metallicity MDF. Finally, in constructing a chemical evolution model of Sculptor, @fen06 found that neutron-capture elements contribute significantly to the ability to discriminate between different models of star formation. Large, high-resolution surveys will add these elements to the dSphs’ repertoire of abundance measurements.
Like Fornax, the central regions of Sculptor are dominated by a more metal-rich population than the outer regions [@bat08]. Our sample is centrally concentrated in order to maximize the sample size. The selection results in a bias toward metal-rich, presumably younger stars, possibly shortening the derived SF duration compared to what we would deduce from a more radially extended sample.
We also presented Sculptor’s abundance distributions in @kir09. Minor modifications to the abundance measurements (@kir10b) and the restriction of the plot to points with measurement uncertainties less than 0.3 dex in either axis cause Fig. \[fig:scl\] to appear slightly different from Figs. 10–12 in @kir09. The differences do not affect any of the conclusions of @kir09.
Leo II
------
The abundance distributions for Leo II resemble Sculptor in many ways. The MDF slowly rises to a peak followed by a sharp cut-off, and \[$\alpha$/Fe\] declines smoothly with increasing \[Fe/H\]. The best-fit SFH model shows a great deal of gas loss, like Sculptor. @bos07 also suggested that Leo II may have experienced more intense galactic winds than Leo I due to a lower peak in the MDF. In fact, we find that the mass lost per SN ($A_{\rm out}$) is higher in Leo II (${6.6}\times 10^3~M_{\sun}~{\rm SN}^{-1}$) than in Leo I (${3.9}\times 10^3~M_{\sun}~{\rm SN}^{-1}$).
Perhaps by virtue of its large Galactocentric distance (221 kpc), Leo II has maintained star formation for longer than Sculptor. @mig96 found from HST/WFPC2 photometry that the dSph started forming stars 14 Gyr ago and continued forming stars for about 7 Gyr. In a reanalysis of the same data, @orb08 determined that 30% of Leo II’s stars formed earlier than 10 Gyr ago and 67% formed between 5 and 10 Gyr ago. @she09 resolved the age-metallicity degeneracy in the CMD by using metallicities based on spectral synthesis of Keck/LRIS spectra. They found a significant population of stars as young as 3 Gyr. However, they pointed out a number of caveats that may introduce large errors into their age measurements.
We derive a star formation duration of 1.6 Gyr. Although it is the longest duration that we measure for the eight dSphs, it does not approach the photometrically derived durations. The smoothness of the modeled SFR may mask the true duration of SF. The abundance distributions—particularly \[Si/Fe\] and \[Ti/Fe\]—show a smattering of points beyond the main trend line. These stars may represent stellar populations of temporally separated bursts. @rev09 showed that a model with about 13 SF episodes matches the dispersion in \[Mg/Fe\] at a given \[Fe/H\] [observations by @she09] fairly well. Our model for Leo II, like Sculptor, may benefit by adding burstiness.
Sextans {#sec:sex}
-------
Sextans, Draco, and Ursa Minor form a class of galaxies with similar abundance distributions and SFH models. Their MDFs are fairly symmetric (less so for Ursa Minor) with a clump of stars at ${{\rm [Fe/H]}}\sim -3$. Their \[$\alpha$/Fe\] ratios decline smoothly with increasing \[Fe/H\]. The dispersion in \[$\alpha$/Fe\] at a given \[Fe/H\] is fairly small. Most of the derived star formation parameters are similar (infall normalization, $A_{\rm in} \sim 1.1-1.5 \times 10^9~M_{\sun}~{\rm Gyr}^{-1}$; infall timescale, $\tau_{\rm in} \sim 0.2$ Gyr; outflow rate, $A_{\rm out} \sim 10^4~M_{\sun}~{\rm SN}^{-1}$).
The small bump in the MDF at ${{\rm [Fe/H]}}\sim -3$ deserves some discussion because it appears in Sextans, Draco, and Ursa Minor. A depression in the MDF appears between the bump and the bulk of the MDF. This bump might indicate a small, rapid SF burst at early times followed by an epoch of minimal star formation, possibly because the SNe from the initial burst blew out the gas. When the galaxy reacquired more cool gas, the bulk of SF began. The few available \[$\alpha$/Fe\] measurements in the bump are large, indicating that the stars in the bump formed before the onset of Type Ia SNe. Because our model does not permit individual bursts, we cannot support this speculation beyond our qualitative argument.
Despite the low metallicity and low luminosity of Sextans, @bel01 found that the dSph has at least two stellar populations based on its HB and RGB morphology. With HST/WFPC2 photometry, @orb08 found no stars older than 10 Gyr. @lee09 measured Sextans’s SFH based on wide field photometry coupled with an algorithm that self-consistently derives the SFH and chemical evolution of the galaxy. They deduced that SF in Sextans occurred mainly between 11 and 15 Gyr ago, but some stars formed as recently as 8 Gyr ago. However, they assumed that Sextans is a closed box. In @kir10a, we showed that the MDF is inconsistent with a closed box. We allow gas to leave the system, which would bring an earlier end to SF than in a closed box. As a result, we find a SF duration of just 0.8 Gyr.
Draco
-----
Because we conducted a more intense observational campaign on Draco than on Sextans, we better sample Draco’s abundance space. The better sampling does not change our qualitative description of the trio comprised of Sextans, Draco, and Ursa Minor (see Sec. \[sec:sex\]). The metal-rich side of Draco’s MDF seems tiered, with fewer stars than our model predicts at ${{\rm [Fe/H]}}= -1.5$ and $-1.2$. The tiers may indicate discontinuous periods of SF.
As a consequence of its proximity, Draco was one of the first dSphs subjected to spectroscopic scrutiny. This system has a stellar mass comparable to globular clusters, which are homogeneous in iron-peak elements. Therefore, the discovery of a metal abundance spread within this system [@kin80; @kin81; @ste84; @smi84; @leh92] proved to be a notable peculiarity. Furthermore, Draco contains stars more metal-poor than any globular cluster. The first attempt to interpret the metallicity distribution within Draco was that of @zin78. He compared metallicities derived for 23 red giants from the Hale 5-m multichannel scanner to a chemical evolution model that incorporated gas loss (with a rate proportional to the SFR) but no gas inflow. In order to account for the low metallicity of Draco, @zin78 inferred that this system had lost some 90–99% of its initial gas mass. Subsequent spectroscopic and photometric work has more extensively documented the MDF and increased the number of elements for which abundances have been measured [@she98; @she01a; @apa01; @bel02; @win03; @smi06; @far07; @abi08; @coh09].
HST/WFPC2 photometry [@gri98] and wide-field Isaac Newton Telescope photometry [@apa01] showed little evidence for stars younger than 10 Gyr in Draco. On the other hand, @iku02, who also pointed out the similarities between Sextans, Draco, and Ursa Minor, found a longer SF duration: between 3.9 and 6.5 Gyr. However, @iku02, like @lee09, assumed that a closed box was an adequate description of the galaxy. In @kir10a, we determined that failing to account for gas outflow overpredicts the peak metallicity of the MDF and that failing to account for gas infall results in an MDF shape that does not match the observations. Our abundance-based SF duration, relaxing the closed box assumption, is 0.7 Gyr. Strangely, based on the same HST/WFPC2 data that @gri98 used, @orb08 determined that half of the stars in Draco are younger than 10 Gyr. @orb08 derived SFHs for many dSphs, and they did not mention Draco explicitly in their text. As a result, we do not know why their SFH diverged from that of @gri98.
@coh09 analyzed high-resolution spectroscopic abundances for eight newly observed stars and six stars from the literature. They fit a toy model with low- and high-metallicity plateaus in \[X/Fe\]. The low-metallicity plateau has a maximum metallicity of ${{\rm [Fe/H]}}=
-2.9$ for \[Mg/Fe\] and $-2.4$ for \[Si/Fe\]. We do not see a low-metallicity plateau because our sample does not include enough metal-poor stars. Instead, we observe a smooth, monotonic decline in all four \[$\alpha$/Fe\] ratios as a function of increasing \[Fe/H\]. The absence of a low-metallicity plateau for the metallicity range of our sample suggests that Type Ia SNe were exploding for nearly the entire SF lifetime of Draco. @mar06 [@mar08] constructed a hydrodynamical model of a Draco-like dSph. In order for \[$\alpha$/Fe\] to drop to 0.2 dex, their modeled dSph must have evolved for at least 2 Gyr. However, at small radius—the location of most spectroscopic surveys, including the majority of our Draco sample—\[$\alpha$/Fe\] does drop to lower values sooner than in the dSph as a whole. Nonetheless, @mar08 predicted mostly stars with \[$\alpha$/Fe\] larger than 0.2 dex with a plateau at low metallicity. We observe neither of these qualities. Even so, their model does qualitatively reproduce important features of dSph abundance distributions, including radial gradients in both \[Fe/H\] and \[$\alpha$/Fe\], the shape of the MDF, and an anti-correlation between metallicity and velocity dispersion.
Finally, we point out that, according to our model, Draco lost an enormous amount of gas from SN winds during its SF lifetime. @lan07 used Draco and Ursa Minor as case studies in the importance of SN winds. One interesting divergence from our model is that they found that a wind intensity proportional to the SFR rather than the SN rate better voided the dSph of gas by the present time, in agreement with the observed absence of gas. Our different prescription for the Type Ia DTD may mitigate the difference between the SFR and SN rate.
Canes Venatici I
----------------
Of all of our dSph models, that for Canes Venatici I adheres most closely to the observed abundance distributions, in part because of the sparse sampling. The MDF is a perfect match, and the predicted \[$\alpha$/Fe\] line passes through the observed locus of points, except for veering to slightly high \[$\alpha$/Fe\] values at high \[Fe/H\]. Unfortunately, only three stars pass the \[Mg/Fe\] uncertainty cut of 0.3 dex. More measurements of \[Si/Fe\] and \[Ca/Fe\] help us to determine a SF duration of 0.9 Gyr and an unusually low SFR exponent of $\alpha = {0.36}$. The weaker dependence on gas mass shapes the SFR profile in such a way that produces a more symmetric MDF while preserving a steadily declining \[$\alpha$/Fe\] distribution with increasing \[Fe/H\].
Because Canes Venatici I was discovered recently [@zuc06], few photometric studies exist. @martin08b found that the dSph contains mostly stars older than 10 Gyr, but 5% of the stars could be as young as 1.4 Gyr. @kue08, with a shallower CMD, found possible evidence for a population as young as 0.6 Gyr. They also found three candidate anomalous Cepheid variables, indicating an intermediate-age population. Because the young population is much smaller than the old population, our chemical evolution model and its SF duration should be viewed as applicable to the dominant old population.
Ursa Minor
----------
The low-mass Ursa Minor dSph has sometimes been studied in comparison with the Draco dSph, in regard to both its metallicity inhomogeneity and stellar population [@zin81; @ste84; @bel85; @she01b; @bel02; @win03; @abi08]. A relatively small age spread and an ancient mean age [@ols85; @mig99; @carr02] also make it an interesting contrast to halo globular clusters. However, spectroscopy has shown that Ursa Minor has a heavy element abundance spread of more than 1 dex [@zin81; @she01a; @win03; @sad04; @coh10] even though its stellar mass is similar to that of a GC. @cud86 conducted a photometric survey of Ursa Minor down to the HB. With $\sim 450$ members, they found that the stellar population resembles that of an old, metal-poor GC with a steep RGB and a blue horizontal branch. The HST/WFPC2 imaging study of @mig99 confirmed this SFH: a single major burst of star formation about 14 Gyr ago with a duration of less than 2 Gyr. Our best-fit model agrees with these earlier results. From our observed abundance distributions, we deduce that almost all of the star formation in Ursa Minor occurred over an interval of only 0.4 Gyr. In contrast, @iku02 derived an extended period of star formation lasting for about 5 Gyr from their closed-box analysis of the CMD. In @kir10a, we showed that Ursa Minor’s MDF is inconsistent with a closed box. @coh10 used metallicities from moderate resolution spectra combined with ages from isochrones to reaffirm that most of the stars in Ursa Minor are quite old.
MDFs have been generated from photometric surveys by @bel02 and from moderate resolution spectroscopy by @win03. That of @bel02 is a good match to our observed MDF given in Fig. 9. Both show a sharp rise to a peak metallicity of about $-2$ dex with a more gradual decline towards higher \[Fe/H\]. The best-fit chemical evolution model for Ursa Minor produces an MDF that fails to match the rapid rise seen at ${{\rm [Fe/H]}}\la -2.3$ dex.
@coh10 provided detailed abundance analyses for a sample of 16 RGB stars, 6 of which came from earlier work by @she01a or from @sad04. Their trends for \[Mg/Fe\], \[Si/Fe\], \[Ca/Fe\], and \[Ti/Fe\] agree qualitatively with those found here, but their sample has better coverage of the regime ${{\rm [Fe/H]}}< -2.5$ dex, where they found a plateau in \[$\alpha$/Fe\]. At very low metallicity, \[$\alpha$/Fe\] in our models reaches highly supersolar ratios, which are larger than those observed at the metal-poor end of the Ursa Minor population by @coh10.
Previous chemical evolution models of Ursa Minor include those of @lan04, who found that Ursa Minor has the shortest duration of star formation of any of the six dSph satellites they studied. They deduced that Ursa Minor experienced only a single burst lasting perhaps 3 Gyr, a moderately high star formation efficiency, and an intermediate wind efficiency. In our model the wind efficiency, $A_{\rm out}$, is the highest of all the dSphs in our sample (see Tab. \[tab:gcepars\]). @lan04’s predicted MDF fails at low \[Fe/H\], as does ours, by being too extended. In a later paper, @lan07 studied the effect of galactic winds. They concluded that a strong galactic wind is necessary to reproduce the rather low \[Fe/H\] of the peak of the Ursa Minor MDF, but they still failed to reproduce the sudden scarcity of stars more metal-poor than the MDF peak.
Both @mar01 and @mun05 have discovered tidal debris around Ursa Minor. As we discussed in Sec. \[sec:shortcomings\] (item 9), our observations are centrally concentrated and therefore biased toward the relatively younger, more metal-rich population that is still bound to the dSph. A truly complete analysis of Ursa Minor’s SFH must also include the tidally stripped, unbound stars.
Further Exploration of the Chemical Evolution Model {#sec:exploration}
===================================================
In this section, we explore the parameters of the chemical evolution model that were previously not allowed to vary. Namely, we examine the dependence of the outcome of the model on the Type Ia SN delay time distribution, the hypernova fraction, and the metal enhancement of supernova winds. We have chosen Sculptor as a case study. In each of the following three sections, we alter one aspect of the chemical evolution model for Sculptor. Then, we use Powell’s method to find the combination of the six free parameters that maximizes the likelihood, as before. A Monte Carlo Markov Chain of at least $10^4$ trials provides the two-sided 68.3% confidence intervals for the first two altered models. Table \[tab:systematics\] compares the results of the new models with the original model.
Type Ia Delay Time Distribution {#sec:tIa3}
-------------------------------
We have adopted the Type Ia DTD of @mao10. The model is very sensitive to the delay time of the [*first*]{} Type Ia SN to explode after the onset of star formation. Unfortunately, this quantity is poorly measured. We have chosen 0.1 Gyr because that is the maximum value that @mao10’s DTD seems to allow. However, the DTD was measured in a range of galaxies with widely varying star formation environments. The details of the combined DTD (Fig. \[fig:dtd\]) may not be appropriate for dSphs. For example, @kob98 and @kob09 suggested that single-degenerate Type Ia SNe will be inhibited at low metallicity (${{\rm [Fe/H]}}\la -1$). Nonetheless, the decline of \[$\alpha$/Fe\] with increasing \[Fe/H\] in Figs. \[fig:leoi\]–\[fig:umi\] demands that some kind of Type Ia SN explode. Thus, the low-metallicity Type Ia SNe in dSphs may be mergers of double-degenerate binaries only. The removal of the single-degenerate channel could affect the DTD.
In order to explore the impact of changing the DTD on the chemical evolution model, we have recomputed the most-likely model parameters for Sculptor with a minimum Type Ia delay time of 0.3 Gyr instead of 0.1 Gyr. We did not change the DTD normalization. Figure \[fig:tIa3\] shows the result compared to the original model (Fig. \[fig:scl\]). The abundance distribution is identical except for the low-metallicity \[$\alpha$/Fe\] plateau, which is flatter for the longer delay time because the mass dependence of the Type II SN yields is muted. However, the right panel of Fig. \[fig:tIa3\] shows that the SFH has changed dramatically. In particular, the timescale of SF has been expanded. In fact, the differences in the SFHs can be explained by multiplying the time variable in the original model by about 3.5. The result is less intense star formation over a longer time. In the end, just as many stars are formed and just as much gas is blown out as in the original model.
We conclude that the Type Ia SN DTD is a major uncertainty in our model. The abundance data alone do not help to determine the minimum delay time. The timescales in our models can be multiplied by a factor constrained only by the poorly known minimum Type Ia SN delay time.
Hypernova Fraction {#sec:epshn05}
------------------
SN 1998bw was immediately identified to be unusual because of its association with a gamma-ray burst and a light curve that suggested relativistically expanding gas [@gal98]. @iwa98 determined that the explosion energy for SN 1998bw was about 30 times larger than that of the average SN. The energy of the explosion has consequences for the nucleosynthesis. @nom06 calculated nucleosynthetic yields for SNe at a variety of explosion energies.
One of the fixed parameters in our model is the fraction of stars that explode as very energetic hypernovae ($\epsilon_{\rm HN}$). We initially chose $\epsilon_{\rm HN} = 0$ (no HNe) because it seemed to better match the abundance patterns at the lowest metallicities (e.g., \[Ca/Fe\] in Sculptor). In order to explore the effect of HNe, we have also found the most likely model for Sculptor with $\epsilon_{\rm HN}
= 0.5$. This is the value that @nom06 chose for their own chemical evolution model of the solar neighborhood. @rom10 further explored the effect of changing $\epsilon_{\rm HN}$.
Figure \[fig:epshn05\] compares the result of the model with $\epsilon_{\rm HN} = 0.5$ with the original model ($\epsilon_{\rm HN}
= 0$). The abundance distributions are nearly identical except at ${{\rm [Fe/H]}}< -2.3$. The model with larger $\epsilon_{\rm HN}$ reaches higher \[Fe/H\] before Type Ia SNe turn on. This ensures that the lowest metallicity stars are not polluted by Type Ia SN ejecta. The result is a plateau in \[$\alpha$/Fe\] at low \[Fe/H\]. We further discuss the presence of such a plateau in the \[Ca/Fe\] ratio of Sculptor and the absence of plateaus in other dSphs in Sec. \[sec:universalpattern\].
The effect on the SFH is more noticeable than on the abundance distributions. The total star formation duration shortens from [1.1]{} Gyr to [0.82]{} Gyr. The HN model also requires no initial gas, though the original model for Sculptor already required very little gas. Less gas is lost to supernova winds in the HN model.
In conclusion, the inclusion of HNe has a minor effect on the abundance distributions and SFH. The most notable result is that very metal-poor stars (${{\rm [Fe/H]}}< -2.3$) in the HN model have \[$\alpha$/Fe\] ratios that are inconsistent with any amount of Type Ia SN ejecta. Instead, these stars incorporate the ejecta of only Type II SNe or HNe.
Metal-Enhanced Supernova Winds {#sec:Zwind}
------------------------------
The SNe in our model expel gas without regard to its composition. However, SN winds might be expected to be more metal-rich than the average gas-phase metallicity because metals are more opaque (and therefore more susceptible to radiation pressure) than hydrogen and helium and because the same SNe that create the metals could blow them away [@vad86; @mac99]. In this section, we explore the effect of a metal-enhanced SN wind. We refer the reader to @rob05 for a more thorough discussion of a model that included metal-enhanced winds from dwarf galaxies.
We parameterize the metallicity dependence of the wind by $f_Z$, which can vary between 0 and 1. Thus, we replace Eq. \[eq:winds\] with
$$\begin{aligned}
\dot{\xi}_{j,{\rm out}} &=& \left\{\begin{array}{lcr}
A_{\rm out} \, X_j \, (\dot{N}_{\rm{II}} + \dot{N}_{\rm{Ia}}) (1-f_Z) &~~~& j = {\rm H, He} \\
A_{\rm out} \, X_j \, (\dot{N}_{\rm{II}} + \dot{N}_{\rm{Ia}}) \left[f_Z \left(\frac{1}{Z} - 1 \right) + 1 \right] &~~~& {\rm otherwise} \\
\end{array} \right. \label{eq:Zwind}\end{aligned}$$
If $f_Z = 0$, then the wind is unenhanced. If $f_Z = 1$, then the winds expel only metals and no hydrogen or helium. For this experiment, we fix $f_Z$ at 0.01. Although that value seems small, the effect on the SFH is dramatic.
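To make the bookkeeping in Eq. \[eq:Zwind\] concrete, the wind prescription can be sketched as follows. This is an illustrative implementation only, not code from our model; the function and variable names are our own.

```python
def wind_loss_rate(A_out, X, Z, N_II_dot, N_Ia_dot, f_Z):
    """Per-element gas outflow rates, following Eq. [eq:Zwind].

    A_out    -- gas mass expelled per supernova
    X        -- dict of mass fractions X_j for each element j
    Z        -- total metallicity (mass fraction of all metals)
    N_II_dot, N_Ia_dot -- Type II and Type Ia SN rates
    f_Z      -- metal enhancement of the wind (0 = unenhanced, 1 = metals only)
    """
    sn_rate = N_II_dot + N_Ia_dot
    rates = {}
    for j, X_j in X.items():
        if j in ("H", "He"):
            # Hydrogen and helium outflow is suppressed by (1 - f_Z)
            rates[j] = A_out * X_j * sn_rate * (1.0 - f_Z)
        else:
            # Metals are enhanced by the complementary factor
            rates[j] = A_out * X_j * sn_rate * (f_Z * (1.0 / Z - 1.0) + 1.0)
    return rates
```

By construction, the total mass-loss rate summed over all elements equals $A_{\rm out}(\dot{N}_{\rm II} + \dot{N}_{\rm Ia})$ for any $f_Z$, because the metal mass fractions sum to $Z$; $f_Z$ only redistributes the loss between metals and H/He.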
The modeled metallicity distribution (Fig. \[fig:Zwind\]) does not fit the observed distribution as well as for the original model. Instead, there is an overabundance of metal-rich stars. The metal-rich discrepancy could be mitigated by increasing $A_{\rm out}$ (the total amount of gas lost per SN) at the cost of worsening the match at intermediate metallicities. The predicted \[$\alpha$/Fe\] distributions change only at ${{\rm [Fe/H]}}\ga -1.2$. Metal-enhanced gas loss causes the hook back toward lower \[Fe/H\] in the \[$\alpha$/Fe\] diagrams. Because the SFR is very low by the time \[Fe/H\] begins to decrease, very few stars are formed during this time.
The most dramatic effect on the SFH is that much less gas is lost over the lifetime of SF in the metal-enhanced wind model than in the original model. With an unenhanced wind, Sculptor ejects [$1.8 \times 10^8~M_{\sun}$]{} of the gas that it starts with or accretes. With a metal-enhanced wind, that number decreases to [$4.5 \times 10^6~M_{\sun}$]{}. In both models, Sculptor forms about $1.2 \times 10^6~M_{\sun}$ of stars. The implications for galaxy evolution are dramatic. In the first case, over $10^8~M_{\sun}$ of gas is required to catalyze star formation in Sculptor. Nearly all of this gas is returned to the ISM. In the metal-enhanced wind case, star formation in Sculptor requires a gas mass of only a few times its final stellar mass. The mass of metals returned to the intergalactic medium in both cases is the same, but in the metal-enhanced wind model, the metals in the ejected gas are much more concentrated. Changes to other aspects of the SFH are subtle.
We conclude that the amount of metal enhancement in the SN blowout dramatically affects the gas dynamics of the dSph. Even a 1% metal enhancement reduces the total amount of gas required for star formation by a factor of 40. However, a model with $f_Z = 0.01$ results in a worse match to the observed metallicity distribution than the original model with an unenhanced wind. A lower, non-zero value of $f_Z$ might produce better agreement with the observed abundance data while still reducing the amount of gas infall required relative to the unenhanced wind scenario. The literature on galactic chemical evolution contains a diversity of SN feedback treatments. We refer the reader to the articles we have already mentioned [e.g., @rec01; @lan04; @rob05; @rom06; @mar08] for more thorough treatments.
Trends with Galaxy Properties {#sec:trends}
=============================
![The moving averages, inversely weighted by measurement uncertainty, of abundance ratios for the eight dSphs and for the Milky Way [@ven04 who compiled data from the references given in footnote 7]. The bottom panel shows $\langle[\alpha/\rm{Fe}]\rangle$, the average of the top four panels. The line weight is proportional to the number of stars contributing to the average. The legend lists the dSphs in decreasing order of luminosity. Except for \[Ca/Fe\] in Sculptor, the abundance ratios do not show a low-metallicity plateau, which indicates that Type Ia SNe explode for nearly the entire duration of star formation. Our data are sparse at ${{\rm [Fe/H]}}< -2.5$, and Type Ia SNe need not explode at times corresponding to those low metallicities. Only the galaxies luminous enough to reach ${{\rm [Fe/H]}}\ga -1$ eventually achieve an equilibrium between Types II and Ia SNe and therefore a plateau at high metallicity.\[fig:alphatrends\]](alphatrends.eps){width="\linewidth"}
We now discuss trends of the abundance distributions and derived SFH parameters with observed galaxy properties, such as luminosity, velocity dispersion, half-light radius, and Galactocentric distance. We show that luminosity is the only galaxy property that shows any convincing correlation with the properties of the abundance distributions.
General \[$\alpha$/Fe\] Trends {#sec:gentrends}
------------------------------
Figure \[fig:alphatrends\] shows the trend lines of the different element ratios with \[Fe/H\]. The trend line is defined by the average of the element ratio, weighted by the inverse square of the measurement uncertainties, in a moving window of 0.5 dex in \[Fe/H\]. The moving averages relax the uncertainty cut of 0.3 dex (used for Figs. \[fig:for\]–\[fig:umi\]) to 1 dex, meaning that all of the measurements from the catalog (@kir10b) are included. The bottom panel shows the average of four element ratios, which is called $\langle[\alpha/\rm{Fe}]\rangle$. The weight of the line fades as fewer stars contribute to the average near the ends of the MDF. The figure legend lists the dSphs in order of decreasing luminosity. For comparison, some panels of the figure also display the trends for the Milky Way halo and disk for available element ratios [@ven04][^1].
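The trend lines described above are inverse-variance-weighted averages in a moving window of 0.5 dex in \[Fe/H\]. A minimal sketch of that computation follows; it is our own illustrative code with hypothetical array names, not the code used to make the figure.

```python
import numpy as np

def moving_weighted_average(feh, ratio, err, window=0.5):
    """Inverse-variance-weighted moving average of an abundance ratio.

    feh, ratio, err -- arrays of [Fe/H], the element ratio (e.g. [alpha/Fe]),
                       and its measurement uncertainty
    window          -- full width of the moving window in dex of [Fe/H]

    Returns the weighted mean evaluated at each star's [Fe/H] and the
    number of stars contributing (used to fade the line weight at the
    ends of the MDF).
    """
    feh, ratio, err = map(np.asarray, (feh, ratio, err))
    w = 1.0 / err**2                              # inverse-square weighting
    means, counts = [], []
    for x in feh:
        sel = np.abs(feh - x) <= window / 2.0     # stars inside the window
        means.append(np.sum(w[sel] * ratio[sel]) / np.sum(w[sel]))
        counts.append(int(sel.sum()))
    return np.array(means), np.array(counts)
```

The inverse-variance weighting is what allows the relaxed uncertainty cut of 1 dex: poorly measured stars enter the average but contribute very little to it.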
Fig. \[fig:alphatrends\] presents the broad trends of the evolution of \[$\alpha$/Fe\] with increasing \[Fe/H\]. It does not convey the width of the dispersion of the \[$\alpha$/Fe\] distributions at a given metallicity, nor does it show the details at the margins of the MDF. The extremely metal-poor stars, which represent some of the oldest known stars, are not shown in Fig. \[fig:alphatrends\].
### Universal Abundance Pattern in dSphs {#sec:universalpattern}
The figure does show that the abundance distributions of dSphs evolve remarkably similarly. Although the dSphs span different ranges of \[Fe/H\], $\langle[\alpha/\rm{Fe}]\rangle$ follows roughly the same trend line. This similarity contradicts the reasonable expectation that different dSphs should show a knee in \[$\alpha$/Fe\] at different values of \[Fe/H\] [e.g., @mat90; @gil91; @tol09]. In fact, @tol09 did indeed find a knee at ${{\rm [Fe/H]}}= -1.8$ in DART’s preliminary measurements for \[Ca/Fe\] in Sculptor. Our measurements of \[Ca/Fe\] in Sculptor also show a knee at the same metallicity and the same \[Ca/Fe\]. Ursa Minor possibly has a knee in \[Ca/Fe\], but with a lower \[Ca/Fe\] plateau. In agreement with @tol09’s and others’ predictions that lower-mass systems experience less intense SF, Ursa Minor’s possible knee occurs at lower \[Fe/H\] than Sculptor’s knee. However, the knee is apparent only in \[Ca/Fe\] and only in Sculptor and possibly Ursa Minor. The element ratios that would better identify the onset of Type Ia SNe, \[Mg/Fe\] and \[Si/Fe\], do not show a knee for any dSph.
The lack of knees for ${{\rm [Fe/H]}}> -2.5$ and the lack of low-metallicity plateaus in the \[$\alpha$/Fe\] distributions imply that Type Ia SNe exploded throughout almost all of the SFHs of all dSphs. Of course, the very first stars, which have yet to be found, must be free of all SN ejecta. The stars that formed immediately after the first SNe must incorporate only Type II SN ejecta. The very lowest metallicity stars in dSphs likely represent this population. Stars with ${{\rm [Fe/H]}}\ga -2.5$ formed after the Type Ia SN-induced depression of \[$\alpha$/Fe\]. We have already explored the possibility of low-metallicity plateaus in \[Ca/Fe\], but we discount the absence of Type Ia SN products as the cause because \[Ca/Fe\] is the only element ratio to show the plateau. We speculate instead that metallicity-dependent Type Ia nucleosynthesis [e.g., @tim03; @how09] might shape the \[Ca/Fe\] distribution differently from the other element ratios.
High-metallicity plateaus can form when the SF achieves a constant rate for a duration long enough for the ratio between Types II and Ia SNe to be constant. The SFR would achieve an equilibrium between the production of $\alpha$ elements and Fe. The value of \[$\alpha$/Fe\] at the plateau depends on the IMF and SN delay time distribution. The SFR need not be strictly constant. As @rev09 pointed out, a bursty SF profile with a high duty cycle can mimic a constant SFR. In that case, we would expect a scatter about the mean value of \[$\alpha$/Fe\] at a given \[Fe/H\], but the mean value would not necessarily evolve with increasing \[Fe/H\]. We do observe high-metallicity plateaus, seen in Fig. \[fig:alphatrends\]. The trends for \[Mg/Fe\] and \[Si/Fe\] do not completely flatten, but the slopes at ${{\rm [Fe/H]}}> -1$ are less than the slopes at ${{\rm [Fe/H]}}<
-1.5$. The trends for \[Ca/Fe\] and \[Ti/Fe\] do completely flatten for some dSphs. Only the more luminous dSphs, which reached metallicities of ${{\rm [Fe/H]}}\ga -1.2$, achieved the high-metallicity plateau. The \[$\alpha$/Fe\] ratios of Sextans, Draco, Canes Venatici I, and Ursa Minor do not flatten. We conclude that dSphs with high enough SFRs to reach stellar masses of at least $10^6~M_{\sun}$ experienced roughly constant SF at late times, corresponding to metallicities ${{\rm [Fe/H]}}\ga -1.2$.
Beneath the apparently universal path in \[$\alpha$/Fe\]-\[Fe/H\] space, the abundance trends vaguely group by luminosity. Higher luminosity dSphs tend to have slightly higher values of \[$\alpha$/Fe\] at a given \[Fe/H\] than lower luminosity dSphs. The tracks for Sextans, Draco, and Canes Venatici I tend to lie below the other dSphs. Fornax and Leo I tend to lie above Sculptor and Leo II. These divisions are reminiscent of the groupings we proposed in @kir10a based on MDF shapes. We classified Fornax, Leo I, and Leo II as “infall-dominated” and Sextans, Draco, Canes Venatici I, and Ursa Minor as “outflow-dominated.” Sculptor sat in its own class. The similar groupings based on MDF and \[$\alpha$/Fe\] unsurprisingly reaffirm that the SFH shapes both the MDF and the element ratio distributions.
The MW satellite galaxies more luminous than Fornax sample a regime of greater integrated star formation and higher metallicity. @pom08 measured \[$\alpha$/Fe\] for individual red giants in the disk of the Large Magellanic Cloud (LMC), and @muc08 measured the same for red giants in LMC globular clusters. The stars span the range $-1.2 \le {{\rm [Fe/H]}}\le -0.3$ with one additional star at ${{\rm [Fe/H]}}= -1.7$. The \[Ca/Fe\] ratios of the disk stars decline slightly with increasing \[Fe/H\], but the other element ratios are nearly flat. In fact, the LMC stars seem to follow the same \[$\alpha$/Fe\] trends as Fornax or Leo I, albeit shifted to higher \[Fe/H\], except for \[Ti/Fe\]. The average \[Ti/Fe\] in the LMC is about 0.1 dex higher than Leo I and 0.3 dex higher than Fornax. The Sagittarius dSph also shows a higher average \[Ti/Fe\] than Fornax or Leo I [@cho10]. Also, \[Ti/Fe\] in Sagittarius declines with increasing \[Fe/H\] over the entire range that @cho10 sampled ($-1.5 \le {{\rm [Fe/H]}}\le +0.1$).
The available evidence indicates that the evolution of \[$\alpha$/Fe\] with \[Fe/H\] is nearly universal in MW satellite galaxies except for \[Ti/Fe\] at ${{\rm [Fe/H]}}\ga -1.3$. The average values of \[Ti/Fe\] for the dSphs and the LMC at these metallicities vary from about $-0.3$ (Sculptor) to $0.0$ (LMC and Sagittarius), and the slopes vary from $\Delta \rm{[Ti/Fe]} / \Delta \rm {[Fe/H]} \approx -0.8$ (Sagittarius) to $0.0$ (Fornax). Ti is both an $\alpha$ element and an iron-group element, and it has an appreciable yield from both Types II and Ia SNe [@woo95]. Therefore, \[Ti/Fe\] responds to changes in the SFR and the IMF differently from the “purer” $\alpha$ elements, like Mg and Si. Unfortunately, our chemical evolution model failed to reproduce realistic values of \[Ti/Fe\] because the theoretical Type II SN yields of Ti were too small. We suggest that future work explore ratios such as \[Mg/Ti\] to better understand why \[Ti/Fe\] behaves differently in different dwarf galaxies at high \[Fe/H\].
### \[Mg/Fe\]
Our data set for the first time has enabled the exploration of the bulk properties of \[$\alpha$/Fe\] in dSphs that span two orders of magnitude in luminosity. In particular, Fig. \[fig:alphatrends\] shows that \[Mg/Fe\] values higher than in the MW are not unique to the extremely metal-poor stars in dSphs [e.g., @fre10b] but also exist in stars of more modest metallicity (${{\rm [Fe/H]}}\la -1.8$).
Factors beyond the SFH may affect the absolute value of \[Mg/Fe\] and other element ratios at low metallicity. First, changing the IMF alters \[$\alpha$/Fe\] because Type II SN yields depend on the mass of the exploding star. Second, the early gas mass of the dSph might change the shape of the low-metallicity \[$\alpha$/Fe\] distribution also because SN yields depend on mass. The first SNe in a galaxy can more efficiently enrich a small gas mass than a large gas mass. Massive SNe explode before less massive SNe, and massive SNe generally produce higher \[$\alpha$/Fe\]. As a result, \[$\alpha$/Fe\] at low metallicity could depend on the initial gas mass that was enriched by the first SNe. This effect possibly explains the larger \[Mg/Fe\] in dSphs than in the MW. We suggest that the stars at ${{\rm [Fe/H]}}\sim
-2.5$ in dSphs were enriched by SNe of higher average mass than the stars at ${{\rm [Fe/H]}}\sim -2$ in the MW. Finally, the shape of the abundance distribution might depend on the early gas mass because SN yields also depend on metallicity. In addition to sampling higher mass SNe, stars at a given \[Fe/H\] in a lower mass galaxy sample lower metallicity SNe than stars at the same \[Fe/H\] in a higher mass galaxy.
### Unexplained Details
Many details in Figure \[fig:alphatrends\] defy obvious explanations. For example, the \[Ca/Fe\] ratio is flatter than the other element ratios. Sculptor has a strangely large \[Ca/Fe\] at low \[Fe/H\]. The \[Si/Fe\] trend for Fornax is above the other dSphs’ trends, but the other element ratios seem consistent. Similarly, the \[Ti/Fe\] ratio—and only \[Ti/Fe\]—for Leo I lies above the other dSphs. Ursa Minor, despite being the least luminous dSph in the figure, has the second largest \[$\alpha$/Fe\] at a given metallicity for much of the metallicity range. The slope of \[Mg/Fe\] flattens for all of the dSphs at ${{\rm [Fe/H]}}\ga -1.2$, but the slope of \[Si/Fe\] flattens only for Fornax and Leo II.
We suggest that future work examine the abundance catalog in more detail. For example, element ratios with a denominator other than Fe could constrain the IMF. The predicted yields of \[Mg/Si\] decrease from $+0.2$ for a progenitor mass of $18~M_{\sun}$ to $-0.3$ for a progenitor mass of $40~M_{\sun}$ [@nom06]. Our data set possesses the sample size and precision to address such questions.
Trends in Chemical Evolution Model Parameters
---------------------------------------------
We now invoke the best-fit parameters of the chemical evolution model in a more quantitative discussion of the correlation between abundance distributions and galaxy properties. Figure \[fig:gcetrends\] presents the parameters against luminosity, line-of-sight velocity dispersion, half-light radius, and Galactocentric distance. In addition to the model parameters, the bottom row of the figure shows the star formation duration, which is a quantity derived from the best-fit model, not a free parameter.
Luminosity can reasonably be expected to show the best correlation with quantities related to SF. Of the four abscissas in Fig. \[fig:gcetrends\], $L$ is the only one that could be predicted from our simple chemical evolution model. Roughly, $L$ is the integral of past SF, modulated by the reddening and dimming associated with aging. Therefore, it is not surprising that the chemical evolution parameters vary with $L$. Although we have plotted the SF parameters against $L$, $L$ is not necessarily the independent variable. Luminosity is a present-day quantity, and the stars did not know the final stellar mass of the galaxy while they were forming. The SFH determines the present luminosity.
### Star Formation Rate Parameters
The SFR normalization, $A_*$, is roughly constant for galaxies less luminous than Leo I. The value roughly doubles for Leo I and increases by an order of magnitude for Fornax. The increase in $A_*$ is expected because a more luminous galaxy must have formed more stars than a less luminous galaxy. If the SF timescale does not change much with luminosity, then the SFR must. We observe that the SF duration changes by a factor of about four across the luminosity range. Therefore, we estimate a luminosity range of 40. The actual $L$ range is 80, but our simple estimate ignored the ages of the stellar population and the other model parameters that affect the SFR, such as $\tau_{\rm in}$.
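The factor-of-40 estimate is simple arithmetic, made explicit below. The two factors are the approximate values quoted in the text, not fitted quantities.

```python
# Back-of-the-envelope scaling: if luminosity roughly traces integrated
# star formation, then L ~ A_* x (SF duration), ignoring stellar aging
# and the other model parameters (e.g. tau_in) that also affect the SFR.
A_star_factor = 10    # A_* spans ~an order of magnitude (faint dSphs to Fornax)
duration_factor = 4   # SF duration varies by a factor of ~4 across the sample
estimated_L_range = A_star_factor * duration_factor
print(estimated_L_range)  # 40, compared with the actual L range of ~80
```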
The exponent of the SFR law, $\alpha$, also varies with $L$. If we assume that SFR is proportional to gas volume density, then $\alpha$ may indicate the degree to which the gas was concentrated in the center of the galaxy. However, we find no correlation between $\alpha$ and the concentration of the light profiles [@irw95 not shown in Fig. \[fig:gcetrends\]]. Our interpretation of $\alpha$ is purely speculative because SF is a complex process affected by many external factors, such as an ionizing radiation background. These factors become more difficult to predict for smaller galaxies [e.g., @gne10].
### Gas Infall Parameters
The intensity of infalling gas (or gas cooling to become available for SF) drives the SFR. The parameter $A_{\rm in}$ is closely related to $A_*$. The dSph cannot maintain a high SFR without the addition of new gas. Therefore, a luminous galaxy must have had large values of both $A_*$ and $A_{\rm in}$. Alternatively, a luminous galaxy could have started its life with a large reservoir of gas. However, in order to prevent too many metal-poor stars from forming early, new gas must have been added during the SF lifetime. The net result is that $A_*$, $A_{\rm in}$, and $M_{\rm gas}(0)$ are highly covariant.
The most likely timescales for gas infall (or cooling) vary from [0.17]{} to [0.42]{} Gyr. It may be significant that none of the timescales exceeds [0.42]{} Gyr. We propose three conjectures. First, $\tau_{\rm in}$ may reflect the time the dSph requires to accumulate gas. The central densities of dSphs are similar [@mat98; @gil07; @str08]. Therefore, the similar gravitational potentials of the dSphs themselves might enforce similarly small gas accretion timescales.
Second, the dSphs’ environment may set the $\tau_{\rm in}$ timescale. Interestingly, $\sim 0.1$ Gyr was the timescale for the Galaxy’s monolithic collapse proposed by @egg62. This collapse time corresponds to a period when the gas in the vicinity of the MW was rapidly coalescing into individual structures, such as the proto-Galaxy and the dSphs. After $\sim 0.1$ Gyr, gas accretion would have declined considerably because the MW and its satellites would by then have accreted the bulk of the surrounding gas. In the $\Lambda$CDM paradigm, the formation time for a dSph-sized dark matter halo is only 0.4 Gyr after the Big Bang [@wec02]. Therefore, our most likely gas accretion timescales are consistent with both cosmogonies.
Third, the time from the formation of the first stars to cosmological reionization is roughly 0.5 Gyr. @ric05 referred to all eight of our dSphs as “true” or “polluted fossils,” meaning that all or most of their stars formed before reionization. Our models are sensitive to the bulk of the population, and not the few younger stars present in most dSphs. Therefore, the best-fit values of $\tau_{\rm
in}$ may be probing the pre-reionization SF timescale. Fornax must be an exception because the bulk of its population formed after reionization. The majority of the stellar populations in other dSphs may be fossils with SF timescales on the order of the reionization time. The (small) dispersion among our $\tau_{\rm in}$ values may be a result of temporally protracted, spatially inhomogeneous reionization [@mir00]. However, we note that our derived SF durations are longer than 0.5 Gyr except for Ursa Minor. To the extent that these durations are accurate, we surmise that reionization is one of several mechanisms that inhibited SF in dSphs.
### Supernova Winds {#supernova-winds}
The role of SN feedback for dSphs has been emphasized repeatedly. @dek86 posited that SN feedback regulates the SFR for dwarf galaxies. It can cause a terminal wind, or it can blow out gas that is later re-accreted. For the smallest galaxies, including the dSphs presented here, radiation feedback also plays a significant role [@dek03]. The best-fit SN wind intensities, $A_{\rm out}$, also show a strong correlation with $L$. More luminous dSphs experienced more intense winds. This trend is a direct result of the metallicity-luminosity relation for dSphs (e.g., @kir10a). For reasons discussed in @kir10a, more intense gas outflow lowers the effective metal yield. Therefore, the less luminous, more metal-poor dSphs naturally show more gas outflow. However, we expected that $A_{\rm out}$ would also correlate with the velocity dispersion, a measure of the depth of the potential well. No such correlation exists. The lack of correlation is puzzling, but the gas blowout depends on the unmeasurable mass density profile at the time of SF and on the locations of the SNe within the gravitational potential.
### Galaxy Properties Other Than Luminosity
The model parameters are insensitive to galaxy properties other than $L$. The velocity dispersions of dSphs do not span nearly as large a range as their luminosities, which may partly explain the lack of dependence on $\sigma_{\rm los}$. The half-light radius and luminosity together are related to the galaxy’s surface brightness and stellar density. It does not seem that the SF parameters in our model depend significantly on these quantities. The timescales, $\tau_{\rm
in}$ and the SF duration, may depend weakly on Galactocentric distance. The Pearson linear correlation coefficient between $\tau_{\rm in}$ and $D_{\rm GC}$ is [0.69]{}. Because $\tau_{\rm
in}$ basically represents the SF duration (the correlation coefficient between $\tau_{\rm in}$ and the SF duration is [0.96]{}), this relation may indicate that more distant dSphs survive SF-truncating interactions with the MW longer than closer dSphs. In fact, @sil87 suggested host galaxies competed with their satellites for gas accretion. The more distant satellites, such as dwarf irregulars, successfully accreted more gas to power present star formation than the closer satellites, such as dSphs. Orbital history would be a better indicator of past interaction with the MW. Orbital parameters based on proper motions are available for Fornax [@pia07], Sculptor [@pia06], and Ursa Minor [@pia05]. @soh07 also constrained the orbit of Leo I based on the shape and dynamics of tidal debris. We leave orbital analyses for future work.
We conclude that luminosity is more directly related to a dSph’s SFH than dynamical or morphological properties. The present luminosity cannot drive the past star formation, but the luminosity does mirror a single parameter that determines the SFH. This conclusion is similar to the fundamental line for dwarf galaxies defined by @woo08. They also found that stellar mass (closely related to luminosity) is the best predictor of other dSph properties. However, stellar mass loss by tidal stripping may obfuscate the correlation between present stellar mass and past star formation.
Summary and Conclusions {#sec:conclusions}
=======================
We have made a first attempt at quantitative chemical evolution models for the large sample of multi-element abundance measurements for MW dSphs that we published in @kir10b. Our simple model is a significant improvement over the analytical models of the metallicity distributions that we explored in @kir10a. We fit the MDF and \[$\alpha$/Fe\] distribution simultaneously to derive the SF and gas flow histories of each of eight dSphs spanning about two orders of magnitude in luminosity. Our model produces reasonable fits to the abundance distributions of dSphs whose color-magnitude diagrams show that most or all of their stars are older than 10 Gyr.
We draw the following conclusions from our models and from the general trends in abundance distributions (Fig. \[fig:alphatrends\]):
1. The \[$\alpha$/Fe\] ratios evolve with metallicity along nearly the same path for all dSphs. The average value of \[Mg/Fe\], \[Si/Fe\], \[Ca/Fe\], and \[Ti/Fe\] drops from $+0.4$ at ${{\rm [Fe/H]}}= -2.5$ to $0.0$ at ${{\rm [Fe/H]}}\approx -1.2$, where the slope flattens.
2. No low-metallicity plateaus or knees exist in \[$\alpha$/Fe\] vs. \[Fe/H\] space for any dSph at ${{\rm [Fe/H]}}> -2.5$. We conclude that Type Ia supernovae contributed to chemical evolution for all but the most metal-poor stars.
3. The \[Mg/Fe\] ratio in dSphs exceeds that of the Milky Way at ${{\rm [Fe/H]}}\la -1.8$. We suggest that the abundance ratios of stars in low-mass systems are more sensitive to the mass and metallicity dependence of Type II supernova yields than stars at the same metallicity in higher-mass systems, such as the progenitors of the inner MW halo.
4. The dSphs may be grouped based on their \[$\alpha$/Fe\] distributions into roughly the same groups that we defined based on their metallicity distributions (@kir10a). The more luminous dSphs have infall-dominated MDFs and slightly higher $\langle\rm{[\alpha/Fe]}\rangle$ at a given \[Fe/H\]. The less luminous dSphs have outflow-dominated MDFs and slightly lower $\langle\rm{[\alpha/Fe]}\rangle$ at the same \[Fe/H\].
5. Some SF model parameters correlate with present luminosity, but not with velocity dispersion, half-light radius, or Galactocentric distance except for a possible correlation between gas infall timescale and $D_{\rm GC}$.
6. The gas flow histories for all dSphs except Fornax are characterized by large amounts of gas loss, probably driven by supernova winds. Less luminous dSphs experienced more intense gas loss.
7. Allowing supernova winds to be metal-enhanced drastically reduces the amount of gas infall and outflow required to explain the observed abundance distributions.
8. The gas infall timescale does not exceed [0.42]{} Gyr. This possibly reflects the amount of time ancient stars had to form before reionization ended star formation.
9. The derived star formation timescales are extremely sensitive to the delay time for the first Type Ia SN. Increasing the delay time from 0.1 Gyr to 0.3 Gyr results in a star formation duration in Sculptor inflated by a factor of 3.5.
10. The presence of bumps in the MDFs and stars with \[$\alpha$/Fe\] ratios far from the average trend lines suggests that the SFHs of dSphs were characterized by bursts, which are not included in our model. Bursts are a common feature of more sophisticated models.
Some of our conclusions (5–10) depend on the realism of our chemical evolution model. Many more sophisticated models exist, and we encourage their application to our data set. @kir10b contains the complete abundance catalog.
The major strength of the present work is that we apply the same model to a homogeneous data set of hundreds of stars in each of eight dSphs. The sample size and diversity of galaxies has allowed us to present an overview of chemical evolution in dwarf galaxies. We have discovered patterns not apparent in previous data sets due to small samples or lack of diversity among the well-sampled galaxies. In particular, we have shown that \[$\alpha$/Fe\] distributions of dSphs do not form a sequence of knees corresponding to the metallicities at which Type Ia supernovae began to explode. Instead, the \[$\alpha$/Fe\] patterns of all dSphs are largely the same, but different dSphs sample different regions in metallicity.
We thank John Johnson, Hai Fu, Julianne Dalcanton, Chris Sneden, and Bob Kraft for insightful discussions. Support for this work was provided by NASA through Hubble Fellowship grant 51256.01 awarded to ENK by the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., for NASA, under contract NAS 5-26555. SRM acknowledges support from NSF grants AST-0307851 and AST-0807945, and from the SIM Lite key project “Taking Measure of the Milky Way” under NASA/JPL contract 1228235. PG acknowledges NSF grants AST-0507483, AST-0607852, and AST-0808133.
The authors wish to recognize and acknowledge the very significant cultural role and reverence that the summit of Mauna Kea has always had within the indigenous Hawaiian community. We are most fortunate to have the opportunity to conduct observations from this mountain.
[*Facility:*]{}
Aaronson, M., Hodge, P. W., & Olszewski, E. W. 1983, , 267, 271
Aaronson, M., Liebert, J., & Stocke, J. 1982, , 254, 507
Aaronson, M., & Mould, J. 1980, , 240, 804
Abia, C. 2008, , 136, 250
Anders, E., & Grevesse, N. 1989, , 53, 197
Aparicio, A., Carrera, R., & Mart[í]{}nez-Delgado, D. 2001, , 122, 2524
Azzopardi, M., Lequeux, J., & Westerlund, B. E. 1985, , 144, 388
Babusiaux, C., Gilmore, G., & Irwin, M. 2005, , 359, 985
Battaglia, G., Helmi, A., Tolstoy, E., Irwin, M., Hill, V., & Jablonka, P. 2008, , 681, L13
Battaglia, G., et al. 2006, , 459, 423
Bell, R. A. 1985, , 97, 219
Bellazzini, M., Ferraro, F. R., Origlia, L., Pancino, E., Monaco, L., & Oliva, E. 2002, , 124, 3222
Bellazzini, M., Ferraro, F. R., & Pancino, E. 2001, , 327, L15
Bensby, T., Feltzing, S., & Lundstr[ö]{}m, I. 2003, , 410, 527
Bernard, E. J., et al. 2009, , 699, 1742
Bosler, T. L., Smecker-Hane, T. A., & Stetson, P. B. 2007, , 378, 318
Bullock, J. S., & Johnston, K. V. 2005, , 635, 931
Bullock, J. S., Kravtsov, A. V., & Weinberg, D. H. 2000, , 539, 517
Buonanno, R., Corsi, C. E., Castellani, M., Marconi, G., Fusi Pecci, F., & Zinn, R. 1999, , 118, 1671
Burris, D. L., Pilachowski, C. A., Armandroff, T. E., Sneden, C., Cowan, J. J., & Roe, H. 2000, , 544, 302
Caputo, F., Cassisi, S., Castellani, M., Marconi, G., & Santolamazza, P. 1999, , 117, 2199
Carigi, L., & Hernandez, X. 2008, , 390, 582
Carrera, R., Aparicio, A., Mart[í]{}nez-Delgado, D., & Alonso-Garc[í]{}a, J. 2002, , 123, 3199
Chou, M.-Y., Cunha, K., Majewski, S. R., Smith, V. V., Patterson, R. J., Mart[í]{}nez-Delgado, D., & Geisler, D. 2010, , 708, 1290
Cohen, J. G., & Huang, W. 2009, , 701, 1053
———. 2010, , 719, 931
Coleman, M. G., & de Jong, J. T. A. 2008, , 685, 933
Colucci, J. E., Bernstein, R. A., Cameron, S., McWilliam, A., & Cohen, J. G. 2009, , 704, 385
Cudworth, K. M., Olszewski, E. W., & Schommer, R. A. 1986, , 92, 766
Da Costa, G. S. 1984, , 285, 483
Dalcanton, J. J., et al. 2009, , 183, 67
Dekel, A., & Silk, J. 1986, , 303, 39
Dekel, A., & Woo, J. 2003, , 344, 1131
Demarque, P., & Hirshfeld, A. W. 1975, , 202, 346
Demers, S., Kunkel, W. E., & Hardy, E. 1979, , 232, 84
Diemand, J., Kuhlen, M., & Madau, P. 2007, , 667, 859
Dolphin, A. E., Weisz, D. R., Skillman, E. D., & Holtzman, J. A. 2005, in Resolved Stellar Populations, ed. D. Valls-Gabaud & M. Chavez, arXiv:astro-ph/0506430
Edvardsson, B., Andersen, J., Gustafsson, B., Lambert, D. L., Nissen, P. E., & Tomkin, J. 1993, , 275, 101
Eggen, O. J., Lynden-Bell, D., & Sandage, A. R. 1962, , 136, 748
Faria, D., Feltzing, S., Lundstr[ö]{}m, I., Gilmore, G., Wahlgren, G. M., Ardeberg, A., & Linde, P. 2007, , 465, 357
Fenner, Y., Gibson, B. K., Gallino, R., & Lugaro, M. 2006, , 646, 184
Fran[ç]{}ois, P., Matteucci, F., Cayrel, R., Spite, M., Spite, F., & Chiappini, C. 2004, , 421, 613
Frebel, A., Kirby, E. N., & Simon, J. D. 2010a, , 464, 72
Frebel, A., Simon, J. D., Geha, M., & Willman, B. 2010b, , 708, 560
Fulbright, J. P. 2000, , 120, 1841
Fulbright, J. P. 2002, , 123, 404
Galama, T. J., et al. 1998, , 395, 670
Gallart, C., Freedman, W. L., Aparicio, A., Bertelli, G., & Chiosi, C. 1999a, , 118, 2245
Gallart, C., et al. 1999b, , 514, 665
Geha, M., Willman, B., Simon, J. D., Strigari, L. E., Kirby, E. N., Law, D. R., & Strader, J. 2009, , 692, 1464
Geisler, D., Smith, V. V., Wallerstein, G., Gonzalez, G., & Charbonnel, C. 2005, , 129, 1428
Gilmore, G., Wilkinson, M. I., Wyse, R. F. G., Kleyna, J. T., Koch, A., Evans, N. W., & Grebel, E. K. 2007, , 663, 948
Gilmore, G., & Wyse, R. F. G. 1991, , 367, L55
Gnedin, N. Y., & Kravtsov, A. V. 2010, , submitted, arXiv:1004.0003
Governato, F., Willman, B., Mayer, L., Brooks, A., Stinson, G., Valenzuela, O., Wadsley, J., & Quinn, T. 2007, , 374, 1479
Gratton, R. G., & Sneden, C. 1988, , 204, 193
———. 1991, , 241, 501
———. 1994, , 287, 927
Graves, G. J., & Schiavon, R. P. 2008, , 177, 446
Grebel, E. K., & Stetson, P. B. 1999, IAU Symp. 192, The Stellar Content of Local Group Galaxies, ed. P. Whitelock & R. Cannon (San Francisco: ASP), 165
Grillmair, C. J., et al. 1998, , 115, 144
Gullieuszik, M., Held, E. V., Rizzi, L., Saviane, I., Momany, Y., & Ortolani, S. 2007, , 467, 1025
Gullieuszik, M., Held, E. V., Saviane, I., & Rizzi, L. 2009, , 500, 735
Hanson, R. B., Sneden, C., Kraft, R. P., & Fulbright, J. 1998, , 116, 1286
Held, E. V., Clementini, G., Rizzi, L., Momany, Y., Saviane, I., & Di Fabrizio, L. 2001, , 562, L39
Held, E. V., Saviane, I., Momany, Y., & Carraro, G. 2000, , 530, L85
Hirshfeld, A. W. 1980, , 241, 111
Holtzman, J. A., Afonso, C., & Dolphin, A. 2006, , 166, 534
Howell, D. A., et al. 2009, , 691, 661
Hurley-Keller, D., Mateo, M., & Nemec, J. 1998, , 115, 1840
Ikuta, C., & Arimoto, N. 2002, , 391, 55
Irwin, M., & Hatzidimitriou, D. 1995, , 277, 1354
Ivans, I. I., Sneden, C., James, C. R., Preston, G. W., Fulbright, J. P., H[ö]{}flich, P. A., Carney, B. W., & Wheeler, J. C. 2003, , 592, 906
Iwamoto, K., Brachwitz, F., Nomoto, K., Kishimoto, N., Umeda, H., Hix, W. R., & Thielemann, F.-K. 1999, , 125, 439
Iwamoto, K., et al. 1998, , 395, 672
Johnson, J. A. 2002, , 139, 219
Johnston, K. V., Bullock, J. S., Sharma, S., Font, A., Robertson, B. E., & Leitner, S. N. 2008, , 689, 936
Karakas, A. I. 2010, , 403, 1413
Kennicutt, R. C., Jr. 1998, , 498, 541
Kinman, T. D., & Kraft, R. P. 1980, , 85, 415
Kinman, T. D., Kraft, R. P., & Suntzeff, N. B. 1981, in Physical Processes in Red Giants, ed. I. Iben & A. Renzini (Dordrecht: Reidel), 71
Kirby, E. N., Guhathakurta, P., Bolte, M., Sneden, C., & Geha, M. C. 2009, , 705, 328 (Paper I)
Kirby, E. N., Lanfranchi, G. A., Simon, J. D., Cohen, J. G., & Guhathakurta, P. 2010a, , in press, arXiv:1011.4937 (Paper III)
Kirby, E. N., Simon, J. D., Geha, M., Guhathakurta, P., & Frebel, A. 2008, , 685, L43
Kirby, E. N., et al. 2010b, , 191, 352 (Paper II)
Kobayashi, C., & Nomoto, K. 2009, , 707, 1466
Kobayashi, C., Tsujimoto, T., Nomoto, K., Hachisu, I., & Kato, M. 1998, , 503, L155
Koch, A., Grebel, E. K., Wyse, R. F. G., Kleyna, J. T., Wilkinson, M. I., Harbeck, D. R., Gilmore, G. F., & Evans, N. W. 2006, , 131, 895
Kodama, T. 1997, Ph.D. Thesis, Univ. of Tokyo
Kroupa, P., Tout, C. A., & Gilmore, G. 1993, , 262, 545
Kuehn, C., et al. 2008, , 674, L81
Lanfranchi, G. A., & Matteucci, F. 2003, , 345, 71
———. 2004, , 351, 1338
———. 2007, , 468, 927
———. 2010, , 512, A85
Lanfranchi, G. A., Matteucci, F., & Cescutti, G. 2006, , 453, 67
———. 2008, , 481, 635
Larson, D., et al. 2010, , in press, arXiv:1001.4635
Larson, R. B. 1972, , 236, 7
———. 1974, , 169, 229
Lee, M. G., Freedman, W., Mateo, M., Thompson, I., Roth, M., & Ruiz, M.-T. 1993, , 106, 1420
Lee, M. G., Yuk, I.-S., Park, H. S., Harris, J., & Zaritsky, D. 2009, , 703, 692
Lehnert, M. D., Bell, R. A., Hesser, J. E., & Oke, J. E. 1992, , 395, 466
Letarte, B., et al. 2010, , 523, A17
Lin, D. N. C., & Faber, S. M. 1983, , 266, L21
Lynden-Bell, D. 1975, Vistas in Astronomy, 19, 299
Mac Low, M.-M., & Ferrara, A. 1999, , 513, 142
Maeder, A., & Meynet, G. 1989, , 210, 155
Majewski, S. R., Ostheimer, J. C., Kunkel, W. E., & Patterson, R. J. 2000a, , 120, 2550
Majewski, S. R., Ostheimer, J. C., Patterson, R. J., Kunkel, W. E., Johnston, K. V., & Geisler, D. 2000b, , 119, 760
Majewski, S. R., Siegel, M. H., Patterson, R. J., & Rood, R. T. 1999, , 520, L33
Majewski, S. R., et al. 2002, in ASP Conf. Ser. 285, Modes of Star Formation and the Origin of Field Populations, ed. E. Grebel & W. Brandner (San Francisco: ASP), 199
Maoz, D., Sharon, K., & Gal-Yam, A. 2010, , 722, 1879
Mapelli, M., Ripamonti, E., Battaglia, G., Tolstoy, E., Irwin, M. J., Moore, B., & Sigurdsson, S. 2009, , 396, 1771
Marcolini, A., D’Ercole, A., Battaglia, G., & Gibson, B. K. 2008, , 386, 2173
Marcolini, A., D’Ercole, A., Brighenti, F., & Recchi, S. 2006, , 371, 643
Marigo, P., & Girardi, L. 2007, , 469, 239
Martin, N. F., de Jong, J. T. A., & Rix, H.-W. 2008a, , 684, 1075
Martin, N. F., et al. 2008b, , 672, L13
Mart[í]{}nez-Delgado, D., Alonso-Garc[í]{}a, J., Aparicio, A., & G[ó]{}mez-Flechoso, M. A. 2001, , 549, L63
Mateo, M. L. 1998, , 36, 435
Mateo, M., Olszewski, E. W., & Walker, M. G. 2008, , 675, 201
Mathews, W. G., & Baker, J. C. 1971, , 170, 241
Matteucci, F. 2008, Lectures for the 37th Saas-Fee Advanced Course, arXiv:0804.1492
Matteucci, F., & Brocato, E. 1990, , 365, 539
Matteucci, F., Spitoni, E., Recchi, S., & Valiante, R. 2009, , 501, 531
McWilliam, A. 1998, , 115, 1640
McWilliam, A., & Bernstein, R. A. 2008, , 684, 326
McWilliam, A., Preston, G. W., Sneden, C., & Shectman, S. 1995, , 109, 2736
McWilliam, A., & Smecker-Hane, T. A. 2005a, , 622, L29
———. 2005b, in ASP Conf. Ser. 336, Cosmic Abundances as Records of Stellar Evolution and Nucleosynthesis in honor of D. L. Lambert, ed. T. G. Barnes III & F. N. Bash (San Francisco: ASP), 221
Mighell, K. J. 1990, , 82, 1
———. 1997, , 114, 1458
Mighell, K. J., & Burke, C. J. 1999, , 118, 366
Mighell, K. J., & Rich, R. M. 1996, , 111, 777
Miralda-Escud[é]{}, J., Haehnelt, M., & Rees, M. J. 2000, , 530, 1
Monkiewicz, J., et al. 1999, , 111, 1392
Mori, M., Ferrara, A., & Madau, P. 2002, , 571, 40
Mucciarelli, A., Carretta, E., Origlia, L., & Ferraro, F. R. 2008, , 136, 375
Mu[ñ]{}oz, R. R., et al. 2005, , 631, L137
———. 2006, , 649, 201
Napolitano, N. R., Romanowsky, A. J., & Tortora, C. 2010, , 405, 2351
Nissen, P. E., & Schuster, W. J. 1997, , 326, 751
Nomoto, K., Tominaga, N., Umeda, H., Kobayashi, C., & Maeda, K. 2006, Nuclear Physics A, 777, 424
Norris, J., & Bessell, M. S. 1978, , 225, L49
Norris, J. E., Gilmore, G., Wyse, R. F. G., Wilkinson, M. I., Belokurov, V., Evans, N. W., & Zucker, D. B. 2008, , 689, L113
Norris, J. E., Wyse, R. F. G., Gilmore, G., Yong, D., Frebel, A., Wilkinson, M. I., Belokurov, V., & Zucker, D. B. 2010a, , 723, 1632
Norris, J. E., Yong, D., Gilmore, G., & Wyse, R. F. G. 2010b, , 711, 350
Norris, J., & Zinn, R. 1975, , 202, 335
Olszewski, E. W., & Aaronson, M. 1985, , 90, 2221
Orban, C., Gnedin, O. Y., Weisz, D. R., Skillman, E. D., Dolphin, A. E., & Holtzman, J. A. 2008, , 686, 1030
Padovani, P., & Matteucci, F. 1993, , 416, 26
Pagel, B. E. J. 1997, Nucleosynthesis and Chemical Evolution of Galaxies (Cambridge UP)
Pagel, B. E. J., & Tautvaišienė, G. 1995, , 276, 505
Piatek, S., Pryor, C., Bristow, P., Olszewski, E. W., Harris, H. C., Mateo, M., Minniti, D., & Tinney, C. G. 2005, , 130, 95
———. 2006, , 131, 1445
———. 2007, , 133, 818
Pomp[é]{}ia, L., et al. 2008, , 480, 379
Pont, F., Zinn, R., Gallart, C., Hardy, E., & Winnick, R. 2004, , 127, 840
Prochaska, J. X., Naumov, S. O., Carney, B. W., McWilliam, A., & Wolfe, A. M. 2000, , 120, 2513
Recchi, S., Matteucci, F., & D’Ercole, A. 2001, , 322, 800
Reddy, B. E., Tomkin, J., Lambert, D. L., & Allende Prieto, C. 2003, , 340, 304
Revaz, Y., et al. 2009, , 501, 189
Ricotti, M., & Gnedin, N. Y. 2005, , 629, 259
Robertson, B., Bullock, J. S., Font, A. S., Johnston, K. V., & Hernquist, L. 2005, , 632, 872
Romano, D., Chiappini, C., Matteucci, F., & Tosi, M. 2005, , 430, 491
Romano, D., Karakas, A. I., Tosi, M., & Matteucci, F. 2010, , 522, A32
Romano, D., Tosi, M., & Matteucci, F. 2006, , 365, 75
Ryan, S. G., Norris, J. E., & Beers, T. C. 1996, , 471, 254
Sadakane, K., Arimoto, N., Ikuta, C., Aoki, W., Jablonka, P., & Tajitsu, A. 2004, PASJ, 56, 1041
Saviane, I., Held, E. V., & Bertelli, G. 2000, , 355, 56
Sawala, T., Scannapieco, C., Maio, U., & White, S. 2010, , 402, 1599
Sbordone, L., Bonifacio, P., Buonanno, R., Marconi, G., Monaco, L., & Zaggia, S. 2007, , 465, 815
Schmidt, M. 1959, , 129, 243
———. 1963, , 137, 758
Searle, L., & Zinn, R. 1978, , 225, 357
Shetrone, M. D., Bolte, M., & Stetson, P. B. 1998, , 115, 1888
Shetrone, M. D., C[ô]{}t[é]{}, P., & Sargent, W. L. W. 2001, , 548, 592
Shetrone, M. D., Côté, P., & Stetson, P. B. 2001, , 113, 1122
Shetrone, M. D., Siegel, M. H., Cook, D. O., & Bosler, T. 2009, , 137, 62
Shetrone, M. D., Venn, K. A., Tolstoy, E., Primas, F., Hill, V., & Kaufer, A. 2003, , 125, 684
Silk, J., Wyse, R. F. G., & Shields, G. A. 1987, , 322, L59
Simon, J. D., Frebel, A., McWilliam, A., Kirby, E. N., & Thompson, I. B. 2010, , 716, 446
Smecker-Hane, T. A., Mandushev, G. I., Hesser, J. E., Stetson, P. B., Da Costa, G. S., & Hatzidimitriou, D. 1999, in ASP Conf. Ser. 192, Spectroscopic Dating of Stars and Galaxies, ed. I. Hubeny, S. R. Heap, & R. H. Cornett (San Francisco: ASP), 159
Smecker-Hane, T. A., Marsteller, B., Cole, A., Bullock, J., & Gallagher, J. S. 2009, BAAS, 41, 235
Smecker-Hane, T. A., Stetson, P. B., Hesser, J. E., & VandenBerg, D. A. 1996, in ASP Conf. Ser. 98, From Stars to Galaxies, ed. C. Leitherer, U. F. Alvensleben, & J. Huchra (San Francisco: ASP), 328
Smith, G. H. 1984, , 89, 801
Smith, G. H., & Dopita, M. A. 1983, , 271, 113
Smith, G. H., Siegel, M. H., Shetrone, M. D., & Winnick, R. 2006, , 118, 1361
Smith, H. A., & Stryker, L. L. 1986, , 92, 328
Sohn, S. T., et al. 2007, , 663, 960
Springel, V., et al. 2008, , 391, 1685
Starkenburg, E., et al. 2010, , 513, A34
Steigman, G. 2007, Annual Review of Nuclear and Particle Science, 57, 463
Stephens, A., & Boesgaard, A. M. 2002, , 123, 1647
Stetson, P. B. 1984, , 96, 128
Strigari, L. E., Bullock, J. S., Kaplinghat, M., Simon, J. D., Geha, M., Willman, B., & Walker, M. G. 2008, , 454, 1096
Suntzeff, N. B., Mateo, M., Terndrup, D. M., Olszewski, E. W., Geisler, D., & Weller, W. 1993, , 418, 208
Tafelmeyer, M., et al. 2010, , 524, 58
Thornton, K., Gaudlitz, M., Janka, H.-T., & Steinmetz, M. 1998, , 500, 95
Timmes, F. X., Brown, E. F., & Truran, J. W. 2003, , 590, L83
Tinsley, B. M., & Larson, R. B. 1979, , 186, 503
Tolstoy, E., Hill, V., & Tosi, M. 2009, , 47, 371
Tolstoy, E., Irwin, M. J., Cole, A. A., Pasquini, L., Gilmozzi, R., & Gallagher, J. S. 2001, , 327, 918
Tolstoy, E., et al. 2003, , 125, 707
———. 2004, , 617, L119
Vader, J. P. 1986, , 305, 669
van den Bergh, S. 1962, , 67, 486
van den Hoek, L. B., & Groenewegen, M. A. T. 1997, , 123, 305
Venn, K. A., Irwin, M., Shetrone, M. D., Tout, C. A., Hill, V., & Tolstoy, E. 2004, , 128, 1177
Wechsler, R. H., Bullock, J. S., Primack, J. R., Kravtsov, A. V., & Dekel, A. 2002, , 568, 52
Weisz, D. R., Skillman, E. D., Cannon, J. M., Dolphin, A. E., Kennicutt, R. C., Jr., Lee, J., & Walter, F. 2008, , 689, 160
White, S. D. M., & Rees, M. J. 1978, , 183, 341
Winnick, R. A. 2003, Ph.D. Thesis, Yale Univ.
Wolf, J., Martinez, G. D., Bullock, J. S., Kaplinghat, M., Geha, M., Mu[ñ]{}oz, R. R., Simon, J. D., & Avedo, F. F. 2010, , 406, 1220
Woo, J., Courteau, S., & Dekel, A. 2008, , 390, 1453
Woosley, S. E., Langer, N., & Weaver, T. A. 1993, , 411, 823
Woosley, S. E., & Weaver, T. A. 1995, , 101, 181
Zinn, R. 1978, , 225, 790
———. 1981, , 251, 52
Zinn, R., & Persson, S. E. 1981, , 247, 849
Zinn, R., & Searle, L. 1976, , 209, 734
Zucker, D. B., et al. 2006, , 643, L103
[^1]: The data from @ven04 is a compilation of data from the following sources: @ben03, @bur00, @edv93, @ful00 [@ful02], @gra88 [@gra91; @gra94], @han98, @iva03, @joh02, @mcw95, @mcw98, @nis97, @pro00, @red03, @rya96, and @ste02.
---
abstract: 'Ozawa’s measurement-disturbance relation is generalized to a phase-space noncommutative extension of quantum mechanics. It is shown that the measurement-disturbance relations have additional terms for backaction evading quadrature amplifiers and for noiseless quadrature transducers. Several distinctive features appear as a consequence of the noncommutative extension: measurement interactions which are noiseless, and observables which are undisturbed by a measurement, or of independent intervention in ordinary quantum mechanics, may acquire noise, become disturbed by the measurement, or no longer be an independent intervention in noncommutative quantum mechanics. It is also found that there can be states which violate Ozawa’s universal noise-disturbance trade-off relation, but verify its noncommutative deformation.'
author:
- 'Catarina Bastos[^1]'
- 'Alex E. Bernardini[^2]'
- 'Orfeu Bertolami[^3]'
- '[Nuno Costa Dias and João Nuno Prata]{}[^4]'
title: 'Phase-space noncommutative formulation of Ozawa’s uncertainty principle'
---
1\. [*Introduction*]{}: Recently, there has been a great deal of discussion about the interpretation of the uncertainty principle [@Busch1; @Busch2; @Ozawa1] and its possible experimental violation [@Rozema; @Hasegawa; @Ozawa0]. On the one hand, in his well-known $\gamma$-ray thought experiment [@Heisenberg1], Heisenberg relates the accuracy of an appropriate position measurement to the disturbance of the particle’s momentum, for a system in a state $\psi$, so that $$\epsilon (\widehat{X}, \psi) \chi (\widehat{P}, \psi) \ge {{|\langle \psi | ~ \left[\widehat{X}, \widehat{P} \right] ~ | \psi\rangle|}\over 2}~,
\label{eq0}$$ where $\epsilon (\widehat{X}, \psi)$ is the noise of the $\widehat{X}$ measurement and $\chi (\widehat{P}, \psi)$ is the disturbance on $\widehat{P}$ due to that measurement. Rather recently, Busch [*et al.*]{} [@Busch1] claim to have proved a rigorous version of Heisenberg’s uncertainty principle by considering error and disturbance as state-independent quantities, which contrasts with the state-dependent preparation uncertainty principles. On the other hand, state-dependent quantities have also been considered [@Korzekwa; @Ozawa; @Ozawa4; @Cyril1], such as Ozawa’s noise-disturbance trade-off relation [@Ozawa4]. Ozawa considers a system composed of an object and a measuring device, the probe, initially prepared as $\Psi = \psi \otimes \xi$, where $\psi$ and $\xi$ describe the object and the probe, respectively. Working in the Heisenberg picture, he then introduces the noise operator, $\widehat{N}(\widehat{A})$, and the disturbance operator, $\widehat{D}(\widehat{B})$, related to the observables $\widehat{A}$ and $\widehat{B}$, respectively. These are self-adjoint operators, defined as $$\widehat{N}(A) = \widehat{M}^{out} - \widehat{A}^{in}\hspace{0.2cm}, \hspace{0.2cm} \widehat{D}(B) = \widehat{B}^{out} - \widehat{B}^{in}~.
\label{eq1}$$ Here $\widehat{A}^{in}=\widehat{A} \otimes \widehat{I}, \widehat{B}^{in}=\widehat{B} \otimes \widehat{I}$ are observables $\widehat{A}$ and $\widehat{B}$ prior to the measurement interaction, $\widehat{B}^{out} =\widehat{U}^{\dagger} (\widehat{B} \otimes \widehat{I}) \widehat{U}$ is the observable $\widehat{B}$ immediately after the measurement, and $\widehat{M}$ is the probe observable, that is, the observable whose readout yields the measurement of $\widehat{A}$. $\widehat{U}$ is a unitary time-evolution operator that acts during the measuring interaction. Clearly, $\widehat{M}^{in} = \widehat{I} \otimes \widehat{M}$ and $\widehat{M}^{out} = \widehat{U}^{\dagger} (\widehat{I} \otimes \widehat{M}) \widehat{U}$. For more details about the measurement interaction see Ref. [@Ozawa4]. The noise $\epsilon (\widehat{A}, \psi)$ and disturbance $\chi (\widehat{B}, \psi)$ are defined by [@Ozawa4]: $$\epsilon (\widehat{A}, \psi)^2 = \langle \Psi | \widehat{N}(\widehat{A})^2 | \Psi\rangle\hspace{0.2cm} , \hspace{0.2cm} \chi (\widehat{B}, \psi)^2 = \langle\Psi | \widehat{D}(\widehat{B})^2 | \Psi\rangle~.
\label{eq2}$$ Since $\widehat{M}$ and $\widehat{B}$ are observables in different systems, they commute. Thus, using $\left[\widehat{M}^{out}, \widehat{B}^{out} \right]=0$, Eq. (\[eq2\]), the triangle and the Cauchy-Schwarz inequalities, one obtains Ozawa’s uncertainty principle (OUP), $$\epsilon (\widehat{A}, \psi) \chi (\widehat{B}, \psi) + {1\over2}{\left| \langle \left[\widehat{N}(\widehat{A}),\widehat{B}^{in} \right]\rangle + \langle \left[\widehat{A}^{in} , \widehat{D}(\widehat{B}) \right]\rangle\right|}\ge {1\over2}{\left| \langle\psi| ~\left[\widehat{A}, \widehat{B} \right] ~| \psi\rangle\right|}~,
\label{eq3}$$ where one has used, as suggested in Ref. [@Ozawa4], the notation $\langle \widehat{C}\rangle$ to denote $\langle\Psi | \widehat{C} | \Psi\rangle$. If $\left| \langle \left[\widehat{N}(\widehat{A}),\widehat{B}^{in} \right]\rangle + \langle \left[\widehat{A}^{in} , \widehat{D}(\widehat{B}) \right]\rangle\right|=0$, then the Heisenberg noise-disturbance uncertainty relation, Eq. (\[eq0\]), holds. Ozawa defined such a measuring interaction to be of [*independent intervention*]{} for the pair $(\widehat{A},\widehat{B})$.
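The key algebraic step behind Eq. (\[eq3\]) is bilinearity of the commutator: expanding $\left[\widehat{M}^{out}, \widehat{B}^{out}\right] = \left[\widehat{A}^{in} + \widehat{N}, \widehat{B}^{in} + \widehat{D}\right]$ and setting it to zero isolates $\left[\widehat{A}^{in}, \widehat{B}^{in}\right]$ in terms of the noise and disturbance commutators. As a quick sanity check (our own sketch, not part of the paper), the expansion identity can be verified numerically, with random finite-dimensional matrices standing in for the operators:

```python
import numpy as np

rng = np.random.default_rng(0)

def comm(x, y):
    """Commutator [x, y] = xy - yx."""
    return x @ y - y @ x

d = 5
# Random complex matrices standing in for A^in, B^in, N(A), D(B).
A, B, N, D = (rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
              for _ in range(4))

# [A + N, B + D] = [A, B] + [A, D] + [N, B] + [N, D]
lhs = comm(A + N, B + D)
rhs = comm(A, B) + comm(A, D) + comm(N, B) + comm(N, D)
assert np.allclose(lhs, rhs)
```

With $\left[\widehat{M}^{out}, \widehat{B}^{out}\right]=0$ the identity gives $\left[\widehat{A}^{in},\widehat{B}^{in}\right] = -\left[\widehat{N},\widehat{B}^{in}\right] - \left[\widehat{A}^{in},\widehat{D}\right] - \left[\widehat{N},\widehat{D}\right]$; the triangle and Cauchy-Schwarz inequalities then yield Eq. (\[eq3\]).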
From the triangle and the Cauchy-Schwarz inequalities one has: $$\left| \langle \left[\widehat{N}(\widehat{A}),\widehat{B}^{in} \right]\rangle + \langle \left[\widehat{A}^{in} , \widehat{D}(\widehat{B}) \right]\rangle\right| \le 2 \epsilon (\widehat{A}, \psi) \sigma (\widehat{B}, \psi) + 2 \sigma (\widehat{A}, \psi) \chi (\widehat{B}, \psi)~,
\label{eq4}$$ and upon substitution of Eq. (\[eq4\]) into Eq. (\[eq3\]) one obtains: $$\epsilon (\widehat{A}, \psi) \chi (\widehat{B}, \psi) +\epsilon (\widehat{A}, \psi) \sigma (\widehat{B},\psi) + \sigma (\widehat{A}, \psi) \chi (\widehat{B}, \psi) \ge {{| \langle\psi|
~\left[\widehat{A}, \widehat{B} \right] ~| \psi\rangle |}\over2}~. \label{eq5}$$ Here $\sigma (\widehat{C}, \psi)$ denotes the standard deviation of the observable $\widehat{C}$: $\sigma (\widehat{C}, \psi)= \langle \psi| \left(\widehat{C} - \langle \widehat{C} \rangle \right)^2 | \psi \rangle^{1/2}$. Experimentally, weak measurements [@Rozema] and $3$-state mode systems [@Hasegawa; @Cyril2] indicate that violations of the Heisenberg inequality, Eq. (\[eq0\]), are found, while Eq. (\[eq5\]) is shown to be a more accurate description of the experimental data.
In what follows, we investigate whether noncommutative extensions of quantum mechanics (NCQM) yield corrections to the OUP. NCQM corresponds to the non-relativistic one-particle sector of NC quantum field theories [@Douglas], which emerge in the context of string theory and quantum gravity [@Connes]. Actually, the idea that space-time has noncommutative features was suggested long ago as a way to regularize quantum field theories [@Snyder; @Heisenberg; @Yang]. However, this possibility was disregarded for a while due to the development of renormalization techniques and to certain undesirable features of NC theories, such as the breakdown of Lorentz invariance [@Carrol; @Bertolami-Guisado]. More recently, noncommutativity was revived in the context of the quantization of gravity [@Freidel]. The discovery that the low-energy effective theory of a D-brane in the background of a Neveu-Schwarz B field lives on a space with spatial noncommutativity has further stimulated interest in this putative feature of space-time [@Douglas; @Douglas2; @Connes]. From another perspective, a simple heuristic argument, based on Heisenberg’s uncertainty principle, the equivalence principle and the Schwarzschild metric, shows that the Planck length appears to be a lower bound on the precision of a position measurement [@Rosenbaum]. This reinforces the point of view that a new NC geometry of space-time may emerge at a fundamental level [@Douglas; @Connes2; @Martinetti; @Szabo].
NC deformations of the Heisenberg-Weyl (HW) algebra have also been investigated in the context of quantum cosmology, where they are shown to have relevant implications for the thermodynamic stability of black holes and to provide a possible regularization of black hole singularities [@Bastos3]. In the context of QM, phase-space noncommutativity can induce violations of the Robertson-Schrödinger uncertainty principle [@Bastos4] and also act as a source of Gaussian entanglement [@Bastos5]. Moreover, a matrix and Robertson-Schrödinger version of the OUP has been discussed in Ref. [@Bastos6].
A deformation of the HW algebra may also appear as a consequence of quantizing systems with constraints (see e.g. [@Nakamura]).
Phase-space NCQM follows from the modified HW algebra $$\left[\widehat{X},\widehat{Y}\right]=i\theta~, \quad \left[\widehat{P}_{X},\widehat{P}_Y\right]=i\eta~, \quad \left[\widehat{X},\widehat{P}_X\right]=\left[\widehat{Y},\widehat{P}_Y\right]=i\hbar~, \label{NCQM}$$ where the NC parameters $\theta$ and $\eta$ are real constants [@Bastos1]. The paper is organized as follows. In order to implement the NC corrections to the OUP predictions, we start by setting our notation and describing the so-called backaction evading (BAE) quadrature amplifiers in the next section. In section 3, we address the noncommutative deformation of the BAE interaction. In section 4, similar calculations are performed for another type of measurement interaction - the noiseless quadrature transducers. Finally, in section 5, we state our conclusions.
2\. [*Backaction evading quadrature amplifiers*]{}: In the sequel, Latin indices $i,j,k, \cdots$ take values in the set $\left\{1, \cdots,n \right\}$, whereas Greek indices $\alpha, \beta, \cdots$ run from $1$ to $2n$. Let $\widehat{A}_i^{in}$ and $\widehat{B}_j^{in}$, $i,j=1, \cdots, n$ denote a set of self-adjoint operators such that $$\left[\widehat{A}_i^{in},\widehat{A}_j^{in} \right] = \left[\widehat{B}_i^{in},\widehat{B}_j^{in} \right] =0, \hspace{1 cm} \left[\widehat{A}_i^{in},\widehat{B}_j^{in} \right] = i C_{ij},
\label{eq5B}$$ for $i,j=1, \cdots, n$, and where $C=\left\{C_{ij} \right\}$ is a real non-vanishing matrix. One may write the operators collectively as $$\widehat{Z}^{in} = \left(\widehat{A}_1^{in}, \cdots, \widehat{A}_n^{in},\widehat{B}_1^{in}, \cdots, \widehat{B}_n^{in} \right),
\label{eq6}$$ satisfying the commutation relations $$\left[\widehat{Z}_{\alpha}^{in},\widehat{Z}_{\beta}^{in} \right] = i G_{\alpha \beta}, \hspace{1 cm} \alpha, \beta =1, \cdots, 2n,
\label{eq7}$$ with $G=\left\{G_{\alpha \beta}\right\}$ the skew-symmetric matrix: $$G= \left(
\begin{array}{c c}
0 & C\\
- C^T & 0
\end{array}
\right).
\label{eq7.1}$$ If $\widehat{A}^{in}$ denotes the object’s position, $\widehat{X}^{in}$, and, $\widehat{B}^{in}$, its momentum, $\widehat{P}^{in}$, then $\left\{C_{ij} \right\}$ becomes the identity matrix $\left\{\delta_{ij} \right\}$ and $\left\{G_{\alpha \beta} \right\}$ becomes the standard symplectic matrix $J=\left\{J_{\alpha \beta} \right\}$: $$J= \left(
\begin{array}{c c}
0 & I\\
- I & 0
\end{array}
\right).
\label{eq7.2}$$ Let $\widehat{M}^{out} = \left(\widehat{M}^{out}_1, \cdots, \widehat{M}^{out}_n \right)$ denote the outputs of the probe observables. The noise operators are defined by $$\widehat{N}_i= \widehat{N}_i (\widehat{A}_i)= \widehat{M}_i^{out} - \widehat{A}_i^{in}, \hspace{1 cm} i=1, \cdots, n.
\label{eq7.3}$$ Similarly, the disturbance on the observable $\widehat{B}$ is given by: $$\widehat{D}_i= \widehat{D}_i (\widehat{B}_i)= \widehat{B}_i^{out} - \widehat{B}_i^{in}, \hspace{1 cm} i=1, \cdots, n.
\label{eq7.4}$$ We can write these quantities collectively as $$\widehat{K} = \left(\widehat{N}_1, \cdots, \widehat{N}_n, \widehat{D}_1, \cdots, \widehat{D}_n \right).
\label{eq10}$$ If we write $$\widehat{Z}^{out} = \left(\widehat{M}^{out}_1, \cdots, \widehat{M}^{out}_n,\widehat{B}_1^{out}, \cdots, \widehat{B}_n^{out} \right),
\label{eq13}$$ then according to Ozawa [@Ozawa; @Ozawa4] one has: $$\left[\widehat{Z}_{\alpha}^{out}, \widehat{Z}_{\beta}^{out} \right]=0, \hspace{1 cm} \alpha , \beta =1, \cdots, 2n,
\label{eq14}$$ and $$\widehat{Z}^{out}= \widehat{Z}^{in} + \widehat{K}.
\label{eq15}$$
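As an illustrative sketch (the function name is ours), the commutation matrix $G$ of Eq. (\[eq7.1\]) can be assembled from any real matrix $C$ and checked for skew-symmetry; with $C = I$ it reduces to the standard symplectic matrix $J$ of Eq. (\[eq7.2\]):

```python
import numpy as np

def commutation_matrix(C):
    """Assemble G = [[0, C], [-C^T, 0]] from an n x n real matrix C (Eq. eq7.1)."""
    n = C.shape[0]
    Z = np.zeros((n, n))
    return np.block([[Z, C], [-C.T, Z]])

rng = np.random.default_rng(1)
C = rng.normal(size=(3, 3))
G = commutation_matrix(C)
assert np.allclose(G, -G.T)   # G is skew-symmetric for any C

# With C = I the matrix reduces to the symplectic form J, which satisfies J^2 = -I.
J = commutation_matrix(np.eye(3))
assert np.allclose(J @ J, -np.eye(6))
```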
To study the validity of Eq. (\[eq5\]), Ozawa [@Ozawa4] considered the system quadrature operators $\widehat{X}_a,\widehat{P}_{X_a}$ and the probe operators $\widehat{X}_b,\widehat{P}_{X_b}$ obeying the commutation relations $$\left[\widehat{X}_a,\widehat{P}_{X_a} \right]= \left[\widehat{X}_b,\widehat{P}_{X_b} \right] = i\hbar~.
\label{eq16}$$ For the sake of generality, we will keep the Planck constant $\hbar$ arbitrary here. Ozawa [@Ozawa4] considered $\hbar =1/2$. The measuring interaction is given by $$\left\{
\begin{array}{l}
\widehat{X}_a^{out}= \widehat{X}_a^{in}\\
\widehat{X}_b^{out} = \widehat{X}_b^{in} + G \widehat{X}_a^{in}\\
\widehat{P}_{X_a}^{out} = \widehat{P}_{X_a}^{in} - G \widehat{P}_{X_b}^{in}\\
\widehat{P}_{X_b}^{out} = \widehat{P}_{X_b}^{in}~,
\end{array}
\right.
\label{eq17}$$ where the factor $G$ is the gain associated with the measurement. The probe observable is then set to $\widehat{M}= {1\over G} \widehat{X}_b$, and thus $$\widehat{M}^{out} = \widehat{X}_a^{in} + {1\over G} \widehat{X}_b^{in}.
\label{eq18}$$ Moreover, the noise and disturbance are given by $$\begin{aligned}
\widehat{N} (X_a) &=&{1\over G} \widehat{X}_b^{in},\nonumber\\
\widehat{D} (X_a) &=&0,\nonumber\\
\widehat{D} (P_{X_a}) &=& - G \widehat{P}_{X_b}^{in}.
\label{eq19}\end{aligned}$$
This measuring model is referred to as a backaction evading (BAE) quadrature amplifier and is considered here for the measurement of the quadrature operator $X_a$ in the $1$-dimensional case.
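Because the BAE interaction, Eq. (\[eq17\]), is linear in the operators, commutators of its outputs reduce to a matrix identity: writing $z = (\widehat{X}_a, \widehat{X}_b, \widehat{P}_{X_a}, \widehat{P}_{X_b})$ with $\left[z_{\alpha}, z_{\beta}\right] = i \Omega_{\alpha\beta}$, the linear map $z \mapsto Sz$ preserves the algebra iff $S \Omega S^T = \Omega$. The following numpy check (our own sketch; the variable ordering is ours) also confirms the backaction-evading property $\left[\widehat{M}^{out}, \widehat{P}_{X_a}^{out}\right]=0$:

```python
import numpy as np

hbar, G = 1.0, 2.5   # the gain G is arbitrary here

# Commutator table for z = (X_a, X_b, P_Xa, P_Xb): [z_a, z_b] = i * Omega[a, b].
Omega = hbar * np.array([[ 0,  0, 1, 0],
                         [ 0,  0, 0, 1],
                         [-1,  0, 0, 0],
                         [ 0, -1, 0, 0]], dtype=float)

# BAE interaction, Eq. (eq17): z_out = S @ z_in.
S = np.array([[1.0, 0, 0,  0],    # X_a^out  = X_a
              [G,   1, 0,  0],    # X_b^out  = X_b + G X_a
              [0,   0, 1, -G],    # P_Xa^out = P_Xa - G P_Xb
              [0,   0, 0,  1]])   # P_Xb^out = P_Xb

assert np.allclose(S @ Omega @ S.T, Omega)   # commutation relations are preserved

# Probe readout M^out = X_a + X_b/G (Eq. eq18) commutes with the disturbed momentum:
m = np.array([1, 1 / G, 0, 0])               # coefficients of M^out in the basis z
assert np.isclose(m @ Omega @ S[2], 0.0)     # [M^out, P_Xa^out] = 0
```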
In order to implement the NC commutation relations, we must first consider a $2-$dimensional extension of the BAE model. We define the additional degrees of freedom $\widehat{Y}_a$ and $\widehat{Y}_b$ for the system and the probe and the conjugate variables $\widehat{P}_{Y_a}$ and $\widehat{P}_{Y_b}$, obeying the same commutation relations (\[eq16\]): $$\left[\widehat{X}_a,\widehat{P}_{X_a} \right]=\left[\widehat{Y}_a,\widehat{P}_{Y_a} \right] =\left[\widehat{X}_b,\widehat{P}_{X_b} \right]=\left[\widehat{Y}_b,\widehat{P}_{Y_b} \right]= i \hbar,
\label{eqcomments2}$$ while all the remaining commutators vanish. Let $\widehat{H}$ be the Hamiltonian operator $$\widehat{H} = \alpha \left(\widehat{P}_{X_b} \widehat{X}_a +\widehat{P}_{Y_b} \widehat{Y}_a \right),
\label{eqcomments1}$$ where $\alpha$ is some constant with dimensions $(time)^{-1}$. It generates the unitary transformation $$\widehat{U} (t) = e^{\frac{it}{\hbar} \widehat{H}}
\label{eqcomments1.1}$$ which models the measurement interaction during a time interval $t \in \left[0, T \right]$.
In view of the commutation relations (\[eqcomments2\]), there are no ordering ambiguities in the Hamiltonian, Eq. (\[eqcomments1\]).
The equations of motion are $${{d \widehat{Z}}\over dt} = {1\over{i \hbar}} \left[\widehat{Z},\widehat{H} \right]
\label{eqcomments3}$$ for the observable $\widehat{Z}$. From Eqs. (\[eqcomments1\])-(\[eqcomments3\]) one obtains: $$\left\{
\begin{array}{l}
{{d\widehat{X}_a}\over dt}= 0\\
{{d \widehat{Y}_a}\over dt} = 0\\
{{d \widehat{X}_b}\over dt} = \alpha \widehat{X}_a\\
{{d \widehat{Y}_b}\over dt} = \alpha \widehat{Y}_a\\
{{d \widehat{P}_{X_a}}\over dt} = -\alpha \widehat{P}_{X_b}\\
{{d \widehat{P}_{Y_a}}\over dt} = -\alpha \widehat{P}_{Y_b}\\
{{d \widehat{P}_{X_b}}\over dt} =0\\
{{d \widehat{P}_{Y_b}}\over dt} =0~.
\end{array}
\right.
\label{eqcomments4}$$ At time $t=0$, just before the measurement interaction is switched on, one has $ \widehat{X}_a (0)= \widehat{X}_a^{in}$, $ \widehat{Y}_a (0)= \widehat{Y}_a^{in}$, $ \widehat{X}_b (0)= \widehat{X}_b^{in}$, etc.
The solution of Eqs. (\[eqcomments4\]) is then: $$\left\{
\begin{array}{l}
\widehat{X}_a (t) = \widehat{X}_a^{in}\\
\widehat{Y}_a (t) = \widehat{Y}_a^{in}\\
\widehat{X}_b (t) = \widehat{X}_b^{in} + t\alpha \widehat{X}_a^{in} \\
\widehat{Y}_b (t) = \widehat{Y}_b^{in} + t\alpha\widehat{Y}_a^{in} \\
\widehat{P}_{X_a} (t) = \widehat{P}_{X_a}^{in} - t \alpha\widehat{P}_{X_b}^{in} \\
\widehat{P}_{Y_a} (t) = \widehat{P}_{Y_a}^{in} - t \alpha \widehat{P}_{Y_b}^{in} \\
\widehat{P}_{X_b} (t) = \widehat{P}_{X_b}^{in} \\
\widehat{P}_{Y_b} (t) = \widehat{P}_{Y_b}^{in}~.
\end{array}
\right.
\label{eqcomments5}$$ Let $T$ be the infinitesimal duration of the interaction. Thus, one sets $ \widehat{X}_a (T)= \widehat{X}_a^{out}$, $ \widehat{Y}_a (T)= \widehat{Y}_a^{out}$, $ \widehat{X}_b (T)= \widehat{X}_b^{out}$, etc. Let also $G= \alpha T$ be a dimensionless constant. Hence: $$\left\{
\begin{array}{l}
\widehat{X}_a^{out} = \widehat{X}_a^{in}\\
\widehat{Y}_a^{out} = \widehat{Y}_a^{in}\\
\widehat{X}_b^{out} = \widehat{X}_b^{in} + G\widehat{X}_a^{in} \\
\widehat{Y}_b^{out} = \widehat{Y}_b^{in} + G\widehat{Y}_a^{in} \\
\widehat{P}_{X_a}^{out} = \widehat{P}_{X_a}^{in} - G\widehat{P}_{X_b}^{in} \\
\widehat{P}_{Y_a}^{out} = \widehat{P}_{Y_a}^{in} - G\widehat{P}_{Y_b}^{in} \\
\widehat{P}_{X_b}^{out} = \widehat{P}_{X_b}^{in} \\
\widehat{P}_{Y_b}^{out} = \widehat{P}_{Y_b}^{in}~,
\end{array}
\right.
\label{eqcomments6}$$ which is the BAE interaction for the $2$-dimensional case.
The probe observables are now set to $$\widehat{M}=\left({\widehat{X}_b\over G},\,{\widehat{Y}_b\over G}\right),
\label{eq21}$$ and then the vector $\widehat{K}$, which is the generalized vector describing the noise and the disturbance of the measurement, is given by $$\widehat{K}=\left({1\over G}\widehat{X}_b^{in},\,{1\over G}\widehat{Y}_b^{in},\,-G \widehat{P}_{X_b}^{in},\,-G \widehat{P}_{Y_b}^{in}\right).
\label{eq22}$$
3\. [*The noncommutative extension of Ozawa’s uncertainty principle*]{}: We now turn to the noncommutative algebra $$\begin{aligned}
&\left[\widehat{X}_a,\widehat{Y}_a\right]=\left[\widehat{X}_b,\widehat{Y}_b\right]=i\theta~, \label{eq23.1}\\
&\left[\widehat{X}_a,\widehat{P}_{X_a}\right]=\left[\widehat{X}_b,\widehat{P}_{X_b}\right]=i\hbar~, \label{eq23.2}\\
&\left[\widehat{Y}_a,\widehat{P}_{Y_a}\right]=\left[\widehat{Y}_b,\widehat{P}_{Y_b}\right]=i\hbar~, \label{eq23.3}\\
&\left[\widehat{P}_{X_a},\widehat{P}_{Y_a}\right]=\left[\widehat{P}_{X_b},\widehat{P}_{Y_b}\right]=i\eta~, \label{eq23.4}\end{aligned}$$ and all remaining commutators vanish.
It is reasonable to anticipate that, if one performs a measurement of, say, $\widehat{X}_a$, then:
\(a) There may be an associated noise, depending on the nature of measurement interaction, e.g. BAE or noiseless;
\(b) One should expect a disturbance on the canonically conjugate momentum, $\widehat{P}_{X_a}$, due to the HW algebra, and consequently on $\widehat{P}_{Y_a}$, since in the NC framework the momenta do not commute, i.e., $[\widehat{P}_X,\widehat{P}_Y]=i\eta$;
\(c) Moreover, one has an additional disturbance on $\widehat{Y}_a$ due to the NC relation $[\widehat{X},\widehat{Y}]=i\theta$, and indirectly on $\widehat{P}_{Y_a}$, due to the standard commutation relation $[\widehat{Y}, \widehat{P}_Y]=i\hbar$.
We assume the same Hamiltonian, Eq. (\[eqcomments1\]), as above. Notice that there are still no ordering ambiguities in the Hamiltonian.
The equations of motion then become: $$\left\{
\begin{array}{l}
{{d \widehat{X}_a}\over dt} = {{\alpha \theta}\over\hbar} \widehat{P}_{Y_b}\\
{{d \widehat{Y}_a}\over dt} = - {{\alpha \theta}\over\hbar} \widehat{P}_{X_b}\\
{{d \widehat{X}_b}\over dt} = \alpha \widehat{X}_a \\
{{d \widehat{Y}_b}\over dt} = \alpha \widehat{Y}_a \\
{{d \widehat{P}_{X_a}}\over dt} = - \alpha \widehat{P}_{X_b} \\
{{d \widehat{P}_{Y_a}}\over dt} = - \alpha \widehat{P}_{Y_b} \\
{{d \widehat{P}_{X_b}}\over dt} = {{\alpha \eta}\over\hbar} \widehat{Y}_a \\
{{d \widehat{P}_{Y_b}}\over dt} = - {{\alpha \eta}\over\hbar} \widehat{X}_a~.
\end{array}
\right.
\label{eqcomments7}$$ Again setting $ \widehat{X}_a (0)= \widehat{X}_a^{in}$, $ \widehat{Y}_a (0)= \widehat{Y}_a^{in}$, $ \widehat{X}_b (0)= \widehat{X}_b^{in}$, etc., the solution to the previous system of equations reads: $$\left\{
\begin{array}{l}
\widehat{X}_a (t) = \widehat{X}_a^{in} \cos \left( \frac{\alpha t \sqrt{\theta \eta}}{\hbar}\right) + \sqrt{\frac{\theta}{\eta}} \widehat{P}_{Y_b}^{in} \sin \left( \frac{\alpha t \sqrt{\theta \eta}}{\hbar}\right)\\
\widehat{Y}_a (t) = \widehat{Y}_a^{in} \cos \left( \frac{\alpha t \sqrt{\theta \eta}}{\hbar}\right) - \sqrt{\frac{\theta}{\eta}} \widehat{P}_{X_b}^{in} \sin \left( \frac{\alpha t \sqrt{\theta \eta}}{\hbar}\right)\\
\widehat{X}_b (t) = \widehat{X}_b^{in}+ \frac{\hbar}{\sqrt{\theta \eta}} \widehat{X}_a^{in} \sin \left( \frac{\alpha t \sqrt{\theta \eta}}{\hbar}\right) + \frac{2 \hbar}{\eta} \widehat{P}_{Y_b}^{in} \sin^2 \left( \frac{\alpha t \sqrt{\theta \eta}}{2 \hbar}\right)\\
\widehat{Y}_b (t) = \widehat{Y}_b^{in}+ \frac{\hbar}{\sqrt{\theta \eta}} \widehat{Y}_a^{in} \sin \left( \frac{\alpha t \sqrt{\theta \eta}}{\hbar}\right) - \frac{2 \hbar}{\eta} \widehat{P}_{X_b}^{in} \sin^2 \left( \frac{\alpha t \sqrt{\theta \eta}}{2 \hbar}\right)\\
\widehat{P}_{X_a} (t) = \widehat{P}_{X_a}^{in}- \frac{\hbar}{\sqrt{\theta \eta}} \widehat{P}_{X_b}^{in} \sin \left( \frac{\alpha t \sqrt{\theta \eta}}{\hbar}\right) - \frac{2 \hbar}{\theta} \widehat{Y}_a^{in} \sin^2 \left( \frac{\alpha t \sqrt{\theta \eta}}{2 \hbar}\right)\\
\widehat{P}_{Y_a} (t) = \widehat{P}_{Y_a}^{in}- \frac{\hbar}{\sqrt{\theta \eta}} \widehat{P}_{Y_b}^{in} \sin \left( \frac{\alpha t \sqrt{\theta \eta}}{\hbar}\right) + \frac{2 \hbar}{\theta} \widehat{X}_a^{in} \sin^2 \left( \frac{\alpha t \sqrt{\theta \eta}}{2 \hbar}\right)\\
\widehat{P}_{X_b} (t) = \widehat{P}_{X_b}^{in} \cos \left( \frac{\alpha t \sqrt{\theta \eta}}{\hbar}\right) + \sqrt{ \frac{\eta}{\theta}} \widehat{Y}_a^{in} \sin \left( \frac{\alpha t \sqrt{\theta \eta}}{\hbar}\right)\\
\widehat{P}_{Y_b} (t) = \widehat{P}_{Y_b}^{in} \cos \left( \frac{\alpha t \sqrt{\theta \eta}}{\hbar}\right) - \sqrt{ \frac{\eta}{\theta}} \widehat{X}_a^{in} \sin \left( \frac{\alpha t \sqrt{\theta \eta}}{\hbar}\right)~.
\end{array}
\right.
\label{eqcomments8}$$ As previously, $ \widehat{X}_a (T)= \widehat{X}_a^{out}$, $ \widehat{Y}_a (T)= \widehat{Y}_a^{out}$, $ \widehat{X}_b (T)= \widehat{X}_b^{out}$, etc. We thus obtain: $$\left\{
\begin{array}{l}
\widehat{X}_a^{out} = \widehat{X}_a^{in} \cos \left( \frac{G \sqrt{\theta \eta}}{\hbar}\right) + \sqrt{\frac{\theta}{\eta}} \widehat{P}_{Y_b}^{in} \sin \left( \frac{G \sqrt{\theta \eta}}{\hbar}\right)\\
\widehat{Y}_a^{out} = \widehat{Y}_a^{in} \cos \left( \frac{G \sqrt{\theta \eta}}{\hbar}\right) - \sqrt{\frac{\theta}{\eta}} \widehat{P}_{X_b}^{in} \sin \left( \frac{G \sqrt{\theta \eta}}{\hbar}\right)\\
\widehat{X}_b^{out} = \widehat{X}_b^{in}+ \frac{\hbar}{\sqrt{\theta \eta}} \widehat{X}_a^{in} \sin \left( \frac{G \sqrt{\theta \eta}}{\hbar}\right) + \frac{2 \hbar}{\eta} \widehat{P}_{Y_b}^{in} \sin^2 \left( \frac{G \sqrt{\theta \eta}}{2 \hbar}\right)\\
\widehat{Y}_b^{out}= \widehat{Y}_b^{in}+ \frac{\hbar}{\sqrt{\theta \eta}} \widehat{Y}_a^{in} \sin \left( \frac{G \sqrt{\theta \eta}}{\hbar}\right) - \frac{2 \hbar}{\eta} \widehat{P}_{X_b}^{in} \sin^2 \left( \frac{G \sqrt{\theta \eta}}{2 \hbar}\right)\\
\widehat{P}_{X_a}^{out} = \widehat{P}_{X_a}^{in}- \frac{\hbar}{\sqrt{\theta \eta}} \widehat{P}_{X_b}^{in} \sin \left( \frac{G \sqrt{\theta \eta}}{\hbar}\right) - \frac{2 \hbar}{\theta} \widehat{Y}_a^{in} \sin^2 \left( \frac{G \sqrt{\theta \eta}}{2 \hbar}\right)\\
\widehat{P}_{Y_a}^{out} = \widehat{P}_{Y_a}^{in}- \frac{\hbar}{\sqrt{\theta \eta}} \widehat{P}_{Y_b}^{in} \sin \left( \frac{G \sqrt{\theta \eta}}{\hbar}\right) + \frac{2 \hbar}{\theta} \widehat{X}_a^{in} \sin^2 \left( \frac{G \sqrt{\theta \eta}}{2 \hbar}\right)\\
\widehat{P}_{X_b}^{out} = \widehat{P}_{X_b}^{in} \cos \left( \frac{G \sqrt{\theta \eta}}{\hbar}\right) + \sqrt{ \frac{\eta}{\theta}} \widehat{Y}_a^{in} \sin \left( \frac{G \sqrt{\theta \eta}}{\hbar}\right)\\
\widehat{P}_{Y_b}^{out} = \widehat{P}_{Y_b}^{in} \cos \left( \frac{G \sqrt{\theta \eta}}{\hbar}\right) - \sqrt{ \frac{\eta}{\theta}} \widehat{X}_a^{in} \sin \left( \frac{G \sqrt{\theta \eta}}{\hbar}\right)~.
\end{array}
\right.
\label{eqcomments9}$$
Since the duration of the measurement interaction is infinitesimal ($G\ll1$) and the noncommutative parameters are presumably small ($\frac{\sqrt{\theta \eta}}{\hbar} \ll1$) [@Bertolami], one keeps only the lowest order terms in the previous expressions to get: $$\left\{
\begin{array}{l}
\widehat{X}_a^{out} \sim \widehat{X}_a^{in} + {{G \theta}\over\hbar} \widehat{P}_{Y_b}^{in}\\
\widehat{Y}_a^{out} \sim \widehat{Y}_a^{in} - {{G \theta}\over\hbar} \widehat{P}_{X_b}^{in} \\
\widehat{X}_b^{out} \sim \widehat{X}_b^{in}+ G \widehat{X}_a^{in} + {{\theta G^2}\over 2 \hbar} \widehat{P}_{Y_b}^{in} \\
\widehat{Y}_b^{out}\sim \widehat{Y}_b^{in}+ G \widehat{Y}_a^{in} - {{\theta G^2}\over 2 \hbar} \widehat{P}_{X_b}^{in} \\
\widehat{P}_{X_a}^{out} \sim \widehat{P}_{X_a}^{in}- G \widehat{P}_{X_b}^{in} - {{\eta G^2}\over2 \hbar} \widehat{Y}_a^{in}\\
\widehat{P}_{Y_a}^{out} \sim \widehat{P}_{Y_a}^{in}- G \widehat{P}_{Y_b}^{in} + {{\eta G^2}\over2 \hbar} \widehat{X}_a^{in} \\
\widehat{P}_{X_b}^{out} \sim \widehat{P}_{X_b}^{in} + {{G \eta}\over\hbar} \widehat{Y}_a^{in} \\
\widehat{P}_{Y_b}^{out} \sim \widehat{P}_{Y_b}^{in} - {{G \eta}\over\hbar} \widehat{X}_a^{in}~.
\end{array}
\right.
\label{eqcomments10}$$
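This limit can be checked symbolically. In the following `sympy` sketch (not from the paper), the exact $\widehat{X}_b$ out-relation of Eq. (\[eqcomments9\]) is written in terms of the single small parameter $s=\sqrt{\theta\eta}/\hbar$, and the linearized relation of Eq. (\[eqcomments10\]) is recovered as $s\to0$ at fixed $G$:

```python
import sympy as sp

G, theta, hbar, s = sp.symbols('G theta hbar s', positive=True)
Xa, Xb, PYb = sp.symbols('X_a X_b P_Yb')

# Exact X_b out-relation, written with s = sqrt(theta*eta)/hbar, so that
# hbar/sqrt(theta*eta) = 1/s and 2*hbar/eta = 2*theta/(hbar*s**2)
Xb_out = Xb + Xa*sp.sin(G*s)/s + (2*theta/(hbar*s**2))*PYb*sp.sin(G*s/2)**2

# s -> 0 recovers the linearized relation X_b + G X_a + (theta G^2 / 2 hbar) P_Yb
linear = sp.limit(Xb_out, s, 0)
print(sp.expand(linear - (Xb + G*Xa + G**2*theta*PYb/(2*hbar))))   # 0
```

The other seven components linearize in exactly the same way.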
Choosing the probe observables, $$\widehat{M}=\left({\widehat{X}_{b}\over G},\,{\widehat{Y}_{b}\over G}\right),
\label{eq75}$$ one finally obtains the noise and disturbance operator, $$\widehat{K}=\left({\widehat{X}_b^{in}\over G} + {G\theta\over 2\hbar}\widehat{P}_{Y_b}^{in},\,{\widehat{Y}_b^{in}\over G} - {G\theta\over 2\hbar}\widehat{P}_{X_b}^{in},\,-G \widehat{P}_{X_b}^{in} - {\eta G^2\over2\hbar}\widehat{Y}_a^{in},\,- G \widehat{P}_{Y_b}^{in} + {\eta G^2\over2\hbar}\widehat{X}_a^{in}\right).
\label{eq76}$$ As anticipated at the beginning of this section, one has additional disturbance terms which are a manifestation of noncommutativity. Notice, in particular, that $\widehat{Y}_a$ is disturbed, unlike the “commutative” case: $$\begin{aligned}
&\widehat{D}(\widehat{P}_{X_a}) = - G \widehat{P}_{X_b}^{in} - {\eta G^2\over2\hbar}\widehat{Y}_a^{in}~, \label{eq76.1}\\
&\widehat{D}(\widehat{P}_{Y_a}) = - G \widehat{P}_{Y_b}^{in} + {\eta G^2\over2\hbar}\widehat{X}_a^{in}~, \label{eq76.2}\\
&\widehat{D}(\widehat{Y}_a) = - {G\theta\over\hbar}\widehat{P}_{X_b}^{in}~. \label{eq76.3}\end{aligned}$$ To summarize, the effect of noncommutativity can be detected through extra terms in the noise and disturbance due to the new commutation relations, as well as by a disturbance on observables which were undisturbed without noncommutativity in configuration and momentum spaces.
In what follows, we show that Ozawa’s measurement-disturbance relation also acquires extra terms due to these new commutation relations.
Turning back to the OUP, Eq. (\[eq5\]), and the relations from Eq. (\[eq2\]), one typically has for the commutative case $$\|\widehat{N}(\widehat{Z}_i)\| \propto {1\over G}~,\qquad \|\widehat{D}(\widehat{Z}_i)\| \propto G~,
\label{eq76}$$ where $\widehat{Z}_i=\widehat{X},\widehat{Y}$ for a $2$-dimensional phase space. Eq. (\[eq2\]) then reads $$\epsilon(\widehat{Z}_i)\propto {1\over G}~,\qquad \chi(\widehat{Z}_i)\propto G~.
\label{eq77}$$ We now evaluate the noise and the disturbance in terms of the gain parameter $G$ and show that the NC corrections add extra terms to the OUP. For the pair $(\widehat{X}_a, \widehat{P}_{X_a})$, $$\begin{aligned}
\epsilon(\widehat{X}_a)&= \left\langle\left({\widehat{X}_b^{in}\over G} + {G\theta\over 2\hbar}\widehat{P}_{Y_b}^{in}\right)^2\right\rangle^{1/2}\nonumber\\
&=\left({\langle\widehat{X}_b^{in^2}\rangle\over G^2}+ {\theta\over\hbar}\langle\left\{\widehat{X}_b^{in},\widehat{P}_{Y_b}^{in}\right\}\rangle+G^2{\theta^2\over 4\hbar^2}\langle\widehat{P}_{Y_b}^{in^2}\rangle\right)^{1/2}\nonumber\\
&= {\langle\widehat{X}_b^{in^2}\rangle^{1/2}\over G}\left(1 + 2G^2{\theta\over 2\hbar}{\langle\left\{\widehat{X}_b^{in} , \widehat{P}_{Y_b}^{in} \right\}\rangle\over\langle\widehat{X}_b^{in^2}\rangle}+G^4{\theta^2\over 4\hbar^2}{\langle\widehat{P}_{Y_b}^{in^2}\rangle\over\langle\widehat{X}_b^{in^2}\rangle}\right)^{1/2}.
\label{eq78}\end{aligned}$$ Here $\left\{\widehat{A},\widehat{B} \right\} = \frac{1}{2} (\widehat{A}\widehat{B} + \widehat{B} \widehat{A})$ denotes the anti-commutator. Let us call $\epsilon_C(\widehat{X}_a)={\langle\widehat{X}_b^{in^2}\rangle^{1/2} \over G}$ the “commutative” part of the noise operator, and $k_1= 2 {\langle \left\{\widehat{X}_b^{in} , \widehat{P}_{Y_b}^{in} \right\}\rangle\over {\langle\widehat{X}_b^{in^2}\rangle}}$. Thus, $$\epsilon_{NC}(\widehat{X}_a)= \epsilon_C(\widehat{X}_a)\left( 1+ k_1 {\theta\over 4\hbar} G^2\right)+O(\theta^2)~.
\label{eq79}$$ Clearly, the noise has a noncommutative correction due to the noncommutativity between the configuration variables. The disturbance can be evaluated using the same strategy, $$\begin{aligned}
\chi(\widehat{P}_{X_a})&= \left\langle\left(-G \widehat{P}_{X_b}^{in} - {G^2\eta\over 2\hbar} \widehat{Y}_a^{in}\right)^2\right\rangle^{1/2}\nonumber\\
&=\left(G^2\langle\widehat{P}_{X_b}^{in^2}\rangle+ G^3{\eta\over\hbar}\langle\widehat{P}_{X_b}^{in}\widehat{Y}_{a}^{in}\rangle+G^4{\eta^2\over 4\hbar^2}\langle\widehat{Y}_{a}^{in^2}\rangle\right)^{1/2}\nonumber\\
&= G\,\langle\widehat{P}_{X_b}^{in^2}\rangle^{1/2}\left(1 + G{\eta\over\hbar}{\langle\widehat{P}_{X_b}^{in}\widehat{Y}_{a}^{in}\rangle\over\langle\widehat{P}_{X_b}^{in^2}\rangle}+G^2{\eta^2\over 4\hbar^2}{\langle\widehat{Y}_{a}^{in^2}\rangle\over\langle\widehat{P}_{X_b}^{in^2}\rangle}\right)^{1/2}.
\label{eq80}\end{aligned}$$ Thus, defining the “commutative" part of the disturbance as $\chi_C(\widehat{P}_{X_a})=G {\langle{ \widehat{P}_{X_b}^{in^2}}\rangle}^{1/2}$ and $k_2={\langle\widehat{P}_{X_b}^{in}\widehat{Y}_{a}^{in}\rangle\over {\langle{ \widehat{P}_{X_b}^{in^2}\rangle}}} $, then, $$\chi_{NC}(\widehat{P}_{X_a})= \chi_C(\widehat{P}_{X_a}) \left( 1+ k_2 {\eta\over 2\hbar} G\right)+O(\eta^2)~.
\label{eq81}$$ Notice that $$\begin{aligned}
&\left[\widehat{X}_a^{in},\widehat{N}(\widehat{X}_a)\right]=\left[\widehat{Y}_a^{in},\widehat{N}(\widehat{Y}_a)\right]=0~, \label{eq81.1}\\
&\left[\widehat{X}_a^{in},\widehat{D}(\widehat{P}_{X_a})\right]=\left[\widehat{Y}_a^{in},\widehat{D}(\widehat{P}_{Y_a})\right]= - {i G^2\theta\eta\over 2\hbar}~. \label{eq81.2}\end{aligned}$$ So, unlike the “commutative" case, the BAE interaction is no longer an independent intervention. However, as before, we shall neglect terms of order $O( \eta \theta / \hbar)$.
Of course, the approximations, Eqs. (\[eq79\]) and (\[eq81\]), only make sense provided $k_1$ and $k_2$ are such that $$\begin{aligned}
&1+ {k_1 \theta G^2\over 4\hbar} >0~, \label{eqapprox1}\\
&1+ {k_2 \eta G\over 2\hbar} >0~. \label{eqapprox2}\end{aligned}$$ Substituting into Eq. (\[eq3\]), one finally obtains, to lowest order in $\theta$ and $\eta$, $$\epsilon_C(\widehat{X}_a)\,\chi_C(\widehat{P}_{X_a}) \left( 1+ {k_1 \theta G^2\over 4\hbar} +{k_2 \eta G\over 2\hbar} \right)\geq {\hbar\over2}~.
\label{eq82}$$
This result suggests that a bound for the NC parameters can be found as a shift with respect to Ozawa’s result. It is clear that the noncommutative corrections to the OUP are associated with the coefficients $k_1$ and $k_2$. Notice that it is always possible to choose states for which $k_2=0$ (which automatically satisfies condition (\[eqapprox2\])). This is a consequence of the fact that the interaction is described in the Heisenberg representation. The initial state is the product state $$|\Psi\rangle = |\psi\rangle\otimes|\xi\rangle~,
\label{eq83}$$ where $\psi$ and $\xi$ are the states describing the object and the probe, respectively. Then, one concludes that $$|\langle\widehat{Z}_{a,\alpha}^{in}\widehat{Z}_{b,\beta}^{in}\rangle| = |\langle\widehat{Z}_{a,\alpha}^{in}\rangle|\, |\langle\widehat{Z}_{b,\beta}^{in}\rangle|~,
\label{eq84}$$ for every $\alpha=1,..., 4$ and $\beta=1,..., 4$. So, through a translation, it is possible to find a probe state $\xi$, such that $$|\langle\widehat{Z}_{b,\beta}^{in}\rangle| =0~,
\label{eq85}$$ for every $\beta=1,..., 4$. This entails that $k_2=0$. Then, the OUP becomes $$\epsilon_C(\widehat{X}_a)\,\chi_C(\widehat{P}_{X_a}) \left( 1+{k_1 \theta G^2\over 4\hbar} \right)\geq {\hbar\over2}~.
\label{eq86}$$ In this reduced form, all the elements depend only on the probe’s state. It is now manifest that there are probe states for which the OUP is violated for the BAE model, whereas its noncommutative version is not. Indeed, choose any state $\xi$ for the probe with covariance matrix elements $\langle(\widehat{X}_b^{in})^2\rangle$, $\langle(\widehat{P}_{X_b}^{in})^2\rangle$ and $\langle
\widehat{X}_b^{in} \widehat{P}_{Y_b}^{in}\rangle$ such that $$0 < {\hbar\over2} \left(1-{k_1 \theta G^2\over 4\hbar} \right) \leq \langle(\widehat{X}_b^{in})^2\rangle^{1/2}\langle(\widehat{P}_{X_b}^{in})^2\rangle^{1/2} < {\hbar\over2}~.
\label{eq86.1}$$ Under these circumstances Eq. (\[eq86\]) holds to first order in $\theta$, while the OUP is violated: $$\epsilon_C(\widehat{X}_a)\,\chi_C(\widehat{P}_{X_a}) < {\hbar\over2}~.
\label{eq86.2}$$
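The violation window can be illustrated with plain numbers (an illustration with assumed values, in units $\hbar=1$; `delta` stands for the positive noncommutative shift, whatever its precise expression in terms of $k_1$, $\theta$ and $G$):

```python
# Units with hbar = 1, so the commutative Ozawa-type bound is eps_C * chi_C >= 0.5
hbar = 1.0
delta = 0.1            # assumed positive noncommutative shift (the k_1-dependent term)

bound_C = hbar / 2                 # commutative bound
bound_NC = bound_C / (1 + delta)   # smallest product satisfying the corrected bound

# Any probe with eps_C * chi_C in [bound_NC, bound_C) violates the commutative
# bound while satisfying its noncommutative version
product = 0.48
print(product >= bound_NC, product < bound_C)   # True True
```

Any positive shift `delta` opens such a window, which is why the class of admissible probe states is strictly larger in the noncommutative theory.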
4\. [*Noiseless quadrature transducers*]{}: With the same system and probe as previously introduced, suppose now that the measurement interaction is as follows. Let $0< T_1 < T_2$, where $T_2$ is the total duration of the measurement interaction. During the time interval $\left[0, T_1 \right]$ the interaction is generated by the Hamiltonian operator $$\widehat{H}_1 = {1\over T_1} \left(\widehat{P}_{X_b}^{in} \widehat{X}_a^{in} +\widehat{P}_{Y_b}^{in} \widehat{Y}_a^{in} \right)~.
\label{eqnoiseless1}$$ This is the same Hamiltonian as for the BAE interaction with $\alpha = T_1^{-1}$. In view of this fact, at time $T_1$ one has: $$\left\{
\begin{array}{l}
\widehat{X}_a (T_1) = \widehat{X}_a^{in}\\
\widehat{Y}_a (T_1) = \widehat{Y}_a^{in}\\
\widehat{X}_b (T_1) = \widehat{X}_b^{in} + \widehat{X}_a^{in} \\
\widehat{Y}_b (T_1) = \widehat{Y}_b^{in} + \widehat{Y}_a^{in} \\
\widehat{P}_{X_a} (T_1) = \widehat{P}_{X_a}^{in} - \widehat{P}_{X_b}^{in} \\
\widehat{P}_{Y_a} (T_1) = \widehat{P}_{Y_a}^{in} - \widehat{P}_{Y_b}^{in} \\
\widehat{P}_{X_b} (T_1) = \widehat{P}_{X_b}^{in} \\
\widehat{P}_{Y_b} (T_1) = \widehat{P}_{Y_b}^{in}~.
\end{array}
\right.
\label{eqnoiseless2}$$ During the subsequent time interval $\left[T_1,T_2 \right]$, the unitary transformation is governed by the Hamiltonian $$\widehat{H}_2 = -{1\over T} \left(\widehat{P}_{X_a}^{in} \widehat{X}_b^{in} +\widehat{P}_{Y_a}^{in} \widehat{Y}_b^{in} \right).
\label{eqnoiseless3}$$ The solution for observable $\widehat{Z} (t)$ during the time interval $\left[T_1,T_2 \right]$ is given by the series: $$\widehat{Z} (t)= \widehat{Z} (T_1 )+ {{(t-T_1)}\over i \hbar} \left[ \widehat{Z} (T_1), \widehat{H}_2 \right] + {1\over 2!} \left( {{t-T_1}\over i \hbar}\right)^2 \left[\left[ \widehat{Z} (T_1), \widehat{H}_2 \right] , \widehat{H}_2 \right]+ \cdots~.
\label{eqnoiseless4}$$ A straightforward inspection reveals that only the terms up to order $(t-T_1)$ survive for all observables and thus, one gets: $$\left\{
\begin{array}{l}
\widehat{X}_a (t) = \widehat{X}_a^{in} - {{(t-T_1)}\over T} \widehat{X}_b^{in}\\
\widehat{Y}_a (t) = \widehat{Y}_a^{in}- {{(t-T_1)}\over T}\widehat{Y}_b^{in}\\
\widehat{X}_b (t) = \widehat{X}_b^{in} + \widehat{X}_a^{in} - {{(t-T_1)}\over T} \widehat{X}_b^{in}\\
\widehat{Y}_b (t) = \widehat{Y}_b^{in} + \widehat{Y}_a^{in} -{{(t-T_1)}\over T} \widehat{Y}_b^{in} \\
\widehat{P}_{X_a} (t) = \widehat{P}_{X_a}^{in} - \widehat{P}_{X_b}^{in} -{{(t-T_1)}\over T} \widehat{P}_{X_a}^{in}\\
\widehat{P}_{Y_a} (t) = \widehat{P}_{Y_a}^{in} - \widehat{P}_{Y_b}^{in} - {{(t-T_1)}\over T} \widehat{P}_{Y_a}^{in} \\
\widehat{P}_{X_b} (t) = \widehat{P}_{X_b}^{in} +{{(t-T_1)}\over T} \widehat{P}_{X_a}^{in} \\
\widehat{P}_{Y_b} (t) = \widehat{P}_{Y_b}^{in} +{{(t-T_1)}\over T} \widehat{P}_{Y_a}^{in}~,
\end{array}
\right.
\label{eqnoiseless5}$$ where $T=T_2-T_1$. Setting $\widehat{X}_a (T_2)= \widehat{X}_a^{out} ,\widehat{Y}_a (T_2)= \widehat{Y}_a^{out}$, etc, we obtain: $$\left\{
\begin{array}{l}
\widehat{X}_a^{out} = \widehat{X}_a^{in} - \widehat{X}_b^{in}\\
\widehat{Y}_a^{out} = \widehat{Y}_a^{in}- \widehat{Y}_b^{in}\\
\widehat{X}_b^{out} = \widehat{X}_a^{in} \\
\widehat{Y}_b^{out} = \widehat{Y}_a^{in} \\
\widehat{P}_{X_a}^{out} = - \widehat{P}_{X_b}^{in} \\
\widehat{P}_{Y_a}^{out} = - \widehat{P}_{Y_b}^{in} \\
\widehat{P}_{X_b}^{out} = \widehat{P}_{X_b}^{in} +\widehat{P}_{X_a}^{in} \\
\widehat{P}_{Y_b}^{out} = \widehat{P}_{Y_b}^{in} + \widehat{P}_{Y_a}^{in}~.
\end{array}
\right.
\label{eqnoiseless6}$$
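As with the BAE case, one can check (a sketch, not in the original text) that the map of Eq. (\[eqnoiseless6\]) is a canonical transformation of the commutative algebra and that the probe reads $\widehat{X}_a$ with exactly zero noise:

```python
import sympy as sp

hbar = sp.symbols('hbar', positive=True)

# Ordering (X_a, Y_a, X_b, Y_b, P_Xa, P_Ya, P_Xb, P_Yb); [z_i, z_j] = i*Omega[i,j]
Omega = sp.zeros(8, 8)
for q, p in [(0, 4), (1, 5), (2, 6), (3, 7)]:
    Omega[q, p], Omega[p, q] = hbar, -hbar

# Out-map of the noiseless transducer, Eq. (eqnoiseless6)
M = sp.zeros(8, 8)
M[0, 0], M[0, 2] = 1, -1   # X_a  -> X_a - X_b
M[1, 1], M[1, 3] = 1, -1   # Y_a  -> Y_a - Y_b
M[2, 0] = 1                # X_b  -> X_a
M[3, 1] = 1                # Y_b  -> Y_a
M[4, 6] = -1               # P_Xa -> -P_Xb
M[5, 7] = -1               # P_Ya -> -P_Yb
M[6, 4], M[6, 6] = 1, 1    # P_Xb -> P_Xa + P_Xb
M[7, 5], M[7, 7] = 1, 1    # P_Yb -> P_Ya + P_Yb

# Canonical structure is preserved ...
print(sp.simplify(M * Omega * M.T - Omega) == sp.zeros(8, 8))   # True

# ... and the probe reads X_a exactly: noise N(X_a) = X_b_out - X_a_in = 0
z = sp.Matrix(sp.symbols('X_a Y_a X_b Y_b P_Xa P_Ya P_Xb P_Yb'))
print(sp.expand((M * z)[2] - z[0]))   # 0
```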
We next resort to the noncommutative algebra, Eqs. (\[eq23.1\])-(\[eq23.4\]). At time $t=T_1$ one has (cf.(\[eqcomments9\]) with $G=1$): $$\left\{
\begin{array}{l}
\widehat{X}_a^{out} \sim \widehat{X}_a^{in} + {{\theta}\over\hbar} \widehat{P}_{Y_b}^{in}\\
\widehat{Y}_a^{out} \sim \widehat{Y}_a^{in} - {{\theta}\over\hbar} \widehat{P}_{X_b}^{in} \\
\widehat{X}_b^{out} \sim \widehat{X}_b^{in}+ \widehat{X}_a^{in} + {{\theta}\over 2 \hbar} \widehat{P}_{Y_b}^{in} \\
\widehat{Y}_b^{out}\sim \widehat{Y}_b^{in}+ \widehat{Y}_a^{in} - {{\theta}\over 2 \hbar} \widehat{P}_{X_b}^{in} \\
\widehat{P}_{X_a}^{out} \sim \widehat{P}_{X_a}^{in}- \widehat{P}_{X_b}^{in} - {{\eta }\over2 \hbar} \widehat{Y}_a^{in}\\
\widehat{P}_{Y_a}^{out} \sim \widehat{P}_{Y_a}^{in}- \widehat{P}_{Y_b}^{in} + {{\eta }\over2 \hbar} \widehat{X}_a^{in} \\
\widehat{P}_{X_b}^{out} \sim \widehat{P}_{X_b}^{in} + {{ \eta}\over\hbar} \widehat{Y}_a^{in} \\
\widehat{P}_{Y_b}^{out} \sim \widehat{P}_{Y_b}^{in} - {{ \eta}\over\hbar} \widehat{X}_a^{in}~.
\end{array}
\right.
\label{eqnoiseless7b}$$
Using the series above, Eq. (\[eqnoiseless4\]), considering only terms up to second order in $(t-T_1)$ [^5], setting $\widehat{X}_a (T_2)= \widehat{X}_a^{out}$, $\widehat{Y}_a (T_2)= \widehat{Y}_a^{out}$, etc., and considering again that the noncommutative parameters are small, ${\sqrt{\theta\eta}\over\hbar}\ll1$, one finally obtains:
$$\left\{
\begin{array}{l}
\widehat{X}_a^{out} = \widehat{X}_a^{in} - \widehat{X}_b^{in}+ {\theta\over\hbar}\left(\widehat{P}_{Y_b}^{in} + {3\over2} \widehat{P}_{Y_a}^{in}\right)\\
\widehat{Y}_a^{out} = \widehat{Y}_a^{in}- \widehat{Y}_b^{in}-{\theta\over\hbar}\left(\widehat{P}_{X_b}^{in} + {3\over2} \widehat{P}_{X_a}^{in}\right)\\
\widehat{X}_b^{out} = \widehat{X}_a^{in} +{\theta\over2\hbar} \widehat{P}_{Y_b}^{in}\\
\widehat{Y}_b^{out} = \widehat{Y}_a^{in}- {\theta\over2\hbar} \widehat{P}_{X_b}^{in} \\
\widehat{P}_{X_a}^{out} = - \widehat{P}_{X_b}^{in}-{\eta\over2\hbar} \widehat{Y}_{a}^{in} \\
\widehat{P}_{Y_a}^{out} = - \widehat{P}_{Y_b}^{in} +{\eta\over2\hbar} \widehat{X}_{a}^{in} \\
\widehat{P}_{X_b}^{out} = \widehat{P}_{X_b}^{in} +\widehat{P}_{X_a}^{in}+ {\eta\over\hbar}\left(\widehat{Y}_{a}^{in} - {3\over2} \widehat{Y}_{b}^{in}\right)\\
\widehat{P}_{Y_b}^{out} = \widehat{P}_{Y_b}^{in} + \widehat{P}_{Y_a}^{in}- {\eta\over\hbar}\left(\widehat{X}_{a}^{in} - {3\over2} \widehat{X}_{b}^{in}\right)~.
\end{array}
\right.
\label{eqnoiseless9}$$
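The noise on $\widehat{X}_a$ and the disturbance on $\widehat{P}_{X_a}$ can be read off Eq. (\[eqnoiseless9\]) mechanically; a small `sympy` sketch (treating the operators as commuting symbols, which suffices for reading off coefficients):

```python
import sympy as sp

theta, eta, hbar = sp.symbols('theta eta hbar', positive=True)
Xa, Ya = sp.symbols('X_a Y_a')
PXa, PXb, PYb = sp.symbols('P_Xa P_Xb P_Yb')

# Out-relations of Eq. (eqnoiseless9) needed for the X-sector
Xb_out = Xa + theta/(2*hbar)*PYb
PXa_out = -PXb - eta/(2*hbar)*Ya

# Probe M = (X_b, Y_b): noise on X_a and disturbance on P_Xa
noise = sp.expand(Xb_out - Xa)     # (theta / 2 hbar) P_Yb, nonzero for theta != 0
dist = sp.expand(PXa_out - PXa)    # -P_Xa - P_Xb - (eta / 2 hbar) Y_a
print(noise)
print(dist)
```

The nonvanishing `noise` is the noncommutative "noise" referred to in the text: it is proportional to $\theta$ and disappears in the commutative limit.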
So, one concludes that even in the noiseless case one has noncommutative corrections. Moreover, as we can see, the noncommutativity introduces a “noise" into the interaction, and so the transformation is no longer noiseless. If one considers a probe $\widehat{M}=(\widehat{X}_b,\widehat{Y}_b)$, then the noise and disturbance are $$\widehat{K}=\left({\theta\over2\hbar}\widehat{P}_{Y_b}^{in},\,-{\theta\over2\hbar}\widehat{P}_{X_b}^{in},\,-\widehat{P}_{X_a}^{in}-\widehat{P}_{X_b}^{in}-{\eta\over2\hbar}\widehat{Y}_{a}^{in},\,-\widehat{P}_{Y_a}^{in} -\widehat{P}_{Y_b}^{in} +{\eta\over2\hbar}\widehat{X}_{a}^{in} \right)~.
\label{eqnoiseless10}$$ Furthermore, in the configuration variables part, a disturbance also emerges, $$\begin{aligned}
&\widehat{D}(\widehat{X}_a)=-\widehat{X}_b^{in}+ {\theta\over\hbar}\left( \widehat{P}_{Y_b}^{in}+{3\over2}\widehat{P}_{Y_a}^{in}\right)~,\nonumber\\
&\widehat{D}(\widehat{Y}_a)=-\widehat{Y}_b^{in}- {\theta\over\hbar}\left( \widehat{P}_{X_b}^{in}+{3\over2}\widehat{P}_{X_a}^{in}\right)~. \label{eqnoiseless11}\end{aligned}$$ As discussed in Ref. [@Ozawa], $\widehat{D}(\widehat{X}_a)=0$ is a typical feature of BAE interactions, while $\widehat{N}(\widehat{X}_a)=0$ is a feature of noiseless interactions. Thus, one concludes that the noncommutative extension of a noiseless quadrature transducer transformation is neither noiseless nor BAE.
In terms of the OUP, for the noiseless case, the only non-vanishing term is $\sigma(\widehat{X})\chi(\widehat{P}_X)$, as only the disturbance is non-vanishing. In the noncommutative version, the noise becomes $$\begin{aligned}
\epsilon(\widehat{X}_a)&= \left\langle\left({\theta\over2\hbar} \widehat{P}_{Y_b}^{in}\right)^2\right\rangle^{1/2}\nonumber\\
& = {\theta\over2\hbar}\, k_3^{1/2}~,
\label{eqnoiseless12}\end{aligned}$$ where $k_3= \langle\left(\widehat{P}_{Y_b}^{in}\right)^2\rangle$. The disturbance is $$\begin{aligned}
\chi(\widehat{P}_{X_a})&= \left\langle\left(-\widehat{P}_{X_a}^{in}- \widehat{P}_{X_b}^{in}- {\eta\over2\hbar}\widehat{Y}_{a}^{in}\right)^2\right\rangle^{1/2}\nonumber\\
&=\left\langle\left(\widehat{P}_{X_a}^{in}+ \widehat{P}_{X_b}^{in}\right)^2\right\rangle^{1/2} \left(1+{\eta\over\hbar}{\langle\left\{\widehat{P}_{X_a}^{in}+ \widehat{P}_{X_b}^{in} ,\widehat{Y}_a^{in}\right\}\rangle\over\langle(\widehat{P}_{X_a}^{in}+ \widehat{P}_{X_b}^{in})^2\rangle}+{\eta^2\over4\hbar^2}{\langle(\widehat{Y}_a^{in})^2\rangle\over \langle(\widehat{P}_{X_a}^{in}+ \widehat{P}_{X_b}^{in})^2\rangle}\right)^{1/2}\nonumber\\
&=\chi_C(\widehat{P}_{X_a})\left(1+{\eta\over\hbar}k_4+{\eta^2\over4\hbar^2}k_5\right)^{1/2}~,
\label{eqnoiseless13}\end{aligned}$$ where $\chi_C(\widehat{P}_{X_a})=\langle(\widehat{P}_{X_a}^{in}+ \widehat{P}_{X_b}^{in})^2\rangle^{1/2}$ is the commutative part of the disturbance, $k_4={{\langle \left\{\widehat{P}_{X_a}^{in}\!\!+\!\! \widehat{P}_{X_b}^{in} ,\widehat{Y}_a^{in}\right\}\rangle}\over{\langle(\widehat{P}_{X_a}^{in}\!\!+\!\! \widehat{P}_{X_b}^{in})^2\rangle}}$ and $k_5={{\langle(\widehat{Y}_a^{in})^2\rangle}\over{\langle(\widehat{P}_{X_a}^{in}\!\!+\!\! \widehat{P}_{X_b}^{in})^2\rangle}}$. As the noncommutative parameter associated to the momenta, $\eta$, is presumably small [@Bertolami], then $$\chi_{NC}(\widehat{P}_{X_a})=\chi_C(\widehat{P}_{X_a}) \left(1+{\eta\over2\hbar}k_4\right)+O(\eta^2)~.
\label{eqnoiseless14}$$
Thus, at the lowest non-trivial order of the noncommutative parameters, the OUP becomes $${\theta\over2\hbar}\,k_3^{1/2}\left[\chi_C(\widehat{P}_{X_a})+\sigma(\widehat{P}_{X_a})\right] +\sigma(\widehat{X}_a)\,\chi_C(\widehat{P}_{X_a}) \left(1+{\eta\over2\hbar}k_4\right)\geq {\hbar\over2}~.
\label{eqnoiseless15}$$
As before one can find states that violate the commutative OUP, while satisfying the noncommutative version of the OUP, Eq. (\[eqnoiseless15\]).
5\. [*Discussion*]{}: In this work, Ozawa’s noise-disturbance relation is extended to the framework of phase-space NCQM. We first considered a BAE quadrature amplifier system. This system, which implements an independent intervention for the pair of quadrature operators $\widehat{X}_a,\widehat{P}_{X_a}$, ceases to do so once the phase-space noncommutative algebra is considered. Moreover, a second pair of quadrature operators, $\widehat{Y}_a,\widehat{P}_{Y_a}$, is found to be disturbed by a measurement of $\widehat{X}_a$, in contrast with what happens in ordinary quantum mechanics. We also found that, as expected, extra terms appear in the OUP.
As for noiseless quadrature transducers, noncommutativity introduces a noise term, and so the interaction is no longer noiseless. In fact, the noiseless case turns into a new form of interaction which is neither noiseless nor BAE.
Finally, we have shown, in both cases, that there are states that violate the OUP but are in agreement with the NCOUP. This shows that NCQM encompasses more states than standard QM. Thus, experimentally, a tiny imprint of noncommutativity could be identified in quantum systems if an effective deviation from the OUP were detected.
[*Acknowledgements*]{}: The work of CB is supported by Fundação para a Ciência e a Tecnologia (FCT) under the grant SFRH/BPD/62861/2009. The work of AEB is supported by the Brazilian Agency CNPq under the grant 300809/2013-1. The work of OB is partially supported by the FCT project PTDC/FIS/111362/2009. N.C. Dias and J.N. Prata have been supported by the FCT grant PTDC/MAT/099880/2008.
[99]{}
P. Busch, P. Lahti, R.F. Werner, Phys. Rev. Lett. [**111**]{} (2013) 160405.
P. Busch, P. Lahti and R.F. Werner, J. Math. Phys. [**55**]{} (2014) 042111.
M. Ozawa, “Heisenberg’s uncertainty relation: Violation and reformulation”, arXiv: 1402.5601 \[quant-ph\].
L. A. Rozema, A. Darabi, D. H. Mahler, A. Hayat, Y. Soudagar, and A. M. Steinberg, Phys. Rev. Lett. [**109**]{}, 100404 (2012).
G. Sulyok, S. Sponar, J. Erhart, G. Badurek, M. Ozawa and Y. Hasegawa, Phys. Rev. [**A 88**]{}, 022110 (2013).
F. Kaneda, S.-Y. Baek, M. Ozawa and Y. Hasegawa, Phys. Rev. Lett. [**112**]{} (2014) 020402.
W. Heisenberg, Z. Phys. [**43**]{} (1927) 172.
K. Korzekwa, D. Jennings, T. Rudolph, “Operational constraints on any state-dependent formulation of quantum error-disturbance trade-off relations”, arXiv: 1311.5506 \[quant-ph\].
M. Ozawa, Phys. Rev. Lett. [**60**]{} (1988) 385; M. Ozawa, in [*Squeezed and Nonclassical Light*]{}, edited by P. Tombesi and E.R. Pike (Plenum, New York, 1989) pp. 263-268; M. Ozawa, Phys. Lett. [**A 299**]{} (2002) 1.
M. Ozawa, Phys. Rev. [**A 67**]{} (2003) 042105.
C. Branciard, Proc. Natl. Acad. Sci. (USA) [**110**]{} (2013) 6742.
K. Fujikawa, Phys. Rev. [**A 85**]{} (2012) 062117.
M. Ringbauer, D. Biggerstaff, M. Broome, A. Fedrizzi, C. Branciard and A. White, arXiv: 1308.5688 \[quant-ph\].
M.R. Douglas, N.A. Nekrasov, Rev. Mod. Phys. [**73**]{} (2001) 977.
A. Connes, M. R. Douglas, A. Schwarz, JHEP [**02**]{} (1998) 003; N. Seiberg and E. Witten, JHEP [**09**]{} (1999) 032.
H.S. Snyder, Phys. Rev. [**71**]{} (1947) 38; Phys. Rev. [**72**]{} (1947) 68.
Letter of Heisenberg to Peierls (1930), Letter of Pauli to Oppenheimer (1946), Wolfgang Pauli, Scientific correspondence, Ed. Karl von Meyenn, Springer-Verlag, 1993.
C.N. Yang, Phys. Rev. [**72**]{} (1947) 874.
S.M. Carroll, J.A. Harvey, V.A. Kostelecký, C.D. Lane and T. Okamoto, Phys. Rev. Lett. [**87**]{} (2001) 141601.
O. Bertolami, L. Guisado, JHEP [**0312**]{} (2003) 013.
L. Freidel, E.R. Livine, in Proceedings of the 4th International Symposium “Quantum Theory and Symmetries”, Varna, Bulgaria, August 2005 (Heron Press, Sofia, 2006).
M.R. Douglas, C. Hull, JHEP [**02**]{} (1998) 008.
M. Rosenbaum, J.D. Vergara, Gen. Rel. Grav. [**38**]{} (2006) 607.
A. Connes, [*Noncommutative Geometry*]{} (Academic Press, London, 1994).
P. Martinetti, Mod. Phys. Lett. [**A 20**]{} (2005) 1315.
R.J. Szabo, Phys. Rept. [**378**]{} (2003) 207.
C. Bastos, O. Bertolami, N.C. Dias, J.N. Prata, Phys.Rev. [**D 78**]{} (2008) 023516; [**D 80**]{} (2009) 124038; [**D 82**]{} (2010) 041502; [**D 84**]{} (2011) 024005; P. Nicolini, A. Smailagic, E. Spallucci, Phys. Lett. [**B 632**]{} (2006) 547; O. Obregon, I. Quiros, Phys. Rev. [**D 84**]{} (2011) 044005; B. Malekolkalami, M. Farhoudi, Class. Quant. Grav. [**27**]{} (2010) 245009.
C. Bastos, O. Bertolami, N.C. Dias, J.N. Prata, Phys. Rev. [**D 86**]{} (2012) 105030.
C. Bastos, A. E. Bernardini, O. Bertolami, N.C. Dias, J.N. Prata, Phys. Rev. [**D 88**]{} (2013) 085013.
C. Bastos, A. E. Bernardini, O. Bertolami, N.C. Dias, J.N. Prata, Phys. Rev. [**A 89**]{} (2014) 042112.
M. Nakamura, “Canonical structure of noncommutative quantum mechanics as constraint system”, arXiv: 1402.2132 \[hep-th\].
C. Bastos, O. Bertolami, N.C. Dias, J.N. Prata, J. Math. Phys. [**49**]{} (2008) 072101; C. Bastos, N.C. Dias, J.N. Prata, Commun. Math. Phys. [**299**]{} (2010) 709; N.C. Dias, M. de Gosson, F. Luef, J.N. Prata, J. Math. Phys. [**51**]{} (2010) 072101; J. Math. Pures Appl. [**96**]{} (2011) 423.
O. Bertolami [*et. al*]{}, Phys. Rev. [**D 72**]{} (2005) 025010.
[^1]: E-mail: catarina.bastos@ist.utl.pt
[^2]: E-mail: alexeb@ufscar.br
[^3]: Also at Centro de Física do Porto, Rua do Campo Alegre 687, 4169-007 Porto, Portugal. E-mail: orfeu.bertolami@fc.up.pt
[^4]: Also at Grupo de Física Matemática, UL, Avenida Prof. Gama Pinto 2, 1649-003, Lisboa, Portugal. E-mail: ncdias@meo.pt, joao.prata@mail.telepac.pt
[^5]: Notice that terms beyond $(t-T_1)^2$, will be neglected as they are proportional to ${{\theta\eta}\over\hbar}$ and the noncommutative parameters are presumably small.
---
abstract: 'We describe the linear cosmological perturbations of a perfect fluid at the level of an action, providing thus an alternative to the standard approach based only on the equations of motion. This action is suited not only to perfect fluids with a barotropic equation of state, but also to those for which the pressure depends on two thermodynamical variables. By quantizing the system we find that 1) some perturbation fields exhibit a non-commutativity quite analogous to the one observed for a charged particle moving in a strong magnetic field, 2) local curvature and pressure perturbations cannot be measured simultaneously, 3) ghosts appear if the null energy condition is violated.'
author:
- Antonio De Felice
- 'Jean-Marc Gérard'
- Teruaki Suyama
title: 'Cosmological perturbations of a perfect fluid and non-commuting variables'
---
Introduction
============
The theory of cosmological perturbations (TCP) for a perfect fluid has always been an important issue in cosmology. It enables us to understand how small fluctuations seeded in the early universe eventually evolved into the present large scale structure. Also, TCP has been extremely useful to put constraints on various cosmological models.
TCP for a perfect fluid has been developed and studied at the level of the basic equations of motion, i.e., the Einstein equations of general relativity (GR) and the energy-momentum conservation law [@Kodama:1985bj; @Mukhanov:1990me]. Yet, TCP for a perfect fluid can also be studied at the level of the action. Although these two approaches are classically equivalent, the latter gives the following advantage. In TCP, one first has to perturb all the fields appearing in the equations of motion or in the action, such as the metric components and the energy density. However, as is well known, not all the perturbation fields are dynamical. Actually, GR with a perfect fluid has only one dynamical field for the scalar-type perturbation. But an identification of this field as well as a derivation of its closed evolution equation by means of the equations of motion alone are not straightforward.
The situation becomes worse when going to extended gravity models. For illustration, $f(R,G)$ theories ($R$ being the Ricci scalar, and $G$ the Gauss-Bonnet term) with a perfect fluid involve two dynamical fields for the scalar-tensor perturbation. In such theories, the usual approach based on the equations of motion requires a rather strong intuition because the closed evolution equations for those dynamical fields have to be extracted from rather complicated coupled differential equations. On the other hand, the action approach advocated in this paper allows a straightforward identification of the auxiliary fields just by checking the absence of any kinetic terms in the second order action. Once the auxiliary fields are found, they can easily be eliminated through their trivial equations of motion. What is then left is an action containing only the dynamical fields, from which we can derive the closed evolution equations. In [@DeFelice:2009ak; @DeFelice:2009wp], we have explicitly checked that the action approach indeed works for $f(R,G)$ gravity models with no matter and with a scalar field, respectively.
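As a toy illustration of this elimination procedure (not from the paper), consider a Lagrangian with one dynamical variable $q$ and one auxiliary variable $\lambda$ that appears without time derivatives; its equation of motion is purely algebraic and can be substituted back:

```python
import sympy as sp

qdot, q, lam = sp.symbols('qdot q lam')

# Toy Lagrangian: lam enters with no kinetic term, so it is auxiliary
L = sp.Rational(1, 2)*qdot**2 - lam*q - sp.Rational(1, 2)*lam**2

# Its Euler-Lagrange equation is algebraic: dL/dlam = 0  =>  lam = -q
lam_sol = sp.solve(sp.diff(L, lam), lam)[0]

# Substituting back leaves a Lagrangian for the dynamical field alone
L_reduced = sp.expand(L.subs(lam, lam_sol))
print(lam_sol, L_reduced)   # lam = -q, L = qdot**2/2 + q**2/2
```

The same mechanics, applied to the second-order perturbed action, is what removes the non-dynamical metric and fluid perturbations.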
In this paper, we want to describe first-order TCP for GR with a perfect fluid at the level of the action, in a way consistent with the principles of thermodynamics. To this end, we use the action for a perfect fluid proposed by Schutz [@Schutz] and do the quantization of the perturbations, which might also be of some interest beyond a pure academic point of view. Indeed, the quantization of the background universe with a perfect fluid has been discussed by many authors [@Pedram:2007ud; @Pedram:2007er; @Alvarenga:2001nm; @Monerat:2005mx; @Lemos:1995qu; @Ivashchuk:1995uy; @Peter:2006id; @Brown:1989vb]. But here, we prove that quantizing the perturbation fields leads to non-standard commutation relations and, consequently, to unexpected effects upon the physical properties of any perfect fluid in quantum cosmology. A TCP action approach for fluids was first introduced in [@Garriga] in the context of k-inflation. This approach was also taken in [@Boubekeur:2008kn] to study non-linear cosmological perturbations in the matter dominated universe. However, the fluid discussed there is the so-called scalar fluid, whose energy-momentum tensor is completely written in terms of a scalar field and its derivative. By construction, the scalar fluid cannot have vector-type perturbations. Although there is an exact correspondence between a perfect fluid and the scalar fluid for the scalar-type perturbation at the linear order, it is no longer true for higher order perturbations because of the mixture of scalar and vector-type perturbations. On the other hand, Schutz’s action, which we use here, describes a genuine perfect fluid. Therefore, the action exactly describes the dynamics of a perfect fluid at any order. As far as we know, this is the first time TCP is fully developed within Schutz’s action. We believe our approach is suited for studying cosmology in extended gravity models.
Before introducing the action for a perfect fluid, let us briefly review the thermodynamics needed to describe it. In this paper, we consider a "single" fluid, that is, a fluid whose thermodynamical quantities are completely determined by only two variables, e.g. the chemical potential $\mu$ and the entropy per particle $s$ [@Misner]. In this sense one first needs to give two equations of state, $n=n(\mu,s)$ and $T=T(\mu,s)$, where $n$ is the number density and $T$ is the temperature of the fluid. Using then the first law of thermodynamics, ${\mathrm{d}}p=n{\mathrm{d}}\mu-nT{\mathrm{d}}s$, one obtains the pressure as $p=p(\mu,s)$. Finally, the energy density is given by $\rho\equiv\mu n-p$. This is enough to describe the system thermodynamically. Single fluids also satisfy particle number conservation, namely $N=nV$ is a constant. The second law of thermodynamics imposes ${\mathrm{d}}(Ns)=N{\mathrm{d}}s\geq0$ such that ${\mathrm{d}}s=0$ at equilibrium.
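These relations are easy to check symbolically. As a quick illustration (our own sanity check, with a hypothetical radiation-like equation of state $p\propto\mu^4$), the first law at fixed $s$ gives $n=({\partial}p/{\partial}\mu)_s$, and $\rho=\mu n-p$ then reproduces the familiar result $\rho=3p$:

```python
import sympy as sp

mu, s = sp.symbols('mu s', positive=True)
g = sp.Function('g')  # arbitrary entropy dependence (illustrative)

# hypothetical radiation-like equation of state p(mu, s)
p = mu**4 * g(s)
n = sp.diff(p, mu)   # from dp = n dmu - n T ds at fixed s
rho = mu*n - p       # energy density rho = mu n - p

print(sp.simplify(rho - 3*p))  # -> 0, i.e. rho = 3 p
```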
A single perfect fluid is also defined through its stress-energy tensor $T_{\mu\nu}=(\rho+p)u_\mu u_\nu+pg_{\mu\nu}$. In a Friedmann-Lemaître-Robertson-Walker (FLRW) background the conservation of energy-momentum, $T^{\mu\nu}{}_{;\nu}=0$, implies that $\dot\rho+3H(\rho+p)=0$ or, equivalently, ${\mathrm{d}}(\rho V)+p\,{\mathrm{d}}V=0$, since $V\propto a^3$, where $a$ is the cosmological scale factor and $H\equiv\dot a/a$ the Hubble parameter. This, in turn, implies that ${\mathrm{d}}N=0$ and ${\mathrm{d}}s=0$. In any FLRW universe we thus have $na^3=N=\mathrm{constant}$ and $\dot s=0$.
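For a constant equation-of-state parameter $w=p/\rho$ (an illustrative assumption, not required by the text), the continuity equation implies the familiar scaling $\rho\propto a^{-3(1+w)}$; in terms of $\ln a$ it reads $a\,{\mathrm{d}}\rho/{\mathrm{d}}a+3(1+w)\rho=0$, which the candidate solution satisfies identically:

```python
import sympy as sp

a, w, rho0 = sp.symbols('a w rho0', positive=True)

# candidate solution rho(a) for constant w = p/rho (illustrative)
rho = rho0 * a**(-3*(1 + w))

# continuity equation with ln(a) as the evolution variable:
#   a drho/da + 3(1+w) rho = 0
residual = a*sp.diff(rho, a) + 3*(1 + w)*rho
print(sp.simplify(residual))  # -> 0
```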
Action
======
The action considered here has been introduced by Schutz [@Schutz] and is defined as follows $$\label{eq:act1}
S=\int d^4x\sqrt{-g}\left[\frac R{16\pi G}+p(\mu,s)\right] .$$ Alternative functionals have been proposed, all being physically equivalent as shown in [@sorkino]. We choose the version (\[eq:act1\]) as it is the most convenient for our purpose. The four-velocity of the perfect fluid is defined via potentials [@Schutz]: $$\label{eq:vel1}
u_{\nu}=\frac1\mu\, ( {\partial}_{\nu}\ell+\theta{\partial}_{\nu} s+A{\partial}_{\nu} B)\, ,$$ where $\ell$, $\theta$, $A$ and $B$ are all scalar fields. The normalization for the four-velocity, $u^\nu u_\nu=-1$, gives $\mu$ in terms of the other fields. The fundamental fields over which the action (\[eq:act1\]) will be varied are $g_{\mu\nu}$, $\ell$, $\theta$, $s$, $A$, and $B$.
Having chosen the Lagrangian for gravity to be the one of GR, we recover $G_{\mu\nu}=8\pi G\, T_{\mu\nu}$ by varying with respect to the metric field. Besides the conservation of particle number and entropy already discussed, the other equations of motion derived from Eq. (\[eq:act1\]) are [@Schutz]: $$\label{eq:therm}
u^\alpha{\partial}_\alpha \theta=T,\quad u^\alpha {\partial}_\alpha A=0,\quad u^\alpha{\partial}_\alpha B=0.$$ In a FLRW universe, $u_i=0$ and $u_0=-1$ such that the solutions to Eq. (\[eq:therm\]) are simply $$\label{eq:backA}
A=A(\vec x)\, ,\quad B=B(\vec x)\, ,\quad\theta=\int^t T(t'){\mathrm{d}}t'+\tilde\theta(\vec x)\, .$$ There is a complete freedom for the functions $A$, $B$, and $\tilde\theta$ [^1], any choice leading to the same physical background. We will take advantage of this freedom to simplify our study of the scalar and vector perturbations.
Perturbations
=============
Once and for all, we work within a spatially flat FLRW universe. At first order in perturbation theory we have $\delta u_0=\frac12\delta g_{00}$ and $$\label{eq:Perto}
\delta u_i = \partial_i \left( \frac{\delta \ell + \theta \delta s + A \delta B}\mu \right)+\frac{W_i}\mu\, ,$$ with $$W_i\equiv B_{, i} \delta A -A_{,i} \delta B - \tilde{\theta}_{, i}\delta s\equiv\partial_i w_s+\bar u_i\, .
\label{eq:wi}$$ Note that $W_i$ is gauge invariant since, following ref. [@Weinberg], the perturbation fields transform respectively as $$\begin{aligned}
\delta\ell&\to\delta\ell+\mu\,\xi^0+A\,\partial_i B\,\xi^i,\\
\delta s&\to\delta s,\qquad \delta \theta\to\delta \theta-T\,\xi^0-\partial_i\tilde\theta\,\xi^i,\\
\delta A&\to\delta A-\partial_iA\,\xi^i,\qquad \delta B\to\delta B-\partial_iB\,\xi^i,
\label{eq:deltB}\end{aligned}$$ under the gauge transformation $x^\alpha\to x^\alpha+\xi^\alpha$. In Eq. (\[eq:wi\]) we have decomposed $W_i$ into a scalar mode ($w_s$) and a divergence-less vector mode ($\bar u_i$). So, in general, $W_i$ will generate both scalar and vector perturbations. However, we can efficiently use the freedom of choosing the time-independent background quantities $A$, $B$ and $\tilde\theta$ given in Eq. (\[eq:backA\]) to disentangle them. Any such choice does not fix a gauge, as no conditions are imposed on the perturbation fields themselves.
Scalar type perturbations
-------------------------
Let us simply consider the choice $$\label{eq:choi}
A=B=\tilde\theta=0\, ,$$ to remove the vector perturbations arising from $W_i$. Regarding the metric, $\delta g_{00}$ and $\delta g_{0i}$ are auxiliary fields such that the only scalar component which will be dynamical is the curvature perturbation $\phi$ defined by $\delta g_{ij}=2a^2\phi\,\delta_{ij}$, with $\phi \to \phi-H \xi^0$ under a gauge transformation.
We introduce the new quantity $v=\delta\ell+\theta(t)\delta s$ such that $\delta u_i=\partial_i \left( v/\mu \right)$. Therefore, $v$ represents the velocity perturbation of a perfect fluid. We then define two gauge invariant fields, $\Phi=\phi+Hv/\mu$ and $\delta\bar\theta=\delta\theta+Tv/\mu$, to expand the action (\[eq:act1\]) at second order, in a gauge-independent way: $$\begin{aligned}
S_S&=\int {\mathrm{d}}t{\mathrm{d}}^3\vec x\left\{ \frac{a^3Q_S}2\left[\dot\Phi^2
-\frac {c_s^2}{a^2}(\vec\nabla\Phi)^2\right]
+C\delta s\dot\Phi-\frac D2 \delta s^2\right.\notag\\
&\qquad\left.{}
-E(\delta\bar\theta\dot{\delta s}
-\delta s\dot{\delta\bar\theta}
+\delta A\dot{\delta B}
-\delta B\dot{\delta A})\right\}.
\label{eq:act2}\end{aligned}$$ The perturbation fields $\Phi$ and $\delta\bar\theta$ are related to the curvature and temperature, respectively. In the comoving gauge $v=0$, where the perfect fluid remains static, $\Phi=\phi$ and ${\delta\bar\theta}=\delta \theta$. The coefficients of the kinetic terms are given by[^2] $$\begin{aligned}
Q_S&=\frac{\rho+p}{c_s^2 H^2}\, ,&\qquad
c_s^2&\equiv\frac{\dot p}{\dot \rho}=
\left(\frac{{\partial}p}{{\partial}\rho}\right)_s , \label{eq:Qs}\end{aligned}$$ whereas the remaining coefficients are $$\begin{aligned}
C&=\frac{na^3}{H}\left[\mu\left(\frac{{\partial}T}{{\partial}\mu}\right)_{\!s}-T\right],
&\qquad
E&=\frac{na^3}{2}\, ,\end{aligned}$$ and $$\begin{aligned}
D&=na^3\left[T\left(\frac{{\partial}T}{{\partial}\mu}\right)_{\!s}+\left(\frac{{\partial}T}{{\partial}s}\right)_{\!\mu}\right] .\end{aligned}$$ The general solutions for $\delta s$, $\delta A$, and $\delta B$ are simply their initial values, since Eq. (\[eq:act2\]) forces these fields to be time-independent. As a consequence, the nontrivial equations of motion are $$\begin{aligned}
\label{eq:gPhi}
& \frac{1}{a^3Q_S}\frac{{\mathrm{d}}}{{\mathrm{d}}t}(a^3Q_S\dot\Phi)-\frac{c_s^2}{a^2}\nabla^2\Phi=-\frac{\dot C}{a^3Q_S}\,\delta s\,,\\
& n a^3\dot{\delta\bar\theta}-D\delta s+C\dot\Phi=0\, .\end{aligned}$$ These equations exactly coincide with those derived by perturbing the Einstein equations and the conservation law for the entropy, as it should be. In general, $\Phi$ is sourced by $\delta s$. For example, if the perfect fluid is an ideal non-relativistic gas characterized by $T=\frac25(\mu-m_0)$, $m_0$ being the mass of the particles, then ${\dot C} \neq 0$ and we have to solve two coupled equations to know the time evolution of $\Phi$ and $\delta{\bar\theta}$.
However, if $T=f(s)\,\mu$, which is equivalent to having a barotropic equation of state $p=p(\rho)$ [^3], then $C=0$. (Note that both radiation and dust fulfill this condition, while a cosmological constant has vanishing $Q_S$, so that no contribution to the perturbations arises, as is well known.) In this case, the sign of $Q_S$ cannot be determined from the usual approach based on the equations of motion alone. On the other hand, the action approach advocated here leads to an exact expression for $Q_S$, which will be used to avoid ghost degrees of freedom when quantizing the perturbations.
We also conclude that the field $\Phi$ completely decouples from $\delta s$ and propagates with sound speed $c_s$ if $C=0$ and $c_s^2>0$.
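The decoupled case can be illustrated numerically. In a radiation-dominated background, $a\propto t^{1/2}$, $c_s^2=1/3$ and $a^3Q_S\propto t^{3/2}$, so a single Fourier mode of Eq. (\[eq:gPhi\]) with $C=0$ obeys $\ddot\Phi+\frac{3}{2t}\dot\Phi+\frac{k^2}{3t}\Phi=0$ (using $a^2=t$ in arbitrary units). The sketch below (our own illustrative check, not part of the original analysis) confirms that a super-horizon mode stays frozen:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Radiation era: a ~ t^{1/2}, c_s^2 = 1/3, a^3 Q_S ~ t^{3/2}; with C = 0
# a Fourier mode of Eq. (gPhi) reads Phi'' + (3/2t) Phi' + (k^2/3t) Phi = 0.
def rhs(t, y, k):
    phi, dphi = y
    return [dphi, -(3.0/(2.0*t))*dphi - (k**2/(3.0*t))*phi]

k = 0.01  # mode that stays outside the sound horizon over the run
sol = solve_ivp(rhs, [1, 100], [1.0, 0.0], args=(k,), rtol=1e-10, atol=1e-12)
print(sol.y[0, -1])  # Phi remains frozen near its initial value 1
```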
Vector type perturbations
-------------------------
To arrive at the desired action via the shortest path, let us first assume that all the perturbation variables propagate only in one direction, say the $z$-direction. This should be allowed, as we know that perturbations with different wavenumber vectors do not mix in a FLRW universe. Once we obtain the action for this particular mode, we can then easily infer the general action.
The vector contributions come only from the component $\bar u_i$ of $W_i$ defined in Eq. (\[eq:wi\]). It is not easy to extract $\bar u_i$ from this equation since the functions $A$, $B$ and $\tilde{\theta}$ depend in general on the spatial coordinates. Yet, taking again advantage of the freedom to select these background functions, we can make the simplest choice that contains all the information needed for the vector modes, namely $$A=\tilde\theta=0\,,\quad B_{,i}=b_i\, ,$$ where ${\vec b}=(b,0,0)$ is a constant vector orthogonal to the $z$-direction. With this assumption, we have $w_s=0$ and $\bar u_i=b_i\,\delta A(t,z)$ for $W_i$.
Regarding the vector perturbation of the metric, we follow again ref. [@Weinberg] and denote $\delta g_{0i}=aG_i$, and $\delta g_{ij}=a^2(C_{i,j}+C_{j,i})$, with transverse conditions $G_{i,i}=C_{i,i}=0$, or, in our setup, $G_z=C_z=0$. Then, we impose the gauge condition $\delta B=0$. However, this condition alone does not completely fix the gauge, as only the component of $\xi^i$ parallel to $\vec b$ gets frozen by Eq. (\[eq:deltB\]). Therefore we can still choose $\xi^y$ such that $C_y=0$, and $\vec C=\vec C^\parallel$ is parallel to $\vec b$. Finally, we find that the action for the vector perturbations is given by $$\begin{aligned}
\label{eq:VC1}
S_V&=\int{\mathrm{d}}^4 x \left\{ \frac{a}{32\pi G} \bigl[ {\left( \partial_z V_x \right)}^2+{\left( \partial_z V_y \right)}^2 \bigr]+na^3 b\delta A {\dot C_x} \right.\notag \\
&\qquad\left.{} + na^2 b V_x \delta A+2\pi Gb^2n^2a\delta A^2/\dot H \right\},\end{aligned}$$ where $V_i \equiv G_i-a {\dot C_i}$ is a gauge invariant field. This action can be immediately extended to the general case where the perturbation variables depend now on $(x,y,z)$. In the gauge $\delta B=0$, the result is given by $$\begin{aligned}
\label{eq:VC2}
S_V&=\int{\mathrm{d}}^4 x \left[ \frac{a}{32\pi G} \left( \partial_j V_i \right)\left( \partial_j V_i \right)+a^3(\rho+p) {\dot C_i} \delta u_i \right.\notag \\
&\qquad\left.{} +a^2 (\rho+p) V_i \delta u_i-\tfrac{1}{2}\,a\,(\rho+p)\delta u_i \delta u_i \right],\end{aligned}$$ where we substituted $\delta u_i$ for $b_i\delta A/\mu$. Variations with respect to $V_i$ and $C_i$ yield the following equations, $$\begin{aligned}
\label{eq:vectN}
\triangle V_i&=16\pi G a(\rho+p) \delta u_i, \\
\frac{{\mathrm{d}}}{{\mathrm{d}}t}[(\rho&+p)a^3\delta u_i]=0,
\label{eq:vectT}\end{aligned}$$ respectively. Again, these equations exactly coincide with those derived by perturbing the Einstein equations and the energy-momentum conservation law [@Weinberg]. This provides thus a cross-check that the calculations presented here are correct. In fact, the main novelty in our approach is to be found when we quantize the system.
To summarize this section, the known results on first-order TCP for a perfect fluid can be directly derived from variations of the classical action given in Eq. (\[eq:act1\]). Note that a similar action approach has already been performed in [@Garriga; @Boubekeur:2008kn]. Yet, the system studied there cannot represent a perfect fluid. Indeed, as already mentioned in the introduction, the action proposed in [@Garriga; @Boubekeur:2008kn] is built from a real scalar field, so this system, by construction, cannot have vector perturbations, the only new perturbed field being the scalar one. The system studied there is therefore not a perfect fluid; if it were, one would wrongly conclude that a perfect fluid has no vector perturbations. It can be thought of as a scalar fluid but, once more, not as a perfect fluid. It is simply a different physical system, whose squared sound speed $c_s^2$ is not equal to $\dot p/\dot\rho$.
Quantization
============
The most important advantage of the action approach proposed in this paper is, of course, that it allows us to quantize the system. Although the inhomogeneities of the present universe, such as the galaxy distribution, are clearly described by the classical theory, the quantization of a perfect fluid may have something to do with the early universe if the seeds for structure formation are provided by quantum fluctuations of fields generated during inflation. Yet, besides its practical utility, our action approach also opens new theoretical prospects, as discussed below. In the following, we will again treat the quantization for the scalar and vector type perturbations separately.
Scalar type perturbations
-------------------------
To quantize the scalar perturbations, let us first introduce the canonical field $\psi\equiv\sqrt{a^3 Q_S} \Phi$. To avoid the appearance of a ghost, we assume that $Q_S$ is positive. According to Eq. (\[eq:Qs\]), this means that $(\rho+p)/c_s^2 >0$. Such a constraint, together with the stability of the perturbations, $c^2_s>0$, leads to the null energy condition $\rho+p>0$. Using the new variable $\psi$, the action (\[eq:act2\]) is rewritten as $$\begin{aligned}
\label{eq:act2.5}
S_S=\int {\mathrm{d}}^4x&\left[ \frac{\dot\psi^2}2
-\frac {c_s^2}{2a^2}(\vec\nabla\psi)^2
+C_1 \delta s\dot\psi+C_2 \delta s \psi\right.\notag\\
&-\left.\frac{N}2(\delta\bar\theta\dot{\delta s}
-\delta s\dot{\delta\bar\theta})-\frac D2 \delta s^2\right] ,\end{aligned}$$ where we have neglected $\delta A$ and $\delta B$ as they do not contribute to the Hamiltonian. The field $\psi$ has a canonical kinetic term, whereas the quadratic terms for $\delta s$ and $\delta \bar\theta$ are at most linear in their time derivatives. Yet, it is known [@Jackiw] that a consistent quantization of such a singular Lagrangian can be done provided one introduces the following equal-time commutation conditions, $$\begin{aligned}
\bigl[\hat{\psi}(t,\vec x),\hat{\pi}(t,\vec y)\bigr]&=i\delta(\vec x-\vec y)\, ,\label{comm1}\\
\bigl[\hat{\delta s}(t,\vec x),\hat{\delta\bar\theta}(t,\vec y)\bigr]&=-\frac{i}{N}\delta(\vec x-\vec y)\, . \label{comm2}\end{aligned}$$ All the other commutators are zero and $\pi$ is the canonical conjugate momentum of $\psi$. The corresponding Hamiltonian is given by $$\begin{aligned}
\label{eq:hamiltonian}
{\hat H}=\int {\mathrm{d}}^3\vec x&\left[\frac12\,{\left({\hat \pi}-C_1 \hat{\delta s} \right)}^2+
\frac {c_s^2}{2a^2}(\vec\nabla {\hat \psi})^2\right.\notag\\
&-\left.C_2 \hat{\delta s} {\hat \psi}
+\frac{D}{2} {\hat {\delta s}}^2 \right].\end{aligned}$$ One can easily check that the Heisenberg equations, with the help of the commutation relations, yield the same equations of motion as the classical ones derived from the variation of Eq. (\[eq:act2.5\]).
The relation (\[comm2\]) shows that $\hat{\delta s}$ and $\hat{\delta\bar\theta}$ become non-commuting variables at the quantum level. In Quantum Field Theory, different fields (i.e., different particles) can be simultaneously observed at the same position. Here the perturbation fields related to the entropy and the temperature, to which we may individually attribute arbitrary numbers at the classical level, cannot be measured at the same space-time point. That this non-commutativity arises from the action of a perfect fluid is thus intriguing.
We should concede that consequences directly linked to present observations are missing. However, at this level it is quite interesting to compare the action (\[eq:act2.5\]) with the one of the Landau problem [@Jackiw], an archetype of non-commutative geometry. Regarding $\hat{\delta s}$ and $\hat{\delta\bar\theta}$, the action (\[eq:act2.5\]) is essentially the same as the one for a charged particle moving on a two-dimensional surface with a constant magnetic field background in the transverse direction: $$\label{eq:LPR}
S=\int {\mathrm{d}}t\left[\frac m2(\dot x^2+\dot y^2)-\frac{\cal B}2(\dot x y-\dot y x)-V(x,y)\right].$$ Within this analogy, the perturbation fields $(\delta s, \delta\bar\theta)$ correspond to the $(x,y)$ space coordinates of the particle, and the number of particles $N=na^3$ plays the role of the constant magnetic field ${\cal B}$. Interestingly enough, while the rather heuristic non-commutative relation $[\hat x,\hat y]=-i/{\cal B}$ in the Landau problem [@Jackiw] holds only in the absence of the kinetic term in Eq. (\[eq:LPR\]), i.e. in the large magnetic field limit, the non-commutative relation (\[comm2\]) of a perfect fluid is exact for any finite number of particles. So, perfect fluids provide a nice example of non-commutativity between different fields.
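The Landau-problem relation quoted above can be verified with Dirac's constrained-quantization procedure: dropping the kinetic term, the momenta conjugate to $x$ and $y$ yield two second-class constraints, and the Dirac bracket gives $\{x,y\}_D=-1/{\cal B}$, hence $[\hat x,\hat y]=-i/{\cal B}$. A symbolic sketch of this computation (our own illustration):

```python
import sympy as sp

x, y, px, py = sp.symbols('x y p_x p_y', real=True)
B = sp.symbols('B', positive=True)
qs, ps = [x, y], [px, py]

def pb(f, g):
    """Canonical Poisson bracket on the phase space (x, p_x), (y, p_y)."""
    return sum(sp.diff(f, q)*sp.diff(g, p) - sp.diff(f, p)*sp.diff(g, q)
               for q, p in zip(qs, ps))

# second-class constraints of L = -(B/2)(xdot*y - ydot*x) - V(x, y)
phi = [px + B*y/2, py - B*x/2]
C = sp.Matrix(2, 2, lambda i, j: pb(phi[i], phi[j]))
Cinv = C.inv()

def dirac(f, g):
    """Dirac bracket {f, g}_D built from the constraint matrix C."""
    return sp.simplify(pb(f, g) - sum(pb(f, phi[i])*Cinv[i, j]*pb(phi[j], g)
                                      for i in range(2) for j in range(2)))

print(dirac(x, y))  # -> -1/B, i.e. [x, y] = -i/B upon quantization
```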
The other non-commutation relation (\[comm1\]) leads also to an interesting physical consequence. By using once more the Einstein equations and the energy-momentum conservation law, we find that the pressure perturbation in the comoving gauge ($v=0$) is given by $\hat{\delta p}=-(\rho+p){\dot {\hat \phi}}/H$. Then, the commutator between $\phi$ and $\delta p$ becomes $$\bigl[\hat{\phi}(t,\vec x),\hat{\delta p}(t,\vec y)\bigr]=-i c_s^2 H\delta(\vec x-\vec y)/a^3.$$ Consequently, local curvature and pressure perturbations cannot be measured simultaneously.
Vector type perturbations
-------------------------
Time derivatives of $V_i$ and $\delta u_i$ do not appear in the action (\[eq:VC2\]). Therefore, those are auxiliary fields which can be eliminated through their equations of motion. The action (\[eq:VC2\]) then becomes a functional which depends only on $C_i$. To make this action canonical, we introduce a new variable $F_i ({\vec k},t)=\sqrt{a^3 Q_V (k,t)}\, C^{\parallel}_i({\vec k},t)$, where $C^\parallel_i({\vec k},t)$ is the Fourier transform of $C^\parallel_i({\vec x},t)$ and $Q_V$ is given by $$Q_V (k,t)=\frac{a^2 k^2 (\rho+p)}{k^2+16\pi G a^2 (\rho+p)}.$$ To avoid the appearance of ghosts, $Q_V$ must be positive. So, as for the scalar modes, we require $\rho+p>0$, i.e. the null energy condition to hold. In terms of $F_i$, the canonical action in Fourier space is given by $$S_V= \int {\mathrm{d}}t {\mathrm{d}}^3k \, \left( \tfrac{1}{2} \dot{F_i^\ast} \dot{F_i} -\tfrac{1}{2} m_k^2 F_i^\ast F_i \right),$$ with $$m_k^2=-\frac12\frac{{\mathrm{d}}^2}{{\mathrm{d}}t^2} \log (a^3 Q_V)-\frac{1}{4} {\left( \frac{{\mathrm{d}}}{{\mathrm{d}}t} \log (a^3 Q_V) \right)}^2.$$ Now the quantization is done by imposing the following canonical condition for $F_i$ and its conjugate momentum $$\bigl[\hat{F_i}(t,\vec k),{\hat{\pi_j}}^\dagger (t,\vec k')\bigr]=i \delta(\vec k-\vec k') \left( \delta_{ij}-\frac{k_i k_j}{k^2} \right). \label{comvec}$$ The corresponding Hamiltonian is given by $${\hat H}= \int {\mathrm{d}}^3k \left( \tfrac{1}{2} {\hat \pi_i}^\dagger {\hat \pi_i} +\tfrac{1}{2} m_k^2 {\hat F_i}^\dagger \hat F_i \right),$$ and the evolution of the operators is given by the Heisenberg equation with the help of the commutation relation (\[comvec\]). The quantum version of Eq. (\[eq:vectN\]) implies $\bigl[\hat{V_i}(t,\vec x),\hat{\delta u_j}(t,\vec y)\bigr]=0$. Therefore, the gauge-invariant metric perturbation and the vorticity of the perfect fluid can be measured at the same time, at the same position.
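The expression for $Q_V$ can be cross-checked symbolically: taking one Fourier mode of the action (\[eq:VC2\]) with $\partial_j\to k$ and the index structure collapsed to a single component (a schematic check of ours, not a full derivation), eliminating the auxiliary fields via their equations of motion leaves exactly the kinetic term $\tfrac12\,a^3 Q_V\,\dot C^2$:

```python
import sympy as sp

a, G, k, f, Cd = sp.symbols('a G k f Cdot', positive=True)  # f = rho + p
u, V = sp.symbols('u V', real=True)

# one Fourier mode of the vector action (VC2), schematically: partial_j -> k
L = (a*k**2/(32*sp.pi*G))*V**2 + a**3*f*Cd*u + a**2*f*V*u \
    - sp.Rational(1, 2)*a*f*u**2

# V and u carry no time derivatives: eliminate them via dL/dV = dL/du = 0
sol = sp.solve([sp.diff(L, V), sp.diff(L, u)], [V, u], dict=True)[0]
Lred = sp.simplify(L.subs(sol))

QV = a**2*k**2*f/(k**2 + 16*sp.pi*G*a**2*f)
print(sp.simplify(Lred - sp.Rational(1, 2)*a**3*QV*Cd**2))  # -> 0
```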
As for the tensor perturbations, they come only from the metric perturbation. The action for the tensor perturbations and its quantum aspects have been widely studied in the literature (e.g. [@Weinberg]), mainly in connection with the quantum generation during inflation. So, we do not discuss it any longer.
Conclusions
===========
We have studied the theory of cosmological perturbations for a perfect fluid in GR at the action level. Starting from the action proposed by Schutz, we first reproduced the known results derived from the equations of motion alone. This enabled us to illustrate the advantages of our action approach at the classical level. Quantizing then the perturbation fields, we found that some of them do not commute, thus leading to a non-commutative structure among the fields. In particular, we pointed out that a simultaneous measurement of local curvature perturbations and pressure inhomogeneities is not allowed at the quantum level. Finally, we proved that both the null energy condition and a positive squared sound speed have to hold at all times in order to avoid ghost degrees of freedom.
Another advantage of our action approach is that one can easily obtain the second order action depending only on the dynamical fields. Such an approach is thus suited to study cosmology in extended gravity models with more than one dynamical field. In particular, we expect that the approach presented here will be quite useful for the perturbation analysis of $f(R,G)$ gravity models [@ANTONIO], or for the treatment of non-gaussianities for the entropy and vector perturbations on perfect fluids following [@Boubekeur:2008kn].
We thank Sean Murray for helpful discussions. This work is supported by the Belgian Federal Office for Scientific, Technical and Cultural Affairs through the Interuniversity Attraction Pole P6/11.
[99]{} H. Kodama and M. Sasaki, Prog. Theor. Phys. Suppl. [**78**]{} (1984) 1.
V. F. Mukhanov, H. A. Feldman and R. H. Brandenberger, Phys. Rept. [**215**]{}, 203 (1992).
A. De Felice and T. Suyama, JCAP [**0906**]{}, 034 (2009) \[arXiv:0904.2092 \[astro-ph.CO\]\].
A. De Felice and T. Suyama, Phys. Rev. D [**80**]{}, 083523 (2009) \[arXiv:0907.5378 \[astro-ph.CO\]\].
B. F. Schutz, Phys. Rev. D [**2**]{}, 2762 (1970).
P. Pedram and S. Jalalzadeh, Phys. Lett. B [**659**]{}, 6 (2008) \[arXiv:0711.1996 \[gr-qc\]\].
P. Pedram, S. Jalalzadeh and S. S. Gousheh, Phys. Lett. B [**655**]{}, 91 (2007) \[arXiv:0708.4143 \[gr-qc\]\].
F. G. Alvarenga, J. C. Fabris, N. A. Lemos and G. A. Monerat, Gen. Rel. Grav. [**34**]{}, 651 (2002) \[arXiv:gr-qc/0106051\].
G. A. Monerat, E. V. Correa Silva, G. Oliveira-Neto, L. G. Ferreira Filho and N. A. Lemos, Phys. Rev. D [**73**]{}, 044022 (2006) \[arXiv:gr-qc/0508086\].
N. A. Lemos, J. Math. Phys. [**37**]{}, 1449 (1996) \[arXiv:gr-qc/9511082\].
V. D. Ivashchuk and V. N. Melnikov, Grav. Cosmol. [**1**]{}, 133 (1995) \[arXiv:hep-th/9503223\].
P. Peter, E. J. C. Pinho and N. Pinto-Neto, Phys. Rev. D [**73**]{}, 104017 (2006) \[arXiv:gr-qc/0605060\].
J. D. Brown, Phys. Rev. D [**41**]{}, 1125 (1990).
C. W. Misner, K. S. Thorne and J. A. Wheeler, [*Gravitation*]{} (Freeman, San Francisco, 1973).
B. F. Schutz and R. Sorkin, Annals Phys. [**107**]{} (1977) 1.
S. Weinberg, [*Cosmology*]{} (Oxford Univ. Press, Oxford, 2008).
J. Garriga and V. F. Mukhanov, Phys. Lett. B [**458**]{}, 219 (1999).
L. Boubekeur, P. Creminelli, J. Norena and F. Vernizzi, JCAP [**0808**]{}, 028 (2008) \[arXiv:0806.1016 \[astro-ph\]\].
R. Peierls, Z. Phys. [**80**]{}, 763 (1933); L. Faddeev and R. Jackiw, Phys. Rev. Lett. [**60**]{}, 1692 (1988); G. V. Dunne and R. Jackiw, Nucl. Phys. Proc. Suppl. [**33C**]{}, 114 (1993); R. Jackiw, arXiv:hep-th/9306075.
A. De Felice, J. M. Gérard and T. Suyama, in preparation.
[^1]: Since $u_\nu=(-1,\vec 0)$, we also have that $\ell=-\int^t \mu(t'){\mathrm{d}}t'+\tilde\ell$, and $\vec\nabla\tilde\ell=-A\vec\nabla B$, which implies that $\vec\nabla A\times\vec\nabla B=0$.
[^2]: For $c_s^2$ we used the fact that $\dot p= \left({\partial}p/{\partial}\rho\right)_s\dot\rho+\left({\partial}p/{\partial}s\right)_\rho\dot s$.
[^3]: In this case, we obtain $\left({\partial}\mu/{\partial}s\right)_\rho= T$ such that $({\partial}p/{\partial}s)_\rho=n[\left({\partial}\mu/{\partial}s\right)_\rho-T]=0$.
---
abstract: 'Millimeter Wave (mmWave) communications with full-duplex (FD) have the potential to increase spectral efficiency relative to half-duplex operation. However, the residual self-interference (SI) from FD and the high pathloss inherent to mmWave signals may degrade the system performance. Meanwhile, hybrid beamforming (HBF) is an efficient technology to enhance the channel gain and mitigate interference with reasonable complexity. However, conventional HBF approaches for FD mmWave systems are based on optimization processes, which are either too complex or strongly rely on the quality of channel state information (CSI). We propose two learning schemes to design HBF for FD mmWave systems, i.e., extreme learning machine based HBF (ELM-HBF) and convolutional neural networks based HBF (CNN-HBF). Specifically, we first propose an alternating direction method of multipliers (ADMM) based algorithm to achieve SI cancellation beamforming, and then use a majorization-minimization (MM) based algorithm for joint transmitting and receiving HBF optimization. To train the learning networks, we simulate noisy channels as input, and select the hybrid beamformers calculated by the proposed algorithms as targets. Results show that both learning based schemes can provide more robust HBF performance and achieve at least $22.1\%$ higher spectral efficiency compared to orthogonal matching pursuit (OMP) algorithms. Besides, the online prediction time of the proposed learning based schemes is almost 20 times faster than the OMP scheme. Furthermore, the training time of ELM-HBF is about 600 times faster than that of CNN-HBF with $64$ transmitting and receiving antennas.'
author:
- 'Shaocheng Huang, Yu Ye, and Ming Xiao, [^1]'
bibliography:
- 'ref.bib'
title: 'Learning Based Hybrid Beamforming Design for Full-Duplex Millimeter Wave Systems'
---
Millimeter wave, full-duplex, hybrid beamforming, convolutional neural network, extreme learning machine.
Introduction
============
With the development of various emerging applications (e.g., virtual reality, augmented reality, autonomous driving and big data analysis), data traffic has increased explosively, causing growing demands for very high communication rates in future wireless communications, e.g., the fifth generation (5G) and beyond [@Ming2017xiao]. A common approach to meet the requirements of high rates is to explore the potential for improvements in bandwidth and spectral efficiency. Millimeter wave (mmWave) communications have recently received increasing research attention because of the large available bandwidth at the mmWave carrier frequencies (e.g., more than $150$ GHz of available bandwidth) [@zhang2019precoding]. Thus, mmWave communications can potentially provide high data rates. For instance, IEEE 802.11ad, working on the carrier frequency of $60$ GHz, can support a maximum data rate of $7$ Gbps [@Wang2018]. Thanks to the short wavelength of mmWave radio, large antenna arrays can be packed into mmWave transceivers of limited size, thereby resulting in highly directional signals and high array gains [@el2014spatially]. Despite their high data rates, mmWave communications suffer from severe pathloss and penetration loss, which limit the coverage of mmWave signals. For example, the pathloss is about $110$ dB when the transmitter-receiver separation distance is $100$ meters [@rappaport2017overview].
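The quoted pathloss figure can be roughly reproduced from the free-space pathloss formula $\mathrm{FSPL}=20\log_{10}(4\pi d f/c)$; at $60$ GHz and $100$ m this alone gives about $108$ dB, with atmospheric absorption and blockage adding further attenuation (a back-of-envelope sketch of ours, not the measurement-based model of the cited work):

```python
import math

def fspl_db(d_m: float, f_hz: float) -> float:
    """Free-space pathloss in dB: 20*log10(4*pi*d*f/c)."""
    c = 3.0e8  # speed of light, m/s
    return 20.0 * math.log10(4.0 * math.pi * d_m * f_hz / c)

print(f"{fspl_db(100, 60e9):.1f} dB")  # about 108 dB at 60 GHz, 100 m
```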
Full-duplex (FD) communications, which can support simultaneous transmission and reception on the same channels, have the potential to double the throughput and reduce latency compared to half-duplex (HD) communications. To provide high data rates and improve the coverage of wireless networks, FD relays have recently been applied in mmWave communications as wireless backhauls [@atzeni2017full; @han2018precoding; @sharma2017dynamic]. Since FD systems suffer from severe self-interference (SI), SI cancellation (SIC) is one of the main challenges for FD mmWave systems. For example, an FD transceiver at $60$ GHz with a typical transmit power of $14$ dBm, receiver noise figure of $5$ dB and channel bandwidth of $2.16$ GHz will require $96$ dB of SIC [@dinc2017millimeter]. Generally, SI can be suppressed by physical methods, which enhance the propagation loss for the SI signals while maintaining a high gain for the desired signals [@han2018precoding; @satyanarayana2018hybrid; @he2017spatiotemporal; @dinc2017millimeter]. For instance, narrow-beam antennas or beamforming techniques [@han2018precoding; @satyanarayana2018hybrid] can be used to separate the communication channel and the SI channel in direction, and polarization isolation and antenna spacing [@he2017spatiotemporal; @dinc2017millimeter] can also be applied. Recent measurements in [@dinc2017millimeter] show that almost $80$ dB of SI suppression can be achieved with polarization-based antennas in the $60$ GHz band. In addition, conventional microwave FD systems only consider cancellation of the line-of-sight (LOS) SI and ignore the non-line-of-sight (NLOS) SI. However, in mmWave FD systems, the NLOS SI is enhanced by the high-gain beamforming [@zhang2019precoding]. Besides, since circuit and hardware complexity scales up with frequency, conventional fully digital processing, which controls both the phases and amplitudes of the original signals, becomes very expensive in mmWave systems.
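The order of magnitude of the required SIC follows from a simple link budget: the receiver noise floor is $-174\,\mathrm{dBm/Hz}+10\log_{10}(\mathrm{BW})+\mathrm{NF}$, and the SI must be pushed from the transmit power down to that floor. With the numbers above this gives roughly $90$ dB; the $96$ dB figure quoted from [@dinc2017millimeter] presumably includes additional implementation margin. A rough sketch (our own estimate, not taken from the cited reference):

```python
import math

def required_sic_db(ptx_dbm: float, bw_hz: float, nf_db: float) -> float:
    """SI suppression needed to push the SI down to the receiver noise floor."""
    noise_floor_dbm = -174.0 + 10.0 * math.log10(bw_hz) + nf_db
    return ptx_dbm - noise_floor_dbm

# 14 dBm TX power, 2.16 GHz bandwidth, 5 dB noise figure
print(f"{required_sic_db(14, 2.16e9, 5):.1f} dB")  # about 90 dB
```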
Thus, hybrid beamforming (HBF), which combines digital and analog processing, is a promising way to achieve a good trade-off between performance and complexity.
There have been a few results on HBF design and SIC for FD mmWave systems [@abbas2016full; @xiao2017full; @han2018precoding; @zhang2019precoding]. Based on the sparsity of mmWave channels, an orthogonal matching pursuit (OMP) based HBF algorithm is proposed for FD mmWave systems in [@abbas2016full]. However, SI is not considered in [@abbas2016full], which might significantly affect system performance. In [@xiao2017full], a near-field propagation model is adopted for the LOS SI. It is shown that the SIC performance can be improved by increasing the number of transmitter (TX) or receiver (RX) radio frequency (RF) chains. In [@han2018precoding], a combined LOS and NLOS SI channel is proposed, and a decoupled analog-digital (DAD) HBF algorithm is provided to suppress both LOS and NLOS SI. By jointly optimizing the analog and digital precoders, an OMP based SIC beamforming algorithm for FD mmWave relays is proposed in [@zhang2019precoding]. It is shown that the OMP based HBF algorithm can achieve higher spectral efficiency than the DAD HBF algorithm in [@han2018precoding]. Although the approaches in [@zhang2019precoding; @han2018precoding] can perfectly eliminate SI, their SIC relies on projecting onto the null space of the effective SI channel after the optimal hybrid beamformers have been designed, which causes a significant degradation in system spectral efficiency. Furthermore, these approaches can perform SIC only when the number of TX-RF chains is greater than or equal to the sum of the number of RX-RF chains and the number of transmitting streams. Moreover, in realistic communication systems, since perfect channel state information (CSI) cannot always be obtained through channel estimation, the existing optimization-based FD mmWave HBF approaches cannot provide robust performance in the presence of imperfect CSI.
Recent developments in machine learning (ML) provide new ways of addressing problems in physical layer communications (e.g., direction-of-arrival estimation [@huang2018deep], analog beam selection [@long2018data] and signal detection [@samuel2017deep]). ML based techniques have several advantages, such as low complexity when solving non-convex problems and the ability to extrapolate new features from noisy and limited training data [@elbir2019cnn]. In [@huang2019deep; @lin2019beamforming], precoders are designed based on ML techniques, in which a learning network with multiple fully connected layers is used. However, dense multiple fully connected layers may increase the computational complexity, and these works only optimize the precoder with fixed combiners. In [@elbir2019cnn], a convolutional neural network (CNN) framework is first proposed to jointly optimize the precoder and combiner, in which the network takes the channel matrix as the input and produces the analog and digital beamformers as outputs. To reduce the complexity of the training stage in [@elbir2019cnn], an equivalent channel HBF algorithm is proposed to provide accurate labels for training samples [@bao2020deep]. To further reduce the computational complexity, joint antenna selection and HBF design is studied in [@elbir2019joint] based on a quantized CNN, at the cost of prediction accuracy degradation. Though the above results achieve good ML based HBF performance, all of them consider single-hop scenarios. To the best of our knowledge, the joint SIC and HBF design for FD mmWave relay systems, being of practical importance, has not been investigated in the context of ML.
Motivated by the above observations, we investigate the joint HBF and SIC optimization for FD mmWave relay systems based on ML techniques. The main contributions of this paper are summarized as follows:
- We decouple the joint SIC and HBF optimization problem into two sub-problems. We first propose an alternating direction method of multipliers (ADMM) based algorithm to jointly eliminate residual SI and optimize unconstrained beamformers. With perfect SIC and unconstrained beamformers, many existing algorithms (e.g., PE-AltMin [@yu2016alternating], GEVD [@lin2019hybrid] and the methods in [@sohrabi2016hybrid]) cannot be directly used for HBF design since the unconstrained beamformers may not be mutually orthogonal. Thus, we propose a majorization-minimization (MM) based algorithm that jointly optimizes the transmitting and receiving HBF. To the best of our knowledge, the ADMM based SIC beamforming and MM based joint transmitting and receiving HBF optimization for FD mmWave systems have not been previously studied. Unlike the works in [@zhang2019precoding; @han2018precoding], our proposed approaches can perform perfect SIC even if the number of TX-RF chains is smaller than the sum of the number of RX-RF chains and the number of transmitted streams. Finally, the convergence and computational complexity of the proposed algorithms are analyzed.
- Two learning frameworks for HBF design are proposed (i.e., extreme learning machine based HBF (ELM-HBF) and CNN based HBF (CNN-HBF)). We utilize ELM and CNN to estimate the precoders and combiners of FD mmWave systems. To support robust HBF performance, noisy channel input data is generated and fed into the learning machine for training. Different from existing optimization based HBF methods, of which the performance strongly relies on the quality of CSI, our learning based approaches can achieve more robust performance since ELM and CNN are effective at handling the imperfections and corruptions in the input channel information.
- To the best of our knowledge, HBF design with ELM has not been studied before. Also, the performance of ELM-HBF with different activation functions is tested. Since the optimal weight matrix of the hidden layer is derived in closed form, the complexity of ELM-HBF is much lower than that of CNN-HBF, and it is easier to implement. Results show that ELM-HBF can achieve near-optimal performance, outperforming CNN-HBF and other conventional HBF methods. The training of ELM is about 600 times faster than that of CNN with $64$ transmitting and receiving antennas. While the conventional methods require an optimization process, our learning based approaches can estimate the beamformers by simply feeding the learning machines with channel matrices. Results also show that the online prediction time of the proposed learning based approaches is almost 20 times shorter than that of the OMP approach.
The remainder of this paper is organized as follows. We first present the system model of the FD mmWave relay in Section II. For SIC and HBF design, we present an ADMM based SIC beamforming algorithm and an MM based HBF algorithm in Section III. The ELM-HBF and CNN-HBF learning schemes are presented in Section IV. To validate the efficiency of the proposed methods, we provide numerical simulations in Section V. Finally, Section VI concludes the paper.
*Notations*: Bold lowercase and uppercase letters denote vectors and matrices, respectively. $\text{Tr}( {\bf A})$, $\left| {\bf A} \right|$, $\left\| {\bf A} \right\|_\text{F}$, ${\bf A}^*$, ${\bf A}^T$ and ${\bf A}^H$ denote the trace, determinant, Frobenius norm, conjugate, transpose and conjugate transpose of matrix ${\bf A}$, respectively. $\otimes$ denotes the Kronecker product. $\arg ({\bf a})$ denotes the argument/phase of vector ${\bf a}$.
System model and problem formulation {#systemmodel}
====================================
System model
------------
We consider a one-way FD mmWave relay system shown in Fig. 1, in which the source node and the destination node are different base stations connected with the FD mmWave relay. We assume that there is no direct link between the source and destination, which is typical for a mmWave system due to the high pathloss [@zhang2019precoding; @han2018precoding]. All the nodes in this system adopt the hybrid analog and digital precoding architecture. The source node is equipped with $N_\text{t}$ antennas with $N_{\text{RFS}}$ RF chains, and transmits $N_\text{s}$ data streams simultaneously. To enable multi-stream transmission, we assume $N_\text{s} \le N_\text{RFS} \le N_\text{t}$. For the relay and destination nodes, the numbers of antennas, RF chains, and data streams are defined analogously, as depicted in Fig. 1.
At the source node, the $N_\text{s} \times 1$ symbol vector, denoted by ${\bf{s}}_\text{S}$ with $\mathbb{E}[{\bf{s}}_\text{S}{\bf{s}}_\text{S}^H]=N_\text{s}^{-1}{\bf I}_{N_\text{s}}$, is firstly precoded through an $N_\text{RFS} \times N_\text{s}$ digital precoding matrix ${\bf V}_\text{BB}$, and then processed by an $N_\text{t} \times N_\text{RFS} $ analog precoding matrix ${\bf V}_\text{RF}$, which is implemented in the analog circuitry using phase shifters. Thus, the $N_\text{t} \times 1 $ transmitted signal of the source node is given as $$\label{tran_signal_s}
{\bf x}_\text{S} = \sqrt{P_\text{S}}{\bf V}_\text{RF} {\bf V}_\text{BB} {\bf{s}}_\text{S},$$ where ${P_\text{S}}$ is the transmit power of the source node. The power constraint of the precoding matrices is denoted by $\left\| {\bf V}_\text{RF} {\bf V}_\text{BB} \right\|_\text{F}^2 = N_\text{s}$. Then, the signal received at the relay can be expressed as
$$\label{receive_signal_r}
{\bf y}_\text{R} = {\bf H}_\text{SR} {\bf x}_\text{S} + {\bf H}_\text{SI} {\bf x}_\text{R}+ {\bf n}_\text{R},$$
where $ {\bf H}_\text{SR} \in \mathbb{C}^{n_\text{r} \times N_\text{t}}$ denotes the source-to-relay channel matrix, $ {\bf H}_\text{SI} \in \mathbb{C}^{n_\text{r} \times n_\text{t}}$ denotes the SI channel of the relay, ${\bf x}_\text{R} \in \mathbb{C}^{n_\text{t} \times1}$ denotes the transmitted signal at the relay, and ${\bf n}_\text{R} \sim \mathcal{CN}(0, \sigma_\text{n}^2 {\bf I}_{n_\text{r}})$ denotes the noise vector at the relay. At the relay, the received signal ${\bf y}_\text{R}$ is first combined by the analog combining matrix $ {\bf F}_\text{RFR} \in \mathbb{C}^{n_\text{r} \times N_\text{RFR} }$ and the digital combining matrix $ {\bf F}_\text{BBR} \in \mathbb{C}^{N_\text{RFR}\times n_\text{s}}$. Then, it is precoded by the digital precoding matrix $ {\bf F}_\text{BBT} \in \mathbb{C}^{N_\text{RFT} \times n_\text{s} }$ and the analog precoding matrix $ {\bf F}_\text{RFT} \in \mathbb{C}^{n_\text{t}\times N_\text{RFT} }$. Thus, the transmitted signal at the relay is expressed as $$\label{tran_signal_r}
{\bf x}_\text{R} = \sqrt{P_\text{R}}{\bf F}_\text{RFT} {\bf F}_\text{BBT} {\bf F}_\text{BBR}^H {\bf F}_\text{RFR}^H {\bf y}_\text{R},$$ where ${P_\text{R}}$ is the transmit power of the relay. Plugging and into , we obtain $$\begin{aligned}
{\bf x}_\text{R} = &{\bf F}_\text{RFT} {\bf F}_\text{BBT}\left( {\bf I}_{n_\text{s}} - \sqrt{P_\text{R}} {\bf F}_\text{BBR}^H {\bf F}_\text{RFR}^H{\bf H}_\text{SI} {\bf F}_\text{RFT} {\bf F}_\text{BBT} \right)^{-1} \\
&\times \sqrt{P_\text{R}} {\bf F}_\text{BBR}^H {\bf F}_\text{RFR}^H \left( \sqrt{P_\text{S}} {\bf H}_\text{SR} {\bf V}_\text{RF} {\bf V}_\text{BB} {\bf{s}}_\text{S} + {\bf n}_\text{R} \right).
\end{aligned}$$ At the destination, the received signal is multiplied by the analog combining matrix $ {\bf U}_\text{RF} \in \mathbb{C}^{N_\text{r} \times N_\text{RFD} }$ and digital combining matrix $ {\bf U}_\text{BB} \in \mathbb{C}^{N_\text{RFD} \times N_\text{s} }$, which is expressed as $$\label{receive_destination}
\begin{split}
{\bf y}_\text{D}=&
\sqrt{ P_\text{S}P_\text{R} } {\bf U}^H{\bf H}_\text{RD} {\bf F}_\text{T} {\bf \Xi}_\text{R}^{-1} {\bf F}_\text{R}^H {\bf H}_\text{SR} {\bf V} {\bf{s}}_\text{S} \\
&+\sqrt{P_\text{R} } {\bf U}^H {\bf H}_\text{RD} {\bf F}_\text{T} {\bf \Xi}_\text{R}^{-1} {\bf F}_\text{R}^H {\bf n}_\text{R}
+ {\bf U}^H {\bf n}_\text{D},
\end{split}$$ where ${\bf V}={\bf V}_\text{RF}{\bf V}_\text{BB}$, ${\bf U} ={\bf U}_\text{RF}{\bf U}_\text{BB}$, ${\bf F}_\text{T} ={\bf F}_\text{RFT} {\bf F}_\text{BBT}$, ${\bf F}_\text{R} ={\bf F}_\text{RFR} {\bf F}_\text{BBR}$, and ${\bf \Xi }_\text{R}= {\bf I}_{n_\text{s}} - \sqrt{P_\text{R}} {\bf F}_\text{R}^{H} {\bf H}_\text{SI} {\bf F}_\text{T} $. $ {\bf H}_\text{RD} \in \mathbb{C}^{N_\text{r} \times n_\text{t}}$ denotes the relay-to-destination channel matrix and ${\bf n}_\text{D} \sim \mathcal{CN}(0, \sigma_\text{n}^2 {\bf I}_{N_\text{r}})$ denotes the noise vector at destination.
From the above, the spectral efficiency of the system is given by $$\label{SE}
\begin{split}
R =&\log_2 \left| {\bf I}_{N_\text{s}} +\frac{P_\text{S}P_\text{R}}{N_\text{s}}{\bf \Sigma}^{-1} \left( {\bf U}^{H} {\bf H}_\text{RD} {\bf F}_\text{T} {\bf \Xi}_\text{R}^{-1} {\bf F}_\text{R}^{H} {\bf H}_\text{SR} {\bf V} \right) \right.\\
& \left. \times \left( {\bf U}^{H} {\bf H}_\text{RD} {\bf F}_\text{T} {\bf \Xi}_\text{R}^{-1} {\bf F}_\text{R}^{H} {\bf H}_\text{SR} {\bf V} \right)^{H} \right|,
\end{split}$$ where ${\bf \Sigma} = \sigma_\text{n}^2 \big[ {P_\text{R} } \left({\bf U}^{H} {\bf H}_\text{RD} {\bf F}_\text{T} {\bf \Xi}_\text{R}^{-1} {\bf F}_\text{R}^{H} \right) \left({\bf U}^{H} {\bf H}_\text{RD} {\bf F}_\text{T} {\bf \Xi}_\text{R}^{-1} {\bf F}_\text{R}^{H} \right)^{H} +{\bf U}^{H} {\bf U} \big]$ is the covariance matrix of the noise term in .
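To make the signal model concrete, the following NumPy sketch evaluates the spectral efficiency $R$ and the noise covariance ${\bf \Sigma}$ directly from the formulas above. All dimensions, powers and matrices here are illustrative placeholders drawn at random, not an optimized design.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed (illustrative) dimensions: 8 antennas everywhere, 2 streams.
Nt = nr = nt = Nr = 8
Ns = ns = 2
Ps = Pr = 1.0
sigma2 = 0.1

def crandn(*shape):
    """Unit-variance circularly symmetric complex Gaussian array."""
    return (rng.standard_normal(shape) + 1j * rng.standard_normal(shape)) / np.sqrt(2)

H_sr, H_rd, H_si = crandn(nr, Nt), crandn(Nr, nt), crandn(nr, nt)
V, U = crandn(Nt, Ns), crandn(Nr, Ns)          # V = V_RF V_BB, U = U_RF U_BB
F_t, F_r = crandn(nt, ns), crandn(nr, ns)      # F_T = F_RFT F_BBT, F_R = F_RFR F_BBR

# Xi_R = I - sqrt(P_R) F_R^H H_SI F_T  (loop-back term at the relay)
Xi = np.eye(ns) - np.sqrt(Pr) * F_r.conj().T @ H_si @ F_t
G = U.conj().T @ H_rd @ F_t @ np.linalg.inv(Xi) @ F_r.conj().T   # common factor
Heff = G @ H_sr @ V                                              # end-to-end channel

# Noise covariance Sigma and spectral efficiency R (bits/s/Hz)
Sigma = sigma2 * (Pr * G @ G.conj().T + U.conj().T @ U)
R = np.log2(np.linalg.det(np.eye(Ns)
        + Ps * Pr / Ns * np.linalg.solve(Sigma, Heff @ Heff.conj().T))).real
```

Since ${\bf \Sigma}$ is Hermitian positive definite and the signal term is positive semi-definite, $R$ is always non-negative for any choice of beamformers.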
Channel model
-------------
For the desired link channels (i.e., ${\bf H}_\text{SR}$ and ${\bf H}_\text{RD}$), we assume that sufficient far-field conditions are met, and employ the extended Saleh-Valenzuela mmWave channel model [@Ming2017xiao; @zhang2019precoding] to characterize the limited scattering of mmWave channels. The channel is described as the sum of the contributions from $N_\text{c}$ scattering clusters, each of which contributes $N_\text{p}$ propagation paths, and is expressed as $$\label{channelmodel_SD}
{\bf H} =\sqrt{\frac{N_\text{R}N_\text{T}}{ N_\text{c} N_\text{p}}}\sum_{k=1}^{N_\text{c}} \sum_{l=1}^{N_\text{p}} \alpha_{k,l} {\bf{a}}(\theta_{k,l}^{\text{r}}) {\bf{a}}(\theta_{k,l}^{\text{t}})^{H},$$ where $N_\text{T}$ denotes the number of transmit antennas, $N_\text{R}$ denotes the number of receive antennas, $\alpha_{k,l}$ denotes the complex gain of the $l$-th ray in the $k$-th propagation cluster. The functions ${\bf{a}}(\theta_{k,l}^{\text{r}})$ and $ {\bf{a}}(\theta_{k,l}^{\text{t}})^H$ respectively represent the normalized receive and transmit array response vectors, where $\theta_{k,l}^{\text{r}}$ and $\theta_{k,l}^{\text{t}}$ are the azimuth angles of arrival and departure, respectively. The array response can be expressed as $$\label{ULA}
{\bf{a}}(\theta ) = \frac{1}{{\sqrt {{N}} }}{\left[1,{e^{ - j\frac{{2\pi d}}{\lambda }\sin(\theta) }},...,{e^{ - j({N} - 1)\frac{{2\pi d}}{\lambda }\sin(\theta) }}\right]^T},$$ where $d$ is the antenna spacing and $\lambda$ is the carrier wavelength.
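As a sketch, the array response and the clustered channel in the model above can be generated as follows. Half-wavelength spacing, independently and uniformly drawn angles per ray (ignoring intra-cluster angular spread), and unit-variance complex Gaussian path gains are assumptions made for illustration.

```python
import numpy as np

def array_response(theta, N, d_over_lambda=0.5):
    """Normalized ULA steering vector a(theta), assuming d/lambda = 0.5."""
    n = np.arange(N)
    return np.exp(-1j * 2 * np.pi * d_over_lambda * n * np.sin(theta)) / np.sqrt(N)

def mmwave_channel(Nr, Nt, Nc=5, Np=10, rng=np.random.default_rng(1)):
    """Extended Saleh-Valenzuela channel: sum over Nc clusters of Np rays."""
    H = np.zeros((Nr, Nt), dtype=complex)
    for _ in range(Nc * Np):
        alpha = (rng.standard_normal() + 1j * rng.standard_normal()) / np.sqrt(2)
        th_r, th_t = rng.uniform(-np.pi / 2, np.pi / 2, size=2)
        H += alpha * np.outer(array_response(th_r, Nr),
                              array_response(th_t, Nt).conj())
    return np.sqrt(Nr * Nt / (Nc * Np)) * H
```

The leading factor reproduces the normalization $\sqrt{N_\text{R}N_\text{T}/(N_\text{c} N_\text{p})}$ of the channel model.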
$$\label{distance_d}
d_{m,n}=\sqrt{(a_0+(m-1)d)^2 + (b_0+(n-1)d)^2 -2(a_0 +(m-1)d)(b_0+(n-1)d)\cos(\phi) },$$ $$\label{Problem_Formulation}
\begin{split}
\mathop {\max }\limits_{\scriptstyle {\bf V}_\text{RF}, {\bf U}_\text{RF},{\bf F}_\text{RFT},{\bf F}_\text{RFR} \hfill\atop
\scriptstyle {\bf V}_\text{BB}, {\bf U}_\text{BB},{\bf F}_\text{BBT},{\bf F}_\text{BBR} \hfill}
%\mathop {\max }\limits_{{\bf V}, {\bf F}_{\text{R}},{\bf F}_{\text{T}},{\bf U}}
&R = \log_2 \left| {\bf I}_{N_\text{s}} +\frac{P_\text{S}P_\text{R}}{N_\text{s}}{\bf \Sigma}^{-1} \left[ {\bf U}^{H} {\bf H}_\text{RD} {\bf F}_\text{T} {\bf F}_\text{R}^{H} {\bf H}_\text{SR} {\bf V} \right] \left[ {\bf U}^{H} {\bf H}_\text{RD} {\bf F}_\text{T} {\bf F}_\text{R}^{H} {\bf H}_\text{SR} {\bf V} \right]^{H} \right|\\
\text{s.t.} \qquad~~ &{\bf V}_\text{RF}\in \mathcal{V}_\text{RF}, {\bf U}_\text{RF}\in \mathcal{U}_\text{RF}, {\bf F}_\text{RFR}\in \mathcal{F}_\text{RFR}, {\bf F}_\text{RFT}\in \mathcal{F}_\text{RFT},\\
& \left\| {\bf F}_\text{RFT} {\bf F}_\text{BBT} \right\|_\text{F}^2 = N_\text{s},\\
& \left\| {\bf V}_\text{RF} {\bf V}_\text{BB} \right\|_\text{F}^2 = N_\text{s},\\
& {\bf F}_\text{R}^{H} {\bf H}_\text{SI} {\bf F}_\text{T} =0,
\end{split}$$
Based on the widely used mmWave SI channel in [@zhang2019precoding; @han2018precoding; @satyanarayana2018hybrid], the residual SI consists of two parts: LOS SI and NLOS SI. Owing to the high beam gain of mmWave signals, the NLOS SI results from reflection off nearby obstacles. We can utilize the far-field clustered mmWave channel in to model the NLOS SI channel. By contrast, in the FD scenario, since the transmitter and the local receiver are closely placed, LOS SI channels are near-field channels and clustered mmWave channels cannot be used. Therefore, a more realistic SI channel model, namely the spherical-wave propagation model, is considered for the near-field LOS channel matrix. According to [@zhang2019precoding; @satyanarayana2018hybrid], the LOS SI channel coefficient of the $m$-th row and $n$-th column entry is given by $$\left[{\bf H}_{\text{LOS}}\right]_{m,n} =\frac{\rho}{d_{m,n}} \exp\left(-j\frac{2\pi}{\lambda}d_{m,n}\right),$$ where $\rho$ is a normalization constant such that $\mathbb{E} [ \left\|{\bf H}_{\text{LOS}} \right\|_\text{F}^2 ] = n_\text{t} n_\text{r} $ and $d_{m,n}$ is the distance from the $m$-th element of the transmit array to the $n$-th element of the receive array, expressed in , with $\phi$ corresponding to the angle between the two antenna arrays, and $a_0$ and $b_0$ denoting the initial distances of the antennas to a common reference point. Finally, the FD mmWave SI channel is constructed as $${\bf H}_{\text{SI}}=\kappa_{\text{LOS}} {\bf H}_{\text{LOS}} + \kappa_{\text{NLOS}} {\bf H}_{\text{NLOS}},$$ where $\kappa_{\text{LOS}}$ and $\kappa_{\text{NLOS}}$ denote the intensity coefficients of LOS and NLOS components, respectively.
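A minimal sketch of the near-field LOS SI matrix follows, assuming a toy geometry ($a_0$, $b_0$, $\phi$ and half-wavelength spacing are arbitrary illustrative choices) and normalizing $\rho$ so that $\left\|{\bf H}_{\text{LOS}}\right\|_\text{F}^2 = n_\text{t} n_\text{r}$ holds exactly for the generated realization (the text requires this only in expectation).

```python
import numpy as np

def los_si_channel(nt, nr, d=0.5, a0=2.0, b0=2.0, phi=np.pi / 6, lam=1.0):
    """Near-field spherical-wave LOS SI channel of size (nr, nt).

    d_{m,n} is the law-of-cosines distance between the m-th TX element
    (offset a0 + (m-1)d) and the n-th RX element (offset b0 + (n-1)d).
    """
    tx = a0 + d * np.arange(nt)            # TX element offsets from reference
    rx = b0 + d * np.arange(nr)            # RX element offsets from reference
    D = np.sqrt(tx[None, :] ** 2 + rx[:, None] ** 2
                - 2 * np.outer(rx, tx) * np.cos(phi))   # (nr, nt) distances
    H = np.exp(-1j * 2 * np.pi / lam * D) / D
    # normalize this realization so that ||H||_F^2 = nt * nr
    rho = np.sqrt(nt * nr / np.linalg.norm(H, 'fro') ** 2)
    return rho * H
```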
Problem formulation
-------------------
The main objective of this work is to maximize the system spectral efficiency by jointly designing the beamforming matrices (i.e., ${\bf V}_\text{RF}$, ${\bf V}_\text{BB}$, ${\bf F}_\text{RFT} $, ${\bf F}_\text{BBT}$, ${\bf F}_\text{RFR}$, ${\bf F}_\text{BBR}$, ${\bf U}_\text{RF}$ and ${\bf U}_\text{BB}$) and performing SIC in the presence of noisy CSI. The HBF design problem of maximizing the system spectral efficiency is given by , where $\mathcal{V}_\text{RF}$, $\mathcal{U}_\text{RF}$, $\mathcal{F}_\text{RFR}$ and $\mathcal{F}_\text{RFT}$ are the feasible sets of the analog beamformers induced by the unit-modulus constraints. Due to the non-convex constraints on the analog beamformers and the SIC constraint ${\bf F}_\text{R}^{H} {\bf H}_\text{SI} {\bf F}_\text{T} =0$, the optimization problem is in general intractable. Moreover, HBF design with conventional optimization-based approaches is not robust in the presence of noisy CSI. To address these issues, we first develop algorithms that maximize the spectral efficiency with joint HBF and SIC design. Then, learning machines (e.g., ELM and CNN) are adopted such that the hybrid beamformers are predicted by feeding the machines with noisy CSI.
FD mmWave beamforming design {#FD_mmWave_beamforming_design}
============================
In order to train the ELM and CNN learning machines, we first need to solve the optimization problem in to provide accurate labels for the training data samples. Generally, to make the problem tractable and reduce the communication overhead, the beamformers can be designed individually at different nodes [@zhang2019precoding; @han2018precoding]. The main challenge lies at the relay node, since the beamformers of the transmitting and receiving parts should be jointly designed and SIC needs to be guaranteed. Consequently, we first propose efficient algorithms to design the hybrid beamformers at the relay; the HBF design at the source and destination can then be obtained following similar approaches.
SIC and HBF algorithm design
----------------------------
According to the method studied in [@el2014spatially; @lee2014af; @tsinos2018hybrid], the HBF design problem at the relay can be transformed into minimizing the Frobenius norm of the difference between the optimal fully digital beamformer and the hybrid beamformers as $$\label{Problem_relay}
\begin{split}
(\text{P}1):~ \mathop {\min }\limits_{{\bf F}_\text{RFT}, {\bf F}_\text{BBT},{\bf F}_\text{RFR},{\bf F}_\text{BBR}}& \left\| {\bf F}_\text{opt} - {\bf F}_\text{RFT} {\bf F}_\text{BBT} {\bf F}_\text{BBR}^{H} {\bf F}_\text{RFR}^{H} \right\|_\text{F}^2 \\
\text{s.t.} ~~ \qquad & {\bf F}_\text{RFR}\in \mathcal{F}_\text{RFR}, {\bf F}_\text{RFT}\in \mathcal{F}_\text{RFT},\\
&\left\| {\bf F}_\text{RFT} {\bf F}_\text{BBT} \right\|_\text{F}^2 = n_\text{s},\\
& {\bf F}_\text{BBR}^{H} {\bf F}_\text{RFR}^{H} {\bf H}_\text{SI} {\bf F}_\text{RFT} {\bf F}_\text{BBT} =0,
\end{split}$$ where ${\bf F}_\text{opt} = {\bf F}_\text{TSVD} {\bf F}_\text{RSVD}^H $, ${\bf F}_\text{TSVD} $ and ${\bf F}_\text{RSVD}$ are formed by the right-singular vectors of ${\bf H}_\text{RD} $ and left-singular vectors of ${\bf H}_\text{SR} $, respectively.
Due to the non-convex constraints, it is still hard to find a solution of problem (P1) that guarantees both SIC and optimal hybrid beamformers. Moreover, according to the results in [@zhang2019precoding; @han2018precoding], the spectral efficiency of FD mmWave systems degrades significantly if the residual SI cannot be efficiently suppressed. Thus, we solve this problem in two steps. In the first step, we focus on perfectly eliminating SI (i.e., ${\bf F}_\text{BBR}^{H} {\bf F}_\text{RFR}^{H} {\bf H}_\text{SI} {\bf F}_\text{RFT} {\bf F}_\text{BBT} =0$) by designing unconstrained beamformers for both the transmitting and receiving parts. Then, the problem (P1) can be rewritten as $$\label{Problem_relay_step1}
\begin{split}
(\text{P}2):~\mathop {\min }\limits_{{\bf F}_\text{T}, {\bf F}_\text{R} }~ &\left\| {\bf F}_\text{opt} - {\bf F}_\text{T} {\bf F}_\text{R}^{H} \right\|_\text{F}^2 \\
\quad \text{s.t.} ~~ &{\bf F}_\text{R}^{H} {\bf H}_\text{SI} {\bf F}_\text{T} =0. \\
\end{split}$$ After solving (P2), we obtain the unconstrained beamformers that can ensure perfect SIC. In the second step, hybrid beamformers of the transmitting part and receiving part are jointly designed by minimizing the Frobenius norm of the difference between the unconstrained beamformers and hybrid beamformers. The problem is formulated as $$\label{Problem_relay_step2}
\begin{split}
(\text{P}3):~ \mathop {\min }\limits_{{\bf F}_\text{RFT}, {\bf F}_\text{BBT},{\bf F}_\text{RFR},{\bf F}_\text{BBR}} ~&\left\| \hat{\bf F}_\text{opt} - {\bf F}_\text{RFT} {\bf F}_\text{BBT} {\bf F}_\text{BBR}^{H} {\bf F}_\text{RFR}^{H} \right\|_\text{F}^2 \\
\qquad \text{s.t.} ~~~~ \quad\quad &{\bf F}_\text{RFR}\in \mathcal{F}_\text{RFR}, {\bf F}_\text{RFT}\in \mathcal{F}_\text{RFT},\\
\qquad \quad \qquad & \left\| {\bf F}_\text{RFT} {\bf F}_\text{BBT} \right\|_\text{F}^2 = n_\text{s},\\
\end{split}$$ where $\hat{\bf F}_\text{opt} = \hat{\bf F}_\text{T} \hat{\bf F}_\text{R}^{H}$ with $ \hat{\bf F}_\text{T}$ and $ \hat{\bf F}_\text{R}$ corresponding to the solutions of problem (P2). Though problems (P2) and (P3) are simpler than the original problem (P1), both are still non-convex, and efficient approaches are needed to solve them.
Let us start with problem (P2). It is obvious that this problem is convex when one of the variables is fixed. This property enables ADMM utilization, and consequently, the augmented Lagrangian function of (P2) is given by $$\mathcal{L}({\bf F}_\text{T},{\bf F}_\text{R}, {\bf Z})= \left\| {\bf F}_\text{opt} - {\bf F}_\text{T} {\bf F}_\text{R}^{H} \right\|_\text{F}^2 + \varrho \left\| {\bf F}_\text{R}^{H} {\bf H}_\text{SI} {\bf F}_\text{T} + \frac{1}{\varrho} {\bf Z} \right\|_\text{F}^2,$$ where ${\bf Z} \in \mathbb{C}^{n_\text{s} \times n_\text{s}}$ is the Lagrange multiplier matrix and $\varrho$ is the ADMM step-size. According to ADMM approaches[@boyd], the solution to (P2) can be obtained iteratively, where in iteration $i+1$, the variables and multiplier are updated as follows
$$\begin{aligned}
{\bf F}_\text{T}^{(i+1)}:=& \arg \mathop{\min}\limits_{{\bf F}_\text{T}} \mathcal{L}\left({\bf F}_\text{T} ,{\bf F}_\text{R}^{(i)},{\bf Z}^{(i)}\right); \label{Problem_FT}\\
{\bf F}_\text{R}^{(i+1)}:=& \arg \mathop{\min}\limits_{{\bf F}_\text{R}} \mathcal{L}\left({\bf F}_\text{T}^{(i+1)},{\bf F}_\text{R} ,{\bf Z}^{(i)} \right); \label{Problem_FR}\\
{\bf Z}^{(i+1)} :=& {\bf Z}^{(i)} +\varrho {\bf F}_\text{R}^{H{(i+1)}} {\bf H}_\text{SI} {\bf F}_\text{T}^{(i+1)}.\label{eq_Z}
\end{aligned}$$
By solving the problems in and , we have the following theorem.
The closed-form expressions for ${\bf F}_{\rm{T}}^{(i+1)}$ and ${\bf F}_{\rm{R}}^{(i+1)}$ are respectively given by $$\label{Problem_FT_solve}
\begin{split}
\text{vec}\left({\bf F}_{\rm{T}}^{(i+1)}\right)=&\left[ {\bf I}_{n_{\rm{s}}} \otimes \left( \varrho {\bf H}_{\rm{SI}}^{H} {\bf F}_{\rm{R}}^{(i)} {\bf F}_{\rm{R}}^{H(i)} {\bf H}_{\rm{SI}} \right) +\left( {\bf F}_{\rm{R}}^{H(i)} {\bf F}_{\rm{R}}^{(i)} \right)
\otimes {\bf I}_{n_{\rm{t}}} \right]^{-1} \\
&\times\text{vec}\left({\bf F}_{\rm{opt}} {\bf F}_{\rm{R}}^{(i)} - {\bf H}_{\rm{SI}}^{H} {\bf F}_{\rm{R}}^{(i)} {\bf Z}^{(i)} \right), \\
\end{split}$$ and $$\label{Problem_FR_solve}
\begin{split}
&\text{vec}\left({\bf F}_{\rm{R}}^{(i+1)}\right)\\&=\left[ \left( \varrho {\bf H}_{\rm{SI}} {\bf F}_{\rm{T}}^{(i+1)} {\bf F}_{\rm{T}}^{H(i+1)} {\bf H}_{\rm{SI}}^{H}\right) \otimes {\bf I}_{n_{\rm{s}}} + {\bf I}_{n_{\rm{r}}}\otimes \left( {\bf F}_{\rm{T}}^{H(i+1)} {\bf F}_{\rm{T}}^{(i+1)}\right) \right]^{-1}\\&~~~\times \text{vec}\left( {\bf F}_{\rm{T}}^{H(i+1)} {\bf F}_{\rm{opt}} - {\bf Z}^{(i)} {\bf F}_{\rm{T}}^{H(i+1)} {\bf H}_{\rm{SI}}^{H} \right). \\
\end{split}$$
We first solve problem . Taking the derivative of $\mathcal{L}\big({\bf F}_\text{T},{\bf F}_\text{R}^{(i)},{\bf Z}^{(i)}\big)$ with respect to ${\bf F}_\text{T}$ and setting it to zero yields ${\bf F}_\text{T}{\bf F}_\text{R}^{H(i)}{\bf F}_\text{R}^{(i)}+ \varrho {\bf H}_\text{SI}^{H} {\bf F}_\text{R}^{(i)} {\bf F}_\text{R}^{H(i)} {\bf H}_\text{SI}{\bf F}_\text{T} ={\bf F}_\text{opt}{\bf F}_\text{R}^{(i)}-{\bf H}_\text{SI}^{H} {\bf F}_\text{R}^{(i)} {\bf Z}^{(i)}$. Then, utilizing the vectorization property, the result in can be obtained. For problem , we can follow a similar approach and obtain ${\bf F}_\text{R}^{(i+1)}$ in .
The above alternating procedure is initialized by setting the entries of the matrix ${\bf F}_\text{R}^{(0)}$ to random values and the multiplier ${\bf Z}^{(0)}$ to zero. After obtaining the updated variables in each step, we summarize the ADMM based beamforming design and SIC approach in Algorithm \[ALG1\].
\[ALG1\]
**Input**: ${\bf F}_\text{opt}$, ${\bf H}_\text{SI}$, $\varrho$;\
**Output**: ${\bf F}_\text{T}$, ${\bf F}_\text{R}$;\
**Initialize**: ${\bf F}_\text{R}^{(0)}$, ${\bf Z}^{(0)}$ and $i=0$;\
**Repeat**: compute ${\bf F}_\text{T}^{(i+1)}$ using ; compute ${\bf F}_\text{R}^{(i+1)}$ using ; update ${\bf Z}^{(i+1)}$ using ; $i\leftarrow i+1$; **until** convergence.
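A compact sketch of the ADMM iteration of Algorithm \[ALG1\] is given below. Instead of the explicit Kronecker-product expressions, each subproblem is solved as an equivalent stacked linear least-squares in $\text{vec}({\bf F}_\text{T})$ or $\text{vec}({\bf F}_\text{R})$; the number of streams, the step size and the iteration count are illustrative assumptions.

```python
import numpy as np

def admm_sic(F_opt, H_si, ns=2, rho=1.0, iters=200, rng=np.random.default_rng(0)):
    """ADMM sketch for (P2): min ||F_opt - F_T F_R^H||_F^2 s.t. F_R^H H_SI F_T = 0.

    Each update minimizes the augmented Lagrangian over one block by solving
    a stacked least-squares (equivalent to the closed-form normal equations).
    """
    nr, nt = H_si.shape
    F_r = rng.standard_normal((nr, ns)) + 1j * rng.standard_normal((nr, ns))
    Z = np.zeros((ns, ns), dtype=complex)
    s = np.sqrt(rho)
    for _ in range(iters):
        # F_T-update: min ||F_opt - F_T F_R^H||^2 + rho ||F_R^H H F_T + Z/rho||^2
        A = np.vstack([np.kron(F_r.conj(), np.eye(nt)),
                       s * np.kron(np.eye(ns), F_r.conj().T @ H_si)])
        b = np.concatenate([F_opt.ravel(order='F'), -(Z / s).ravel(order='F')])
        F_t = np.linalg.lstsq(A, b, rcond=None)[0].reshape(nt, ns, order='F')
        # F_R-update: use ||F_opt - F_T F_R^H|| = ||F_opt^H - F_R F_T^H||
        M = H_si @ F_t
        A = np.vstack([np.kron(F_t.conj(), np.eye(nr)),
                       s * np.kron(np.eye(ns), M.conj().T)])
        b = np.concatenate([F_opt.conj().T.ravel(order='F'),
                            -(Z.conj().T / s).ravel(order='F')])
        F_r = np.linalg.lstsq(A, b, rcond=None)[0].reshape(nr, ns, order='F')
        # dual (multiplier) update
        Z = Z + rho * (F_r.conj().T @ H_si @ F_t)
    return F_t, F_r
```

The column-major `ravel(order='F')` matches the $\text{vec}(\cdot)$ convention used with the Kronecker product.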
After obtaining the solutions of problem (P2), we then turn to problem (P3). The main challenges for this problem are the non-convex constant-modulus constraints and the joint optimization of the transmitting and receiving hybrid beamformers. Furthermore, since the beamformers derived by Algorithm \[ALG1\] for SIC may not be mutually orthogonal, many existing approaches (e.g., PE-AltMin [@yu2016alternating], GEVD [@lin2019hybrid] and the methods in [@sohrabi2016hybrid]) cannot be directly used in this case. To make the problem tractable and deal with the non-convex constraints, we utilize majorization-minimization (MM) methods [@sun2016majorization; @wu2017transmit; @arora2019hybrid]. Instead of minimizing the original objective function directly, the MM procedure consists of two steps. In the majorization step, we find a surrogate function that is a tight upper bound of the original objective function and admits a closed-form minimizer. In the minimization step, we minimize this surrogate function in closed form. To achieve a fast convergence rate, a surrogate function that closely follows the shape of the objective function is preferable [@sun2016majorization]. To jointly design the hybrid beamformers of the transmitting and receiving parts with MM methods, we solve problem (P3) within the alternating minimization framework. We first solve problem (P3) for the transmitting analog precoder ${\bf F}_\text{RFT}$ by fixing ${\bf F}_\text{RFR}$, ${\bf F}_\text{BBR}$ and ${\bf F}_\text{BBT}$. Problem (P3) can be rewritten as $$\label{Problem_relay_step2_1}
\begin{split}
(\text{P}4):~ &\mathop {\min }\limits_{{\bf F}_\text{RFT}} ~ \left\| \hat{\bf F}_\text{opt} - {\bf F}_\text{RFT} {\bf Y}_\text{T} \right\|_\text{F}^2 \\
& ~ \text{s.t.} ~~~~ {\bf F}_\text{RFT}\in \mathcal{F}_\text{RFT},\\
\end{split}$$ where $ {\bf Y}_\text{T}={\bf F}_\text{BBT} {\bf F}_\text{BBR}^H {\bf F}_\text{RFR}^H$. Then, we rewrite the objective function of problem (P4) as $$\label{J_T}
\begin{split}
J({\bf F}_\text{RFT}; {\bf Y}_\text{T} )&= \text{Tr}\left( \hat{\bf F}_\text{opt} \hat{\bf F}_\text{opt}^H\right)+ \text{Tr}\left( {\bf F}_\text{RFT}^H {\bf F}_\text{RFT} {\bf Y}_\text{T} {\bf Y}_\text{T}^H \right)\\
&\quad - \text{Tr}\left( \hat{\bf F}_\text{opt} {\bf Y}_\text{T}^H {\bf F}_\text{RFT}^H \right)- \text{Tr}\left( {\bf F}_\text{RFT} \left(\hat{\bf F}_\text{opt} {\bf Y}_\text{T}^H\right)^H \right)\\
& \mathop = \limits^{(a)} \text{Tr}\left( \hat{\bf F}_\text{opt} \hat{\bf F}_\text{opt}^H \right)+{\bf f}_\text{RFT}^H {\bf Q}_\text{T} {\bf f}_\text{RFT} -2\text{Re}\left({\bf f}_\text{RFT}^H {\bf e}_\text{T}\right),
\end{split}$$ where ${\bf f}_\text{RFT}= \text{vec}({\bf F}_\text{RFT})$, ${\bf E}_\text{T}=\hat{\bf F}_\text{opt} {\bf Y}_\text{T}^H$, ${\bf e}_\text{T}=\text{vec}({\bf E}_\text{T})$, ${\bf Q}_\text{T}=({\bf Y}_\text{T}{\bf Y}_\text{T}^H )^T \otimes {\bf I}_{n_\text{t}} $, and (a) follows from the identity $\text{Tr}({\bf A}{\bf B}{\bf C}{\bf D})=\text{vec}({\bf A}^T)^T ( {\bf D}^T\otimes{\bf B} )\text{vec}({\bf C})$.
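The identity used in step (a) is easy to check numerically; the sketch below verifies $\text{Tr}({\bf A}{\bf B}{\bf C}{\bf D})=\text{vec}({\bf A}^T)^T ( {\bf D}^T\otimes{\bf B} )\text{vec}({\bf C})$ with column-major vectorization on random matrices of arbitrary compatible sizes.

```python
import numpy as np

rng = np.random.default_rng(3)

def crandn(m, n):
    return rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))

# Random matrices with compatible (non-square) dimensions
A, B, C, D = crandn(2, 3), crandn(3, 4), crandn(4, 5), crandn(5, 2)

# Tr(ABCD) = vec(A^T)^T (D^T kron B) vec(C), column-major vec
lhs = np.trace(A @ B @ C @ D)
rhs = A.T.ravel(order='F') @ np.kron(D.T, B) @ C.ravel(order='F')
```

Note that `@` between 1-D arrays is the plain (non-conjugating) inner product, which is what the identity requires.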
To solve problem (P4) with MM methods, we should find a majorizer of $J({\bf F}_\text{RFT}; {\bf Y}_\text{T} )$ according to the following lemma.
\[Quad\] Let ${\bf Q} \in \mathbb{C}^{N\times N}$ and ${\bf S} \in \mathbb{C}^{N\times N}$ be two Hermitian matrices satisfying ${\bf S} \ge {\bf Q}$. Then the quadratic function ${\bf a}^H {\bf Q} {\bf a}$, is majorized by ${\bf a}^H {\bf S} {\bf a} + 2\text{Re}( {\bf a}^H ( {\bf Q} -{\bf S} ) {\bf a}_i )+{\bf a}_i^H ( {\bf S}-{\bf Q} ) {\bf a}_i $ at point ${\bf a}_i\in \mathbb{C}^{N}$.
The proof can be found in [@wu2017transmit].
According to Lemma \[Quad\], we can obtain a valid majorizer of $J({\bf F}_\text{RFT}; {\bf Y}_\text{T} )$ at point ${\bf F}_\text{RFT}^{(i)} \in \mathcal{F}_\text{RFT}$ given by $$\label{J_T_major}
\begin{split}
&\bar J\left({\bf F}_\text{RFT}; {\bf Y}_\text{T},{\bf F}_\text{RFT}^{(i)} \right)\\
% &= \text{Tr}( \hat{\bf F}_\text{opt} \hat{\bf F}_\text{opt}^H)+{\bf f}_\text{RFT}^H {\bf S}_\text{T} {\bf f}_\text{RFT} + 2\text{Re}( {\bf f}_\text{RFT}^H ( {\bf Q}_\text{T}\\
% &\quad-{\bf S}_\text{T} ){\bf f}_\text{RFT}^{(i)} )+{\bf f}_\text{RFT}^{H(i)}( {\bf S}_\text{T} -{\bf Q}_\text{T} ){\bf f}_\text{RFT}^{(i)} -2\text{Re}({\bf f}_\text{RFT}^H {\bf e}_\text{T}),
= & \text{Tr}\left( \hat{\bf F}_\text{opt} \hat{\bf F}_\text{opt}^H\right)+\lambda_\text{T} {\bf f}_\text{RFT}^H {\bf f}_\text{RFT} + 2\text{Re}\left( {\bf f}_\text{RFT}^H \left( {\bf Q}_\text{T} -\lambda_\text{T}{\bf I}\right) {\bf f}_\text{RFT}^{(i)} \right)\\
& +{\bf f}_\text{RFT}^{H(i)}\left( \lambda_\text{T}{\bf I} -{\bf Q}_\text{T} \right){\bf f}_\text{RFT}^{(i)} -2\text{Re}\left({\bf f}_\text{RFT}^H {\bf e}_\text{T}\right)\\
= & 2\text{Re}\left( {\bf f}_\text{RFT}^H \left( \left( {\bf Q}_\text{T} -\lambda_\text{T}{\bf I}\right){\bf f}_\text{RFT}^{(i)} - {\bf e}_\text{T} \right) \right) +C_\text{T},
\end{split}$$ where ${\bf F}_\text{RFT}^{(i)}$ is the iterate available at the $i$-th iteration, $\lambda_\text{T}$ denotes the maximum eigenvalue of ${\bf Q}_\text{T}$, which guarantees $\lambda_\text{T}{\bf I} \ge {\bf Q}_\text{T}$, and the constant term $C_\text{T}=\text{Tr}( \hat{\bf F}_\text{opt} \hat{\bf F}_\text{opt}^H)+\lambda_\text{T} {\bf f}_\text{RFT}^H {\bf f}_\text{RFT}+{\bf f}_\text{RFT}^{H(i)}( \lambda_\text{T}{\bf I} -{\bf Q}_\text{T} ){\bf f}_\text{RFT}^{(i)}$ (note that ${\bf f}_\text{RFT}^H {\bf f}_\text{RFT}$ is constant on the unit-modulus feasible set). Then, utilizing the majorizer in , the solution of problem (P4) can be obtained by iteratively solving the following problem $$\label{Problem_relay_step2_12}
\begin{split}
(\text{P}5):~ &{\bf F}_\text{RFT}^{(i+1)}=\arg \mathop {\min }\limits_{{\bf F}_\text{RFT}} ~ \bar J\left({\bf F}_\text{RFT}; {\bf Y}_\text{T},{\bf F}_\text{RFT}^{(i)} \right) \\
& ~~ \text{s.t.} ~~~ {\bf F}_\text{RFT}\in \mathcal{F}_\text{RFT}.\\
\end{split}$$ The closed-form solution of problem (P5) is given by $$\label{frft}
{\bf f}_\text{RFT}^{(i+1)}=-\exp\left(j\arg\left( \left( {\bf Q}_\text{T} -\lambda_\text{T}{\bf I}\right){\bf f}_\text{RFT}^{(i)} - {\bf e}_\text{T} \right)\right).$$
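The update can be sanity-checked directly: minimizing $2\text{Re}({\bf f}^H {\bf c})$ over unit-modulus ${\bf f}$ is separable per entry, and each term $2|c_k|\cos(\arg(c_k)-\arg(f_k))$ is minimized by $\arg(f_k)=\arg(c_k)+\pi$. The sketch below compares the closed-form minimizer against random unit-modulus candidates (the vector ${\bf c}$ is a random placeholder).

```python
import numpy as np

rng = np.random.default_rng(4)
c = rng.standard_normal(8) + 1j * rng.standard_normal(8)

# Closed-form minimizer of 2*Re(f^H c) over |f_k| = 1: f = -exp(j*arg(c)),
# achieving the objective value -2*sum(|c_k|).
f_star = -np.exp(1j * np.angle(c))
obj_star = 2 * np.real(f_star.conj() @ c)

# Sanity check against 1000 random unit-modulus candidates
cands = np.exp(1j * rng.uniform(0, 2 * np.pi, size=(1000, 8)))
objs = 2 * np.real(cands.conj() @ c)
```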
Similarly, we can solve problem (P3) for the receiving combiner ${\bf F}_\text{RFR}$ by fixing ${\bf F}_\text{RFT}$, ${\bf F}_\text{BBR}$ and ${\bf F}_\text{BBT}$. Then, problem (P3) can be rewritten as $$\label{Problem_relay_step2_2}
\begin{split}
(\text{P}6):~ &\mathop {\min }\limits_{{\bf F}_\text{RFR}} ~ \text{Tr}\left( \hat{\bf F}_\text{opt} \hat{\bf F}_\text{opt}^H\right)+{\bf f}_\text{RFR}^H {\bf Q}_\text{R} {\bf f}_\text{RFR} -2\text{Re}\left({\bf f}_\text{RFR}^H {\bf e}_\text{R}\right) \\
& ~~ \text{s.t.} ~~~ {\bf F}_\text{RFR}\in \mathcal{F}_\text{RFR},\\
\end{split}$$ where ${\bf Q}_\text{R}=({\bf Y}_\text{R}^H {\bf Y}_\text{R} )^T \otimes {\bf I}_{n_\text{r}}$, ${\bf Y}_\text{R} = {\bf F}_\text{RFT} {\bf F}_\text{BBT} {\bf F}_\text{BBR}^H$, ${\bf f}_\text{RFR}= \text{vec}({\bf F}_\text{RFR})$, ${\bf E}_\text{R}=\hat{\bf F}_\text{opt}^H {\bf Y}_\text{R} $, and ${\bf e}_\text{R}=\text{vec}({\bf E}_\text{R})$. According to Lemma \[Quad\], we can obtain a valid majorizer of the objective function in at point ${\bf F}_\text{RFR}^{(i)} \in \mathcal{F}_\text{RFR}$, which is given by $$\label{J_R_major}
\begin{split}
\tilde J\left({\bf F}_\text{RFR}; {\bf Y}_\text{R}, {\bf F}_\text{RFR}^{(i)} \right) = 2\text{Re}\left( {\bf f}_\text{RFR}^H \left( \left( {\bf Q}_\text{R} -\lambda_\text{R}{\bf I}\right){\bf f}_\text{RFR}^{(i)} - {\bf e}_\text{R} \right) \right) +C_\text{R},
\end{split}$$ where $\lambda_\text{R}$ denotes the maximum eigenvalue of ${\bf Q}_\text{R}$ and the constant term $C_\text{R}=\text{Tr}( \hat{\bf F}_\text{opt} \hat{\bf F}_\text{opt}^H)+\lambda_\text{R} {\bf f}_\text{RFR}^H {\bf f}_\text{RFR}+{\bf f}_\text{RFR}^{H(i)}( \lambda_\text{R}{\bf I} -{\bf Q}_\text{R} ){\bf f}_\text{RFR}^{(i)}$. Following a procedure similar to that for problem (P4), the solution of problem (P6) can be obtained by iteratively updating $ {\bf F}_\text{RFR}$ according to the following closed-form expression $$\label{frfr}
{\bf f}_\text{RFR}^{(i+1)}=-\exp\left(j\arg\left( \left( {\bf Q}_\text{R} -\lambda_\text{R}{\bf I}\right){\bf f}_\text{RFR}^{(i)} - {\bf e}_\text{R} \right)\right).$$
Then, we turn to the design of the digital beamformers (i.e., ${\bf F}_\text{BBR}$ and ${\bf F}_\text{BBT}$) with fixed analog beamformers (i.e., ${\bf F}_\text{RFR}$ and ${\bf F}_\text{RFT}$). Since the beamforming matrices are in general non-square, we write the solutions in terms of the Moore-Penrose pseudo-inverse $(\cdot)^{\dagger}$. By fixing ${\bf F}_\text{BBT}$, ${\bf F}_\text{RFR}$ and ${\bf F}_\text{RFT}$, a globally optimal solution of problem (P3) is given by $$\label{fbbr}
{\bf F}_\text{BBR} = {\bf F}_\text{RFR}^{\dagger} \hat{\bf F}_\text{opt}^H \left( {\bf F}_\text{BBT}^H {\bf F}_\text{RFT}^H \right)^{\dagger}.$$ Similarly, by fixing ${\bf F}_\text{BBR}$, ${\bf F}_\text{RFR}$ and ${\bf F}_\text{RFT}$, a solution of problem (P3) without considering the power constraint in is given by $$\label{fbbt}
{\bf F}_\text{BBT} = {\bf F}_\text{RFT}^{\dagger} \hat{\bf F}_\text{opt} \left( {\bf F}_\text{BBR}^H {\bf F}_\text{RFR}^H \right)^{\dagger}.$$ Then, to satisfy the power constraint in problem (P3), we can normalize ${\bf F}_\text{BBT}$ by a factor of $\frac{\sqrt{n_\text{s}}}{ \left\| {\bf F}_\text{RFT} {\bf F}_\text{BBT} \right\|_\text{F}} $ [@yu2016alternating; @zhang2019precoding]. Letting $J({\bf F}_\text{RFT},{\bf F}_\text{BBT}, {\bf F}_\text{BBR}, {\bf F}_\text{RFR})$ denote the objective function of problem (P3), the effectiveness of the normalization step is shown in the following remark.
\[remark1\] If $J({\bf F}_{\rm{RFT}},{\bf F}_{\rm{BBT}}, {\bf F}_{\rm{BBR}}, {\bf F}_{\rm{RFR}}) \le \delta$ when the power constraint is ignored, then $J({\bf F}_{\rm{RFT}}, \hat {\bf F}_{\rm{BBT}}, {\bf F}_{\rm{BBR}}, {\bf F}_{\rm{RFR}}) \le 4\delta$, where $\hat {\bf F}_{\rm{BBT}} =\frac{\sqrt{n_\text{s}}}{ \left\| {\bf F}_\text{RFT} {\bf F}_\text{BBT} \right\|_\text{F}} {\bf F}_{\rm{BBT}}$.
The proof of Remark 1 is omitted here since it is similar to that in [@yu2016alternating].
Remark 1 shows that if the objective function of (P3) has been driven to a sufficiently small value $\delta$ while ignoring the power constraint, the normalization step keeps it no larger than $4\delta$.
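The normalization step itself is easy to verify numerically: after scaling by $\frac{\sqrt{n_\text{s}}}{\left\| {\bf F}_\text{RFT} {\bf F}_\text{BBT} \right\|_\text{F}}$, the transmit power constraint holds with equality. A minimal self-contained sketch with illustrative dimensions and names:

```python
import numpy as np

rng = np.random.default_rng(1)
n_t, N_rft, n_s = 8, 4, 2                                       # illustrative sizes
F_RFT = np.exp(1j * rng.uniform(0, 2 * np.pi, (n_t, N_rft)))    # unit-modulus analog part
F_BBT = rng.standard_normal((N_rft, n_s)) + 1j * rng.standard_normal((N_rft, n_s))

c = np.sqrt(n_s) / np.linalg.norm(F_RFT @ F_BBT, 'fro')         # normalization factor
F_BBT_hat = c * F_BBT

power = np.linalg.norm(F_RFT @ F_BBT_hat, 'fro') ** 2
print(power)  # equals n_s = 2 up to rounding
```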
With the above closed-form solutions, the MM-based HBF design for both the transmitting and receiving parts at the relay is summarized in Algorithm \[ALG2\].
\[ALG2\]
**Input**: $\hat{\bf F}_\text{opt}$; **Output**: ${\bf F}_\text{RFT}$, ${\bf F}_\text{BBT}$, ${\bf F}_\text{RFR}$, ${\bf F}_\text{BBR}$; **Initialize**: ${\bf F}_\text{RFT}^{(0)} $, ${\bf F}_\text{RFR}^{(0)}$, ${\bf F}_\text{BBR}^{(0)}$ and the outer iteration counter $k_\text{o}=0$. Repeat the following steps until convergence:

1\) Fix ${\bf F}_\text{RFT}^{(k_\text{o})}$, ${\bf F}_\text{RFR}^{(k_\text{o})}$ and ${\bf F}_\text{BBR}^{(k_\text{o})}$, and compute ${\bf F}_\text{BBT}^{(k_\text{o}+1)} $ according to ;

2\) Use the MM method to compute ${\bf F}_\text{RFT}^{(k_\text{o}+1)}$: **Initialize** ${\bf F}_\text{RFT}^{(0)}={\bf F}_\text{RFT}^{(k_\text{o})}$ and the inner iteration counter $k_\text{i}=0$; repeatedly compute ${\bf F}_\text{RFT}^{(k_\text{i}+1)}$ according to  and set $k_\text{i} \leftarrow k_\text{i}+1$ until convergence; **Update** ${\bf F}_\text{RFT}^{(k_\text{o}+1)}={\bf F}_\text{RFT}^{(k_\text{i})}$;

3\) Fix ${\bf F}_\text{RFT}^{(k_\text{o}+1)}$, ${\bf F}_\text{BBT}^{(k_\text{o}+1)}$ and ${\bf F}_\text{RFR}^{(k_\text{o})}$, and compute ${\bf F}_\text{BBR}^{(k_\text{o}+1)} $ according to ;

4\) Use the MM method to compute ${\bf F}_\text{RFR}^{(k_\text{o}+1)}$: **Initialize** ${\bf F}_\text{RFR}^{(0)}={\bf F}_\text{RFR}^{(k_\text{o})}$ and $k_\text{i}=0$; repeatedly compute ${\bf F}_\text{RFR}^{(k_\text{i}+1)}$ according to  and set $k_\text{i} \leftarrow k_\text{i}+1$ until convergence; **Update** ${\bf F}_\text{RFR}^{(k_\text{o}+1)}={\bf F}_\text{RFR}^{(k_\text{i})}$;

5\) Set $k_\text{o} \leftarrow k_\text{o}+1$.

After convergence, normalize ${\bf F}_\text{BBT} \leftarrow \frac{\sqrt{n_\text{s}}}{ \left\| {\bf F}_\text{RFT} {\bf F}_\text{BBT} \right\|_\text{F} } {\bf F}_\text{BBT}$.
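For readers who prefer code, Algorithm \[ALG2\] can be condensed into the following NumPy sketch. It uses the matrix (rather than vectorized Kronecker) form of the MM updates and pseudo-inverses for the digital updates so that non-square factors are handled; all names and sizes are illustrative, and the snippet is a sketch of the procedure, not a reference implementation:

```python
import numpy as np

def phase_update(F, M, E, n_inner=30):
    """Inner MM loop for min ||T - F B||_F^2 over unit-modulus F,
    with M = B B^H (Hermitian PSD) and E = T B^H."""
    lam = np.linalg.eigvalsh(M)[-1]
    for _ in range(n_inner):
        F = -np.exp(1j * np.angle(F @ (M - lam * np.eye(M.shape[0])) - E))
    return F

def relay_hbf(F_opt, N_rft, N_rfr, n_s, n_outer=15, n_inner=30, seed=0):
    """Alternating closed-form digital updates and MM analog updates at the relay."""
    n_t, n_r = F_opt.shape
    rng = np.random.default_rng(seed)
    F_RFT = np.exp(1j * rng.uniform(0, 2 * np.pi, (n_t, N_rft)))
    F_RFR = np.exp(1j * rng.uniform(0, 2 * np.pi, (n_r, N_rfr)))
    F_BBR = rng.standard_normal((N_rfr, n_s)) + 1j * rng.standard_normal((N_rfr, n_s))
    for _ in range(n_outer):
        # closed-form digital transmit update
        F_BBT = np.linalg.pinv(F_RFT) @ F_opt @ np.linalg.pinv(F_BBR.conj().T @ F_RFR.conj().T)
        # MM update of the transmit analog part: T = F_opt, B = Y_T
        Y_T = F_BBT @ F_BBR.conj().T @ F_RFR.conj().T
        F_RFT = phase_update(F_RFT, Y_T @ Y_T.conj().T, F_opt @ Y_T.conj().T, n_inner)
        # closed-form digital receive update
        F_BBR = np.linalg.pinv(F_RFR) @ F_opt.conj().T @ np.linalg.pinv(F_BBT.conj().T @ F_RFT.conj().T)
        # MM update of the receive analog part: T = F_opt^H, B = Y_R^H
        Y_R = F_RFT @ F_BBT @ F_BBR.conj().T
        F_RFR = phase_update(F_RFR, Y_R.conj().T @ Y_R, F_opt.conj().T @ Y_R, n_inner)
    # transmit power normalization
    F_BBT *= np.sqrt(n_s) / np.linalg.norm(F_RFT @ F_BBT, 'fro')
    return F_RFT, F_BBT, F_RFR, F_BBR
```

Both analog updates are instances of the same generic problem $\min_{|F_{kl}|=1}\|{\bf T}-{\bf F}{\bf B}\|_\text{F}^2$, which is why one `phase_update` routine serves the transmit and receive sides.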
Convergence analysis and computational complexity
-------------------------------------------------
The general formulation for problem (P2) can be given as $$\label{22}
\min_{\bm{x},\bm{y}}~F(\bm{x},\bm{y}),~~\text{s.t.}~G(\bm{x},\bm{y})=0,$$ where $F(\cdot,\cdot)$ is bi-convex and $G(\cdot,\cdot)$ is bi-affine[^2]. According to [@boyd], ADMM can be applied to solve problem (\[22\]). A rigorous convergence proof of Algorithm 1 in this nonconvex setting is beyond the scope of this paper; instead, we demonstrate its convergence numerically in Section \[Num\_alg\_per\]. We then summarize the main complexity of Algorithm 1 in the following theorem.
If $N_\text{t}=N_\text{r}= n_\text{t}= n_\text{r}$, the main complexity of Algorithm 1 is $\mathcal{O}( 2 K_\text{A} (n_\text{s}^3+1) N_\text{t}^3 )$, where $K_\text{A}$ is the number of iterations.
For Algorithm 1, the main complexity in each iteration includes two parts:
1\) Derive ${\bf F}_\text{opt}$ based on the singular value decomposition of two channel matrices (i.e., ${\bf H}_\text{SR}$ and ${\bf H}_\text{RD}$). According to [@comon1990tracking], the main complexity in this part is $\mathcal{O}( \max( N_\text{t}, n_\text{r}) \min( N_\text{t}, n_\text{r})^2 +\max( N_\text{r}, n_\text{t}) \min( N_\text{r}, n_\text{t})^2) $.
2\) Compute $ {\bf F}_\text{T}$ and $ {\bf F}_\text{R}$ according to and , respectively. The main complexity in this part comes from the inversion operations in and , which is $\mathcal{O}( n_\text{s}^3 (n_\text{t}^3 + n_\text{r}^3))$.
Thus, the main complexity of Algorithm 1 is given by $\mathcal{O}( K_\text{A}( \max( N_\text{t}, n_\text{r}) \min( N_\text{t}, n_\text{r})^2 +\max( N_\text{r}, n_\text{t}) \min( N_\text{r}, n_\text{t})^2+ n_\text{s}^3 (n_\text{t}^3 + n_\text{r}^3) ))$.
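To make the ADMM scaffold concrete, the sketch below applies it to the bi-affine-constrained instance suggested by the metrics reported in Section \[Num\_alg\_per\], namely $\min \|{\bf F}_\text{opt}-{\bf F}_\text{T}{\bf F}_\text{R}^H\|_\text{F}^2$ subject to ${\bf F}_\text{R}^H{\bf H}_\text{SI}{\bf F}_\text{T}={\bf 0}$. This is an illustrative reconstruction rather than the paper's exact Algorithm 1; with this formulation, each primal update reduces to a Sylvester-structured least-squares solve:

```python
import numpy as np
from scipy.linalg import solve_sylvester

def admm_sic_beamforming(F_opt, H_SI, n_s, rho=1.0, n_iter=50, seed=0):
    """ADMM sketch for min ||F_opt - F_T F_R^H||_F^2  s.t.  F_R^H H_SI F_T = 0.

    Scaled-dual ADMM: each primal step minimizes the residual plus
    (rho/2)*||constraint + Lam||_F^2, whose normal equation is a Sylvester
    equation A X + X B = Q (A from the penalty, B from the data-fit term).
    """
    n_t, n_r = F_opt.shape
    rng = np.random.default_rng(seed)
    F_R = rng.standard_normal((n_r, n_s)) + 1j * rng.standard_normal((n_r, n_s))
    Lam = np.zeros((n_s, n_s), dtype=complex)            # scaled dual variable
    for _ in range(n_iter):
        # F_T update: X B B^H + (rho/2) C^H C X = F_opt B^H - (rho/2) C^H Lam
        B, C = F_R.conj().T, F_R.conj().T @ H_SI
        F_T = solve_sylvester((rho / 2) * C.conj().T @ C, B @ B.conj().T,
                              F_opt @ B.conj().T - (rho / 2) * C.conj().T @ Lam)
        # F_R update: same structure on the conjugate-transposed problem
        B2, C2 = F_T.conj().T, F_T.conj().T @ H_SI.conj().T
        F_R = solve_sylvester((rho / 2) * C2.conj().T @ C2, B2 @ B2.conj().T,
                              F_opt.conj().T @ B2.conj().T - (rho / 2) * C2.conj().T @ Lam.conj().T)
        Lam = Lam + F_R.conj().T @ H_SI @ F_T            # dual ascent on the constraint
    return F_T, F_R
```

The bi-affine structure is what keeps each subproblem a plain (convex) least-squares solve even though the joint problem is nonconvex.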
The convergence and main complexity of Algorithm 2 are summarized in the following theorem.
\[theorem\_alg2\] The convergence of Algorithm 2 is guaranteed. If $n_\text{t}=n_\text{r}$ and $n_\text{s}=N_\text{RFT}= N_\text{RFR}$, the main complexity of Algorithm 2 is $\mathcal{O}( 2K_\text{out}(K_\text{in} n_\text{t}^3 N_\text{RFT}^3 + n_\text{t}N_\text{RFT}^2 + n_\text{t}n_\text{s}^2 ) )$, where $ K_\text{out}$ and $K_\text{in}$ are the numbers of outer and inner iterations, respectively.
See Appendix \[appendix\_A\].
For the hybrid beamformer design at the source and destination, we can also utilize MM methods following procedures similar to those in Algorithm 2; details are omitted due to space limitations. The HBF algorithms proposed above are iterative and suffer from high computational complexity as the number of antennas increases. Furthermore, both the proposed and existing optimization-based HBF algorithms implement a mapping from the channel matrices to the hybrid beamformers that requires real-time computation and is not robust to noisy channel input data. Driven by the following advantages of ML [@elbir2019cnn]: (1) low complexity when solving optimization-based problems, and (2) the ability to extrapolate new features from noisy and limited training data, we propose two learning-based approaches to address these problems in the following section.
Learning-based FD mmWave beamforming design
===========================================
In this section, we will present our learning frameworks for HBF design. Firstly, we present the framework of ELM to design hybrid beamformers and the training data generation approach for robust HBF design. Then, we briefly introduce the HBF design based on CNN.
FD mmWave beamforming design with ELM
-------------------------------------
The feedforward neural network (FNN) is a powerful tool for regression and classification [@ITP98; @17; @yy]. As a special FNN, the single-layer feedforward network (SLFN) has also been investigated for its low complexity [@18]; in an SLFN, the weights of the input nodes are optimized through the training procedure. The ELM developed in [@20; @21; @22] consists of only one hidden layer, with the input weights and hidden-node biases generated randomly, and achieves fast training with acceptable generalization performance. Since processing time is one of the bottlenecks for low-latency communication, using ELM for HBF design can significantly reduce the overall delay. Moreover, given the hardware constraints (e.g., limited computational capability and memory resources) of mobile terminals, an ELM-based component is easy to implement because of its simple architecture.
Thus, in what follows, we utilize ELM to extract the features of FD mmWave channels and predict the hybrid beamformers for all nodes. As mentioned in Sec. \[FD\_mmWave\_beamforming\_design\], we design different ELM networks for different nodes. We mainly focus on the ELM network design for the relay, since the designs for the source and destination follow a similar approach.
We assume that the training dataset is $\mathcal{D}= \left\{(\bm{x}_j,\bm{t}_j)|j=1, \ldots,N \right\} $, where $\bm{x}_j$ and $\bm{t}_j$ are the sample and target of the $j$-th training data. Specifically, for the $j$-th training data, the input is $\bm{x}_j=[\text{Re}(\text{vec}(\overline {\bf{H}}^{(j)}_{\text{RD}})), \text{Im}(\text{vec}(\overline{\bf{H}}^{(j)}_{\text{RD}})),\text{Re}(\text{vec}(\overline{\bf{H}}^{(j)}_{\text{SR}})),$ $\text{Im}(\text{vec}(\overline{\bf{H}}^{(j)}_{\text{SR}})),\text{Re}(\text{vec}(\overline{\bf{H}}^{(j)}_{\text{SI}})), \text{Im}(\text{vec}(\overline{\bf{H}}^{(j)}_{\text{SI}}))]^T \in\mathbb{R}^{N_\text{I}} $ with dimension ${N_\text{I}}= 2(n_\text{r}(N_\text{t}+n_\text{t} )+N_\text{r}n_\text{t} )$, where $\overline{\bf{H}}^{(j)}_\Omega \sim \mathcal{CN}({\bf{H}}_{\Omega},\Gamma_\Omega)$ and $\Omega \in \{\text{SR},\text{RD},\text{SI} \}$ indexes the different links. Here, $\Gamma_\Omega $ denotes the variance of the additive white Gaussian noise (AWGN), with its $(m,n)$-th entry given by $[\Gamma_\Omega ]_{m,n} = |[\overline{\bf{H}}^{(j)}_\Omega ]_{m,n}|^2 -\text{SNR}_{\text{Train}} $ (dB), where $\text{SNR}_{\text{Train}}$ is the SNR of the training data[@elbir2019cnn]. The target of the $j$-th data is $\bm{t}_j=[\text{Re}(\text{vec}({\bf{F}}^{(j)}_{\text{BBT}})),\text{Im}(\text{vec}({\bf{F}}^{(j)}_{\text{BBT}})),\text{Re}(\text{vec}({\bf{F}}^{(j)}_{\text{BBR}})),$ $\text{Im}(\text{vec}({\bf{F}}^{(j)}_{\text{BBR}})),\arg(\text{vec}({\bf{F}}^{(j)}_{\text{RFT}})),\arg(\text{vec}({\bf{F}}^{(j)}_{\text{RFR}}))] \in\mathbb{R}^{N_\text{o}}$ with dimension $ N_\text{o}= n_\text{t} N_\text{RFT}+n_\text{r} N_\text{RFR}+ 2N_\text{s}(N_\text{RFR}+N_\text{RFT})$, which corresponds to the beamformers obtained by Algorithms 1 and 2 with input $\overline{\bf{H}}^{(j)}_\Omega$. The ELM with $L$ hidden nodes and activation function $g(x)$ is shown in Fig. \[fig\_ELM\].
The blocks in the input and output layers consist of as many neurons as the dimensions of the corresponding input and output, respectively. According to [@22], the output of the ELM for sample $\bm{x}_j$ can be mathematically modeled as $$\label{eq18}
\sum_{i=1}^{L}\beta_ig_i(\bm{x}_j)=\sum_{i=1}^{L}\beta_ig(\bm{w}_i^T\bm{x}_j+b_i)={\bf g}(\bm{x}_j)\bm{\beta},$$
![ELM network for HBF design at relay.[]{data-label="fig_ELM"}](ELM_model1.pdf){width="85mm"}
where $\bm{w}_i=[w_{i,1},\ldots,w_{i, N_\text{I}}]^T$ is the weight vector connecting the $i$-th hidden node and the input nodes, $\bm{\beta}=\left[\beta_1,\ldots, \beta_L\right]^T \in \mathbb{R}^{L\times N_\text{o}}$ with $\beta_i=[\beta_{i,1},\ldots,\beta_{i,N_\text{o}}]^T$ the weight vector connecting the $i$-th hidden node and the output nodes, and $b_i$ is the bias of the $i$-th hidden node. Considering all the samples in $\mathcal{D}$, we stack (\[eq18\]) to obtain the hidden-layer output as
$${\bf G}=\left[\begin{matrix}
{\bf g}(\bm{x}_1)\\\vdots\\ {\bf g}(\bm{x}_N)
\end{matrix} \right]=\left[\begin{matrix}
g_1(\bm{x}_1) & \cdots &g_L(\bm{x}_1) \\
\vdots & \cdots & \vdots \\
g_1(\bm{x}_N) & \cdots &g_L(\bm{x}_N)
\end{matrix}\right]_{N\times L}.$$
In effect, ${\bf G}$ is the feature mapping of the training data: each sample is mapped from the $N_\text{I}$-dimensional input space into the $L$-dimensional hidden-layer feature space.
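In code, the feature mapping is a single matrix operation. Here `W` stores the randomly drawn input weights $\bm{w}_i$ row-wise, `b` the biases $b_i$, and `tanh` stands in for a generic activation $g$ (names illustrative):

```python
import numpy as np

def elm_hidden_output(X, W, b, g=np.tanh):
    """Hidden-layer output G of an ELM: G[j, i] = g(w_i^T x_j + b_i).

    X : (N, N_I) samples row-wise, W : (L, N_I) random input weights,
    b : (L,) random biases; returns the (N, L) feature matrix G.
    """
    return g(X @ W.T + b)
```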
![CNN network for HBF design at relay.[]{data-label="fig_CNN"}](CNN_model.pdf){width="88mm"}
Since there is only one hidden layer in the ELM, with randomized weights $\{\bm{w}_i\}$ and biases $\{b_i\}$, the goal is to tune the output weight matrix $\bm{\beta}$ on the training data $\mathcal{D}$ by minimizing the ridge regression problem $$(\text{P}7):~ \bm{\beta}^*=\arg\min_{\bm{\beta}}~\frac{\lambda}{2}\left\| {\bf G}\bm{\beta}-{\bf T}\right\|^2 + \frac{1}{2} \left\| \bm{\beta} \right\|^2,$$ where $ {\bf T}=\left[\bm{t}_{1},\ldots, \bm{t}_N\right]^T_{N\times N_\text{o}}$ is the concatenated target and $\lambda$ is the trade-off parameter between the training error and the regularization term. According to [@21], the closed-form solution of (P7) is $$\label{beta1}
\bm{\beta}^*={\bf G}^T\left(\frac{\bf{I}}{\lambda}+{\bf G}{\bf G}^T \right)^{-1}{\bf T},~~N\leq L,$$ or $$\label{beta2}
\bm{\beta}^*=\left(\frac{\bf{I}}{\lambda}+{\bf G}^T {\bf G}\right)^{-1}{\bf G}^T{\bf T},~~N>L,$$ where the first expression is preferable when the number of training samples is small ($N\leq L$) and the second when it is large ($N>L$), since they require inverting an $N\times N$ and an $L\times L$ matrix, respectively. From the above, we can see that ELM has very low complexity: only the output-layer weights need to be trained, and $\bm{\beta}$ is given in closed form.
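Both closed-form expressions amount to a few lines of NumPy. The sketch below picks whichever branch inverts the smaller Gram matrix; setting the gradient $\lambda{\bf G}^T({\bf G}\bm{\beta}-{\bf T})+\bm{\beta}$ of (P7) to zero recovers exactly these formulas, which also provides a quick correctness check (names illustrative):

```python
import numpy as np

def elm_fit(G, T, lam):
    """Closed-form output weights of (P7):
    argmin_beta (lam/2)*||G @ beta - T||^2 + (1/2)*||beta||^2.

    Uses the N <= L or N > L expression from the text, i.e. inverts an
    N x N or an L x L matrix, whichever is smaller.
    """
    N, L = G.shape
    if N <= L:
        return G.T @ np.linalg.solve(np.eye(N) / lam + G @ G.T, T)
    return np.linalg.solve(np.eye(L) / lam + G.T @ G, G.T @ T)
```

The two branches agree by the push-through identity $({\bf G}^T{\bf G}+{\bf I}/\lambda)^{-1}{\bf G}^T = {\bf G}^T({\bf G}{\bf G}^T+{\bf I}/\lambda)^{-1}$.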
FD mmWave beamforming design with CNN
-------------------------------------
Owing to its ability to compress data, the CNN is another promising learning network for solving communication problems at the physical layer. Some recent CNN-based HBF designs are presented in [@elbir2019cnn; @bao2020deep], but these works address only single-hop wireless communications. Based on the CNN-based hybrid beamforming model in [@elbir2019cnn; @bao2020deep], we extend it to FD mmWave systems. The CNN architecture is shown in Fig. \[fig\_CNN\]; it has a total of eleven layers, including an input layer, two convolutional layers, three fully connected layers, a regression output layer and four activation layers placed after the convolutional and fully connected layers. The detailed parameters of each layer are given in Fig. \[fig\_CNN\]. Different from ELM, the $j$-th input ${\bf{X}}_j$ of the CNN is a three-dimensional (3D) real array with size $N_\text{r}^\text{m} \times N_\text{t}^\text{m} \times2 $, where $N_\text{t}^\text{m} = \max(N_\text{t}, n_\text{t}) $ and $N_\text{r}^\text{m} = \max(N_\text{r}, n_\text{r}) $. We define the first channel of the input as the element-wise real part of the input channel matrices, $ [ {\bf{X}}_j]_{:,:,1} =[\text{Re}(\overline {\bf{H}}^{(j)}_{\text{SR}}), \text{Re}(\overline {\bf{H}}^{(j)}_{\text{RD}}), \text{Re}(\overline {\bf{H}}^{(j)}_{\text{SI}}) ]$, and the second channel as the element-wise imaginary part, $ [ {\bf{X}}_j]_{:,:,2} =[\text{Im}(\overline {\bf{H}}^{(j)}_{\text{SR}}), \text{Im}(\overline {\bf{H}}^{(j)}_{\text{RD}}), \text{Im}(\overline {\bf{H}}^{(j)}_{\text{SI}}) ]$. The output of the CNN is the same as that of the ELM and is obtained from Algorithm \[ALG2\]. More details on the CNN can be found in [@elbir2019cnn].
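The construction of the CNN input tensor can be sketched as follows. Since the three channel matrices generally have different shapes, the stated $N_\text{r}^\text{m} \times N_\text{t}^\text{m} \times 2$ size and the column-wise concatenation cannot both hold exactly; this sketch zero-pads each matrix to $N_\text{r}^\text{m} \times N_\text{t}^\text{m}$ before concatenating, which is an assumption where the text is ambiguous:

```python
import numpy as np

def cnn_input(H_SR, H_RD, H_SI):
    """Builds the real-valued CNN input from the three (noisy) channel matrices.

    Each matrix is zero-padded to N_r^m x N_t^m (an assumed convention), the
    padded matrices are concatenated column-wise, and real parts go into depth
    slice 0, imaginary parts into slice 1; output shape is N_r^m x 3*N_t^m x 2.
    """
    n_rm = max(H.shape[0] for H in (H_SR, H_RD, H_SI))
    n_tm = max(H.shape[1] for H in (H_SR, H_RD, H_SI))

    def pad(H):
        P = np.zeros((n_rm, n_tm), dtype=complex)
        P[:H.shape[0], :H.shape[1]] = H
        return P

    stacked = np.hstack([pad(H) for H in (H_SR, H_RD, H_SI)])   # N_r^m x 3*N_t^m
    return np.stack([stacked.real, stacked.imag], axis=-1)      # ... x 2
```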
Numerical simulations
=====================
In this section, we numerically evaluate the performance of the proposed MM- and ADMM-based HBF algorithm (MM-ADMM-HBF), the ELM-based HBF method (ELM-HBF) and the CNN-based HBF method (CNN-HBF). We compare our results with four benchmark algorithms: SI-free fully digital beamforming (Full-D), fully digital beamforming with SI (Full-D with SI), HD fully digital beamforming (HD Full-D) and the OMP-based HBF method (OMP-HBF) [@zhang2019precoding]. The channel parameters are set to $N_\text{c}= 5$, $N_\text{p}= 10$, $d=\frac{\lambda}{2}$ and $\alpha_{k,l} \sim \mathcal{CN}(0,1)$[@zhang2019precoding]. The bandwidth of the system is $2$ GHz with central carrier frequency $f_c=28$ GHz. According to [@zhang2019precoding], the path loss is $P_\text{loss}= 61.5+20\log(r)+\varepsilon $ (dB), where $\varepsilon \sim N(0,5.8) $ and $r$ denotes the distance between transmitter and receiver. We assume that the distances between source and relay and between relay and destination are $r_\text{sr}=100$ m and $r_\text{rd}=100$ m, respectively. We assume that all nodes in the FD mmWave system have the same hardware constraints, with $N_\text{t}=n_\text{t}$, $N_\text{r}=n_\text{r}$, $N_\text{RFR}=N_\text{RFD}$, $N_\text{s}=n_\text{s}$ and $N_\text{RFT}=N_\text{RFS}$. In both the training and testing stages, AWGN with power corresponding to $\text{SNR}_{\text{Train}}=\text{SNR}_{\text{Test}} \in \{15, 20, 25\}$ dB is added to each channel realization.
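The path-loss model of the setup is a one-liner; the base-10 logarithm is assumed for the dB expression, and the shadowing term $\varepsilon$ is passed in explicitly (rather than drawn internally) so the function stays deterministic:

```python
import numpy as np

def path_loss_db(r, shadowing_db=0.0):
    """mmWave path loss from the simulation setup:
    P_loss = 61.5 + 20*log10(r) + eps (dB), with eps ~ N(0, 5.8)
    drawn by the caller and passed as `shadowing_db`.
    """
    return 61.5 + 20.0 * np.log10(r) + shadowing_db
```

At the assumed link distances $r_\text{sr}=r_\text{rd}=100$ m this gives $101.5$ dB before shadowing.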
Performance of ADMM and MM based beamforming {#Num_alg_per}
--------------------------------------------
Fig. \[fig:4\] summarizes the performance of the proposed beamforming algorithms versus the number of iterations for different numbers of antennas. The first panel shows the MSE performance (i.e., $\left\| {\bf F}_\text{opt} - {\bf F}_\text{T} {\bf F}_\text{R}^{H} \right\|_\text{F}^2$) of the ADMM-based beamforming algorithm (Algorithm 1). We observe fast convergence of the proposed algorithm, although the convergence rate decreases as the number of antennas increases. The results also show that the MSE of Algorithm 1 at convergence decreases as the number of antennas increases. The second panel shows the SIC performance (i.e., $ {\|{\bf F}_{\rm R}^{H} {\bf H}_{\rm SI} {\bf F}_{\rm T} \|}_{\rm{F}}^2$) of Algorithm 1: the power of the SI decreases as the iterations proceed, and a larger number of antennas eliminates the SI faster. The third panel shows the MSE performance (i.e., $\left\| \hat{\bf F}_\text{opt} - {\bf F}_\text{RFT} {\bf F}_\text{BBT} {\bf F}_\text{BBR}^{H} {\bf F}_\text{RFR}^{H} \right\|_\text{F}^2$) of the MM-based HBF algorithm (Algorithm 2), which exhibits a very fast convergence rate even with a large number of antennas (e.g., $N_\text{t}=64$).
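The two convergence metrics tracked in Fig. \[fig:4\] are straightforward to compute from the beamformers (function names illustrative):

```python
import numpy as np

def mse_metric(F_opt, F_T, F_R):
    """Beamforming MSE ||F_opt - F_T F_R^H||_F^2."""
    return np.linalg.norm(F_opt - F_T @ F_R.conj().T, 'fro') ** 2

def sic_metric(H_SI, F_T, F_R):
    """Residual self-interference power ||F_R^H H_SI F_T||_F^2."""
    return np.linalg.norm(F_R.conj().T @ H_SI @ F_T, 'fro') ** 2
```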
![Spectral efficiency of various HBF algorithms vs SNR with $N_t=N_r=36$, $N_\text{RFR}=6$, $N_\text{RFT}=8$ and $N_\text{s}=2$.[]{data-label="fig_SE_all_1a"}](fig36_1.pdf){width="82mm"}
![Spectral efficiency of various HBF algorithms vs SNR with $N_t=N_r=36$, $N_\text{RFR}=6$, $N_\text{RFT}=8$ and $N_\text{s}=4$.[]{data-label="fig_SE_all_1b"}](fig36_2.pdf){width="82mm"}
Hybrid beamforming performance
------------------------------
Fig. \[fig\_SE\_all\_1a\] and Fig. \[fig\_SE\_all\_1b\] show the spectral efficiency of the proposed and existing HBF methods versus SNR for different numbers of transmitted streams. From Fig. \[fig\_SE\_all\_1a\], we can see that the proposed MM-ADMM based HBF algorithm approximately achieves the performance of fully digital beamforming without SI, which means that it attains near-optimal HBF while guaranteeing efficient SIC. It also significantly outperforms the OMP-based algorithm; the reason is that the OMP-based algorithm in [@zhang2019precoding] eliminates SI by adjusting the derived optimal beamformers, which significantly degrades the spectral efficiency. Furthermore, the proposed CNN-based and ELM-based HBF methods outperform the other methods; their gain comes from extracting features of the noisy input data (i.e., the imperfect channels of the different links), which makes them robust to channel imperfections. All proposed methods approximately achieve twice the spectral efficiency of the HD system. Fig. \[fig\_SE\_all\_1b\] shows that the proposed methods also achieve high spectral efficiency as the number of transmitted streams increases. However, the spectral efficiency of the OMP-based algorithm drops below even that of FD fully digital beamforming with SI, because the OMP-based algorithm can eliminate SI only when $N_\text{RFT}\ge N_\text{RFR}+N_\text{s}$. Comparing Fig. \[fig\_SE\_all\_1a\] with Fig. \[fig\_SE\_all\_1b\], we find that the spectral efficiency increases significantly with the number of transmitted streams.
Fig. \[fig\_Nt\] shows two groups of spectral efficiency results for different numbers of antennas. In each group, simulation results of the three proposed methods are presented together with the OMP-HBF and Full-D beamforming methods. The spectral efficiency increases with the number of antennas, while the gap among the results within each group decreases. Moreover, the proposed methods achieve higher spectral efficiency than the OMP-HBF method, and they approximately achieve the performance of Full-D beamforming without SI when $N_\text{t}=N_\text{r}=64$. Finally, the proposed learning-based HBF methods (i.e., ELM-HBF and CNN-HBF) outperform the optimization-based HBF methods (i.e., OMP-HBF and MM-ADMM-HBF).
![Spectral efficiency of various HBF algorithms vs SNR and different numbers of antennas with $N_t=N_r$, $N_\text{RFR}=4$, $N_\text{RFT}=6$.[]{data-label="fig_Nt"}](fig_Nt.pdf){width="82mm"}
![Spectral efficiency of various HBF algorithms vs $\text{SNR}_{\text{Test}}$ with $N_t=N_r=36$, $N_\text{RFR}=4$, $N_\text{RFT}=6$ and $\text{SNR}=-8$ dB.[]{data-label="fig_robust"}](fig_robust.pdf){width="82mm"}
To evaluate the robustness of the algorithms, we present the spectral efficiency of the various HBF algorithms versus different noise levels (i.e., $\text{SNR}_{\text{Test}}$) in Fig. \[fig\_robust\]. Note that SI-free fully digital beamforming (Full-D) is fed with perfect CSI and therefore achieves the best performance. From Fig. \[fig\_robust\], we can see that the performance of all methods increases with increasing $\text{SNR}_{\text{Test}}$. Both ELM-HBF and CNN-HBF are more robust against corruption of the channel data than the other methods: the proposed learning-based methods estimate the beamformers by extracting features of the noisy input data, whereas the MM-ADMM and OMP methods require optimal digital beamformers derived from the noisy channels. Furthermore, ELM-HBF outperforms CNN-HBF, since the optimal output weights of the ELM network are available in closed form, while the multi-layer parameters of the CNN are hard to optimize. Finally, ELM-HBF approaches the optimal performance as $\text{SNR}_{\text{Test}}$ increases.
Computational complexity
------------------------
In this part, we measure the computation time of the proposed HBF approaches and compare them with OMP-HBF. The computation time of a learning machine includes the offline training time and the online prediction time. Since the network performance and training time depend on the activation function used in the hidden nodes, we compare the following three common activation functions for ELM:
\(1) Sigmoid function $$g(\bm{w},\bm{x},b)=\frac{1}{1+\exp (-\bm{w}^T\bm{x}-b )};$$
\(2) Multiquadric radial basis function (RBF) $$g(\bm{w},\bm{x},b)=\sqrt{\left\| \bm{x}- \bm{w}\right\|^2 +b^2};$$
\(3) Parametric rectified linear unit (PReLU) function $$g(\bm{w},\bm{x},a)=\max (0,\bm{w}^T\bm{x} )+ a \min (0,\bm{w}^T\bm{x} ).$$ For CNN, the multi-layer structure leads to high computational complexity, so a simple rectified linear unit (ReLU) activation function (i.e., $g(\bm{w},\bm{x})=\max(0,\bm{w}^T\bm{x})$) is commonly used to reduce the training complexity. Results in [@bao2020deep; @elbir2019joint; @elbir2019cnn] show that CNNs with ReLU achieve good classification performance; thus, we adopt the ReLU activation function for the CNN. We set $N_\text{s}=2$, $N_\text{RFR}=4$, $N_\text{RFT}=6$ and $\text{SNR}=-8$ dB. $1000$ channel samples over $10$ channel realizations are fed into the learning machines, and $100$ channel samples are used for testing. For the different approaches, we summarize the spectral efficiency (SE), training time and prediction time in Table \[complexity\_SE\].
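For reference, the three ELM activations above, together with the ReLU used in the CNN, can be written directly as NumPy functions following the formulas in the text:

```python
import numpy as np

def sigmoid(w, x, b):
    """Sigmoid hidden-node activation g(w, x, b)."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

def multiquadric_rbf(w, x, b):
    """Multiquadric RBF activation, centered at the weight vector w."""
    return np.sqrt(np.linalg.norm(x - w) ** 2 + b ** 2)

def prelu(w, x, a):
    """Parametric ReLU with negative-side slope a."""
    z = w @ x
    return np.maximum(0.0, z) + a * np.minimum(0.0, z)

def relu(w, x):
    """Plain ReLU, as used in the CNN layers."""
    return np.maximum(0.0, w @ x)
```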
We can see that the proposed ELM-HBF and CNN-HBF methods achieve higher spectral efficiency and shorter prediction time than the optimization-based methods (i.e., MM-ADMM and OMP). In addition, the prediction time increases with the number of antennas. CNN and ELM with PReLU achieve very short prediction times (e.g., less than $0.06$ s for the case with $N_\text{t}=64$). Although ELM with the multiquadric RBF achieves slightly higher spectral efficiency than with PReLU, it requires almost ten times the prediction time and a hundred times the training time; for instance, the training time with the multiquadric RBF is about $1146.1$ s, compared with about $6.1633$ s for PReLU for the case with $N_\text{t}=64$. The results also show that CNN always requires longer training time and achieves lower spectral efficiency than ELM; for instance, CNN takes about $600$ times the training time of ELM with PReLU for the case with $N_\text{t}=64$.
Conclusions
===========
We proposed two learning schemes for the HBF design of FD mmWave systems, i.e., ELM-HBF and CNN-HBF. The learning machines take the noisy channels of the different nodes as inputs and output the hybrid beamformers. To provide accurate labels for the input channel data, we first proposed an ADMM-based algorithm to achieve SIC beamforming, and then an MM-based algorithm for joint transmit and receive HBF optimization. The convergence and complexity of both algorithms were analyzed, and the effectiveness of the proposed methods was evaluated through several experiments. The results illustrate that both the ADMM- and MM-based algorithms converge and that the SI can be effectively suppressed. They also show that the proposed ELM-HBF and CNN-HBF methods achieve higher spectral efficiency and much shorter prediction time than conventional optimization-based methods, and that the proposed learning-based methods achieve more robust HBF performance than conventional methods. In addition, ELM-HBF with the PReLU activation function requires much less training time than with the sigmoid or RBF activation functions. Since ELM-HBF achieves much shorter computation time and more robust HBF performance than CNN-HBF, it may be the more practical choice for implementation.
Proof of Theorem \[theorem\_alg2\] {#appendix_A}
===================================
To prove the convergence of Algorithm 2, we first analyze the convergence of the MM algorithm when calculating ${\bf F}_\text{RFT}^{(k_\text{o}+1)}$ in the inner MM loop for ${\bf F}_\text{RFT}$. According to the majorizer of $J({\bf F}_\text{RFT}; {\bf Y}_\text{T} )$ in , we have the following four properties,
$$\begin{aligned}
J\left({\bf F}_\text{RFT}^{\left(k_\text{i}\right)}; {\bf Y}_\text{T}^{(k_\text{o})} \right)& = \bar J\left({\bf F}_\text{RFT}^{(k_\text{i})}; {\bf Y}_\text{T}^{(k_\text{o})},{\bf F}_\text{RFT}^{(k_\text{i})} \right), \label{Property1}\\
\nabla_{{\bf F}_\text{RFT}} J\left({\bf F}_\text{RFT}; {\bf Y}_\text{T}^{(k_\text{o})} \right) &=\nabla_{{\bf F}_\text{RFT}} \bar J\left({\bf F}_\text{RFT}; {\bf Y}_\text{T}^{(k_\text{o})},{\bf F}_\text{RFT}^{(k_\text{i})} \right), \label{Property2}\\
J\left({\bf F}_\text{RFT}^{(k_\text{i}+1)}; {\bf Y}_\text{T}^{(k_\text{o})} \right)& \mathop \le \limits^{(a)} \bar J\left({\bf F}_\text{RFT}^{(k_\text{i}+1)}; {\bf Y}_\text{T}^{(k_\text{o})},{\bf F}_\text{RFT}^{(k_\text{i})} \right), \label{Property3}\\
\bar J\left({\bf F}_\text{RFT}^{(k_\text{i}+1)}; {\bf Y}_\text{T}^{(k_\text{o})},{\bf F}_\text{RFT}^{(k_\text{i})} \right) & \mathop \le \limits^{(b)} \bar J\left({\bf F}_\text{RFT}^{(k_\text{i})}; {\bf Y}_\text{T}^{(k_\text{o})},{\bf F}_\text{RFT}^{(k_\text{i})} \right), \label{Property4}
\end{aligned}$$
where (a) follows from $\lambda_\text{T}{\bf I} \ge {\bf Q}_\text{T}$, (b) follows from $ \bar J({\bf F}_\text{RFT}^{(k_\text{i}+1)}; {\bf Y}_\text{T}^{(k_\text{o})},{\bf F}_\text{RFT}^{(k_\text{i})} )= \mathop{\min}\limits_{{\bf F}_\text{RFT}} \bar J({\bf F}_\text{RFT}; {\bf Y}_\text{T}^{(k_\text{o})},{\bf F}_\text{RFT}^{(k_\text{i})} )$, and $ {\bf Y}_\text{T}^{(k_\text{o})}={\bf F}_\text{BBT}^{(k_\text{o}+1)} {\bf F}_\text{BBR}^{H(k_\text{o})} {\bf F}_\text{RFR}^{H(k_\text{o})}$. Based on properties , and , we obtain $$\label{inequality1}
\begin{split}
J\left({\bf F}_\text{RFT}^{(k_\text{i}+1)}; {\bf Y}_\text{T}^{(k_\text{o})} \right) &\le \bar J\left({\bf F}_\text{RFT}^{(k_\text{i}+1)}; {\bf Y}_\text{T}^{(k_\text{o})},{\bf F}_\text{RFT}^{(k_\text{i})} \right) \\&\le \bar J\left({\bf F}_\text{RFT}^{(k_\text{i})}; {\bf Y}_\text{T}^{(k_\text{o})},{\bf F}_\text{RFT}^{(k_\text{i})} \right)\\&=J\left({\bf F}_\text{RFT}^{(k_\text{i})}; {\bf Y}_\text{T}^{(k_\text{o})} \right).
\end{split}$$ Thus, $\{ J({\bf F}_\text{RFT}^{(k_\text{i})}; {\bf Y}_\text{T}^{(k_\text{o})} )\}$ is a non-increasing sequence, and it converges since $J({\bf F}_\text{RFT}; {\bf Y}_\text{T} )$ is bounded below. Further, since $J({\bf F}_\text{RFT}; {\bf Y}_\text{T}^{(k_\text{o})} )$ and $ \bar J({\bf F}_\text{RFT}; {\bf Y}_\text{T}^{(k_\text{o})},{\bf F}_\text{RFT}^{(k_\text{i})} )$ have the same gradient at the point ${\bf F}_\text{RFT}^{(k_\text{i})} \in \mathcal{F}_\text{RFT}$ according to , ${\bf F}_\text{RFT}^{(k_\text{i})}$ converges to a stationary point of the original problem (P4). After the inner MM loop for ${\bf F}_\text{RFT}$ converges, we have $J({\bf F}_\text{RFT}^{(k_\text{o}+1)}; {\bf Y}_\text{T}^{(k_\text{o})} ) \le J({\bf F}_\text{RFT}^{(k_\text{o})}; {\bf Y}_\text{T}^{(k_\text{o})} )$. Similarly, the convergence of the MM algorithm when calculating ${\bf F}_\text{RFR}^{(k_\text{o}+1)}$ in the inner MM loop for ${\bf F}_\text{RFR}$ can be proved with the following inequalities $$\label{inequality2}
\begin{split}
J\left({\bf F}_\text{RFR}^{(k_\text{i}+1)}; {\bf Y}_\text{R}^{(k_\text{o}+1)} \right) &\le \tilde J\left({\bf F}_\text{RFR}^{(k_\text{i}+1)}; {\bf Y}_\text{R}^{(k_\text{o}+1)},{\bf F}_\text{RFR}^{(k_\text{i})} \right) \\&\le \tilde J\left({\bf F}_\text{RFR}^{(k_\text{i})}; {\bf Y}_\text{R}^{(k_\text{o}+1)},{\bf F}_\text{RFR}^{(k_\text{i})} \right)\\&=J\left({\bf F}_\text{RFR}^{(k_\text{i})}; {\bf Y}_\text{R}^{(k_\text{o}+1)} \right),
\end{split}$$ where ${\bf Y}_\text{R}^{(k_\text{o}+1)} = {\bf F}_\text{RFT}^{(k_\text{o}+1)} {\bf F}_\text{BBT}^{(k_\text{o}+1)} {\bf F}_\text{BBR}^{H(k_\text{o}+1)}$. After the inner MM loop for ${\bf F}_\text{RFR}$ converges, we obtain $J({\bf F}_\text{RFR}^{(k_\text{o}+1)}; {\bf Y}_\text{R}^{(k_\text{o}+1)} ) \le J({\bf F}_\text{RFR}^{(k_\text{o})}; {\bf Y}_\text{R}^{(k_\text{o}+1)} )$. Then, based on the above observations, we have $$\begin{split}
& J\left({\bf F}_\text{RFT}^{(k_\text{o})},{\bf F}_\text{BBT}^{(k_\text{o})}, {\bf F}_\text{BBR}^{(k_\text{o})}, {\bf F}_\text{RFR}^{(k_\text{o})} \right) =J\left({\bf F}_\text{RFT}^{(k_\text{o})};{\bf F}_\text{BBT}^{(k_\text{o})} {\bf F}_\text{BBR}^{H(k_\text{o})} {\bf F}_\text{RFR}^{H(k_\text{o})} \right) \\
&\qquad \qquad\qquad \qquad\quad \mathop \ge \limits^{(a)} J\left({\bf F}_\text{RFT}^{(k_\text{o})};{\bf F}_\text{BBT}^{(k_\text{o}+1)} {\bf F}_\text{BBR}^{H(k_\text{o})} {\bf F}_\text{RFR}^{H(k_\text{o})} \right)\\
&\qquad \qquad\qquad\quad \qquad\mathop \ge \limits^{(b)} J\left({\bf F}_\text{RFT}^{(k_\text{o}+1)};{\bf F}_\text{BBT}^{(k_\text{o}+1)} {\bf F}_\text{BBR}^{H(k_\text{o})} {\bf F}_\text{RFR}^{H(k_\text{o})} \right)\\
&\qquad \qquad\qquad\quad \qquad= J\left({\bf F}_\text{RFR}^{(k_\text{o})}; {\bf F}_\text{RFT}^{(k_\text{o}+1)} {\bf F}_\text{BBT}^{(k_\text{o}+1)} {\bf F}_\text{BBR}^{H(k_\text{o})}\right)\\
&\qquad \qquad\qquad\quad \qquad\mathop \ge \limits^{(c)} J\left({\bf F}_\text{RFR}^{(k_\text{o})}; {\bf F}_\text{RFT}^{(k_\text{o}+1)} {\bf F}_\text{BBT}^{(k_\text{o}+1)} {\bf F}_\text{BBR}^{H(k_\text{o}+1)}\right)\\
&\qquad \qquad\qquad\quad \qquad\mathop \ge \limits^{(d)} J\left({\bf F}_\text{RFR}^{(k_\text{o}+1)}; {\bf F}_\text{RFT}^{(k_\text{o}+1)} {\bf F}_\text{BBT}^{(k_\text{o}+1)} {\bf F}_\text{BBR}^{H(k_\text{o}+1)}\right)\\
&\qquad \qquad\qquad\quad \qquad= J\left({\bf F}_\text{RFT}^{(k_\text{o}+1)},{\bf F}_\text{BBT}^{(k_\text{o}+1)}, {\bf F}_\text{BBR}^{(k_\text{o}+1)}, {\bf F}_\text{RFR}^{(k_\text{o}+1)} \right), \\
\end{split}$$ where (a) and (c) respectively follow from $$\begin{split}
J\left({\bf F}_\text{RFT}^{(k_\text{o})};{\bf F}_\text{BBT}^{(k_\text{o}+1)} \right.& \left.{\bf F}_\text{BBR}^{H(k_\text{o})} {\bf F}_\text{RFR}^{H(k_\text{o})} \right) \\
& = \mathop{\min}\limits_{{\bf F}_\text{BBT}} J\left({\bf F}_\text{RFT}^{(k_\text{o})};{\bf F}_\text{BBT} {\bf F}_\text{BBR}^{H(k_\text{o})} {\bf F}_\text{RFR}^{H(k_\text{o})} \right),
\end{split}$$ and $$\begin{split}
J\left({\bf F}_\text{RFR}^{(k_\text{o})}; {\bf F}_\text{RFT}^{(k_\text{o}+1)} \right.& \left.{\bf F}_\text{BBT}^{H(k_\text{o}+1)} {\bf F}_\text{BBR}^{H(k_\text{o}+1)}\right) \\
&= \mathop{\min}\limits_{{\bf F}_\text{BBR}} J\left({\bf F}_\text{RFR}^{(k_\text{o})}; {\bf F}_\text{RFT}^{(k_\text{o}+1)} {\bf F}_\text{BBT}^{H(k_\text{o}+1)} {\bf F}_\text{BBR}^H\right),
\end{split}$$ while (b) and (d) follow from the corresponding minimization updates of ${\bf F}_\text{RFT}$ and ${\bf F}_\text{RFR}$, respectively. Thus, $\{J({\bf F}_\text{RFT}^{(k_\text{o})},{\bf F}_\text{BBT}^{(k_\text{o})}, {\bf F}_\text{BBR}^{(k_\text{o})}, {\bf F}_\text{RFR}^{(k_\text{o})} ) \}$ is a non-increasing sequence; since $J({\bf F}_\text{RFT},{\bf F}_\text{BBT}, {\bf F}_\text{BBR}, {\bf F}_\text{RFR})$ is bounded from below, the sequence converges. This completes the convergence proof of Algorithm 2.
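The chain (a)–(d) is the standard monotonicity argument for alternating (block-coordinate) minimization: each block update is an exact minimizer with the remaining blocks held fixed, so the objective cannot increase. A toy sketch (a hypothetical convex quadratic in two blocks, unrelated to the precoder matrices above) illustrates the mechanism:

```python
import numpy as np

# Toy objective J(x, y) = ||A x + B y - c||^2; A, B, c are made up for illustration.
rng = np.random.default_rng(0)
A = rng.standard_normal((8, 3))
B = rng.standard_normal((8, 3))
c = rng.standard_normal(8)

def J(x, y):
    return float(np.sum((A @ x + B @ y - c) ** 2))

x = np.zeros(3)
y = np.zeros(3)
values = [J(x, y)]
for _ in range(20):
    # Each least-squares solve is an exact block minimizer, so J is non-increasing.
    x = np.linalg.lstsq(A, c - B @ y, rcond=None)[0]  # min over x, y fixed
    y = np.linalg.lstsq(B, c - A @ x, rcond=None)[0]  # min over y, x fixed
    values.append(J(x, y))
```

Since the objective is also bounded below (by zero), the sequence of values converges, exactly as in the proof above.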
For Algorithm 2, the main complexity in each iteration consists of the following two parts:
1\) Compute ${\bf F}_\text{BBT}$ and ${\bf F}_\text{BBR}$. The complexity of the pseudo-inversions can be measured by that of the singular value decomposition. Thus, the main complexity of this part is $\mathcal{O}( n_\text{t}N_\text{RFT}^2 +n_\text{r} N_\text{RFR}^2+ n_\text{s}^2( n_\text{r}+n_\text{t} ) )$.
2\) Compute ${\bf F}_\text{RFT}$ and ${\bf F}_\text{RFR}$ with MM methods. The dominant cost comes from finding the maximum eigenvalues of ${\bf Q}_\text{T}$ and ${\bf Q}_\text{R}$, giving a complexity of $\mathcal{O}( (n_\text{r}N_\text{RFR})^3+(n_\text{t}N_\text{RFT})^3 )$ for this part.
Thus, the main complexity for Algorithm 2 is given by $\mathcal{O}( K_\text{out}(K_\text{in}( (n_\text{r}N_\text{RFR})^3+(n_\text{t}N_\text{RFT})^3 )+ n_\text{t}N_\text{RFT}^2 +n_\text{r} N_\text{RFR}^2+ n_\text{s}^2( n_\text{r}+n_\text{t} ) ) ) $.
[^1]: S. Huang, Y. Ye and M. Xiao are with the Division of Information Science and Engineering, KTH Royal Institute of Technology, Stockholm, Sweden (e-mail: {shahua, yu9, mingx}@kth.se).
[^2]: In other words, for any fixed $\bm{x},\bm{y}$, $F(\cdot,\bm{y})$ and $F( \bm{x},\cdot)$ are convex; while $G(\cdot,\bm{y})$ and $G(\bm{x},\cdot)$ are affine.
---
abstract: 'We report on the design, verification and performance of [<span style="font-variant:small-caps;">MuMax$^3$</span>]{}, an open-source GPU-accelerated micromagnetic simulation program. This software solves the time- and space dependent magnetization evolution in nano- to micro scale magnets using a finite-difference discretization. Its high performance and low memory requirements allow for large-scale simulations to be performed in limited time and on inexpensive hardware. We verified each part of the software by comparing results to analytical values where available and to micromagnetic standard problems. [<span style="font-variant:small-caps;">MuMax$^3$</span>]{}also offers specific extensions like MFM image generation, moving simulation window, edge charge removal and material grains.'
author:
- 'Arne Vansteenkiste$^1$, Jonathan Leliaert$^1$, Mykola Dvornik$^1$, Felipe Garcia-Sanchez$^{2,3}$ and Bartel Van Waeyenberge$^1$'
bibliography:
- 'bibliography.bib'
title: The design and verification of MuMax3
---
Introduction
============
[<span style="font-variant:small-caps;">MuMax$^3$</span>]{}is a GPU-accelerated micromagnetic simulation program. It calculates the space- and time-dependent magnetization dynamics in nano- to micro-sized ferromagnets using a finite-difference discretization. A similar technique is used by the open-source programs OOMMF[@oommf] (CPU) and MicroMagnum[@micromagnum] (GPU), and the commercial GpMagnet[@Lopez-Diaz2012] (GPU).\
[<span style="font-variant:small-caps;">MuMax$^3$</span>]{}is open-source software written in Go[@go] and CUDA[@cuda], and is freely available under the GPLv3 license on <http://mumax.github.io>. In addition to the terms of the GPL, we kindly request that any work using [<span style="font-variant:small-caps;">MuMax$^3$</span>]{}refers to the latter website and this paper. An nVIDIA GPU and a Linux, Windows or Mac platform are required to run the software. Apart from nVIDIA’s GPU driver, no other dependencies are required to run [<span style="font-variant:small-caps;">MuMax$^3$</span>]{}.\
Finite-element micromagnetic software exists as well, e.g., NMag[@nmag], TetraMag[@Kakay2010], MagPar[@scholz03] and FastMag[@Chang2011]. These programs offer more geometrical flexibility than finite-difference methods, at the expense of performance.\
In this paper we first describe each of [<span style="font-variant:small-caps;">MuMax$^3$</span>]{}’s components and verify their individual correctness and accuracy. Then we address the micromagnetic standard problems [@mumag], where all software components have to work correctly together to solve real-world simulations. We typically compare against OOMMF[@oommf], which has been widely used and thoroughly tested for more than a decade. Finally, we report on the performance in terms of speed and memory consumption.\
The complete input files used to generate the graphs in this paper are available in appendix \[appendixA\], allowing for each of the presented results to be reproduced independently. The scripts were executed with <span style="font-variant:small-caps;">MuMax</span> version 3.6.\
Design
======
Material Regions
----------------
[<span style="font-variant:small-caps;">MuMax$^3$</span>]{}employs a finite difference (FD) discretization of space using a 2D or 3D grid of orthorhombic cells. Volumetric quantities, like the magnetization and effective field, are treated at the center of each cell. On the other hand, interfacial quantities, like the exchange coupling, are considered on the faces in between the cells ([Fig.\[figRegions\]]{}).\
In order to preserve memory, space-dependent material parameters are not explicitly stored per cell. Instead, each cell is attributed a *region index* between 0 and 255. Different region indices represent different materials. The actual material parameters are stored in 256-element look-up tables, indexed by the cell’s region index.\
Interfacial material parameters like the exchange coupling are stored in a triangular matrix, indexed by the region numbers of the interacting cells. This allows arbitrary exchange coupling between all pairs of materials (Section \[Bexch\]).\
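As a minimal sketch of this storage scheme (a Python/NumPy stand-in for the GPU implementation; the grid size, region layout and parameter values below are made up for illustration):

```python
import numpy as np

NREGIONS = 256  # region indices 0..255 fit in one byte per cell

# Hypothetical two-material geometry: region 0 on the left, region 1 on the right.
regions = np.zeros((64, 64, 1), dtype=np.uint8)
regions[32:, :, :] = 1

# Volumetric parameters: one 256-element look-up table per parameter.
Msat_table = np.zeros(NREGIONS)
Msat_table[0], Msat_table[1] = 800e3, 600e3  # A/m, illustrative values

# Per-cell values are recovered by a cheap gather through the region index.
Msat_cells = Msat_table[regions]

# Interfacial parameters: lower-triangular matrix indexed by the two regions.
Aex_pair = np.zeros((NREGIONS, NREGIONS))

def set_inter_region(table, r1, r2, value):
    # Store in the lower triangle: row index is the larger region number.
    i, j = (r1, r2) if r1 >= r2 else (r2, r1)
    table[i, j] = value

def get_inter_region(table, r1, r2):
    i, j = (r1, r2) if r1 >= r2 else (r2, r1)
    return table[i, j]

set_inter_region(Aex_pair, 0, 1, 13e-12)
```

Storing one byte per cell plus a few small tables is far cheaper than one floating-point value per cell per parameter, which is the point of the scheme.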
![Each simulation cell is attributed a region index representing the cell’s material type. Material parameters like the saturation magnetization [$M_\mathrm{sat}$]{}, anisotropy constants, etc are stored in 1D look-up tables indexed by the region index. Interfacial parameters like the exchange coupling [$A_\mathrm{ex}$]{}/[$M_\mathrm{sat}$]{}are stored in a 2D lower triangular matrix indexed by the interface’s two neighbor region indices.[]{data-label="figRegions"}](regions){width="0.5\linewidth"}
#### Time-dependent parameters {#time-dependent-parameters .unnumbered}
In addition to region-wise space-dependence, material parameters in each region can be time-dependent, given by one arbitrary function of time per region.\
Excitations like the externally applied field or electrical current density can be set region- and time-wise in the same way as material parameters. Additionally they can have an arbitrary number of extra terms of the form $f(t)\times g(x,y,z)$, where $f(t)$ is any function of time multiplied by a continuously varying spatial profile $g(x,y,z)$. This allows modelling smooth time- and space-dependent excitations like, e.g., an antenna’s RF field or an AC electrical current.
Geometry
--------
[<span style="font-variant:small-caps;">MuMax$^3$</span>]{}uses *Constructive Solid Geometry* to define the shape of the magnet and the material regions inside it. Any shape is represented by a function $f(x,y,z)$ that returns true when $(x,y,z)$ lies inside the shape or false otherwise. E.g. a sphere is represented by the function $x^2+y^2+z^2\leq r^2$. Shapes can be rotated, translated, scaled and combined together with boolean operations like AND, OR, XOR. This allows for complex, parametrized geometries to be defined programmatically. E.g., [Fig.\[figCSG\]]{} shows the magnetization in the logical OR of an ellipsoid and cuboid.
![Geometry obtained by logically combining an ellipsoid and rotated cuboid. The arrows depict the magnetization direction in this complex shape.[]{data-label="figCSG"}](csg){width="0.5\linewidth"}
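The constructive-solid-geometry idea can be sketched with plain closures (a simplified Python analogue; the function names and signatures here are illustrative, not [<span style="font-variant:small-caps;">MuMax$^3$</span>]{}’s actual shape API):

```python
# Shapes are boolean predicates f(x, y, z); lengths in meters, as in the scripts.
def ellipsoid(rx, ry, rz):
    # Semi-axes rx, ry, rz, centered at the origin.
    return lambda x, y, z: (x / rx) ** 2 + (y / ry) ** 2 + (z / rz) ** 2 <= 1

def cuboid(w, h, d):
    # Full edge lengths w, h, d, centered at the origin.
    return lambda x, y, z: abs(x) <= w / 2 and abs(y) <= h / 2 and abs(z) <= d / 2

def union(a, b):          # logical OR of two shapes
    return lambda x, y, z: a(x, y, z) or b(x, y, z)

def intersect(a, b):      # logical AND of two shapes
    return lambda x, y, z: a(x, y, z) and b(x, y, z)

def translate(s, dx, dy, dz):
    # Moving a shape means evaluating it in shifted coordinates.
    return lambda x, y, z: s(x - dx, y - dy, z - dz)
```

Because shapes are just functions, rotation, scaling and further boolean operators compose in exactly the same way.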
Interface
---------
#### Input scripts {#input-scripts .unnumbered}
[<span style="font-variant:small-caps;">MuMax$^3$</span>]{}provides a dedicated scripting language that resembles a subset of the Go programming language. The script provides a simple means to define fairly complex simulations. This is illustrated by the code snippet below where we excite a Permalloy ellipse with a 1GHz RF field:
``` {frame="single"}
setgridsize(128, 32, 1)
setcellsize(5e-9, 5e-9, 8e-9)
setGeom(ellipse(500e-9, 160e-9))
Msat = 860e3
Aex = 13e-12
alpha= 0.05
m=uniform(1, 0, 0)
relax()
f := 1e9 // 1GHz
A := 0.01 // 10mT
B_ext = vector(0.1, A*sin(2*pi*f*t), 0)
run(10e-9)
```
#### Programming {#programming .unnumbered}
The [<span style="font-variant:small-caps;">MuMax$^3$</span>]{}libraries can also be called from native Go. In this way, the full Go language and libraries can be leveraged for more powerful input generation and output processing than the built-in scripting.\
#### Web interface {#web-interface .unnumbered}
[<span style="font-variant:small-caps;">MuMax$^3$</span>]{}provides a web-based HTML5 user interface. It allows simulations to be inspected and controlled from within a web browser, whether they are running locally or remotely. Simulations may also be entirely constructed and run from within the web GUI. In any case an input file corresponding to the user’s clicks is generated, which may later be used to repeat the simulation in an automated fashion.\
#### Data format {#data-format .unnumbered}
[<span style="font-variant:small-caps;">MuMax$^3$</span>]{}uses OOMMF’s “OVF” data format for input and output of all space-dependent quantities. This allows existing tools to be leveraged. Additionally a tool is provided to convert the output to several other data formats like ParaView’s VTK[@paraview], gnuplot[@gnuplot], comma-separated values (CSV), Python-compatible JSON, …, and to image formats like PNG, JPG and GIF. Finally, the output is compatible with the 3D rendering software <span style="font-variant:small-caps;">MuView</span>, contributed by Graham Rowlands[@muview].\
Dynamical terms
===============
[<span style="font-variant:small-caps;">MuMax$^3$</span>]{}calculates the evolution of the reduced magnetization ${{\ensuremath{\vec{\textbf{m}}}}\xspace}{\ensuremath{\left({\ensuremath{\vec{\textbf{r}}}},t \right)}}$, which has unit length. In what follows the dependence on time and space will not be explicitly written down. We refer to the time derivative of [[$\vec{\textbf{m}}$]{}]{}as the [torque]{} [[$\vec{\textbf{\ensuremath{\tau}}}$]{}$_\mathrm{}$]{} (units 1/s):
$${\frac{\partial {{\ensuremath{\vec{\textbf{m}}}}\xspace}}{\partial t}} = {{\ensuremath{\vec{\textbf{\ensuremath{\tau}}}}}\ensuremath{_\mathrm{}}} \label{eqDyn}$$
[[$\vec{\textbf{\ensuremath{\tau}}}$]{}$_\mathrm{}$]{} has three contributions:
- Landau-Lifshitz torque [[$\vec{\textbf{\ensuremath{\tau}}}$]{}$_\mathrm{LL}$]{} (Section \[tqLL\])
- Zhang-Li spin-transfer torque [[$\vec{\textbf{\ensuremath{\tau}}}$]{}$_\mathrm{ZL}$]{} (Section \[tqZL\])
- Slonczewski spin-transfer torque [[$\vec{\textbf{\ensuremath{\tau}}}$]{}$_\mathrm{SL}$]{} (Section \[tqSL\]).
Landau-Lifshitz torque {#tqLL}
----------------------
[<span style="font-variant:small-caps;">MuMax$^3$</span>]{}uses the following explicit form for the Landau-Lifshitz torque [@landau35; @gilbert55]:
$${{\ensuremath{\vec{\textbf{\ensuremath{\tau}}}}}\ensuremath{_\mathrm{LL}}} = \gamma_\mathrm{LL} \frac{1}{1+{\ensuremath{\alpha}\xspace}^2} \left( {{\ensuremath{\vec{\textbf{m}}}}\xspace}\times {{{\ensuremath{\vec{\textbf{B}}}}\ensuremath{_\mathrm{eff}}}}+{\ensuremath{\alpha}\xspace}\left( {{\ensuremath{\vec{\textbf{m}}}}\xspace}\times \left( {{\ensuremath{\vec{\textbf{m}}}}\xspace}\times {{{\ensuremath{\vec{\textbf{B}}}}\ensuremath{_\mathrm{eff}}}}\right)\right) \right) \label{eqLLG}$$
with $\gamma_\mathrm{LL}$ the [gyromagnetic ratio]{} (rad/Ts), [$\alpha$]{}the dimensionless [damping parameter]{} and [[[$\vec{\textbf{B}}$]{}$_\mathrm{eff}$]{}]{} the [effective field]{} (T). The default value for $\gamma_\mathrm{LL}$ can be overridden by the user. [[[$\vec{\textbf{B}}$]{}$_\mathrm{eff}$]{}]{} has the following contributions:
- externally applied field [[$\vec{\textbf{B}}$]{}$_\mathrm{ext}$]{}
- magnetostatic field [[$\vec{\textbf{B}}$]{}$_\mathrm{demag}$]{} (\[Bdemag\])
- Heisenberg exchange field [[$\vec{\textbf{B}}$]{}$_\mathrm{exch}$]{} (\[Bexch\])
- Dzyaloshinskii-Moriya exchange field [[$\vec{\textbf{B}}$]{}$_\mathrm{dm}$]{} (\[Bdm\])
- magneto-crystalline anisotropy field [[$\vec{\textbf{B}}$]{}$_\mathrm{anis}$]{} (\[Banis\])
- thermal field [[$\vec{\textbf{B}}$]{}$_\mathrm{therm}$]{} (\[Btherm\]).
[Fig.\[figtqLL\]]{} shows a validation of the Landau-Lifshitz torque for a single spin precessing without damping in a constant external field.\
![Validation of Eq. \[eqLLG\] for a single spin precessing without damping in a 0.1T field along $z$, perpendicular to m. Analytical solution: $m_x = \cos(0.1\mathrm{T} \gamma_{LL} t)$.[]{data-label="figtqLL"}](precession){width="0.5\linewidth"}
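This validation is easy to reproduce outside the GPU code. The following Python sketch integrates Eq. \[eqLLG\] for a single spin with a classical RK4 stepper; the stepper and the value $\gamma_\mathrm{LL}=1.7595\times 10^{11}\,$rad/Ts are our assumptions for the sketch, not [<span style="font-variant:small-caps;">MuMax$^3$</span>]{}internals:

```python
import numpy as np

GAMMA_LL = 1.7595e11  # assumed gyromagnetic ratio (rad/Ts)

def ll_torque(m, B_eff, alpha):
    """Explicit Landau-Lifshitz torque of Eq. [eqLLG], in 1/s."""
    mxB = np.cross(m, B_eff)
    return GAMMA_LL / (1.0 + alpha ** 2) * (mxB + alpha * np.cross(m, mxB))

def integrate(m0, B_eff, alpha, dt, steps):
    """Classical RK4 stepping of dm/dt = tau; |m| = 1 is restored each step."""
    m = np.array(m0, dtype=float)
    for _ in range(steps):
        k1 = ll_torque(m, B_eff, alpha)
        k2 = ll_torque(m + 0.5 * dt * k1, B_eff, alpha)
        k3 = ll_torque(m + 0.5 * dt * k2, B_eff, alpha)
        k4 = ll_torque(m + dt * k3, B_eff, alpha)
        m += dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
        m /= np.linalg.norm(m)
    return m
```

With $\alpha = 0$ and a 0.1 T field along $z$, starting from ${\bf m}=(1,0,0)$, this reproduces the analytical $m_x = \cos(0.1\mathrm{T}\,\gamma_\mathrm{LL}\, t)$ of [Fig.\[figtqLL\]]{}.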
Magnetostatic field {#Bdemag}
-------------------
#### Magnetostatic convolution {#magnetostatic-convolution .unnumbered}
A finite difference discretization allows the magnetostatic field to be evaluated as a (discrete) convolution of the magnetization with a demagnetizing kernel ${\hat{{\textbf{K}}}}$:
$${{\ensuremath{\vec{\textbf{B}}}}\ensuremath{_\mathrm{demag}}}\ _i = {\hat{{\textbf{K}}}}_{ij} * {{\ensuremath{\vec{\textbf{M}}}}\xspace}_{j}$$
where [[$\vec{\textbf{M}}$]{}]{}= [$M_\mathrm{sat}$]{}[[$\vec{\textbf{m}}$]{}]{}is the unnormalized magnetization, with [$M_\mathrm{sat}$]{} the saturation magnetization (A/m). This calculation is FFT-accelerated based on the well-known convolution theorem. The corresponding energy density is provided as:\
$${\ensuremath{\mathcal{E}_\mathrm{demag}}} = -\frac{1}{2}{\ensuremath{\vec{\textbf{M}}}} \cdot {{\ensuremath{\vec{\textbf{B}}}}\ensuremath{_\mathrm{demag}}}$$
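The convolution theorem underlying the FFT acceleration can be illustrated in 1D (NumPy sketch; the real code performs a 3D convolution with the demagnetizing kernel, here replaced by an arbitrary kernel array):

```python
import numpy as np

def fft_convolve(kernel, m):
    """Linear convolution evaluated via the convolution theorem.
    Zero-padding both inputs to len(kernel)+len(m)-1 turns the FFT's
    circular convolution into the desired linear one."""
    n = len(kernel) + len(m) - 1
    return np.fft.irfft(np.fft.rfft(kernel, n) * np.fft.rfft(m, n), n)
```

For $N$ cells this costs $\mathcal{O}(N \log N)$ instead of the $\mathcal{O}(N^2)$ of a direct sum, which is what makes large-scale magnetostatic field evaluation tractable.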
#### Magnetostatic kernel {#magnetostatic-kernel .unnumbered}
We construct the demagnetizing kernel ${\hat{{\textbf{K}}}}$ assuming constant magnetization[@McMichael1999a] in each finite difference cell and we average the resulting [[$\vec{\textbf{B}}$]{}$_\mathrm{demag}$]{} over the cell volumes. The integration is done numerically with the number of integration points automatically chosen based on the distance between source and destination cells and their aspect ratios. The kernel is initialized on the CPU in double precision, and only truncated to single precision before being transferred to the GPU.\
The kernel’s mirror symmetries and zero elements are exploited to reduce storage and initialization time. This results in a 9[$\times$]{}or 12[$\times$]{}decrease in kernel memory usage for 2D and 3D simulations respectively, and is part of the reason for [<span style="font-variant:small-caps;">MuMax$^3$</span>]{}’s relatively low memory requirements (Section \[perf\]).\
#### Accuracy {#accuracy .unnumbered}
The short-range accuracy of ${\hat{{\textbf{K}}}}$ is tested by calculating the demagnetizing factors of a uniformly magnetized cube, analytically known to be -1/3 in each direction. The cube was discretized in cells with varying aspect ratios along $z$ to stress the numerical integration scheme. The smallest possible number of cells was used to ensure that the short-range part of the field has an important contribution. The results presented in Table \[tabCube\] are accurate to 3 or 4 digits. Standard Problem \#2 (\[std2\]) is another test sensitive to the short-range kernel accuracy[@Donahue2000].\
**aspect** $N\,_{xx}$ $N\,_{yy}$ $N\,_{zz}$
------------ ------------ ------------ ------------
8/1 -0.333207 -0.333207 -0.333176
4/1 -0.333149 -0.333149 -0.333144
2/1 -0.333118 -0.333118 -0.333118
1/1 -0.333372 -0.333372 -0.333372
1/4 -0.333146 -0.333146 -0.333145
1/16 -0.333176 -0.333176 -0.333280
1/64 -0.333052 -0.333052 -0.333639
: \[tabCube\] Demagnetizing factors ($N_{ij} = H_i/M_j$) calculated for a cube discretized in the smallest possible number of cells with given aspect ratio along $z$. The results lie close to the analytical value of $-1/3$, even for very elongated (aspect$>$1) or flat (aspect$<$1) cells. The off-diagonal elements (not shown) are consistent with zero within the single-precision limit.
The long-range accuracy of the magnetostatic convolution is assessed by comparing the kernel and the field of a single magnetized cell to those of the corresponding point dipole. The fields presented in [Fig.\[figLong\]]{} show perfect long-range accuracy for the kernel, indicating accurate numerical integration in that range. The resulting field, obtained by convolution of a single magnetized cell ([$B_\mathrm{sat}$]{}=1T) with the kernel, is accurate down to about 0.01$\mu$T, the single-precision noise floor introduced by the FFTs.\
![\[figLong\] Kernel element ${\hat{{\textbf{K}}}}\ _{xx}$ (top) and $\vec{B}_\mathrm{x}$, the field of a single magnetized cell (1nm$^3$, $B_\mathrm{sat}$=1T) (bottom) along the $x$ axis (1nm cells), compared to the field of a corresponding dipole. The long-range field remains accurate down to the single-precision numerical limit ($\propto 10^{-7}$T).](demaglong){width="0.5\linewidth"}
#### Periodic boundary conditions {#periodic-boundary-conditions .unnumbered}
[<span style="font-variant:small-caps;">MuMax$^3$</span>]{}provides optional periodic boundary conditions (PBCs) in each direction. PBCs imply magnetization wrap-around in the periodic directions, felt by stencil operations like the exchange interaction. A less trivial consequence is that the magnetostatic field of repeated magnetization images has to be added to [[$\vec{\textbf{B}}$]{}$_\mathrm{demag}$]{}.\
In contrast to OOMMF’s PBC implementation[@Lebecki2008], [<span style="font-variant:small-caps;">MuMax$^3$</span>]{}employs a so-called macro geometry approach[@nmag; @Fangohr2009] where a finite (though usually large) number of repetitions is taken into account, and that number can be freely chosen in each direction. [<span style="font-variant:small-caps;">MuMax$^3$</span>]{}’s [`setPBC(Px, Py, Pz)`]{} command enables $P_x, P_y, P_z$ additional images *on each side* of the simulation box, given that $P$ is sufficiently large.\
To test the magnetostatic field with PBC’s, we calculate the demagnetizing tensors of a wide film and a long rod in two different ways: either with a large grid without PBC’s, or with a small grid but with PBC’s equivalent to the larger grid. In our implementation, a grid size $(N_x, N_y, N_z)$ with PBC’s $(P_x, P_y, P_z)$ should approximately correspond to a grid size $(2P_xN_x, 2P_yN_y, 2P_zN_z)$ without PBC’s. This is verified in Tables \[tabPBC1\] and \[tabPBC2\], where we extend in plane for the film and along $z$ for the rod. Additionally, for very large sizes both results converge to the well-known analytical values for infinite geometries.\
**PBC** $N_{zz}$ **grid** $N_{zz}$
----------------- ---------- ---------------- ----------
[$\times$]{}1 -0.71257 $\times$2 -0.76368
[$\times$]{}4 -0.93879 $\times$8 -0.94224
[$\times$]{}16 -0.98438 $\times$32 -0.98460
[$\times$]{}64 -0.99514 $\times$128 -0.99515
[$\times$]{}256 -0.99713 $\times$512 -0.99779
$\infty$ -1 $\times\infty$ -1
: \[tabPBC1\] Out-of-plane demagnetizing factors for a thin film with grid size 2[$\times$]{}2[$\times$]{}1 and 2D PBC’s [$\times$]{}$P$ (column 1) or without PBC’s but with a corresponding grid size $\times 2P$ (column 2). Both give comparable results for sufficiently large $P$, verifying the PBC implementation.
**PBC** $N_{xx}$ **grid** $N_{xx}$
----------------- ----------- ---------------- ------------
[$\times$]{}1 -0.251960 $\times$2 -0.3331182
[$\times$]{}4 -0.476766 $\times$8 -0.4809398
[$\times$]{}16 -0.498280 $\times$32 -0.4983577
[$\times$]{}64 -0.499517 $\times$128 -0.4995183
[$\times$]{}256 -0.499590 $\times$512 -0.4995911
$\infty$ -0.5 $\times\infty$ -0.5
: \[tabPBC2\] In-plane demagnetizing factors for a long rod with grid size 1[$\times$]{}1[$\times$]{}2 and 1D PBC’s [$\times$]{}$P$ (column 1) or without PBC’s but with a corresponding grid size $\times 2P$ (column 2). Both give comparable results for sufficiently large $P$, verifying the PBC implementation.
Heisenberg exchange interaction {#Bexch}
-------------------------------
The effective field due to the Heisenberg exchange interaction [@brown63]:
$${{\ensuremath{\vec{\textbf{B}}}}\ensuremath{_\mathrm{exch}}} = 2\frac{A_\mathrm{ex}}{{\ensuremath{M_\mathrm{sat}}}} \Delta {{\ensuremath{\vec{\textbf{m}}}}\xspace}\label{eqBexchCont}$$
is evaluated using a 6-neighbor small-angle approximation[@Donahue1998; @Donahue2004]:
$${{\ensuremath{\vec{\textbf{B}}}}\ensuremath{_\mathrm{exch}}} = 2\frac{A_\mathrm{ex}}{{\ensuremath{M_\mathrm{sat}}}} \sum_i \frac{({{\ensuremath{\vec{\textbf{m}}}}\xspace}_i - {{\ensuremath{\vec{\textbf{m}}}}\xspace})}{\Delta_i^2} \label{eqBexch1}$$
where $i$ ranges over the six nearest neighbors of the central cell with magnetization [[$\vec{\textbf{m}}$]{}]{}. $\Delta_i$ is the cell size in the direction of neighbor $i$.\
At the boundary of the magnet some neighboring magnetizations ${{\ensuremath{\vec{\textbf{m}}}}\xspace}_i$ are missing. In that case we use the cell’s own value [[$\vec{\textbf{m}}$]{}]{}instead of ${{\ensuremath{\vec{\textbf{m}}}}\xspace}_i$, which is equivalent to employing Neumann boundary conditions [@Donahue1998; @Donahue2004].\
The corresponding energy density is provided as:\
$$\begin{aligned}
{\ensuremath{\mathcal{E}_\mathrm{exch}}} &=& A_\mathrm{ex}(\nabla {\ensuremath{\vec{\textbf{m}}}})^2\label{eqEdensGrad}\\
&=& -\frac{1}{2}{\ensuremath{\vec{\textbf{M}}}} \cdot {{\ensuremath{\vec{\textbf{B}}}}\ensuremath{_\mathrm{exch}}}\label{eqEdensExch}\end{aligned}$$
[<span style="font-variant:small-caps;">MuMax$^3$</span>]{}calculates the energy from the effective field using Eqns. \[eqBexch1\], \[eqEdensExch\]. The implementation is verified by calculating the exchange energy of a 1D magnetization spiral, for which the exact form (Eq.\[eqEdensGrad\]) is easily evaluated. [Fig.\[figExchE\]]{} shows that the linearized approximation is suited as long as the angle between neighboring magnetizations is not too large. This can be achieved by choosing a sufficiently small cell size compared to the exchange length.\
![Numerical (Eq.\[eqBexch1\],\[eqEdensExch\]) and analytical (Eq.\[eqEdensGrad\]) exchange energy density (in units $K_m=1/2\mu_0 M_\mathrm{sat}^2$) for spiral magnetizations as a function of the angle between neighboring spins (independent of material parameters). To ensure an accurate energy, spin-spin angles should be kept below $\sim$20–30$^\circ$ by choosing a sufficiently small cell size.[]{data-label="figExchE"}](exchange1d){width="0.5\linewidth"}
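The spiral test can be sketched in 1D (Python; the 6-neighbor 3D stencil is reduced to a chain of cells, and the parameter values are illustrative):

```python
import numpy as np

def exchange_field_1d(m, Aex, Msat, dx):
    """Eq. [eqBexch1] on a 1D chain: B = 2 Aex/Msat * sum_i (m_i - m)/dx^2.
    Edge padding replaces a missing neighbor by the cell's own m,
    i.e. the Neumann boundary condition described above."""
    mp = np.pad(m, ((1, 1), (0, 0)), mode="edge")
    lap = (mp[:-2] - m) + (mp[2:] - m)
    return 2.0 * Aex / Msat * lap / dx ** 2

def exchange_energy_density(m, B_exch, Msat):
    """Eq. [eqEdensExch]: E = -1/2 M . B_exch, per cell."""
    return -0.5 * Msat * np.einsum("ij,ij->i", m, B_exch)
```

For a 5$^\circ$ spin-spin angle the discrete energy of the interior cells matches the analytical $A_\mathrm{ex} q^2$ of Eq. \[eqEdensGrad\] to within roughly 0.1%, with the error growing as the angle increases, consistent with [Fig.\[figExchE\]]{}.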
#### Inter-region exchange {#inter-region-exchange .unnumbered}
The exchange interaction between different materials deserves special attention. [$A_\mathrm{ex}$]{} and [$M_\mathrm{sat}$]{} are defined in the cell volumes, while Eq. \[eqBexch1\] requires a value of [$A_\mathrm{ex}$]{}/[$M_\mathrm{sat}$]{} properly averaged out between the neighboring cells. For neighboring cells with different material parameters [$A_\mathrm{ex}$]{}$_1$, [$A_\mathrm{ex}$]{}$_2$ and [$M_\mathrm{sat}$]{}$_1$, [$M_\mathrm{sat}$]{}$_2$ [<span style="font-variant:small-caps;">MuMax$^3$</span>]{}uses a harmonic mean:
$${{\ensuremath{\vec{\textbf{B}}}}\ensuremath{_\mathrm{exch}}} = 2S\frac{2{\frac{A_\mathrm{ex1}}{M_\mathrm{sat1}}}{\frac{A_\mathrm{ex2}}{M_\mathrm{sat2}}}}{{\frac{A_\mathrm{ex1}}{M_\mathrm{sat1}}}+{\frac{A_\mathrm{ex2}}{M_\mathrm{sat2}}}} \sum_i \frac{({{\ensuremath{\vec{\textbf{m}}}}\xspace}_i - {{\ensuremath{\vec{\textbf{m}}}}\xspace})}{\Delta_i^2} \label{eqBexch}$$
which can easily be derived, and where we set $S=1$ by default. $S$ is an arbitrary scaling factor which may be used to alter the exchange coupling between regions, e.g., to lower the coupling between grains or antiferromagnetically couple two layers.
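A sketch of the inter-region coupling coefficient (the function name and the guard against two fully decoupled regions are ours):

```python
def inter_region_exchange(Aex1, Msat1, Aex2, Msat2, S=1.0):
    """Harmonic mean of Aex/Msat between two neighboring regions (Eq. eqBexch);
    S is the optional scaling factor, 1 by default."""
    a1, a2 = Aex1 / Msat1, Aex2 / Msat2
    if a1 + a2 == 0.0:
        return 0.0  # both regions non-magnetic / decoupled
    return S * 2.0 * a1 * a2 / (a1 + a2)
```

For identical materials the harmonic mean reduces to the plain $A_\mathrm{ex}/M_\mathrm{sat}$, it vanishes as soon as one side is non-magnetic, and $S<0$ flips the sign of the coupling, which is how two layers can be coupled antiferromagnetically.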
Dzyaloshinskii-Moriya interaction {#Bdm}
---------------------------------
[<span style="font-variant:small-caps;">MuMax$^3$</span>]{}provides induced Dzyaloshinskii-Moriya interaction for thin films with out-of-plane symmetry breaking according to [@Bogdanov2001], yielding an effective field term:
$${{\ensuremath{\vec{\textbf{B}}}}\ensuremath{_\mathrm{DM}}} = \frac{2D}{{\ensuremath{M_\mathrm{sat}}}} \left(\frac{\partial m_z}{\partial x},\ \frac{\partial m_z}{\partial y},\ -\frac{\partial m_x}{\partial x}-\frac{\partial m_y}{\partial y}\right)\label{eqBDMI}$$
where we apply boundary conditions[@Rohart2013]:
$$\begin{aligned}
\left.\frac{\partial m_z}{\partial x}\right|_{\partial V} &=& \frac{D}{2A}m_x\label{eqDMIBC1} \\
\left.\frac{\partial m_z}{\partial y}\right|_{\partial V} &=& \frac{D}{2A}m_y\\
\left.\frac{\partial m_x}{\partial x}\right|_{\partial V} = \left.\frac{\partial m_y}{\partial y}\right|_{\partial V} &=& -\frac{D}{2A}m_z \\
\left.\frac{\partial m_x}{\partial y}\right|_{\partial V} = \left.\frac{\partial m_y}{\partial x}\right|_{\partial V} &=& 0\\
\left.\frac{\partial m_x}{\partial z}\right|_{\partial V} = \left.\frac{\partial m_y}{\partial z}\right|_{\partial V} = \left.\frac{\partial m_z}{\partial z}\right|_{\partial V} &=& 0\label{eqDMIBC2}\end{aligned}$$
Numerically, all derivatives are implemented as central derivatives, i.e., the difference between neighboring magnetizations over their distance in that direction: $\partial{{\ensuremath{\vec{\textbf{m}}}}\xspace}/ \partial i = ({{\ensuremath{\vec{\textbf{m}}}}\xspace}_{i+1} - {{\ensuremath{\vec{\textbf{m}}}}\xspace}_{i-1})/(2\Delta_i)$. When a neighbor is missing at the boundary ($\partial V$), its magnetization is replaced by ${{\ensuremath{\vec{\textbf{m}}}}\xspace}+ \frac{\partial m}{\partial i}|_{\partial V} \Delta_i{\ensuremath{\vec{\textbf{n}}}}$ where ${{\ensuremath{\vec{\textbf{m}}}}\xspace}$ refers to the central cell and the relevant partial derivative is selected from Eq. \[eqDMIBC1\]–\[eqDMIBC2\].\
In case of nonzero $D$, these boundary conditions are simultaneously applied to the Heisenberg exchange field.\
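A 1D sketch of Eq. \[eqBDMI\] with central differences (Python; for simplicity the row of cells is taken periodic, so the boundary rules of Eqns. \[eqDMIBC1\]–\[eqDMIBC2\] are not needed):

```python
import numpy as np

def dmi_field_1d(m, D, Msat, dx):
    """Eq. [eqBDMI] for an x-dependent magnetization m[i] = (mx, my, mz),
    with central differences (m[i+1]-m[i-1])/(2 dx); np.roll wraps the row."""
    dmz_dx = (np.roll(m[:, 2], -1) - np.roll(m[:, 2], 1)) / (2.0 * dx)
    dmx_dx = (np.roll(m[:, 0], -1) - np.roll(m[:, 0], 1)) / (2.0 * dx)
    B = np.zeros_like(m)
    B[:, 0] = 2.0 * D / Msat * dmz_dx    # (d mz / dx) component
    B[:, 2] = -2.0 * D / Msat * dmx_dx   # -(d mx / dx) component
    return B                             # no y-dependence in 1D
```

For a cycloidal profile ${\bf m} = (\sin qx, 0, \cos qx)$ the discrete field is exactly $-(2D/M_\mathrm{sat})\,\sin(q\Delta_x)/\Delta_x\;{\bf m}$, approaching the continuum result $-(2Dq/M_\mathrm{sat})\,{\bf m}$ as the cell size shrinks.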
The effective field in Eq.\[eqBDMI\] gives rise to an energy density:
$$\begin{aligned}
{\ensuremath{\mathcal{E}_\mathrm{exch(DM)}}} &=& D\left[m_z(\nabla\cdot{\ensuremath{\vec{\textbf{m}}}}) - ({\ensuremath{\vec{\textbf{m}}}}\cdot\nabla)m_z\right]\label{eqEdDMIex}\\
&=& -\frac{1}{2}{\ensuremath{\vec{\textbf{M}}}} \cdot {{\ensuremath{\vec{\textbf{B}}}}\ensuremath{_\mathrm{exch(DM)}}}\label{eqEdensDMI}\end{aligned}$$
Similar to the Heisenberg exchange case, [<span style="font-variant:small-caps;">MuMax$^3$</span>]{}calculates the energy density from Eqns.\[eqEdensDMI\], \[eqBDMI\]. Eq.\[eqEdDMIex\] is the exact form, well approximated for sufficiently small cell sizes.\
In [Fig.\[figThiaville\]]{}, the DMI implementation is compared to the work of Thiaville [*et al.*]{}[@Thiaville2012], where the transformation of a Bloch wall into a Néel wall by varying [$D_\mathrm{ex}$]{} is studied.
![\[figThiaville\] Simulated domain wall magnetization in a 250nm wide, 0.6nm thick Co/Pt film ([$M_\mathrm{sat}$]{}=1100[$\times$10$^{3}$]{}A/m, [$A_\mathrm{ex}$]{}=16[$\times$10$^{-12}$]{}J/m, $K_\mathrm{u1}$=1.27[$\times$10$^{6}$]{}J/m$^3$) as a function of the Dzyaloshinskii-Moriya strength [$D_\mathrm{ex}$]{}. The left-hand and right-hand sides correspond to a Bloch and a Néel wall, respectively. Results correspond well to [@Thiaville2012].](dmi){width="0.5\linewidth"}
Magneto-crystalline anisotropy {#Banis}
------------------------------
#### Uniaxial {#uniaxial .unnumbered}
[<span style="font-variant:small-caps;">MuMax$^3$</span>]{}provides uniaxial magneto-crystalline anisotropy in the form of an effective field term:
$$\begin{aligned}
{{\ensuremath{\vec{\textbf{B}}}}\ensuremath{_\mathrm{anis}}} &=& \frac{2 K_\mathrm{u1}}{{\ensuremath{M_\mathrm{sat}}}} ({\ensuremath{\vec{\textbf{u}}}} \cdot {\ensuremath{\vec{\textbf{m}}}}) {\ensuremath{\vec{\textbf{u}}}}\nonumber\\
&+& \frac{4 K_\mathrm{u2}}{{\ensuremath{M_\mathrm{sat}}}} ({\ensuremath{\vec{\textbf{u}}}} \cdot {\ensuremath{\vec{\textbf{m}}}})^3 {\ensuremath{\vec{\textbf{u}}}} \end{aligned}$$
where $K_\mathrm{u1}$ and $K_\mathrm{u2}$ are the first and second order uniaxial anisotropy constants and ${\ensuremath{\vec{\textbf{u}}}}$ a unit vector indicating the anisotropy direction. This corresponds to an energy density: $$\begin{aligned}
\mathcal{E}_\mathrm{anis} &=& -K_\mathrm{u1}({\ensuremath{\vec{\textbf{u}}}} \cdot {\ensuremath{\vec{\textbf{m}}}})^2- K_\mathrm{u2}({\ensuremath{\vec{\textbf{u}}}}\cdot {\ensuremath{\vec{\textbf{m}}}})^4 \label{EanisUA} \\
&=& -\frac{1}{2} {{\ensuremath{\vec{\textbf{B}}}}\ensuremath{_\mathrm{anis}}}(K_\mathrm{u1})\cdot{\ensuremath{\vec{\textbf{M}}}} -\frac{1}{4} {{\ensuremath{\vec{\textbf{B}}}}\ensuremath{_\mathrm{anis}}}(K_\mathrm{u2})\cdot{\ensuremath{\vec{\textbf{M}}}} \label{EaniUM}\end{aligned}$$
[<span style="font-variant:small-caps;">MuMax$^3$</span>]{}calculates the energy density from the effective field using Eq.\[EaniUM\], where ${{\ensuremath{\vec{\textbf{B}}}}\ensuremath{_\mathrm{anis}}}(K_\mathrm{ui})$ denotes the effective field term where only $K_\mathrm{ui}$ is taken into account. The resulting energy is verified in [Fig.\[figAnis\]]{}. Since the energy is derived directly from the effective field, this serves as a test for the field as well.\
![\[figAnis\]Uniaxial (top) and cubic (bottom) anisotropy energy density of a single spin as a function of its orientation in the $xy$-plane. The uniaxial axis is along $x$, the cubic axes along $x$, $y$ and $z$. The dots are computed with [<span style="font-variant:small-caps;">MuMax$^3$</span>]{}(Eq.\[EaniUM\],\[EanisCM\]), lines are analytical expressions (Eq.\[EanisUA\],\[EanisCA\]). Positive and negative $K$ values denote hard and easy anisotropy, respectively.](anisotropy-crop "fig:"){width="0.22\linewidth"} ![](anisotropy2-crop "fig:"){width="0.22\linewidth"}\
![](cubic-crop "fig:"){width="0.15\linewidth"} ![](cubic2-crop "fig:"){width="0.15\linewidth"} ![](cubic3-crop "fig:"){width="0.15\linewidth"}
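The equality of Eq. \[EanisUA\] and Eq. \[EaniUM\] for the uniaxial case is easy to check numerically (Python sketch; note that the $K_\mathrm{u1}$ and $K_\mathrm{u2}$ field terms must be kept separate because they enter the energy with prefactors $-\tfrac{1}{2}$ and $-\tfrac{1}{4}$):

```python
import numpy as np

def B_anis_uniaxial(m, u, Ku1, Ku2, Msat):
    """First- and second-order uniaxial anisotropy field terms, kept separate."""
    um = float(np.dot(u, m))
    B1 = 2.0 * Ku1 / Msat * um * u
    B2 = 4.0 * Ku2 / Msat * um ** 3 * u
    return B1, B2

def E_from_field(m, u, Ku1, Ku2, Msat):
    """Energy density via Eq. [EaniUM]: -1/2 B(Ku1).M - 1/4 B(Ku2).M."""
    B1, B2 = B_anis_uniaxial(m, u, Ku1, Ku2, Msat)
    M = Msat * m
    return -0.5 * np.dot(B1, M) - 0.25 * np.dot(B2, M)

def E_direct(m, u, Ku1, Ku2):
    """Energy density via Eq. [EanisUA]."""
    um = float(np.dot(u, m))
    return -Ku1 * um ** 2 - Ku2 * um ** 4
```

Both routes give identical energies for any unit ${\bf m}$, which is the consistency that [Fig.\[figAnis\]]{} verifies against the analytical expressions.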
#### Cubic {#cubic .unnumbered}
[<span style="font-variant:small-caps;">MuMax$^3$</span>]{}provides cubic magneto-crystalline anisotropy in the form of an effective field term:
$$\begin{aligned}
{{\ensuremath{\vec{\textbf{B}}}}\ensuremath{_\mathrm{anis}}} =&&\nonumber\\
- 2 K_\mathrm{c1}/{\ensuremath{M_\mathrm{sat}}}(&(({\ensuremath{\vec{\textbf{c}}}}_2\cdot {\ensuremath{\vec{\textbf{m}}}})^2 + ({\ensuremath{\vec{\textbf{c}}}}_3\cdot {\ensuremath{\vec{\textbf{m}}}})^2 ) ( ({\ensuremath{\vec{\textbf{c}}}}_1\cdot {\ensuremath{\vec{\textbf{m}}}}) {\ensuremath{\vec{\textbf{c}}}}_1)&+ \nonumber\\
&(({\ensuremath{\vec{\textbf{c}}}}_1\cdot {\ensuremath{\vec{\textbf{m}}}})^2 + ({\ensuremath{\vec{\textbf{c}}}}_3\cdot {\ensuremath{\vec{\textbf{m}}}})^2 ) ( ({\ensuremath{\vec{\textbf{c}}}}_2\cdot {\ensuremath{\vec{\textbf{m}}}}) {\ensuremath{\vec{\textbf{c}}}}_2)&+ \nonumber\\
&(({\ensuremath{\vec{\textbf{c}}}}_1\cdot {\ensuremath{\vec{\textbf{m}}}})^2 + ({\ensuremath{\vec{\textbf{c}}}}_2\cdot {\ensuremath{\vec{\textbf{m}}}})^2 ) ( ({\ensuremath{\vec{\textbf{c}}}}_3\cdot {\ensuremath{\vec{\textbf{m}}}}) {\ensuremath{\vec{\textbf{c}}}}_3)&) \nonumber\\
- 2 K_\mathrm{c2}/{\ensuremath{M_\mathrm{sat}}}(&(({\ensuremath{\vec{\textbf{c}}}}_2\cdot {\ensuremath{\vec{\textbf{m}}}})^2 ({\ensuremath{\vec{\textbf{c}}}}_3\cdot {\ensuremath{\vec{\textbf{m}}}})^2 ) ( ({\ensuremath{\vec{\textbf{c}}}}_1\cdot {\ensuremath{\vec{\textbf{m}}}}) {\ensuremath{\vec{\textbf{c}}}}_1)&+ \nonumber\\
&(({\ensuremath{\vec{\textbf{c}}}}_1\cdot {\ensuremath{\vec{\textbf{m}}}})^2 ({\ensuremath{\vec{\textbf{c}}}}_3\cdot {\ensuremath{\vec{\textbf{m}}}})^2 ) ( ({\ensuremath{\vec{\textbf{c}}}}_2\cdot {\ensuremath{\vec{\textbf{m}}}}) {\ensuremath{\vec{\textbf{c}}}}_2)&+ \nonumber\\
&(({\ensuremath{\vec{\textbf{c}}}}_1\cdot {\ensuremath{\vec{\textbf{m}}}})^2 ({\ensuremath{\vec{\textbf{c}}}}_2\cdot {\ensuremath{\vec{\textbf{m}}}})^2 ) ( ({\ensuremath{\vec{\textbf{c}}}}_3\cdot {\ensuremath{\vec{\textbf{m}}}}) {\ensuremath{\vec{\textbf{c}}}}_3)&) \nonumber\\
- 4 K_\mathrm{c3}/{\ensuremath{M_\mathrm{sat}}}(&(({\ensuremath{\vec{\textbf{c}}}}_2\cdot {\ensuremath{\vec{\textbf{m}}}})^4 + ({\ensuremath{\vec{\textbf{c}}}}_3\cdot {\ensuremath{\vec{\textbf{m}}}})^4 ) ( ({\ensuremath{\vec{\textbf{c}}}}_1\cdot {\ensuremath{\vec{\textbf{m}}}})^3 {\ensuremath{\vec{\textbf{c}}}}_1)&+ \nonumber\\
&(({\ensuremath{\vec{\textbf{c}}}}_1\cdot {\ensuremath{\vec{\textbf{m}}}})^4 + ({\ensuremath{\vec{\textbf{c}}}}_3\cdot {\ensuremath{\vec{\textbf{m}}}})^4 ) ( ({\ensuremath{\vec{\textbf{c}}}}_2\cdot {\ensuremath{\vec{\textbf{m}}}})^3 {\ensuremath{\vec{\textbf{c}}}}_2)&+ \nonumber\\
&(({\ensuremath{\vec{\textbf{c}}}}_1\cdot {\ensuremath{\vec{\textbf{m}}}})^4 + ({\ensuremath{\vec{\textbf{c}}}}_2\cdot {\ensuremath{\vec{\textbf{m}}}})^4 ) ( ({\ensuremath{\vec{\textbf{c}}}}_3\cdot {\ensuremath{\vec{\textbf{m}}}})^3 {\ensuremath{\vec{\textbf{c}}}}_3)&) \nonumber\\\end{aligned}$$
where $K_{\mathrm{c}n}$ is the $n$th-order cubic anisotropy constant and ${\ensuremath{\vec{\textbf{c}}}}_1$, ${\ensuremath{\vec{\textbf{c}}}}_2$, ${\ensuremath{\vec{\textbf{c}}}}_3$ a set of mutually perpendicular unit vectors indicating the anisotropy directions. (The user only specifies ${\ensuremath{\vec{\textbf{c}}}}_1$ and ${\ensuremath{\vec{\textbf{c}}}}_2$. We compute ${\ensuremath{\vec{\textbf{c}}}}_3$ automatically as ${\ensuremath{\vec{\textbf{c}}}}_1 \times {\ensuremath{\vec{\textbf{c}}}}_2$.) This corresponds to an energy density: $$\begin{aligned}
\mathcal{E}_\mathrm{anis} =&& \nonumber\\
K_\mathrm{c1} &(({\ensuremath{\vec{\textbf{c}}}}_1\cdot {\ensuremath{\vec{\textbf{m}}}})^2 ({\ensuremath{\vec{\textbf{c}}}}_2\cdot {\ensuremath{\vec{\textbf{m}}}})^2 &+ \nonumber\\
&({\ensuremath{\vec{\textbf{c}}}}_1\cdot {\ensuremath{\vec{\textbf{m}}}})^2 ({\ensuremath{\vec{\textbf{c}}}}_3\cdot {\ensuremath{\vec{\textbf{m}}}})^2 &+ \nonumber\\
&({\ensuremath{\vec{\textbf{c}}}}_2\cdot {\ensuremath{\vec{\textbf{m}}}})^2 ({\ensuremath{\vec{\textbf{c}}}}_3\cdot {\ensuremath{\vec{\textbf{m}}}})^2)&+ \nonumber\\
K_\mathrm{c2} &({\ensuremath{\vec{\textbf{c}}}}_1\cdot {\ensuremath{\vec{\textbf{m}}}})^2 ({\ensuremath{\vec{\textbf{c}}}}_2\cdot {\ensuremath{\vec{\textbf{m}}}})^2 ({\ensuremath{\vec{\textbf{c}}}}_3\cdot{\ensuremath{\vec{\textbf{m}}}})^2 &+ \nonumber\\
K_\mathrm{c3} &(({\ensuremath{\vec{\textbf{c}}}}_1\cdot {\ensuremath{\vec{\textbf{m}}}})^4 ({\ensuremath{\vec{\textbf{c}}}}_2\cdot {\ensuremath{\vec{\textbf{m}}}})^4&+ \nonumber\\
&({\ensuremath{\vec{\textbf{c}}}}_1\cdot {\ensuremath{\vec{\textbf{m}}}})^4 ({\ensuremath{\vec{\textbf{c}}}}_3\cdot {\ensuremath{\vec{\textbf{m}}}})^4&+ \nonumber\\
&({\ensuremath{\vec{\textbf{c}}}}_2\cdot {\ensuremath{\vec{\textbf{m}}}})^4 ({\ensuremath{\vec{\textbf{c}}}}_3\cdot {\ensuremath{\vec{\textbf{m}}}})^4&) \label{EanisCA}\end{aligned}$$
which, just like in the uniaxial case, [<span style="font-variant:small-caps;">MuMax$^3$</span>]{}computes using the effective field:
$$\begin{aligned}
\mathcal{E}_\mathrm{anis} &=& -\frac{1}{4} {{\ensuremath{\vec{\textbf{B}}}}\ensuremath{_\mathrm{anis}}}(K_\mathrm{c1})\cdot{\ensuremath{\vec{\textbf{M}}}} -\frac{1}{6} {{\ensuremath{\vec{\textbf{B}}}}\ensuremath{_\mathrm{anis}}}(K_\mathrm{c2})\cdot{\ensuremath{\vec{\textbf{M}}}}\nonumber\\
&& -\frac{1}{8} {{\ensuremath{\vec{\textbf{B}}}}\ensuremath{_\mathrm{anis}}}(K_\mathrm{c3})\cdot{\ensuremath{\vec{\textbf{M}}}} \label{EanisCM}\end{aligned}$$
which is verified in [Fig.\[figAnis\]]{}.\
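The consistency between the cubic energy density (Eq.\[EanisCA\]) and its evaluation through the effective field (Eq.\[EanisCM\]) can also be checked numerically. The sketch below (our own Python, not MuMax$^3$ code) verifies the $K_\mathrm{c1}$ term for a random magnetization direction; the parameter values are arbitrary.

```python
import numpy as np

# Sketch (not MuMax3 code): numerically check that the first-order cubic
# anisotropy energy density K_c1*(a^2 b^2 + a^2 c^2 + b^2 c^2), with
# a = c1.m, b = c2.m, c = c3.m, equals -(1/4) B_anis(K_c1) . M.
# Parameter values are illustrative only.

Kc1, Msat = 4.8e4, 8.0e5          # J/m^3, A/m (illustrative values)
c1 = np.array([1.0, 0.0, 0.0])    # anisotropy axes: here the lab frame
c2 = np.array([0.0, 1.0, 0.0])
c3 = np.cross(c1, c2)             # computed automatically, as in the text

rng = np.random.default_rng(0)
m = rng.normal(size=3)
m /= np.linalg.norm(m)            # unit magnetization direction

a, b, c = c1 @ m, c2 @ m, c3 @ m
E_direct = Kc1 * (a*a*b*b + a*a*c*c + b*b*c*c)

B = -2*Kc1/Msat * ((b*b + c*c)*a*c1 + (a*a + c*c)*b*c2 + (a*a + b*b)*c*c3)
E_from_field = -0.25 * B @ (Msat * m)

assert np.isclose(E_direct, E_from_field)
```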
Thermal fluctuations {#Btherm}
--------------------
[<span style="font-variant:small-caps;">MuMax$^3$</span>]{}provides finite-temperature effects by means of a fluctuating thermal field [[$\vec{\textbf{B}}$]{}$_\mathrm{therm}$]{} according to Brown [@Brown1963temp]:
$${\ensuremath{\vec{\textbf{B}}}}_\mathrm{therm} = \vec\eta(\mathrm{step}) \sqrt{ \frac{2\mu_0\alpha k_\mathrm{B} T}{B_\mathrm{sat}\gamma_\mathrm{LL}\Delta V\Delta t} }$$
where $\alpha$ is the damping parameter, $k_\mathrm{B}$ the Boltzmann constant, $T$ the temperature, [$B_\mathrm{sat}$]{} the saturation magnetization expressed in Tesla, $\gamma_\mathrm{LL}$ the gyromagnetic ratio (1/Ts), $\Delta V$ the cell volume, $\Delta t$ the time step and $\vec\eta(\mathrm{step})$ a random vector from a standard normal distribution whose value is changed after every time step.\
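For a feeling of the magnitudes involved, the sketch below evaluates the thermal-field amplitude for illustrative Permalloy-like parameters; the cell size, time step and damping are our assumptions, not values from the text.

```python
import math

# Sketch: evaluate the per-component amplitude of the thermal field
# B_therm = eta * sqrt(2 mu0 alpha kB T / (Bsat gammaLL dV dt))
# for illustrative parameters (not MuMax3 source code).

mu0 = 4e-7 * math.pi        # vacuum permeability (T m/A)
kB = 1.380649e-23           # Boltzmann constant (J/K)
gammaLL = 1.7595e11         # gyromagnetic ratio (1/(T s))
alpha = 0.02                # damping (assumed)
T = 300.0                   # temperature (K)
Bsat = mu0 * 8.0e5          # saturation magnetization in Tesla (~1 T)
dV = (4e-9)**3              # cell volume for a 4 nm cell (assumed)
dt = 1e-13                  # fixed time step (assumed)

B_therm = math.sqrt(2*mu0*alpha*kB*T / (Bsat*gammaLL*dV*dt))
# The actual field is this amplitude times a standard normal random
# vector, redrawn after every time step.
print(f"B_therm amplitude ~ {B_therm*1e3:.0f} mT")
```

Note how the amplitude grows for smaller cells and shorter time steps, which is why a fixed $\Delta t$ enters the prefactor explicitly.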
![\[figSwitch\] Arrhenius plot of the thermal switching rate of a cubic particle (macrospin) with 10nm edge length, [$M_\mathrm{sat}$]{}=1MA/m, $\alpha$=0.1, $\Delta t$=10$^{-12}$s, $K_\mathrm{u1}$=1[$\times$10$^{4}$]{} or 2[$\times$10$^{4}$]{} J/m$^3$. Simulations were performed on an ensemble of 512$^2$ uncoupled particles for 0.1$\mu$s (high temperatures) or 1$\mu$s (low temperatures). Solid lines are the analytically expected switching rates (Eq.\[eqSwitch\]).](temp){width="0.5\linewidth"}
#### Solver constraints {#solver-constraints .unnumbered}
${\ensuremath{\vec{\textbf{B}}}}_\mathrm{therm}$ changes randomly in between time steps. Therefore, only [<span style="font-variant:small-caps;">MuMax$^3$</span>]{}’s Euler and Heun solvers (Section \[solver\]) can be used, as they do not require torque continuity in between steps. Additionally, with thermal fluctuations enabled we enforce a fixed time step $\Delta t$, which avoids feedback issues with adaptive time-step algorithms.\
#### Verification {#verification .unnumbered}
We test our implementation by calculating the thermal switching rate of a single (macro-)spin particle with easy uniaxial anisotropy. In the limit of a barrier high compared to the thermal energy, the switching rate $f$ is known analytically to be [@Breth2012]:
$$f = \gamma_\mathrm{LL}\frac{\alpha}{1+\alpha^2}\sqrt{\frac{8 K_\mathrm{u1}^3 V}{2\pi M_\mathrm{sat}^2 k_\mathrm{B} T}}e^{-K_\mathrm{u1}V/k_\mathrm{B}T} \label{eqSwitch}$$
[Fig.\[figSwitch\]]{} shows Arrhenius plots for the temperature-dependent switching rate of a particle with volume $V$=(10nm)$^3$ and $K_\mathrm{u1}$=1[$\times$10$^{4}$]{} or 2[$\times$10$^{4}$]{} J/m$^3$. The [<span style="font-variant:small-caps;">MuMax$^3$</span>]{}simulations correspond well to Eq.\[eqSwitch\].
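The Arrhenius behaviour of Eq.\[eqSwitch\] can be reproduced directly. The sketch below (illustrative Python, not simulation code) evaluates the analytical rate at two low temperatures and checks that the slope of $\ln f$ versus $1/T$ is dominated by the energy barrier $K_\mathrm{u1}V/k_\mathrm{B}$; the prefactor's weak $T$-dependence causes only a small deviation.

```python
import math

# Sketch: evaluate the analytical macrospin switching rate (Eq. eqSwitch)
# and verify its Arrhenius form: ln(f) should be nearly linear in 1/T with
# slope -Ku1*V/kB. Values mirror the 10 nm cube with Ku1 = 1e4 J/m^3.

kB = 1.380649e-23           # Boltzmann constant (J/K)
gammaLL = 1.7595e11         # gyromagnetic ratio (1/(T s))
alpha, Msat = 0.1, 1.0e6    # damping, saturation magnetization (A/m)
Ku1, V = 1.0e4, (10e-9)**3  # anisotropy constant (J/m^3), volume (m^3)

def rate(T):
    pref = gammaLL * alpha / (1 + alpha**2)
    pref *= math.sqrt(8 * Ku1**3 * V / (2*math.pi * Msat**2 * kB * T))
    return pref * math.exp(-Ku1*V / (kB*T))

# Arrhenius slope extracted from two temperatures in the high-barrier
# regime; the sqrt(1/T) prefactor only weakly perturbs the slope there.
T1, T2 = 40.0, 60.0
slope = (math.log(rate(T2)) - math.log(rate(T1))) / (1/T2 - 1/T1)
barrier = Ku1 * V / kB      # expected -slope, in Kelvin
print(slope, -barrier)
```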
Zhang-Li Spin-transfer torque {#tqZL}
-----------------------------
[<span style="font-variant:small-caps;">MuMax$^3$</span>]{}includes a spin-transfer torque term according to Zhang and Li [@Zhang2004], applicable when electrical current flows through more than one layer of cells:
$$\begin{aligned}
\nonumber {{\ensuremath{\vec{\textbf{\ensuremath{\tau}}}}}\ensuremath{_\mathrm{ZL}}} &=& \frac{1}{1+\alpha^2} ( \left(1+\xi\alpha\right) {{\ensuremath{\vec{\textbf{m}}}}\xspace}\times \left({{\ensuremath{\vec{\textbf{m}}}}\xspace}\times {({\ensuremath{\vec{\textbf{u}}}}\cdot\nabla){\ensuremath{\vec{\textbf{m}}}}}\right) +\\
& & \left(\xi-\alpha\right){\ensuremath{\vec{\textbf{m}}}}\times {({\ensuremath{\vec{\textbf{u}}}}\cdot\nabla){\ensuremath{\vec{\textbf{m}}}}}) \\
{\ensuremath{\vec{\textbf{u}}}} &=& \frac{\mu_B \mu_0}{2 e \gamma_0 B_\mathrm{sat} (1 + \xi^2)} {\ensuremath{\vec{\textbf{j}}}}\end{aligned}$$
where ${\ensuremath{\vec{\textbf{j}}}}$ is the current density, $\xi$ is the degree of non-adiabaticity, $\mu_B$ the Bohr magneton and [$B_\mathrm{sat}$]{} the saturation magnetization expressed in Tesla.
The validity of our implementation is tested by Standard Problem \#5 (Section \[std5\]).
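As an illustration of how the torque expression above is evaluated on a finite-difference grid, the following sketch (our own re-implementation, not MuMax$^3$ source) computes $\tau_\mathrm{ZL}$ for a current along $x$, with $({\ensuremath{\vec{\textbf{u}}}}\cdot\nabla){\ensuremath{\vec{\textbf{m}}}}$ approximated by finite differences:

```python
import numpy as np

# Sketch of the Zhang-Li torque evaluation on a 1D finite-difference grid
# (illustrative re-implementation, not MuMax3 source). m has shape
# (nx, 3); the current, and hence u, is taken along x only.

def zhang_li_torque(m, u_x, dx, alpha=0.02, xi=0.05):
    dmdx = np.gradient(m, dx, axis=0)      # (u.grad)m = u_x * dm/dx here
    udm = u_x * dmdx
    mxudm = np.cross(m, udm)               # m x (u.grad)m
    mxmxudm = np.cross(m, mxudm)           # m x (m x (u.grad)m)
    return ((1 + xi*alpha) * mxmxudm + (xi - alpha) * mxudm) / (1 + alpha**2)

dx = 4e-9
x = np.arange(64) * dx
# A smooth 180-degree in-plane rotation (toy "domain wall" profile)
phi = np.pi * x / x[-1]
m = np.stack([np.cos(phi), np.sin(phi), np.zeros_like(phi)], axis=1)

tau = zhang_li_torque(m, u_x=10.0, dx=dx)
assert tau.shape == m.shape
# A uniform magnetization experiences no spin-transfer torque:
m_uniform = np.tile([0.0, 0.0, 1.0], (64, 1))
assert np.allclose(zhang_li_torque(m_uniform, 10.0, dx), 0.0)
```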
Slonczewski Spin-transfer torque {#tqSL}
--------------------------------
[<span style="font-variant:small-caps;">MuMax$^3$</span>]{}provides a spin momentum torque term according to Slonczewski[@Slonczewski1996; @Xiao2004], transformed to the Landau-Lifshitz formalism:
$$\begin{aligned}
{{\ensuremath{\vec{\textbf{\ensuremath{\tau}}}}}\ensuremath{_\mathrm{SL}}} &=& \beta\frac{\epsilon-\alpha\epsilon'}{1+\alpha^2} ({{\ensuremath{\vec{\textbf{m}}}}\xspace}\times ({{\ensuremath{\vec{\textbf{m}}}}\xspace}_P \times {{\ensuremath{\vec{\textbf{m}}}}\xspace})) \nonumber\\
&&- \beta\frac{\epsilon'-\alpha\epsilon}{1+\alpha^2} {{\ensuremath{\vec{\textbf{m}}}}\xspace}\times {{\ensuremath{\vec{\textbf{m}}}}\xspace}_P \label{eqSTT}\\
\beta &=& \frac{j_z \hbar}{ {\ensuremath{M_\mathrm{sat}}}e d} \\
\epsilon &=& \frac{P{\ensuremath{\left({\ensuremath{\vec{\textbf{r}}}},t \right)}}\Lambda^2}{(\Lambda^2 + 1)+ (\Lambda^2-1)({{\ensuremath{\vec{\textbf{m}}}}\xspace}\cdot{{\ensuremath{\vec{\textbf{m}}}}\xspace}_P)}\end{aligned}$$
where $j_z$ is the current density along the $z$ axis, $d$ is the free layer thickness, ${{\ensuremath{\vec{\textbf{m}}}}\xspace}_P$ the fixed-layer magnetization, $P$ the spin polarization, the Slonczewski $\Lambda$ parameter characterizes the spacer layer, and $\epsilon'$ is the secondary spin-torque parameter.\
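For a single macrospin, Eq.\[eqSTT\] can be evaluated directly. The sketch below (illustrative values, not the standard-problem parameters) also checks a structural property of the torque: since both terms are cross products with ${\ensuremath{\vec{\textbf{m}}}}$, $\tau_\mathrm{SL}$ is always perpendicular to the magnetization.

```python
import numpy as np

# Sketch of the Slonczewski torque (Eq. eqSTT) for a single macrospin
# (illustrative re-implementation, not MuMax3 source).

hbar, e = 1.054571817e-34, 1.602176634e-19

def slonczewski_torque(m, mP, jz, Msat, d, P, Lambda, eps_prime, alpha):
    beta = jz * hbar / (Msat * e * d)
    eps = P * Lambda**2 / ((Lambda**2 + 1) + (Lambda**2 - 1) * (m @ mP))
    t1 = beta * (eps - alpha*eps_prime) / (1 + alpha**2) \
        * np.cross(m, np.cross(mP, m))
    t2 = -beta * (eps_prime - alpha*eps) / (1 + alpha**2) * np.cross(m, mP)
    return t1 + t2

m = np.array([1.0, 0.0, 0.0])
mP = np.array([np.cos(np.radians(20)), np.sin(np.radians(20)), 0.0])
# Parameter values below are illustrative assumptions:
tau = slonczewski_torque(m, mP, jz=4.7e11, Msat=8.0e5, d=5e-9,
                         P=0.5669, Lambda=2.0, eps_prime=0.1, alpha=0.01)
assert abs(m @ tau) < 1e-12        # torque is perpendicular to m
```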
[<span style="font-variant:small-caps;">MuMax$^3$</span>]{}only explicitly models the free layer magnetization. The fixed layer is handled in the same way as material parameters and is always considered to be on top of the free layer. The fixed layer’s stray field is not automatically taken into account, but can be pre-computed by the user and added as a space-dependent external field term.\
![\[figStd5b\] Verification of the Slonczewski torque: average magnetization in a 160nm[$\times$]{}80nm[$\times$]{}5nm rectangle with [$M_\mathrm{sat}$]{}=800[$\times$10$^{3}$]{}A/m, [$A_\mathrm{ex}$]{}=13[$\times$10$^{-12}$]{}J/m, $\alpha$=0.01, $P$ = 0.5669, $j_z$=4.6875[$\times$10$^{11}$]{}A/m$^2$, $\Lambda$=2, $\epsilon'$=1, ${\ensuremath{\vec{\textbf{m}}}}_p$=(cos(20$^\circ$), sin(20$^\circ$), 0), initial $m$=(1,0,0). Solid line calculated with OOMMF, points by [<span style="font-variant:small-caps;">MuMax$^3$</span>]{}.](std5b){width="0.5\linewidth"}
As a verification we consider switching an MRAM bit in 160nm[$\times$]{}80nm[$\times$]{}5nm Permalloy ([$M_\mathrm{sat}$]{}=800[$\times$10$^{3}$]{}A/m, [$A_\mathrm{ex}$]{}=13[$\times$10$^{-12}$]{}J/m, $\alpha$=0.01, $P$ = 0.5669) by a total current of -6mA along the $z$ axis, using $\Lambda$=2 and $\epsilon'$=1. These parameters were chosen so that none of the terms in Eq.\[eqSTT\] are zero. The fixed layer is polarized at 20$^\circ$ from the $x$ axis to avoid symmetry problems, and the initial magnetization was chosen uniform along $x$. The [<span style="font-variant:small-caps;">MuMax$^3$</span>]{}and OOMMF results shown in [Fig.\[figStd5b\]]{} correspond well.
Time integration {#solver}
================
Dynamics
--------
[<span style="font-variant:small-caps;">MuMax$^3$</span>]{}provides a number of explicit Runge-Kutta methods for advancing the Landau-Lifshitz equation (Eq.\[eqLLG\]):
- RK45, the Dormand-Prince method, offers 5th-order convergence and a 4th-order error estimate used for adaptive time step control. This is the default for dynamical simulations.

- RK23, the Bogacki-Shampine method, offers 3rd-order convergence and a 2nd-order error estimate used for adaptive time step control. This method is used when relaxing the magnetization to its ground state, in which case it performs better than RK45.

- RK12, Heun’s method, offers 2nd-order convergence and a 1st-order error estimate. This method is used for finite-temperature simulations as it does not require torque continuity in between time steps.

- RK1, Euler’s method, is provided for academic purposes.
These solvers’ convergence rates are verified in [Fig.\[figConvergence\]]{}, which serves as a test for their implementation and performance.\
![\[figConvergence\] Absolute error on a single spin after precessing without damping for one period in a 0.1T field, as a function of the different solvers’ time steps. The errors follow 1st, 2nd, 3rd or 5th order convergence (solid lines) for the respective solvers down to a limit set by the single precision arithmetic.](convergence){width="0.5\linewidth"}
#### Adaptive time step {#adaptive-time-step .unnumbered}
RK45, RK23 and RK12 provide adaptive time step control, i.e., the time step is chosen automatically to keep the error per step $\epsilon$ close to a preset value $\epsilon_0$. As the error per step we use $\epsilon = \mathrm{max}\left|\tau_\mathrm{high}-\tau_\mathrm{low}\right|\Delta t$, where $\tau_\mathrm{high}$ and $\tau_\mathrm{low}$ are the high-order and low-order torque estimates provided by the particular Runge-Kutta method and $\Delta t$ is the time step. The time step is adjusted using a default headroom of 0.8.
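The step-control rule above can be sketched as follows, here wrapped around Heun's method (RK12); this is a simplified illustration, not the MuMax$^3$ implementation:

```python
import numpy as np

# Sketch of adaptive time stepping around Heun's method (RK12) for
# dm/dt = tau(m): err = max|tau_high - tau_low| * dt is compared against
# MaxErr, and the step is scaled with a 0.8 headroom (rejected and
# retried if too large). Illustrative only, not MuMax3 source.

def heun_adaptive(tau, m, dt, max_err=1e-5, headroom=0.8):
    while True:
        tau_low = tau(m)                          # Euler (1st order) slope
        m_pred = m + dt * tau_low
        tau_high = 0.5 * (tau_low + tau(m_pred))  # Heun (2nd order) slope
        err = np.max(np.abs(tau_high - tau_low)) * dt
        new_dt = dt * headroom * np.sqrt(max_err / max(err, 1e-30))
        if err <= max_err:
            return m + dt * tau_high, new_dt      # accept; suggest next step
        dt = new_dt                               # reject; retry smaller

# Toy torque: precession about z with unit angular frequency
tau = lambda m: np.cross(m, np.array([0.0, 0.0, 1.0]))
m, dt = np.array([1.0, 0.0, 0.0]), 1.0
m, dt = heun_adaptive(tau, m, dt)
assert abs(np.linalg.norm(m) - 1.0) < 1e-2   # norm approximately preserved
assert dt < 1.0                              # step was reduced for accuracy
```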
![\[figMaxErr\] Absolute error on a single spin after precessing without damping for one period in a 0.1T field, as a function of the different solvers’ MaxErr settings. Solid lines represent 1st order fits. The same lower bound as in [Fig.\[figConvergence\]]{} is visible.](maxerr){width="0.5\linewidth"}
In [<span style="font-variant:small-caps;">MuMax$^3$</span>]{}, $\epsilon_0$ is accessible as the variable MaxErr. Its default value of 10$^{-5}$ was adequate for the presented standard problems. The relation between $\epsilon_0$ and the overall error at the end of the simulation is in general hard to determine. Nevertheless, we illustrate it in [Fig.\[figMaxErr\]]{} for a single period of spin precession under the same conditions as in Fig.\[figConvergence\]. The absolute error per precession scales linearly with $\epsilon_0$, although the absolute value of the error depends on the solver type and the exact simulation conditions.\
Energy minimization
-------------------
[<span style="font-variant:small-caps;">MuMax$^3$</span>]{}provides a [`relax()`]{} function that attempts to find the system’s energy minimum. This function disables the precession term in Eq.\[eqLLG\], so that the effective field points towards decreasing energy. [`Relax`]{} first advances in time until the total energy cuts into the numerical noise floor. At that point the state will already be close to equilibrium. We then monitor the magnitude of the torque instead of the energy, since close to equilibrium the torque decreases monotonically and is less noisy than the energy. We advance further until the torque cuts into the noise floor as well. Each time that happens, we decrease `MaxErr` and continue until `MaxErr`=10$^{-9}$ is reached. At this point it does not make sense to increase the accuracy any further (see Fig.\[figMaxErr\]) and we stop advancing.\
This [`Relax`]{} procedure was used in the presented standard problems, where it proved adequate. Typical residual torques after [`Relax`]{} are of the order of 10$^{-4}$–10$^{-7}$$\gamma_\mathrm{LL}$T, indicating that the system is indeed very close to equilibrium. Nevertheless, as with any energy minimization technique, there is always a possibility that the system ends up in a saddle point or very flat part of the energy landscape.\
[`Relax`]{} internally uses the RK23 solver, which we noticed performs better than RK45 in most relaxation scenarios. Near equilibrium, both solvers tend to take similarly large time steps, but RK23 needs only half as many torque evaluations per step as RK45.
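A toy version of this damping-only relaxation can be written in a few lines. With the precession term disabled, the dynamics reduce to $\dot{{\ensuremath{\vec{\textbf{m}}}}} = -{\ensuremath{\vec{\textbf{m}}}}\times({\ensuremath{\vec{\textbf{m}}}}\times{\ensuremath{\vec{\textbf{B}}}}_\mathrm{eff})$, which follows the energy gradient. The sketch below (hypothetical units, a single spin, and a single fixed tolerance instead of the MaxErr ladder) relaxes a spin onto a uniaxial easy axis:

```python
import numpy as np

# Toy sketch of damping-only relaxation (illustrative, not the MuMax3
# implementation): with precession disabled, dm/dt = -m x (m x B_eff)
# descends the energy landscape. Here B_eff is a simple uniaxial
# anisotropy field along z, in hypothetical units.

def b_eff(m):
    return np.array([0.0, 0.0, m[2]])   # easy axis along z (assumed field)

m = np.array([0.6, 0.8, 0.01])          # start almost in the hard plane
m /= np.linalg.norm(m)
dt = 0.1
for _ in range(10000):
    torque = -np.cross(m, np.cross(m, b_eff(m)))
    if np.linalg.norm(torque) < 1e-9:   # crude stand-in for the MaxErr ladder
        break
    m = m + dt * torque
    m /= np.linalg.norm(m)              # renormalize after each step

assert abs(abs(m[2]) - 1.0) < 1e-6      # relaxed onto the easy axis
```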
Standard Problems
=================
In this section we provide solutions to the micromagnetic standard problems \#1–4 put forward by the [$\mu$Mag]{}modeling group [@mumag] and to standard problem \#5 proposed by Najafi [*et al.*]{}[@Najafi2009]. Reference solutions were taken from [@mumag] as noted, or otherwise calculated with OOMMF 1.2alpha5bis [@oommf].
Standard Problem \#1 {#std1}
--------------------
The first $\mu$Mag standard problem involves the hysteresis loops of a 1[$\mu$m]{}[$\times$]{}2[$\mu$m]{}[$\times$]{}20[nm]{} Permalloy rectangle ([$A_\mathrm{ex}$]{}= 1.3[$\times$10$^{-11}$]{}J/m, [$M_\mathrm{sat}$]{}= 8[$\times$10$^{5}$]{} A/m, [$K_\mathrm{u1}$]{}= 5[$\times$10$^{2}$]{} J/m$^3$ uniaxial, with easy axis nominally parallel to the long edges of the rectangle) for the field approximately parallel to the long and short axis, respectively. Our solution is presented in Fig.\[figStd1\]. Unfortunately, the submitted solutions [@mumag] do not agree with each other, making it impossible to assess the correctness quantitatively.
![\[figStd1\] [<span style="font-variant:small-caps;">MuMax$^3$</span>]{}solution for standard problem \#1, using a 2D grid of 3.90625nm wide cells. Open symbols represent the virgin curve starting from a vortex state. After each field step we applied thermal fluctuations with $\alpha=0.05$, $T=300K$ for 500ps to allow the magnetization to jump over small energy barriers. There are no consistent standard solutions to compare with.](std1){width="0.5\linewidth"}
Standard Problem \#2 {#std2}
--------------------
The second $\mu$Mag standard problem considers a thin film of width $d$, length $5d$ and thickness $0.1d$, all expressed in terms of the exchange length $l_\mathrm{ex} = \sqrt{2A_\mathrm{ex}/\mu_0M_\mathrm{sat}^2}$. The remanence and coercive field, expressed in units of [$M_\mathrm{sat}$]{}, are to be calculated as a function of $d/l_\mathrm{ex}$.\
The coercivity, shown in Fig.\[figStd2hc\], behaves interestingly in the small-particle limit where an analytical solution exists[@Donahue2000]. In that case the magnetization is uniform and the magnetostatic field dominates the behaviour. Of the solutions submitted to the [$\mu$Mag]{}group [@Streibl1999; @Lopez-Diaz1999; @McMichael1999; @Donahue2000], the Streibl[@Streibl1999], Donahue[@Donahue2000] (OOMMF1.1) and [<span style="font-variant:small-caps;">MuMax$^3$</span>]{}results best approach the small-particle limit. It was shown by Donahue [*et al.*]{}[@Donahue2000] that proper averaging of the magnetostatic field over each cell volume is needed to accurately reproduce the analytical limit. Hence this standard problem serves as a test for the numerical integration of our demagnetizing kernel.\
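For reference, the exchange length that sets the scale of this problem is easily evaluated; for typical Permalloy parameters ($A_\mathrm{ex}$=1.3[$\times$10$^{-11}$]{}J/m, $M_\mathrm{sat}$=8[$\times$10$^{5}$]{}A/m) it comes out at about 5.7nm:

```python
import math

# Quick evaluation of the exchange length used throughout this section,
# l_ex = sqrt(2*Aex / (mu0 * Msat^2)), for typical Permalloy parameters.

mu0 = 4e-7 * math.pi   # vacuum permeability (T m/A)
Aex = 1.3e-11          # exchange stiffness (J/m)
Msat = 8.0e5           # saturation magnetization (A/m)

l_ex = math.sqrt(2*Aex / (mu0 * Msat**2))
print(f"l_ex = {l_ex*1e9:.2f} nm")   # about 5.7 nm
```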
![\[figStd2rem\] Remanence for standard problem \#2 as a function of the magnet size $d$ expressed in exchange lengths $l_\mathrm{ex}$. The [<span style="font-variant:small-caps;">MuMax$^3$</span>]{}calculations (points) use automatically chosen cell sizes between 0.25 and 0.5$l_\mathrm{ex}$. OOMMF results (line) were taken from[@mumag].](std2rem){width="0.5\linewidth"}
![\[figStd2hc\]Coercivity for standard problem \#2 as a function of the magnet size $d$ expressed in exchange lengths $l_\mathrm{ex}$. [<span style="font-variant:small-caps;">MuMax$^3$</span>]{}calculations (points) use automatically chosen cell sizes between 0.25 and 0.5$l_\mathrm{ex}$. OOMMF results (line) taken from[@mumag]. The slight discrepancy at high $d$ is attributed to OOMMF’s solution using larger cells there. The analytical limit for very small size is by Donahue [*et al.*]{}[@Donahue2000].](std2hc){width="0.5\linewidth"}
Standard Problem \#3 {#std3}
--------------------
Standard problem \#3 considers a cube with edge length $L$ expressed in exchange lengths $l_\mathrm{ex} = \sqrt{2A_\mathrm{ex}/\mu_0M_\mathrm{sat}^2}$. The magnet has uniaxial anisotropy with $K_\mathrm{u1}=0.1 K_m$, where $K_m=1/2\mu_0 M_\mathrm{sat}^2$, and easy axis parallel to the $z$-axis. The critical edge size $L$ where the ground state transitions between a quasi-uniform and a vortex-like state is to be found; it is expected around $L$=8.\
![\[figStd3\]Standard problem \#3: energy densities of the flower (a), twisted flower (b), vortex (c) and canted vortex (d) states as a function of the cube edge length $L$. Transitions of the ground state are marked with vertical lines at $L=8.16$ and $L=8.47$.](std3 "fig:"){width="0.5\linewidth"}\
This problem was solved using a 16[$\times$]{}16[$\times$]{}16 grid. The cube was initialized with approximately 3,000 different random initial magnetization states for random edge lengths $L$ between 7.5 and 9, and relaxed to equilibrium. Four stable states were found, shown in Fig.\[figStd3\]: a quasi-uniform flower state (a), a twisted flower state (b), a vortex state (c) and a canted vortex (d). Cubes of different sizes were then initialized to these states and relaxed to equilibrium. The resulting energy for each state, shown in Fig.\[figStd3\], reveals the absolute ground states in the considered range: the flower state for $L<8.16$, the twisted flower for $8.16<L<8.47$ and the vortex for $L>8.47$.\
The transition at $L$=8.47 is in quantitative agreement with the solutions posted to [$\mu$Mag]{}by Rave [*et al.*]{}[@Rave1998] and by Martins [*et al.*]{}[@mumag]. The existence of the twisted flower state was already noted by Hertel [*et al.*]{}[@mumag], although without determining the flower to twisted flower transition point.\
Standard Problem \#4 {#std4}
--------------------
Standard problem \#4 considers dynamic magnetization reversal in a 500nm[$\times$]{}125nm[$\times$]{}3[nm]{} Permalloy magnet ([$A_\mathrm{ex}$]{}=1.3[$\times$10$^{-11}$]{}J/m, [$M_\mathrm{sat}$]{}=8[$\times$10$^{5}$]{}A/m). The initial state is an S-state obtained after saturating along the (1,1,1) direction. The magnet is then reversed by either field (a): (-24.6, 4.3, 0)mT or field (b): (-35.5, -6.3, 0)mT. The time-dependent average magnetization should be reported, as well as the space-dependent magnetization when $<m_x>$ first crosses zero.\
Our solution, shown in Fig.\[figStd4\], agrees with OOMMF.\
![\[figStd4\][<span style="font-variant:small-caps;">MuMax$^3$</span>]{}(dots) and OOMMF (lines) solution to standard problem \#4a (top graph) and \#4b (bottom graph), as well as space-dependent magnetization snapshots when $<m_x>$ crosses zero, for fields (a) and (b). All use a 200[$\times$]{}50[$\times$]{}1 grid.](std4 "fig:"){width="0.5\linewidth"}\
Standard Problem \#5 {#std5}
--------------------
Standard problem \#5 proposed by Najafi [*et al.*]{}[@Najafi2009] considers a 100nm$\times$100nm$\times$10nm Permalloy square ($A=13\times10^{-12}$J/m, ${\ensuremath{M_\mathrm{sat}}}=8\times10^5$A/m, $\alpha$=0.1, $\xi$=0.05) with an initial vortex magnetization. A homogeneous current $\mathbf{j}=10^{12}$Am$^{-2}$ along $x$, applied at $t=0$, drives the vortex towards a new equilibrium state. The obtained time-dependent average magnetization, shown in Fig.\[figStd5\], agrees well with the OOMMF solution.
![\[figStd5\] [<span style="font-variant:small-caps;">MuMax$^3$</span>]{}(dots) and OOMMF (lines) solution to standard problem \#5, both using a 50[$\times$]{}50[$\times$]{}5 grid.](std5){width="0.5\linewidth"}
Extensions
==========
[<span style="font-variant:small-caps;">MuMax$^3$</span>]{}is designed to be modular and extensible. Some of our extensions, described below, have been merged into the mainline code because they may be of general interest. Nevertheless, extensions are considered specific to certain needs and are less generally usable than the aforementioned main features. E.g., MFM images and Voronoi tessellation are only implemented in 2D and only qualitatively tested.\
Moving frame
------------
[<span style="font-variant:small-caps;">MuMax$^3$</span>]{}provides an extension to translate the magnetization with respect to the finite difference grid (along the $x$-axis), inserting new values from the side. This allows the simulation window to seemingly “follow” a region of interest, like a domain wall moving in a long nanowire, without having to simulate the entire wire. [<span style="font-variant:small-caps;">MuMax$^3$</span>]{}can automatically translate the magnetization to keep an average magnetization component of choice as close to zero as possible, or the user may arbitrarily translate ${\ensuremath{\vec{\textbf{m}}}}$ from the input script.\
When following a domain wall in a long in-plane magnetized wire, we also provide the possibility to remove the magnetic charges on the ends of the wire. This simulates an effectively infinitely long wire without closure domains, as illustrated in Fig.\[figMove\].\
Finally, when shifting the magnetization there is an option to also shift the material regions and geometry along. The geometry and material parameters for the “new” cells that enter the simulation from the side are automatically re-calculated so that new grains and geometrical features may seamlessly enter the simulation. This can be useful for, e.g., simulating a long racetrack with notches, as illustrated in Fig.\[figMove\], or a moving domain wall in a grainy material, as published in [@Leliaert2014].\
![\[figMove\] Top frame: magnetization in a 1$\mu$m wide, 20nm thick Permalloy wire of finite length. The remaining frames apply edge charge removal to simulate an infinitely long wire. The domain wall is driven by a 3[$\times$10$^{12}$]{}A/m$^2$ current while being followed by the simulation window, so that it appears steady although it moves at high speed (visible from the wall breakdown). While moving, new notches enter the simulation from the right.](move){width="0.5\linewidth"}
Voronoi Tessellation
--------------------
[<span style="font-variant:small-caps;">MuMax$^3$</span>]{}provides 2D Voronoi tessellation as a way to simulate grains in thin films, similar to OOMMF [@Lau2009]. It is possible to have [<span style="font-variant:small-caps;">MuMax$^3$</span>]{}set up the regions map with grain-shaped islands, randomly colored with up to 256 region numbers (Fig.\[figVoronoi\](a)). The material parameters in each of those regions can then be varied to simulate, e.g., grains with randomly distributed anisotropy axes, or even to change the exchange coupling between them (Fig.\[figVoronoi\](b)).\
Our implementation is compatible with the moving simulation window. E.g., when the simulation window is following a moving domain wall, new grains automatically enter the simulation from the sides. The new grains are generated using hashes of the cell coordinates, so that there is no need to store a (potentially very large) map of all the grains beyond the current simulation grid. More details can be found in [@Leliaert2014].\
![\[figVoronoi\]Example of a Voronoi tessellation with average 100nm grains in a 2048$\mu$m wide disk. Left: cells colored by their region index (0–255). Right: boundaries between the grains visualized by reducing the exchange coupling between them (Eq.\[eqBexch\]), and outputting [<span style="font-variant:small-caps;">MuMax$^3$</span>]{}’s [`ExchCoupling`]{} quantity, the average [$M_\mathrm{sat}$]{}/[$A_\mathrm{ex}$]{} around each cell.](voronoi){width="0.5\linewidth"}
Magnetic force microscopy
-------------------------
[<span style="font-variant:small-caps;">MuMax$^3$</span>]{}has a built-in capability to generate magnetic force microscopy (MFM) images in Dynamic (AC) mode from a 2D magnetization. We calculate the derivative of the force between tip and sample from the convolution:
$$\frac{\partial F_z}{\partial z} = \sum_{i=x,y,z} M_i(x,y) * \frac{\partial^2 B_{\mathrm{tip},i}(x,y)}{\partial{z}^2} \label{eqMFM}$$
where ${\ensuremath{\vec{\textbf{B}}}}_\mathrm{tip}$ is the tip’s stray field evaluated in the sample plane. [<span style="font-variant:small-caps;">MuMax$^3$</span>]{}provides the field of an idealized dipole or monopole tip at arbitrary elevation. No attempt is made to reproduce tip fields in absolute terms, as our only goal is to produce output proportional to the actual MFM contrast, as shown in Fig.\[figMFM\].\
Eq. \[eqMFM\] is implemented using FFT-acceleration similar to the magnetostatic field, and is evaluated on the GPU. Hence MFM image generation is very fast and generally takes only a few milliseconds. This makes it possible to watch “real-time” MFM images in the web interface while the magnetization is evolving in time.\
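The FFT-based evaluation of Eq.\[eqMFM\] can be sketched as below (CPU-only NumPy with a made-up smooth placeholder kernel instead of a real tip field; MuMax$^3$ evaluates this on the GPU):

```python
import numpy as np

# Sketch of the FFT-accelerated convolution of Eq. (eqMFM), illustrative
# and CPU-only. The tip kernel here is a made-up smooth placeholder,
# not a real monopole/dipole tip field.

def mfm_contrast(M, kernels):
    # M: (3, ny, nx) magnetization components; kernels: matching tip
    # kernels. Zero-padding to 2N avoids circular wrap-around.
    ny, nx = M.shape[1:]
    shape = (2*ny, 2*nx)
    out = np.zeros(shape)
    for Mi, Ki in zip(M, kernels):
        out += np.real(np.fft.ifft2(np.fft.fft2(Mi, shape)
                                    * np.fft.fft2(Ki, shape)))
    return out[:ny, :nx]

ny = nx = 64
M = np.zeros((3, ny, nx))
M[2, 20:44, 20:44] = 1.0                      # toy out-of-plane patch
y, x = np.mgrid[0:ny, 0:nx]
r2 = (x - nx/2)**2 + (y - ny/2)**2
kernel = np.exp(-r2 / 50.0)                   # placeholder tip kernel
image = mfm_contrast(M, [kernel]*3)
assert image.shape == (ny, nx)
```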
![\[figMFM\] (a) vortex magnetization in a 750nm[$\times$]{}750nm[$\times$]{}10nm Permalloy square. (b), (c) are [<span style="font-variant:small-caps;">MuMax$^3$</span>]{}-generated MFM images at 50nm and 100nm lift height respectively, both using AC mode and a monopole tip model.](mfm){width="0.5\linewidth"}
Performance {#perf}
===========
Simulation size
---------------
Nowadays, GPU’s offer massive computational performance of several TFlop/s per device. However, that compute power is only fully utilized in case of sufficient parallelization, i.e., for sufficiently large simulations. This is clearly illustrated by considering how many cells can be processed per second. I.e., $N_\mathrm{cells}/t_\mathrm{update}$ with $t_\mathrm{update}$ the time needed to calculate the torque for $N_\mathrm{cells}$ cells. We refer to this quantity as the throughput. Given the overall complexity of $\mathcal{O}(N\log(N))$ one would expect a nearly constant throughput that slowly degrades at high $N$. For all presented throughputs, magnetostatic and exchange interactions were enabled and no output was saved.\
The throughput presented in Fig.\[figAllsize\] for a square 2D simulation on a GTX TITAN GPU only exhibits this theoretical, nearly constant behaviour starting from about 256,000 cells. Below that size, the GPU is not fully utilized and performance drops. Fortunately, large simulations are exactly where GPU speed-up is needed most and where performance is optimal.\
[<span style="font-variant:small-caps;">MuMax$^3$</span>]{}’s performance is dominated by FFT calculations using the cuFFT library, which performs best for power-of-two sizes and acceptably for 7-smooth numbers (having only factors 2, 3, 5 and 7). Other sizes, especially primes, should be avoided. This is clearly illustrated in Fig.\[figAllsize\], where sizes other than the recommended ones incur a performance penalty of up to about an order of magnitude. Somewhat oversizing the grid to the next smooth number may therefore be beneficial to the performance.\
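Picking a good grid size can be automated; the helper below (our own sketch, not MuMax$^3$ code) tests for 7-smoothness and rounds a desired size up to the next recommended value:

```python
# Helper sketch: check whether a grid size is 7-smooth (only factors
# 2, 3, 5, 7) and round a desired size up to the next smooth number,
# for good cuFFT performance. (Illustrative, not MuMax3 code.)

def is_smooth(n):
    for p in (2, 3, 5, 7):
        while n % p == 0:
            n //= p
    return n == 1

def next_smooth(n):
    while not is_smooth(n):
        n += 1
    return n

assert is_smooth(1024) and is_smooth(1000)   # 2^10 and 2^3 * 5^3
assert not is_smooth(1021)                   # prime: a bad FFT size
assert next_smooth(1021) == 1024
```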
Note that the data in Fig.\[figPerf\] is for a 2D simulation. Typically, a 3D simulation with the same total number of cells costs an additional factor of about 1.5[$\times$]{} in compute time and memory, due to the additional FFTs along the $z$-axis.\
On the other hand, simulations with periodic boundary conditions run considerably faster than their non-periodic counterparts. This is due to the absence of zero-padding, which reduces the FFT sizes by a factor of 2 in each periodic direction. Memory consumption is considerably lower in this case as well.\
![\[figAllsize\] [<span style="font-variant:small-caps;">MuMax$^3$</span>]{}throughput on GTX TITAN GPU, for all $N\times N$ grid sizes up to 1024[$\times$]{}1024. Numbers with only factors 2,3,5,7 are marked with an open box, pure powers of two (corresponding to Fig.\[figPerf\]) with a full box. Proper grid sizes should be chosen to ensure optimal performance.](allsize){width="0.5\linewidth"}
Hardware
--------
Apart from the simulation size, [<span style="font-variant:small-caps;">MuMax$^3$</span>]{}’s performance is strongly affected by the particular GPU hardware. We highlight the differences between several GPU models by comparing their throughput in Fig.\[figPerf\]. This was done for a 4M cell simulation where all tested GPUs were fully utilized. So the numbers are indicative of all sufficiently large simulation sizes.\
We also included OOMMF’s throughput on a quad-core 2.1GHz Core i7 CPU to give a rough impression of the GPU speed-up. The measured OOMMF performance (not clearly distinguishable in Fig.\[figPerf\]) was around 4[$\times$10$^{6}$]{} cells/s. So with a proper GPU and sufficiently large grid sizes, a speed-up of 20–45[$\times$]{} with respect to a quad-core CPU can be reached or, equivalently, an 80–180[$\times$]{} speed-up compared to a single CPU core. This is in line with earlier <span style="font-variant:small-caps;">MuMax</span>1 and MicroMagnum benchmarks [@mumax; @micromagnum]. It must be noted, however, that OOMMF operates in double precision in contrast to [<span style="font-variant:small-caps;">MuMax$^3$</span>]{}’s single-precision arithmetic, and also does not suffer reduced throughput for small simulations.\
![\[figPerf\] [<span style="font-variant:small-caps;">MuMax$^3$</span>]{}throughput, measured in how many cells can have their torque evaluated per second (higher is better), for a 4[$\times$10$^{6}$]{} cell simulation (indicative for all sufficiently large simulations). For comparison, OOMMF performance on a quad-core 2.1GHz CPU lies around 4M cells/s.](gpus){width="0.5\linewidth"}
Finally, MicroMagnum’s throughput (not presented) was found to be nearly indistinguishable from [<span style="font-variant:small-caps;">MuMax$^3$</span>]{}’s. This is unsurprising since the performance of both [<span style="font-variant:small-caps;">MuMax$^3$</span>]{} and MicroMagnum is dominated by CUDA’s FFT routines. In our benchmarks on a GTX650M, differences between both packages were comparable to the noise on the timings.\
Memory use
----------
![\[figMem\] Indication of the number of cells that can be addressed with 2GB of GPU memory for simulations in 2D, 3D and thin 3D (here 3 layers) and using different solvers. RK45 is [<span style="font-variant:small-caps;">MuMax$^3$</span>]{}’s default solver for dynamics, RK23 for relaxation. Only magnetostatic, exchange and Zeeman terms were enabled.](mem){width="0.5\linewidth"}
In contrast to their massive computational power, GPUs are typically limited to rather small amounts of memory (currently 1–6GB). Therefore, [<span style="font-variant:small-caps;">MuMax$^3$</span>]{}was heavily optimized to use as little memory as possible. E.g., we exploit the magnetostatic kernel symmetries and zero elements, and make heavy use of memory pooling and recycling.\
Also, [<span style="font-variant:small-caps;">MuMax$^3$</span>]{}employs minimal zero-padding in the common situation of 3D simulations with only a small number of layers. For up to 10 layers there is no need to use a power of two, and memory usage will be somewhat reduced as well.\
In this way, [<span style="font-variant:small-caps;">MuMax$^3$</span>]{}on a GPU with only 2GB of memory is able to simulate about 9 million cells in 2D and 6 million in 3D, or about 2[$\times$]{}more than MicroMagnum v0.2[@micromagnum] (see Fig.\[figMem\]). When using a lower-order solver, this number can be further increased to 12M cells with RK23 (2D) or 16M cells with RK12 (2D), all in 2GB. Cards like the GTX TITAN and K20XM, with 6GB of RAM, can store proportionally more, e.g., 31M cells for 2D with the RK45 solver.\
Conclusion
==========
We have presented in detail the micromagnetic model employed by [<span style="font-variant:small-caps;">MuMax$^3$</span>]{}, as well as a verification for each of its components. GPU acceleration provides a speed-up of 1–2 orders of magnitude compared to CPU-based micromagnetic simulations. In addition, [<span style="font-variant:small-caps;">MuMax$^3$</span>]{}’s low memory requirements open up the possibility of very large-scale micromagnetic simulations, a regime where the GPU’s potential is fully utilized and where the speed-up is also needed most. E.g., depending on the solver type [<span style="font-variant:small-caps;">MuMax$^3$</span>]{}can fit 10–16 million cells in 2GB GPU RAM — about 2[$\times$]{}more than MuMax2 or MicroMagnum.\
[<span style="font-variant:small-caps;">MuMax$^3$</span>]{}is open-source and designed to be easily extensible, so anybody can in principle add functionality. Some extensions like a moving simulation window, edge charge removal, Voronoi tessellation and MFM images have been permanently merged into [<span style="font-variant:small-caps;">MuMax$^3$</span>]{}and more extensions are expected in the future.
Acknowledgements
================
This work was supported by the Flanders Research Foundation (FWO).\
The authors would like to cordially thank Ahmad Syukri bin Abdollah, Alex Mellnik, Aurelio Hierro, Ben Van de Wiele, Colin Jermain, Damien Louis, Ezio Iacocca, Gabriel Chaves, Graham Rowlands, Henning Ulrichs, Joo-Von Kim, Lasse Laurson, Mathias Helsen, Raffaele Pellicelli, Rémy Lassalle-Balier, Robert Stamps and Xuanyao (Kelvin) Fong for the fruitful discussions, contributions or feedback, as well as all others who tested early <span style="font-variant:small-caps;">MuMax</span> versions.\
[<span style="font-variant:small-caps;">MuMax$^3$</span>]{}uses svgo (<http://github.com/ajstarks/svgo>), copyright Anthony Starks, and freetype-go (<http://code.google.com/p/freetype-go>), copyright Google Inc., Jeff R. Allen, Rémy Oudompheng, Roger Peppe.
Input scripts {#appendixA}
=============
Geometry ([Fig.\[figCSG\]]{})
-----------------------------
Precession ([Fig.\[figtqLL\]]{})
--------------------------------
Cube demag tensor (Table\[tabCube\])
------------------------------------
Long-range demag ([Fig.\[figLong\]]{})
--------------------------------------
Sheet demag tensor with PBC (Table\[tabPBC1\])
----------------------------------------------
Rod demag tensor with PBC (Table\[tabPBC2\])
--------------------------------------------
Exchange energy ([Fig.\[figExchE\]]{})
--------------------------------------
DM interaction ([Fig.\[figThiaville\]]{})
-----------------------------------------
Uniaxial anisotropy ([Fig.\[figAnis\]]{})
-----------------------------------------
Cubic anisotropy ([Fig.\[figAnis\]]{})
--------------------------------------
Thermal fluctuations ([Fig.\[figSwitch\]]{})
--------------------------------------------
Slonzewski STT ([Fig.\[figStd5b\]]{})
-------------------------------------
Solver convergence and MaxErr (Figs.\[figConvergence\] and \[figMaxErr\])
-------------------------------------------------------------------------
Standard Problem 1 ([Fig.\[figStd1\]]{})
----------------------------------------
Standard Problem 2 (Figs.\[figStd2rem\] and \[figStd2hc\])
----------------------------------------------------------
Standard Problem 3 ([Fig.\[figStd3\]]{})
----------------------------------------
Standard Problem 4 ([Fig.\[figStd4\]]{})
----------------------------------------
Standard Problem 5 ([Fig.\[figStd5\]]{})
----------------------------------------
Extension: moving reference frame ([Fig.\[figMove\]]{})
-------------------------------------------------------
Extension: Magnetic Force Microscopy ([Fig.\[figMFM\]]{})
---------------------------------------------------------
Benchmark: throughput (Figs.\[figAllsize\] and \[figPerf\])
-----------------------------------------------------------
Benchmark: memory ([Fig.\[figMem\]]{})
--------------------------------------
---
abstract: 'Synchronization underlies phenomena including memory and perception in the brain, coordinated motion of animal flocks, and stability of the power grid. These synchronization phenomena are often modeled through networks of phase-coupled oscillating nodes. Heterogeneity in the node dynamics, however, may prevent such networks from achieving the required level of synchronization. In order to guarantee synchronization, external inputs can be used to pin a subset of nodes to a reference frequency, while the remaining nodes are steered toward synchronization via local coupling. In this paper, we present a submodular optimization framework for selecting a set of nodes to act as external inputs in order to achieve synchronization from almost any initial network state. We derive threshold-based sufficient conditions for synchronization, and then prove that these conditions are equivalent to connectivity of a class of augmented network graphs. Based on this connection, we map the sufficient conditions for synchronization to constraints on submodular functions, leading to efficient algorithms with provable optimality bounds for selecting input nodes. We illustrate our approach via numerical studies of synchronization in networks from power systems, wireless networks, and neuronal networks.'
author:
- 'Andrew Clark, Basel Alomair, Linda Bushnell, and Radha Poovendran, [^1][^2][^3]'
bibliography:
- 'TAC14.bib'
title: 'Global Practical Synchronization in Kuramoto Networks: A Submodular Optimization Framework'
---
[^1]: A. Clark is with the Department of Electrical and Computer Engineering, Worcester Polytechnic Institute, Worcester, MA, 01609. [aclark@wpi.edu]{}
[^2]: L. Bushnell and R. Poovendran are with the Department of Electrical Engineering, University of Washington, Seattle, WA, 98195-2500. [{lb2, rp3}@uw.edu]{}
[^3]: B. Alomair is with the Center for Cybersecurity, King Abdulaziz City for Science and Technology, Riyadh, Saudi Arabia. [alomair@kacst.edu.sa]{}
---
abstract: 'Dynamic replication is a wide-spread multi-copy routing approach for efficiently coping with the intermittent connectivity in mobile opportunistic networks. According to it, a node forwards a message replica to an encountered node based on a utility value that captures the latter’s fitness for delivering the message to the destination. The popularity of the approach stems from its flexibility to effectively operate in networks with diverse characteristics without requiring special customization. Nonetheless, its drawback is the tendency to produce a high number of replicas that consume limited resources such as energy and storage. To tackle the problem we make the observation that network nodes can be grouped, based on their utility values, into clusters that portray different delivery capabilities. We exploit this finding to transform the basic forwarding strategy, which is to move a packet using nodes of increasing utility, and actually forward it through clusters of increasing delivery capability. The new strategy works in synergy with the basic dynamic replication algorithms and is fully configurable, in the sense that it can be used with virtually any utility function. We also extend our approach to work with two utility functions at the same time, a feature that is especially efficient in mobile networks that exhibit social characteristics. By conducting experiments in a wide set of real-life networks, we empirically show that our method is robust in reducing the overall number of replicas in networks with diverse connectivity characteristics without at the same time hindering delivery efficiency.'
author:
-
bibliography:
- 'cbr-bibl.bib'
title: A Replication Strategy for Mobile Opportunistic Networks based on Utility Clustering
---
opportunistic networks, delay-tolerant networks, mobile social networks, cluster-based routing
Introduction
============
Packet replication has been the dominant routing approach for coping with the intermittent and random connectivity in mobile opportunistic networks, especially in those where nodes exhibit human mobility. The idea behind replication is straightforward: more packet copies increase the probability that a node with a replica will encounter the destination and thus deliver the packet. Yet, replication comes at the cost of more transmissions and increased storage requirements. Therefore, it is imperative to control the level of replication and improve the trade-off between delivery efficiency and cost (both energy and storage related). In other words, it is critical to reduce replication without sacrificing delivery efficiency. So far, the proposed multi-copy routing algorithms work towards this direction but follow two different replication approaches: the “constrained” (or “spray-based”) and the “dynamic” one [@DF; @COORDconf; @rfc6693]. In the first approach, the source node starts with a predetermined number of replicas ($L$). Each node with multiple copies makes autonomous decisions on how to distribute them. Algorithms in this category differ in the decision making regarding the distribution of replicas. The advantage of this approach is that it provides an easy way of controlling replication, since $L$ is the upper limit of copies in the network. The downside is that selecting the optimal $L$ is not straightforward since the choice depends on network properties that are not known beforehand. In “dynamic” replication, the number of replicas is not predetermined. Instead, each node carrying a packet dynamically creates replicas on a contact basis, i.e., according to the network connectivity. This aspect provides algorithms with the capacity to accommodate networks with diverse characteristics. To control replication levels, in the majority of dynamic schemes a node chooses a subset of its contacts for creating replicas based on the concept of *utility*.
The latter is a value that can be calculated using different methods or functions [@Greedy; @EBR; @SimBetTS; @UtilSpray; @DF; @Fresh; @Prophet; @Friendship] and summarizes the fitness (or quality) of a node for delivering and/or forwarding a message.
In this work, we focus on dynamic replication due to its flexibility and versatility in diverse types of networks. Unfortunately, dynamic schemes exhibit an inclination towards over-replication, i.e., create an unnecessary amount of replicas [@DF]. The problem is more severe in the subclass of schemes that endorse a simple “Compare & Replicate" approach [@Greedy; @wowmom-cnr; @DF; @rfc6693]. There, a node $v$ replicates a packet to an encountered node $u$ only if the latter has a higher utility. Several methods try to improve this strategy by implementing more elaborate criteria, e.g., require the utility of $u$ to exceed a threshold or evaluate in parallel the number of already created replicas [@rfc6693]. Probably the most efficient of those approaches is the Delegation Forwarding (DF) algorithm [@DF] that exploits $v$’s replication history and mandates that $u$’s utility should exceed the highest utility recorded among $v$’s past contacts. The COORD algorithm [@COORDconf] further improves the performance of DF by enabling packet carriers to coordinate their views about the highest recorded utility among packet carriers.
Thus far all dynamic schemes make replication decisions using some sort of pair-wise utility-based comparison. In other words, the suitability of a node for carrying a packet replica is decided by comparing its utility to a threshold utility value, e.g., the utility of the packet carrier or the maximum utility among packet carriers, etc. The idea is to place replicas to nodes of increasing delivery capability. We argue that this type of decision making brings significant constraints to our capacity to limit replication since a pair-wise comparison only provides a narrow view of a node’s fitness, i.e., the one relative to the selected threshold. In other words, finding a node with a better utility does not always guarantee a significantly improved delivery capability and therefore replicating the packet may be pointless. Instead, we believe that it is possible to obtain a more broad view of a node’s fitness by examining how its utility value compares to the utilities of the other nodes in the network. To this end, we make use of the observation that mobile opportunistic networks and especially those with human mobility exhibit certain social characteristics and their nodes can be classified into groups with diverse delivery capabilities [@Yoneki-Crowcroft; @SimBetTS; @MilanoJ]. Our intuition is that an analysis of the observed utilities in such a network will bring to light *clusters of utility values that correspond to groups of nodes with different delivery capabilities*, provided that the utility function effectively captures a node’s ability to deliver a message. By classifying nodes to the identified clusters of utilities, it is possible to obtain a network-level view of each node’s capability for delivering a message. Then, we can use this knowledge to avoid replication to nodes in the same cluster as they possess similar delivery capabilities. Instead, we choose to *replicate a packet to nodes classified in clusters of increasing delivery capability*.
In our previous work [@cbr-wowmom] we portrayed the basic principles of *Cluster-based Replication (CbR)*, a method that incarnates our cluster-driven replication strategy and works on top of the most well-known dynamic replication schemes, such as “Compare & Replicate", DF and COORD, and regardless of the chosen utility. In this work, we provide a more detailed description of our observation regarding the clustering property of utilities in real-life networks and provide extensive experimental results to validate it (Section \[sec-formulation\]). Then, we delineate the CbR method (Section \[sec-druc\]). Furthermore:
- we provide an in-depth experimental evaluation of CbR using an extended set of diverse contact traces from real-life opportunistic networks as well as an enriched collection of utility functions (Section \[sec-performance\]). The evaluation corroborates the broad implementation scope of CbR.
- we explore and evaluate various techniques for allowing CbR to keep up with the time-evolving nature of mobile opportunistic networks (Appendix).
- we propose $C^{2}bR$, an extension of CbR that implements the concept of cluster-based replication when two utility functions are used for assessing the delivery/forwarding efficiency of a node (Section \[sec-cbr2d\]). This is typically the case of social-based routing algorithms. Contrary to such existing algorithms, C$^2$bR does not require a pre-configuration that depends on the network. The experimental evaluation confirms that C$^{2}$bR is robust in networks with diverse characteristics and brings significant cost savings compared to state-of-the-art social-based algorithms.
In the rest of the paper, we review the related literature and provide background information on dynamic replication schemes in Section \[sec-lit\]. Section \[sec-concl\] summarizes our findings.
Background and Related Work {#sec-lit}
===========================
The routing protocols proposed for mobile opportunistic networks with human mobility can be broadly categorized into *single-copy* and *multi-copy* ones. As the names suggest, protocols in the first category use only one copy for each packet, while in the second category multiple copies of a packet may exist in the network. Multi-copy schemes are superior to single-copy ones in terms of delivery efficiency. This is because the probability of finding the destination is higher when multiple nodes carry the message. Epidemic routing [@Epidemic] is the extreme of the multi-copy approach; every node carrying a packet forwards a copy to every encountered non-carrier node. Apparently, this strategy results in energy depletion and memory starvation at nodes. Therefore, research efforts have focused on reducing the number of replicas without sacrificing the delivery efficiency. One approach is to use a probabilistic scheme [@epidemic-eval; @PDF], i.e., allow a node to probabilistically create/distribute replicas. Besides the difficulty of setting a suitable replication probability, this approach is also susceptible to degradation of delivery efficiency.
On the deterministic side, there are two prominent approaches: *“Spray-based”* (or *“Constrained replication”*) and *“Dynamic replication”*. In the first class of algorithms, the source node determines the maximum number of replicas ($L$). Then, the spray process distributes those replicas to other nodes on a contact basis, i.e., every node carrying multiple replicas selects which of its contacts will receive some of them. The selection process is either blind, i.e., every encountered node is eligible for receiving at least one copy, or based on a *utility* value that captures the ability or *fitness* of a node to forward/deliver the message [@UtilSpray]. More specifically, assume that node $v$ (with a utility value $U_{v}(d)$ for destination $d$) carries a message $p$ destined to $d$ and encounters node $u$ (with utility value $U_{u}(d)$). Then, $p$ is replicated to $u$ iff: $$\label{relative_criterion}
U_{u}(d) > U_{v}(d) + U_{th}$$ or $$\label{absolute_criterion}
U_{u}(d) > U_{th}$$ where $U_{th}$ is a protocol parameter used to ensure that the new carrier will contribute a minimum utility improvement (first case) or that its utility exceeds a threshold (second case). There are various utility metrics constructed based on some feature of a node’s connectivity profile such as the contact rate [@EBR; @Greedy], the time elapsed between successive contacts [@UtilSpray; @Fresh], the probability of node meetings [@Prophet], as well as metrics based on the social characteristics of nodes [@SimBetTS; @Friendship]. Note that typically a utility is *destination dependent*, i.e., it captures the ability of a node $v$ to deliver packets to their destination. In this case, $v$ should store one utility value for every possible destination. However, there are also *destination independent* utilities that capture a node’s ability to interact with other nodes and therefore its fitness for acting as a forwarder regardless of the actual destination. In this case $U_{v}(d)\!=\!U_{v}, \forall d$ and $v$ should store a single value. Another point of differentiation between algorithms in the “spray-based” category is the spraying method itself, i.e., the decision on how many replicas an eligible node should receive. The most popular strategies are for a node to hand over half of its replicas (binary spray) or a fraction of them that depends on $U_{v}(d)$ and $U_{u}(d)$ [@EBR; @SimBetTS]. When a node ends up with a single copy, it waits until it meets the destination (Spray & Wait) or uses the utility-based approach to forward the message (Spray and Focus).
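The two spray criteria above, together with binary spraying, can be sketched as follows (an illustrative sketch; the function names are ours):

```python
def should_replicate(u_candidate, u_carrier, u_th, relative=True):
    """Spray criterion: the relative form demands a minimum utility
    improvement u_th over the carrier's utility; the absolute form
    demands that the candidate's utility exceeds the threshold u_th."""
    if relative:
        return u_candidate > u_carrier + u_th
    return u_candidate > u_th

def binary_spray(copies):
    """Binary spray: hand over half of the carried replicas."""
    handed = copies // 2
    return handed, copies - handed  # (given away, kept)
```

A node left with a single copy (`binary_spray(1)` hands over nothing) then waits for the destination or switches to utility-based forwarding, as in Spray & Wait and Spray and Focus.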
The advantage of Spray-based schemes is that it is possible to control the trade-off between delivery efficiency and the degree of replication by determining $L$. Yet, there is an important downside; choosing the optimal $L$ is not trivial since this depends on the network properties. On the other hand, the second multi-copy strategy, known as *“Dynamic replication"*, is more flexible since there is no requirement for predetermining the number of replicas to be created. Instead, every node $v$ carrying a packet follows a utility-based approach and dynamically creates a replica based on the utility of the encountered node $u$. More specifically, in the event of a contact between $v$ and $u$, $v$ implements a *“Compare & Replicate"* approach [@Greedy; @wowmom-cnr; @DF; @rfc6693], i.e. forwards a copy to $u$ when (\[relative\_criterion\]) holds. There are also other, less popular, approaches that relax or enforce (\[relative\_criterion\]) by co-evaluating how many replicas have been created so far or whether $U_{u}(d)$ exceeds a fixed threshold [@rfc6693]. A point of criticism for this approach is that it frequently favors over-replication [@DF]. And this is true regardless of the utility choice, although the latter impacts the algorithm performance. To tackle the problem, Delegation Forwarding (DF) [@DF] introduces a replication strategy that exploits the *history of a node’s observations*. To explain, let us consider the case of a contact between $v$, that carries a packet $p$ destined to $d$, and $u$. Then, $p$ is replicated to $u$ iff: $$\label{delegation_criterion}
U_{u}(d)>\tau^{p}_{v} \; (=\!\max_{k \in N_{v}}\{U_{k}(d)\})
\vspace{-3pt}$$ where $N_{v}$ is the set of all nodes that $v$ has met since the reception of $p$. $\tau^{p}_{v}$ is the delegation threshold that $v$ knows for $p$, i.e., the highest utility recorded so far among the nodes that received $p$. The idea here is clear: there is no point in replicating a packet to $u$ if another node with a higher utility already has the packet. COORD [@COORDconf] builds on the idea of DF to further reduce replication without impacting delivery efficiency. It makes the observation that $\tau^{p}_{v}$ captures only $v$’s perspective of the highest utility among the packet carriers. Therefore, it enables carrier nodes to coordinate their views, i.e., exchange their thresholds, which results in significant performance improvements. Finally, Gao et al. [@Redundancy] also focus on limiting packet redundancy. However, this approach is applicable only to a small class of utility metrics.
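The DF threshold rule and COORD’s coordination step can be sketched as follows (a simplified sketch: the initialization of $\tau^{p}_{v}$ and the exact threshold-exchange protocol are abstracted away):

```python
class DelegationForwarder:
    """Sketch of Delegation Forwarding: a replica is created only for a
    contact whose utility exceeds the highest utility recorded so far
    for that packet; the threshold then rises to the new utility."""

    def __init__(self):
        self.tau = {}  # per-packet delegation threshold tau_v^p

    def on_contact(self, packet_id, u_contact, u_init=0.0):
        tau = self.tau.setdefault(packet_id, u_init)
        if u_contact > tau:
            self.tau[packet_id] = u_contact  # raise the threshold
            return True  # hand over a replica
        return False


def coord_exchange(tau_v, tau_u):
    """COORD-style view alignment: two packet carriers keep the maximum
    of their delegation thresholds for the packet."""
    m = max(tau_v, tau_u)
    return m, m
```

Note how a contact with utility below the current threshold is skipped even if it beats the carrier itself, which is precisely how DF curbs the over-replication of plain “Compare & Replicate”.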
The Clustering Property of Utility Values {#sec-formulation}
=========================================
Our work focuses on “dynamic” replication schemes due to their capacity to accommodate networks of diverse characteristics. Observe that the key concept in this class of algorithms is to make replication decisions based on a simple pair-wise comparison involving the individual utilities of the encountering nodes like in (\[relative\_criterion\]) and (\[absolute\_criterion\]). As mentioned, the downside of this strategy is its tendency towards over-replication. Both the DF and COORD algorithms target this drawback by requiring $U_{u}(d)$ to be greater than $\tau^{p}_{v}$, i.e., $v$’s perception of the highest utility among packet carriers (refer to (\[delegation\_criterion\])). Although both algorithms provide an important performance improvement, they do not tackle the root of over-replication, which is the limited potential of the “pair-wise utility comparison” strategy adopted in (\[relative\_criterion\])-(\[delegation\_criterion\]). A closer look at (\[relative\_criterion\])-(\[delegation\_criterion\]) reveals that the underlying idea is to improve the utility of the carrier as the packet moves towards the destination. The challenge here is to identify what constitutes a suitable minimum utility improvement, $\delta U_{min}$, for replicating a packet. Choosing to replicate a packet to candidates that produce a small or minimal $\delta U_{min}$ may result in over-replication. On the other hand, a high $\delta U_{min}$ may result in rejecting the majority or all of the candidate carriers and thus the packet may never reach the destination. The problem is well known and has been treated by adding $U_{th}$ ($\coloneqq \delta U_{min}$) in (\[relative\_criterion\]).
Yet, determining the optimal $\delta U_{min}$ is a challenge that depends on a series of complex factors such as:
- the utility function, i.e., how a node’s forwarding quality is mapped to a value, since this determines the range of values assigned to candidate carriers. A utility function producing a small value range calls for a small $\delta U_{min}$ and vice versa.
- the network dynamics, i.e., the number and quality of contacts, because they affect the distribution of utility values assigned to nodes. If all nodes share similar connectivity profiles, this results in similar utility values and thus promotes the choice of small $\delta U_{min}$ in order to avoid under-replication. However, this is not necessarily the case if the network consists of nodes with diverse connectivity profiles.
- the distance (utility-wise) between the packet carrier and the destination. A large distance may require a small $\delta U_{min}$ to allow the packet to quickly move towards the destination.
Based on the previous discussion, it becomes apparent that using a pair-wise utility comparison approach for making replication decisions is insufficient.
In this work, we argue that a carrier node $v$, when presented with a forwarding opportunity to a node $u$ with $U_{u}(d)$, instead of just making a local scope pair-wise comparison as in (\[relative\_criterion\])-(\[delegation\_criterion\]), could make a better decision by obtaining a network-wide assessment of $U_{u}(d)$’s importance using the distribution of utility values assigned to other nodes. Clearly, it is impossible for a node $v$ to become aware of the aforementioned distribution. Therefore, we opt to use $v$’s perception of this distribution which is the *distribution of utility values* formed by $v$’s past contacts, i.e., the set of values $\{U_{k}(d)\}_{k \in C_{v}}$, where $C_{v}$ is the set of $v$’s past contacts. In this context, the key issue is to determine *how $v$ could exploit the distribution of utility values to identify important replication opportunities*. The answer highly depends on the characteristics of this distribution which in turn depend on the network dynamics. The analysis of contact traces from real networks with human mobility has clearly demonstrated that the nodes of such networks can be classified based on the contact properties into distinct groups [@Yoneki-Crowcroft; @MilanoJ], each one corresponding to a different level of delivery capability. Recall that a utility metric is constructed based on one or more features of a node’s contacts. Bearing this in mind, it is reasonable to expect that, for any well-structured utility, *the grouping of nodes will show up as clusters of utility values*. If this is the case then our strategy could *decide* whether a contact $u$ should receive a packet copy *based on the group that $u$ belongs to*, i.e., *instead of making a decision based on $U_{u}(d)$ we decide based on the characteristics of the cluster that $U_{u}(d)$ belongs to*.
To validate the clustering tendency of utility values, we conducted a set of experiments using the “Compare & Replicate" approach with different utility functions in various real-life contact traces. More specifically, for every node $v$ we recorded the utilities announced by its contacts for a destination $d$, i.e., the set of values $\{U_{k}(d)\}_{k \in C_{v}}$. Then, we used the $k$-Means clustering algorithm [@kmeans] on this set of one-dimensional data in order to identify clusters of values, where the appropriate number of clusters was automatically selected using the Silhouette criterion [@silhouette]. More details about the simulation setup, including the utilities used, can be found in Section \[sec-performance\]. Fig. \[validation-res-1\] illustrates a series of 100 utility values for the destination with id 50 recorded by the node with id 23 in the Reality trace [@Reality-dataset]. The values are presented in the order they were recorded and different colors (and point types) represent the different clusters of values produced by our approach. The three subfigures correspond to the same experiment but with three different utility functions proposed in the literature, namely Prophet [@Prophet], LTS [@UtilSpray] and DestEnc [@EBR]. Similar results for the Sigcomm trace [@SigComm-dataset] are illustrated in Fig. \[validation-res-2\]. The grouping of utility values into clusters is evident in all figures. We recorded similar clustering behaviors for utility values from various observer-destination pairs. Since a utility value captures the fitness of a node for forwarding/delivering a packet, we *interpret such clusters of utility values as groups of nodes with different delivery capabilities*. Following this interpretation, *the key idea in our approach is to distribute replicas to nodes that belong to clusters with an increasing delivery capability* in order to avoid unnecessary replications. 
However, efficiently implementing this strategy depends heavily on the observing node, and specifically on which cluster the utility of this node belongs to. Therefore, the key idea must be refined into a more sophisticated forwarding strategy. We discuss this strategy in detail in Section \[subsec-algorithm\].
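The per-node clustering step described above can be reproduced with a minimal one-dimensional $k$-means plus silhouette-based model selection. The following is a self-contained sketch standing in for the library implementations used in the experiments:

```python
import random

def kmeans_1d(values, k, iters=50, seed=0):
    """Plain 1-D k-means (Lloyd's algorithm) on a list of utility values."""
    centers = sorted(random.Random(seed).sample(values, k))
    labels = [0] * len(values)
    for _ in range(iters):
        labels = [min(range(k), key=lambda j: abs(v - centers[j]))
                  for v in values]
        for j in range(k):
            members = [v for v, l in zip(values, labels) if l == j]
            if members:
                centers[j] = sum(members) / len(members)
    return labels, centers

def silhouette(values, labels):
    """Mean silhouette coefficient; singleton clusters contribute 0."""
    k, n, total = max(labels) + 1, len(values), 0.0
    for i in range(n):
        same = [values[j] for j in range(n)
                if labels[j] == labels[i] and j != i]
        if not same:
            continue  # convention: silhouette of a singleton is 0
        a = sum(abs(values[i] - w) for w in same) / len(same)
        others = []
        for c in range(k):
            if c == labels[i]:
                continue
            mem = [values[j] for j in range(n) if labels[j] == c]
            if mem:
                others.append(sum(abs(values[i] - w) for w in mem) / len(mem))
        if not others:
            continue
        b = min(others)
        total += (b - a) / max(a, b) if max(a, b) > 0 else 0.0
    return total / n

def best_clustering(values, k_max=4):
    """Choose the number of clusters in 2..k_max by the Silhouette criterion."""
    k = max(range(2, k_max + 1),
            key=lambda kk: silhouette(values, kmeans_1d(values, kk)[0]))
    return (k,) + kmeans_1d(values, k)
```

Fed with two well-separated groups of utility values, `best_clustering` recovers two clusters, mirroring the grouping visible in Figs. \[validation-res-1\] and \[validation-res-2\].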
Dynamic Replication with Clusters of Utility {#sec-druc}
============================================
We call the method that embodies our cluster-driven replication strategy *Cluster based Replication (CbR)*. CbR is not a standalone algorithm but a mechanism that is integrated into the existing dynamic replication schemes, namely “Compare and Replicate" (CnR), DF and COORD. Recall that, so far, all those schemes make replication decisions by comparing two utility values. We implement CbR on top of these schemes to transform the decision making process so that, instead of comparing two values, it takes into account the clusters to which those values belong. In the following we illustrate how CbR works in synergy with the three replication strategies. This results in three CbR flavors, namely *CbR-CnR*, *CbR-DF* and *CbR-COORD*. CbR consists of three processes:
- *Data Collection and Training*: The training process allows each node to collect a sufficient sample of utility values in order to be able to detect clusters of utility values. During the training period, the node uses the decision making process of the underlying algorithm, i.e., either CnR, DF or COORD. At the end of the training period, the node applies a clustering technique to identify the utility clusters. Clustering algorithms have been previously used in the context of opportunistic networks but for different purposes, e.g., for identifying node communities based on their contact properties [@BubbleRap] or for fine-tuning the social graph used by social-based algorithms [@knowthy].
- *Update*: This process commences after the completion of the training period. Since the network evolves, each node continues to record new utility values through its contacts. These new recordings enrich its view of the distribution of utility values in the network. The update process aims to accordingly refresh the clustering result.
- *Decision making*: This is the replication process that exploits the identified clusters of utilities. The process commences after the completion of the training period and operates in parallel with the update process. In contrast to the two other processes, its implementation is different for each of the CbR flavors.
In the following we delineate each CbR process.
Detecting Clusters of Utility values {#subsec-utilitygroups}
------------------------------------
CbR starts with a training period, where each node $v$ records the utility values reported by each contact node $u$ for each destination $d$ for which $u$ carries a packet. In other words, $v$ stores, for each destination $d$, a set of values $S_{v}^{d}=\{U_{k}(d)\}_{k \in C_{v}}$, where $C_{v}$ is $v$’s history of contact nodes that carried at least one packet to $d$. In the case of a destination-independent metric, i.e., when the reported utility is generic and does not refer to a specific destination, $v$ stores a single set of values $S_{v}=\{U_{k}\}_{k \in C_{v}}$. Note that in all utility-based algorithms, including CnR, DF and COORD, during a contact the two nodes typically exchange their utility values. Therefore, the training process does not involve any additional communication cost. Furthermore, a node $u$ usually reports the utility values on a per-packet rather than on a per-destination basis, i.e., the utility $U_{u}(d)$ is reported for every packet destined to $d$. Since $U_{u}(d)$ refers to $d$ and not to any specific packet, we record it only once during a contact in order to avoid importing noise to the $S_{v}^{d}$ dataset.

The duration of the training period should allow the collection of a sufficient number of utility samples, but at the same time it should not be excessively long, in order to facilitate a prompt initiation of the cluster-based replication process. We define the duration of the training period in terms of the number of recorded values. More specifically, the training period ends when $|S_{v}^{d}|\!=\!N_{TR}$, where $N_{TR}$ is a predefined number. Observe that, in the most common case of a destination-dependent utility, the node actually goes through a different training period for every set $S_{v}^{d}$, i.e., for each destination. Moreover, each of these periods may end at a different time because, during a contact, a value is added to $S_{v}^{d}$ only if the contact node carries a packet for $d$.
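The per-destination bookkeeping described above can be sketched as follows. This is a minimal illustration, not part of the CbR specification: the class name, the method signature and the boolean return value are our own conventions, with $N_{TR}=50$ taken from the evaluation setup.

```python
from collections import defaultdict

class UtilityRecorder:
    """Per-node collection of utility samples during the training period.

    Each destination d has its own sample set S_v^d and, consequently,
    its own training period, which ends after N_TR recorded values.
    """

    def __init__(self, n_tr=50):  # N_TR = 50 matches the evaluation setup
        self.n_tr = n_tr
        self.samples = defaultdict(list)  # destination -> recorded utilities

    def record(self, dest, utility):
        """Record one utility value per contact (to avoid duplicate noise).

        Returns True once the training period for `dest` is complete.
        """
        s = self.samples[dest]
        if len(s) < self.n_tr:
            s.append(utility)
        return len(s) >= self.n_tr
```

Each destination completes its own training period independently, mirroring the per-$S_{v}^{d}$ periods discussed above.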
As mentioned, after the end of the training period a node implements a clustering algorithm on the recorded values. In this work, we choose the *$k$-Means algorithm* [@kmeans], although any clustering algorithm could be used. Our choice is based on the rather simple structure of the clusters observed in the recorded data. This allows us to choose a lightweight algorithm such as $k$-Means, since the computational cost is a point of consideration in mobile environments. An important issue in $k$-Means is how to estimate the number of clusters $k$. Recall from Figs. \[validation-res-1\] and \[validation-res-2\] that every node may observe a different number of clusters. Therefore, it is not feasible to find a $k$ value that can be used globally. Instead, we follow a more flexible approach where we determine the appropriate number of clusters for each set $S_{v}^{d}$. More specifically, each node executes $k$-Means on $S_{v}^{d}$ for several values of $k$, i.e., $k\!=\!2,3,\ldots,K_{max}$, and obtains $K_{max}-1$ clustering solutions. Next, the quality of each of these solutions is evaluated using the *Silhouette criterion* [@silhouette] and the solution with the highest score is chosen. A pseudocode of the clustering process can be found in [@cbr-wowmom].
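A self-contained sketch of the cluster detection step, under the assumption that plain one-dimensional Lloyd's $k$-Means and the mean silhouette coefficient are acceptable stand-ins for the cited implementations (function names and the deterministic initialization are illustrative):

```python
import statistics

def kmeans_1d(values, k, iters=50):
    """Lloyd's k-Means on one-dimensional data with quantile initialization."""
    s = sorted(values)
    centers = [s[(2 * i + 1) * len(s) // (2 * k)] for i in range(k)]
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:
            i = min(range(k), key=lambda j: abs(v - centers[j]))
            clusters[i].append(v)
        centers = [statistics.mean(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers, clusters

def silhouette(clusters):
    """Mean silhouette coefficient over all points (absolute distance)."""
    scores = []
    for i, c in enumerate(clusters):
        others = [o for j, o in enumerate(clusters) if j != i and o]
        for v in c:
            if len(c) == 1 or not others:
                scores.append(0.0)
                continue
            a = sum(abs(v - w) for w in c) / (len(c) - 1)
            b = min(sum(abs(v - w) for w in o) / len(o) for o in others)
            scores.append((b - a) / max(a, b) if max(a, b) > 0 else 0.0)
    return statistics.mean(scores) if scores else 0.0

def detect_clusters(values, k_max=4):
    """Try k = 2 .. K_max and keep the solution with the best silhouette."""
    best_score, best_centers, best_clusters = -1.0, None, None
    for k in range(2, k_max + 1):
        centers, clusters = kmeans_1d(values, k)
        score = silhouette(clusters)
        if score > best_score:
            best_score, best_centers, best_clusters = score, centers, clusters
    return best_centers, best_clusters
```

The default $K_{max}=4$ matches the value used in our evaluation settings.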
Updating the Clustering Result {#subsec-updating}
------------------------------
Recall that a utility function relies on a node’s connectivity profile, i.e., the average rate and duration of contacts with each node, to assess its forwarding capability and assign a suitable utility value. It is reasonable to assume that in mobile opportunistic networks, especially in those with human mobility, a node’s connectivity profile evolves over time, e.g., because a node moves in various locations during different hours of a day. Typically, the time scale of this evolution is relatively large and therefore cannot be captured by the training period, which is a one-time process and should be relatively short in order to initiate replication decisions in a timely manner. Hence, we introduce a process that is able to capture changes occurring over relatively long periods of time and accordingly update the clustering structure. This process runs in parallel to the replication one and does not interfere with it.
Our experimental results indicate that in most cases the clusters of utility values do evolve over time. However, the changes frequently involve the structure and center of the observed clusters rather than their number. Based on this observation, we opted to employ a low-complexity, yet efficient, method for updating the clusters found during the training period. This is the Learning Vector Quantization (LVQ) clustering algorithm [@LVQ] which can be considered as an on-line version of the $k$-Means algorithm. Each time a node records a new utility value $U_{new}$, LVQ decides on which cluster $i$ this value is assigned and subsequently moves the center $c_{i}$ of this cluster towards $U_{new}$, i.e., $$\label{lvq_update}
c_{i}^{new} = c_{i} + \alpha (U_{new}-c_{i})
\vspace{-3pt}$$ where $\alpha$ is a constant known as the learning rate. In Appendix we explore a set of alternative updating methods and evaluate their impact on CbR’s performance.
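The update rule of (\[lvq\_update\]) can be sketched as follows, with the default learning rate $\alpha=0.05$ taken from our evaluation settings (the function name and the in-place update of the center list are illustrative conventions):

```python
def lvq_update(centers, u_new, alpha=0.05):
    """LVQ step: assign u_new to the nearest cluster and move that
    cluster's center toward u_new by a fraction alpha of the distance."""
    i = min(range(len(centers)), key=lambda j: abs(u_new - centers[j]))
    centers[i] += alpha * (u_new - centers[i])
    return i  # index of the cluster that absorbed the new value
```

With $\alpha = 0.05$ the chosen center moves 5$\%$ of the way toward the newly recorded value, so the clustering structure drifts slowly, in line with the slow evolution of connectivity profiles.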
Utilizing Clusters on Replication Decision Making {#subsec-algorithm}
-------------------------------------------------
After completing the training period, a node is able to use the identified clusters to make replication decisions. In a nutshell, the basic idea of CbR dictates that a node $v$ replicates a packet to $u$ provided that the utility of the latter belongs to a cluster of higher utility values. To implement this simple rule, a node should first rank the identified clusters. This can be easily accomplished since the clustered data are one-dimensional. Thus, we rank the clusters in decreasing order based on their center value, i.e., the cluster with the highest valued center is ranked first. Accordingly, each node $v$ is assigned the rank of the cluster to which its utility value belongs. In the following, we denote the rank of node $v$ by $R_{v}$. Based on the ranking method, the previous forwarding rule now reads: “*$u$ receives a packet replica if its utility belongs to a cluster of a higher rank*".

Note that this rule is rather stringent and in certain occasions may result in under-replication and thus poor delivery rates. We have identified two occasions where this may occur. The first case is when $U_{v}(d)$ (CnR) or $\tau^{p}_{v}$ (DF or COORD) belongs to the top level cluster of values, i.e., either the carrier $v$ belongs to the top level group of nodes (CnR) or there is a node among the packet carriers that belongs to the top level group of nodes. In this case, if $v$ is the packet source, the previous rule actually prohibits any replication, while if $v$ is an intermediate node, the rule blocks any replication within the group of most capable nodes. The second case of potential under-replication occurs when the utility used by $v$ resides in a populous cluster of values and the clusters with a better rank are sparsely populated. In this case, the opportunities for replicating the packet to a better ranked cluster are rare; therefore, the most probable scenario is that packet replication will involve a substantial delay.
The best strategy for both the aforementioned cases is to relax the requirement of replicating the packet to a higher ranked cluster. In other words, it is important to also *allow replication to a node $u$ with a utility in the same cluster provided that $u$’s utility is higher than the utility used by $v$ (traditional decision making).*
Fig. \[pseudocodeCbR-Replication\] presents the pseudocode of CbR when implemented on top of CnR (Fig. \[pseudocodeCbR-CnR\]), DF (Fig. \[pseudocodeCbR-DF\]) and COORD (Fig. \[pseudocodeCbR-COORD\]). The pseudocode is executed for a packet $p$ when the carrier node $v$ encounters node $u$. Note that, in the case of CbR-CnR, the pseudocode is actually the same as in CnR, with the single addition being line 4. Recall that in CnR replication decisions are made using (\[relative\_criterion\]), which can also be found in line 5 of the pseudocode. Line 4 realizes our cluster based approach by introducing the requirement $R_{u}\!\!<\!\!R_{v}$, where $R_{v}$ and $R_{u}$ are the ranks of $v$ and $u$ respectively. Both $R_{v}$ and $R_{u}$ can easily be retrieved by simply checking the proximity of $v$’s and $u$’s utility values to the centers of the clusters (procedure $cRankof(\cdot)$). We mitigate the risk of under-replication (both identified cases) by moving from the basic criterion $R_{u}\!\!<\!\!R_{v}$ to a relaxed one ($R_{u}\!\!=\!\!R_{v}$) if $v$ has not yet replicated $p$. We distinguish non-replicated packets from replicated ones using a single bit in the packet’s header ($p.rep$). As soon as $p$ is forwarded, $p.rep$ is set to 1 and the relaxation is canceled. Note that, in contrast to the $R_{u}\!\!<\!\!R_{v}$ case, it is possible that $U_{u}(d)\!<\!U_{v}(d)$ when $R_{u}\!\!=\!\!R_{v}$. Therefore, the original CnR rule (line 5) is used to control replication in such cases.
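Under the assumption that ranks are computed as described above (rank 1 is the best cluster), the CbR-CnR decision can be condensed into the following sketch; the function signature is illustrative, and `already_replicated` plays the role of the $p.rep$ bit:

```python
def cbr_cnr_replicate(U_v, U_u, R_v, R_u, already_replicated):
    """CbR-CnR decision: should carrier v hand a replica of p to u?

    Ranks are cluster ranks (1 = cluster with the highest-valued center).
    """
    # Line 4 of the pseudocode: u's cluster must outrank v's, relaxed to
    # equal ranks while v has not yet replicated the packet (p.rep == 0).
    rank_ok = R_u < R_v or (R_u == R_v and not already_replicated)
    # Line 5: the original CnR comparison still controls replication.
    return rank_ok and U_u > U_v
```

Once the packet is replicated, only the strict rank condition remains, which matches the cancellation of the relaxation described above.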
We follow a similar approach when integrating CbR into DF and COORD. Recall that in both DF and COORD, when the packet carrier $v$ encounters $u$, the replication decision is made using (\[delegation\_criterion\]), where $\tau_{v}^{p}$ is $v$’s perception of the highest utility among the carriers of $p$. The two algorithms only differ in the way that $\tau_{v}^{p}$ is updated. Since the decision making criterion is common in DF and COORD, the implementation of the CbR rule is the same in CbR-DF (line 6, Fig. \[pseudocodeCbR-DF\]) and CbR-COORD (line 7, Fig. \[pseudocodeCbR-COORD\]). Again, the pseudocode of CbR-DF (CbR-COORD) is the same as in DF (COORD), with the only difference being the addition of line 6 (line 7). Regarding the CbR replication rule, observe that the original rule $\tau_{v}^{p}\!<\!U_{u}(d)$ transforms into $R_{u}\!<\!R_{t}$, where $R_{t}$ is the rank of the cluster that $\tau_{v}^{p}$ belongs to. Here, we follow a more efficient approach for relaxing this rule and avoiding the two cases of under-replication. More specifically, we allow replication when $R_{u}\!=\!R_{t}$ provided that $R_{v}\!=\!R_{t}$. The latter equality means that $U_{v}(d)$ and $\tau_{v}^{p}$ reside in the same cluster. In other words, the packet carrier $v$ and the carrier with the highest utility have similar delivery capacity, i.e., the packet has not yet moved to a better cluster. Once the packet does move to a better cluster, $\tau_{v}^{p}$ is updated to a new value so that $R_{t}\!<\!R_{v}$, which deactivates the relaxation. Again, when $R_{u}\!=\!R_{t}$ the traditional rule (line 7 in Fig. \[pseudocodeCbR-DF\] and line 8 in Fig. \[pseudocodeCbR-COORD\]) acts as a safeguard. As a final note, all presented implementations are also compatible with destination-independent utility functions.
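In the same spirit, a sketch of the shared CbR-DF / CbR-COORD decision (names are again illustrative; the two flavors differ only in how $\tau_{v}^{p}$ is updated, which is outside this sketch):

```python
def cbr_df_replicate(U_u, tau, R_u, R_t, R_v):
    """Shared CbR-DF / CbR-COORD decision.

    tau is the carrier's view of the highest utility among the packet's
    carriers; R_t is the rank of the cluster containing tau (1 = best).
    """
    if R_u < R_t:                  # u's cluster strictly outranks tau's cluster
        return True
    if R_u == R_t and R_v == R_t:  # relaxation: packet not yet in a better cluster
        return U_u > tau           # original delegation rule acts as a safeguard
    return False
```

As soon as $\tau_{v}^{p}$ moves to a better-ranked cluster than $v$'s, the middle branch can no longer fire, which reproduces the deactivation of the relaxation.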
Evaluation {#sec-performance}
==========
We evaluate the performance of all CbR flavors under various opportunistic environments. To this end, we use the Adyton [@Adyton] simulator. Adyton includes a plethora of routing protocols and is capable of processing a multitude of well-known contact traces from real-world networks [@crawdad-site].
| Trace Name                          | \# Nodes | Duration (days) | Area       |
|-------------------------------------|----------|-----------------|------------|
| Infocom ’05 [@haggle-dataset]       | 41       | 3               | conference |
| Sigcomm ’09 [@SigComm-dataset]      | 76       | 3.7             | conference |
| MIT Reality [@Reality-dataset]      | 97       | 283             | campus     |
| Milano pmtr [@pmtr-dataset]         | 44       | 18.9            | campus     |
| Cambridge upmc [@Cambridge-dataset] | 52       | 11.4            | city       |

: Properties of real-world opportunistic traces
\[Traces\]
For the evaluation we use traces that represent opportunistic networks of different scale. More specifically, we used two conference traces, namely Infocom’05 [@haggle-dataset] and Sigcomm’09 [@SigComm-dataset]. Additionally, we selected two traces from campuses where the participants, students and faculty members, move in a larger area. More specifically, we used the well-known MIT Reality [@Reality-dataset] and the Milano pmtr datasets [@MilanoJ; @pmtr-dataset]. Finally, we used the Cambridge upmc dataset [@Cambridge-dataset] which is a city-level trace collected in Cambridge, UK. Table \[Traces\] summarizes the characteristics of the selected traces.
Similar to CnR, DF and COORD, CbR is able to work with virtually any proposed utility metric. Clearly, the utility choice impacts performance and therefore the gains of CbR. Thus, we use a collection of six well-known utilities, both destination dependent and independent, to assess the performance of CbR. More specifically, we use the following utility metrics:
- DestEnc [@DF]: It captures the total number of contacts with a specific node. Thus, this is a destination dependent utility.
- Enc [@Greedy; @EBR]: This is the destination independent version of DestEnc. The metric captures the total number of contacts with all network nodes.
- LTS [@Fresh; @UtilSpray]: This is a destination dependent metric receiving values in $[0,1]$. It is inversely proportional to the time elapsed since the last contact with the destination.
- Prophet [@Prophet; @rfc6693]: It is a destination dependent metric proposed in the context of the well-known PRoPHET algorithm. The metric has the transitive property, i.e., it captures the fitness of a node to deliver a message to its destination not only directly but also indirectly.
- SPM [@Friendship]: Social Pressure Metric is destination dependent and captures the friendship between network nodes. It depends on the frequency, the longevity and the regularity of past node contacts.
- LastContact [@DF]: This is a destination independent metric expressed as $1/(1+T_{L})$ where $T_{L}$ is the time since the node’s last contact with any of the network nodes.
Regarding the clustering settings, the analysis of data from real contact traces revealed that using a small value for $K_{max}$ such as 4 is sufficient for capturing reasonable estimates of the number of clusters. Furthermore, we used a training period of 50 samples, i.e., $N_{TR}=50$. After extensive experimentation, we found that there are no significant performance improvements for greater values. Finally, the LVQ learning rate $\alpha$ was set equal to 0.05, i.e., the distance between a newly added value and the center of its cluster is reduced by 5$\%$ by moving the center towards the new value.
In each experiment we use a traffic load of 5000 packets. We randomly choose the source/destination pair for each packet, while its generation time is chosen with uniform probability in the interval during which both the source and the destination are present in the network. Each packet is assigned a TTL equal to 20% of the trace duration. To eliminate statistical bias and monitor the network in its steady state, we use a warm-up and a cool-down period during which packets are not generated. The duration of both periods is 20$\%$ of the total trace duration. We report the average values of 20 repetitions.
Results {#subsec-res}
-------
In the first experiment we test the performance of all three flavors of CbR in all traces, using each time a different utility metric. To eliminate other interfering factors, we first assume an infinite buffer in each node. We use the *routing gain (RG)*, i.e., the percentage of transmissions saved when using CbR, to capture the extent to which CbR reduces the replicas and therefore the number of transmissions. More specifically, we monitor the quantity $(1\!\!-\!\!T_{CbR}/T)\, \%$, where $T$ is the number of transmissions per delivered packet for the underlying algorithm, i.e., either CnR, DF or COORD, and $T_{CbR}$ is the number of transmissions per delivered packet for the CbR flavor of this algorithm. Fig. \[OH-results\] illustrates the routing gain provided by the CbR approach when used on top of CnR (Fig. \[CnRKMeansOH\]), DF (Fig. \[DFKMeansOH\]) and COORD (Fig. \[COORDKMeansOH\]). In all cases there is a significant gain that, depending on the baseline algorithm and the utility metric, reaches up to an impressive $\sim\!\!60\%$. Reasonably, the routing gain is smaller when CbR is integrated into DF and COORD since those algorithms significantly reduce transmissions on their own, and therefore there is less room for improvement. Still, CbR achieves significant gains that reach up to $\sim\!40\%\!\!-\!\!45\%$.
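For concreteness, the routing gain metric can be computed as below; the helper name and the numbers in the usage assertions are illustrative, not measured values:

```python
def routing_gain(T_baseline, T_cbr):
    """Routing gain (1 - T_CbR / T) in percent.

    T_baseline: transmissions per delivered packet of the underlying
    algorithm (CnR, DF or COORD); T_cbr: same quantity for its CbR flavor.
    """
    return 100.0 * (1.0 - T_cbr / T_baseline)
```

For example, dropping from 10 to 4 transmissions per delivered packet corresponds to a 60$\%$ gain, i.e., the upper end of what we observe for CbR-CnR.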
What is of great importance is that CbR’s routing gain comes at limited or virtually no delivery cost. In other words, CbR clearly improves the delivery efficiency-cost trade-off. Fig. \[DR-results\] presents the *delivery rate change*, i.e., the quantity $(D_{CbR}/D-1)$ where $D$ is the delivery rate of the baseline algorithm and $D_{CbR}$ is the delivery rate of its CbR version, for all CbR flavors and for all combinations of traces and utility functions. The performance of all CbR flavors is in most cases within $\sim$1$\%$ of the performance of the baseline algorithm and in all cases within $\sim$2.2$\%$. Besides being minimal, this performance degradation can be justified if we bear in mind that even random contacts help nodes communicate. However, such random contacts are not predictable and the only way to exploit them is to increase replication. Furthermore, we will show in the next experiment that, in a more realistic setting where the buffer size of a node is limited, this minimal degradation is almost eliminated and in many cases turns into an improvement of delivery efficiency.
The reduced level of replication in CbR, as expected, also affects the delivery delay. Fig. \[Delay-results\] presents the *delay change* (in analogy to the delivery rate change) of the CbR flavors. In the case of the Reality trace there is a limited delay increase. An easy way to explain this finding is to visualize replication as a process that delivers multiple copies to a destination through different paths. Reducing replication is equivalent to pruning some paths. This delays the packet delivery unless none of the pruned paths is the shortest one in terms of delay, which is rather unlikely. To increase the probability that the shortest delay path will survive pruning, one should assign a high rank to the contacts that this path consists of. However, this responsibility lies with the utility metric and not the replication mechanism. Indeed, note that the delay increase is smaller when the utility metric takes into account connectivity aspects that are related to delay, such as the frequency and the regularity of contacts (e.g., the SPM utility). In all other traces, except Reality, the impact of replication on the delay is negligible. An apparent reason is that all those traces are far more dense compared to Reality, i.e., the contact rate is higher. Therefore, denying a replication opportunity results in a smaller delay increase. Note that in some cases the delay of CbR in fact decreases. The decrease, which is minimal and statistically insignificant, is attributed to the statistical bias due to the lower delivery rate. This phenomenon is absent in the Reality trace because contacts are less frequent. Thus, reduced replication results in higher delay, which masks that statistical bias.
In the next experiment we focus on the Reality and Cambridge traces and examine the more realistic case of limited storage, i.e., a node can only store a limited number of packets. More specifically, we test the performance of CbR with respect to the storage capacity of nodes. Fig. \[OH-vsBUF-results\] illustrates the routing gain when CbR is used for both traces. Fig. \[CnRKMeansDRCampBufReality\] presents the delivery rate change for CbR-CnR, CbR-DF and CbR-COORD for various utility functions in the Reality trace while Fig. \[COORDKMeansDRCampBufCambridge\] presents the same delivery rate change in the Cambridge trace. Clearly, when storage is limited, all CbR flavors provide not only significantly better routing cost gains but also better delivery efficiency compared to the unlimited storage case. We found this to be true not only for the two presented traces but also for the rest of the traces. This is reasonable since reducing the routing load significantly alleviates congestion and cuts down the packet drop rate. This is also why the improvement is bigger for CbR-CnR, since in this case congestion is more severe. On the other hand, for CbR-DF and CbR-COORD the improvement, although evident, is limited because both DF and COORD are able to effectively reduce transmissions, and thus congestion, on their own. Overall, under limited storage, CbR-CnR combines improvements in both the routing cost and the delivery efficiency compared to CnR. At the same time, CbR-DF and CbR-COORD provide significant routing gains and a delivery performance which is slightly better or similar to their baseline algorithms, i.e., DF and COORD respectively.
Looking in more detail at the routing cost performance, CbR outperforms the baseline algorithm by a wide margin (positive routing gain) in all cases, i.e., combinations of trace and utility function. The only exception is the LastContact utility when used with CbR-DF and CbR-COORD, where the gain is minimal. This can be associated with the structure of this utility, i.e., a destination independent utility that produces values with little diversity, especially in traces with sparse contacts. For the rest of the cases, the general trend is, reasonably, that the gain increases with the storage capacity, while it is still significant for very small buffer sizes. This is because fewer packets are dropped and this provides more opportunities for pruning replicas. However, there are two exceptions where the gain decreases. The first is the case of infinite buffer size in CbR-CnR in the Reality dataset (Fig. \[CnRKMeansOHCampBufReality\]). To shed some light, observe that implementing CbR on top of CnR significantly improves the delivery efficiency when the storage capacity is limited (Fig. \[CnRKMeansDRCampBufReality\]). This improves CbR’s routing cost $T_{CbR}$ because the latter is normalized to the number of delivered packets. As a result, the routing gain $1\!-\!T_{CbR}/T$ appears to increase when the storage is limited compared to the case of unlimited storage. Note that the phenomenon does not appear to this extent in the Cambridge trace because the increase in delivery efficiency is smaller. Moreover, the utilities that tend to over-replicate, such as Enc, are unaffected by this phenomenon. This is because over-replication is more severe when no storage limitation exists and the effectiveness of CbR in reducing the routing cost dramatically increases in conditions of over-replication. The second exception to the gain increasing trend is observed when Enc is used in CbR-DF (Figs. \[DFKMeansOHCampBufReality\] and \[DFKMeansOHCampBufCambridge\]) and CbR-COORD (Figs. \[COORDKMeansOHCampBufReality\] and \[COORDKMeansOHCampBufCambridge\]). This can be explained with the same reasoning discussed for the previous exception, with the additional note that, contrary to the case of CbR-CnR with Enc, here over-replication is limited due to CbR-DF and CbR-COORD.
Two-Dimensional CbR {#sec-cbr2d}
===================
Up to this point we presented and evaluated CbR with a single utility. Nonetheless, there exist routing algorithms that use two utility functions for making forwarding decisions. This approach typically appears in routing for PSNs (Pocket Switched Networks) [@psns] due to their social structure. Probably the most typical example of social-based routing that capitalizes on two utilities can be found in the SimBet algorithm [@SimBet]. The algorithm adopts two utilities known from social graph analysis, namely “betweenness" [@bet-1; @bet-2] and “similarity" [@similarity]. Then, it combines them using a normalized weighted sum to form a single utility function, known as “simbet", based on which forwarding decisions are made.
A first straightforward approach for implementing CbR with the “simbet" utility is to use the same method as in the single utility case, i.e., apply clustering on the recorded “simbet" values and then implement one of the algorithms proposed in Section \[sec-druc\]. In general, we expect this approach to provide some performance gains because the basic idea is the same as in the single utility case; since the “simbet" utility has proven to be an effective indicator of good forwarders, identifying clusters of “simbet" values corresponds to detecting groups of forwarders with different delivery capabilities. However, this one-dimensional approach also bears limitations. Each of the similarity and betweenness metrics is associated with a specific social property; similarity is a predictor of social ties and betweenness an indicator of social significance. Nonetheless, it is not clear how the “simbet" utility itself should be interpreted. Therefore, when identifying a cluster of high utility nodes, it is not clear how this cluster should be interpreted with respect to its social properties. This limits the ways we can exploit this cluster. Furthermore, it has been documented in the related literature that, instead of using the sum of two social-based utilities like in “simbet", it is beneficial to utilize them independently and sequentially, depending on the social proximity of the packet carrier to the destination [@BubbleRap].
The latter observation has been the driving force of our second approach which we call two-dimensional CbR or simply $C^{2}bR$. More specifically, we examine betweenness and similarity independently and identify the corresponding clusters. Since betweenness captures the social importance, clusters of betweenness values correspond to groups of nodes with different social importance. On the other hand, similarity, besides being an indicator of future social ties, also reveals social proximity because it is non-zero when the social proximity to the destination is no more than two hops. Therefore, different clusters of similarity values correspond to nodes with different social proximity to the destination. The key concept in all C$^2$bR flavors is simple and in a nutshell can be expressed as follows: “*Move the message up the social hierarchy constructed using betweenness until it reaches a group of nodes with high social proximity (similarity) to the destination. At this time, continue the same strategy but confine it within this group of nodes*". It is well-known that in networks with social heterogeneity a single utility that provides a ranking of nodes cannot perform efficiently [@BubbleRap]. On the other hand, it is also not possible to rely on a metric that captures social proximity to the destination because the source of the packet may be socially far away [@BubbleRap]. The strategy of C$^2$bR combines the two utilities to allow a message to move far from the source if this is needed (source and destination socially apart) and then move the message towards the destination (by using a destination dependent utility).
The previous strategy materializes in three versions of C$^{2}$bR; one based on CnR (C$^{2}$bR-CnR), another based on DF (C$^{2}$bR-DF) and the third based on COORD (C$^{2}$bR-COORD). The pseudocode of the three algorithms is presented in Fig. \[pseudocodeC2bR-Replication\] where $S_{v}(d)$ is the similarity of $v$ for $d$ (a destination dependent metric), $B_{v}$ is the betweenness of $v$ (a destination independent metric), $\tau_{v}^{\scriptscriptstyle{p,S}}$ ($\tau_{v}^{\scriptscriptstyle{p,B}}$) is $v$’s perception of the highest similarity (betweenness) among the carriers of packet $p$, $R_{v}^{S}$ ($R_{v}^{B}$) is the rank of the cluster that $v$’s similarity (betweenness) belongs to and $R_{t}^{S}$ ($R_{t}^{B}$) is the rank of the cluster that $\tau_{v}^{\scriptscriptstyle{p,S}}$ ($\tau_{v}^{\scriptscriptstyle{p,B}}$) belongs to. Observe that, similar to CbR-DF and CbR-COORD, C$^{2}$bR-DF and C$^{2}$bR-COORD only differ in the way they update $\tau_{v}^{\scriptscriptstyle{p,S}}$ and $\tau_{v}^{\scriptscriptstyle{p,B}}$ but share a common forwarding strategy which we summarize in the following. If a packet carrier $v$ (including the source) does not belong to the highest similarity cluster then it uses only betweenness and reverts to the simple CbR algorithm, either CbR-DF or CbR-COORD respectively (lines 6-7 in Fig. \[pseudocodeC2bR-DF\] and 9-10 in Fig. \[pseudocodeC2bR-COORD\]). This corresponds to an attempt to find more socially important forwarders. However, replication stops once $v$ finds out that the packet has been replicated to a node that belongs to a group of better similarity ($R_{t}^{S}\!<\!R_{v}^{S}$). This is to promote replication to nodes with increasingly higher social proximity to the destination. On the other hand, when $v$ belongs to the group of nodes with the highest similarity, i.e., highest social proximity to destination (lines 8-9 in Fig. \[pseudocodeC2bR-DF\] and 11-13 in Fig. 
\[pseudocodeC2bR-COORD\]), again it tries to find a more socially important carrier by reverting to the simple CbR flavor with betweenness as the only utility. However, in this case, replication is confined to nodes in the highest similarity cluster ($R_{u}^{S}\!=\!1$), i.e., we do not allow replication that decreases the social proximity to the destination. In analogy, C$^2$bR-CnR falls back to simple CbR-CnR with betweenness as the only utility but confines replication within the group of nodes with the highest proximity to the destination, i.e., highest similarity, when the packet carrier $v$ belongs to that group. This is done by requiring the packet recipient to also belong in this group ($R_{u}^{S}\!=\!1$).
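The C$^2$bR-CnR branch of this strategy can be condensed into the following sketch. It is a simplification under our reading of the pseudocode; all names are illustrative, with `RB` and `RS` denoting betweenness and similarity cluster ranks (rank 1 best) and `already_replicated` standing in for the $p.rep$ bit:

```python
def c2br_cnr_replicate(B_v, B_u, RB_v, RB_u, RS_v, RS_u, already_replicated):
    """C2bR-CnR sketch: betweenness drives replication; the top similarity
    cluster (rank 1, highest social proximity to destination) confines it."""
    # Betweenness-driven CbR-CnR rule, with the usual relaxation for
    # packets that have not yet been replicated.
    rank_ok = RB_u < RB_v or (RB_u == RB_v and not already_replicated)
    if not (rank_ok and B_u > B_v):
        return False
    # Once the carrier is in the top-similarity cluster, replication is
    # confined to nodes within that cluster.
    if RS_v == 1:
        return RS_u == 1
    return True
```

Outside the top-similarity cluster the sketch behaves exactly like CbR-CnR with betweenness as the only utility; inside it, the extra check keeps the packet among nodes socially close to the destination.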
To evaluate the performance of C$^2$bR we compare it with the simple CbR approach that uses “simbet" as the utility function. We test the two approaches on top of CnR, DF and COORD in five different traces and for various node storage capacities. Fig. \[CbR2D-OH\] presents the routing gain of both CbR and C$^2$bR with respect to the performance of the underlying algorithm (either CnR, DF or COORD) when it uses “simbet" as the utility function. As expected, the simple CbR approach provides significant performance improvements in all cases. At the same time, the results confirm our assessment regarding the limitations of CbR and prove that it is possible to achieve vast performance improvements with C$^2$bR. Indeed, in most cases C$^2$bR manages to almost double the routing gain or perform even better. But most impressively, C$^2$bR provides this gain with virtually no trade-off. In fact, C$^2$bR-CnR also significantly improves delivery efficiency (Fig. \[CbR2D-DR\]) while C$^2$bR-DF and C$^2$bR-COORD achieve virtually the same performance as CbR-DF and CbR-COORD and better performance than the underlying algorithm (i.e., DF and COORD respectively). The only exception is the case of infinite storage capacity. But even in this case the observed performance lag is limited ($\sim2\%$ and no more than $\sim4\%$ in the worst case).
Besides the benefits of C$^2$bR over CbR, we also examine how C$^2$bR compares to the most well-known and established social-based routing algorithms, namely SimBet [@SimBet] and BubbleRap [@BubbleRap; @BubbleRap-conf]. SimBet was originally proposed as a single-copy algorithm featuring the “simbet" utility that we discussed previously. For a fair comparison, we used its follow-up multi-copy version [@SimBetTS]. This falls in the spray-based category of algorithms, i.e., a predetermined number of $L$ packet copies is distributed in the network. The distribution and forwarding of copies depends on the “simbet" utility of the encountering nodes. In order to produce the “simbet" utility, we used the proposed weight of 0.5 for both similarity and betweenness, i.e., both have equal importance [@SimBet]. BubbleRap is a multi-copy algorithm that falls in the dynamic replication sub-class [@BubbleRap]. The algorithm requires a community detection mechanism. Its forwarding strategy bears similarities to the one in C$^2$bR. A message is moved up in the global hierarchy, constructed based on the centrality of each node, until it reaches a node in the destination’s community. Then, the message is moved within the community using the local hierarchy, constructed based on the local centrality of nodes. Besides the apparent analogy, which is to forward a message up in the hierarchy until it moves in the social vicinity of the destination, C$^2$bR is different from BubbleRap in many aspects. First, C$^2$bR uses ego-betweenness [@egoBet] to approximate betweenness centrality and construct the global hierarchy whereas BubbleRap proposes the use of the average unit-time degree. More importantly, C$^2$bR does not require any community detection mechanism to identify the destination’s social neighborhood. Nor does it require any sort of customization that comes with it.
Instead, it capitalizes on the metric of similarity to quantify the social proximity to the destination and route the packet in the direction of increasing proximity. Last but not least, C$^2$bR utilizes the concept of cluster-based replication in all phases of forwarding a message towards the destination in order to reduce the incurred cost. To enable distributed community detection by a node in BubbleRap, we implemented the distributed version of the $K$-CLIQUE algorithm discussed in [@BubbleRap] and described in [@BubbleRap-dist-comm]. $K$-CLIQUE requires some customization that depends on the network, namely the value for $K$ as well as a weight threshold for ruling out contacts that are insignificant from a social point of view. We focus our comparison on the Reality trace since it is a typical example of a trace exhibiting social characteristics. Furthermore, this choice allows us to use the parameter values for $K$-CLIQUE that were reported in [@BubbleRap; @BubbleRap-conf] for the Reality trace, namely $K=3$ and a threshold of 388800s. This is critical for providing a fair comparison.
To evaluate the pure replication and forwarding efficiency of all algorithms in terms of the delivery-cost trade-off, we first consider the case of unlimited storage at each node. Moreover, we assume that all copies of a message are instantly deleted upon delivery of this message to the destination. Fig. \[C2bR-SimBet-Bubble-0\] presents the performance of all algorithms in terms of delivery ratio, i.e., the percentage of packets successfully delivered to the destination, and routing cost, i.e., the average number of transmissions performed for each message. Furthermore, the performance of each protocol in terms of average delivery delay is presented with different color darkness. C$^2$bR-DF and C$^2$bR-COORD achieve the best delivery-cost trade-off, a confirmation of the effectiveness of the cluster-based approach. SimBet achieves approximately the same delivery efficiency (for $L\!\geq\!8$) at a cost that is $\sim\!2\!-\!2.5$ times greater than the cost of C$^2$bR schemes. Note that obtaining the best performance for SimBet depends on determining the optimal $L$, which is not a straightforward task since it depends on the network. On the other hand, BubbleRap produces an improved delivery ratio of $\sim\!6\%$ but this improvement comes at a high cost which is $\sim\!5$ times greater than the cost of C$^2$bR-DF and C$^2$bR-COORD. Moreover, C$^2$bR-CnR achieves a delivery ratio close to BubbleRap (lagging only $\sim\!2\%$) but its cost is only $\sim\!64\%$ of BubbleRap’s cost.
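For reference, the two headline metrics used throughout this comparison can be computed from a simulation summary as follows. This is a minimal sketch under an assumed log format, not code from the evaluation framework.

```python
def routing_metrics(transmissions, delivered, generated):
    """Compute delivery ratio and routing cost from a simulation summary.

    transmissions: dict mapping message id -> number of transmissions
                   performed for that message;
    delivered:     set of message ids that reached their destination;
    generated:     set of all generated message ids.
    """
    # Delivery ratio: fraction of generated messages that were delivered.
    delivery_ratio = len(delivered) / len(generated)
    # Routing cost: average number of transmissions per generated message.
    routing_cost = sum(transmissions.values()) / len(generated)
    return delivery_ratio, routing_cost
```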
It is clear that the previous setting captures the best case performance with respect to the routing cost because it assumes that all copies are immediately deleted upon delivery of a message to the destination. In a real-life setting, it is critical for an algorithm to provide a stopping rule in order to prevent nodes, not meeting the destination, from continuing replication after message delivery. Spray-based schemes impose such a rule since a node left with one copy is not allowed to continue replication. BubbleRap, on the other hand, does not delineate any specific stopping rule. Interestingly, C$^2$bR-DF and C$^2$bR-COORD inherently enforce a stopping policy for each node that in a nutshell can be described as follows: a node stops replication once the packet has been moved to a node that belongs to a better cluster (see line 6 in Fig. \[pseudocodeC2bR-DF\] and line 9 in Fig. \[pseudocodeC2bR-COORD\]). These stopping rules are also augmented by using the concept of utility threshold. C$^2$bR-CnR also delineates a stopping policy which is, however, less effective since it does not use the idea of utility threshold. The stopping rule dictates that replication stops when the packet reaches a node in the best cluster (see line 5 in Fig. \[pseudocodeC2bR-CnR\]). We extensively experimented with the more realistic scenario where nodes that do not meet the destination erase a packet based on TTL. We found that both SimBet and BubbleRap failed to compete in terms of routing cost with C$^2$bR schemes and especially C$^2$bR-DF and C$^2$bR-COORD. Compared to the previous case (Fig. \[C2bR-SimBet-Bubble-0\]), the cost of both C$^2$bR-DF and C$^2$bR-COORD increased slightly, the cost of SimBet increased by up to $30\%$ depending on the value of $L$ and, as expected, BubbleRap’s cost escalated dramatically.
In other words, C$^2$bR-DF and C$^2$bR-COORD proved to be the most efficient regarding the policy for stopping replication while the strategy of SimBet proved to be inadequate. At the same time, BubbleRap’s performance collapsed due to the lack of any stopping policy.
As a next step, in addition to TTL (again set to 20% of trace duration), we tested a more effective rule for stopping replication which is to limit the hops that a packet can travel (*hop limit* rule). The combination of this rule with the replica limit imposed by the Spray-based approach used in SimBet produced reasonable performance in terms of cost. On the contrary, the hop limit rule proved to be insufficient for delivering a reasonable performance when used in BubbleRap. Based on this finding and in order to have a fair comparison with SimBet, we also limit the number of copies in the case of BubbleRap. Since the latter is a dynamic replication algorithm, the only realistic way to do this is to limit the number of copies ($\ell$) that each node is allowed to produce. We should stress that, in the following comparison, *we do not use the hop limit rule nor do we limit the number of copies for C$^2$bR schemes*. We will show later that one reason for this decision is that, performance-wise, this was not necessary. The second reason pertains to the nature of C$^2$bR schemes. Discovering clusters depends on the exchange of copies and then those clusters are used to confine replication. Using other means to limit replication may damage the process of cluster formation and therefore may be harmful overall. Finally, in the following comparison we do not use C$^2$bR-CnR since the other two C$^2$bR protocols produce much better performance results.
Fig. \[DRvsOHvsDelay-Multi-HL3\] presents the routing cost with respect to the delivery ratio for C$^2$bR-DF and C$^2$bR-COORD. The graph also illustrates the performance of SimBet and BubbleRap for different values of $L$ and $\ell$ respectively in the case that we allow packets to travel at most 3 hops. A first important observation is the robustness of C$^2$bR schemes regarding their ability to confine replication, especially after the delivery of the message. Even without imposing any predetermined limit on the number of copies or on the number of hops, the cost for both algorithms presents a minimal increase compared to the previous experiment (Fig. \[C2bR-SimBet-Bubble-0\]). Overall, C$^2$bR-COORD strikes the best performance trade-off (the same as BubbleRap with $\ell\!\!=\!\!2$). Improving the delivery rate by $\sim\!\!0.5\%$ (SimBet $L\!\!=\!\!16$), $\sim\!\!3.5\%$ (BubbleRap $\ell\!\!=\!\!3$) or $\sim\!\!4.5\%$ (BubbleRap $\ell\!\!=\!\!4$) requires a cost that is $\sim\!\!35\%$, $\sim\!\!77.5\%$ and $\sim\!\!174\%$ greater than that of C$^2$bR-COORD, respectively. Impressively, C$^2$bR-COORD does not require any special customization. On the contrary, to achieve the best performance of BubbleRap one is required, besides customizing the community detection algorithm, to also determine the appropriate value of $\ell$. This is not straightforward because the best value depends on the connectivity properties of the network that are not known beforehand. It is evident that failure to properly set $\ell$ results in either a noticeable cutback in delivery efficiency or in a significant increase in cost (Fig. \[DRvsOHvsDelay-Multi-HL3\]). Furthermore, it is critical for BubbleRap to choose the proper hop limit. Fig. \[DRvsOHvsDelay-Multi-HL4\] is similar to Fig. \[DRvsOHvsDelay-Multi-HL3\] but for a limit of 4 hops for SimBet and BubbleRap. Clearly, increasing the hop limit destroys the performance trade-off for BubbleRap regardless of $\ell$.
This illustrates the importance of yet another parameter that requires non-trivial customization because it depends on the network properties which may not be known, especially at setup time. Fig. \[C2bR-SimBet-Bubble-1\] implies that a similar customization problem also applies to SimBet, i.e., a misfire in customization of either $L$ or the hop limit significantly affects its performance. On the other hand, C$^2$bR schemes do not depend on any similar customization and although C$^2$bR-COORD outperforms C$^2$bR-DF, the latter is very close and its performance is competitive with those of SimBet and BubbleRap.
Another interesting finding is that C$^2$bR schemes perform efficiently compared to the other algorithms under different values of packet TTL. In other words, the rules for stopping replication in C$^2$bR schemes do not impair the ability of the cluster-based approach to timely deliver packets. Fig. \[C2bR-SimBet-Bubble-2\] presents the performance of all algorithms for different values of TTL from a minimum of 15 mins to a maximum that equals 20% of the Reality trace duration. For SimBet and BubbleRap we present the best performances, i.e., $L\!\!=\!\!8$ and $L\!\!=\!\!12$ with a limit of 3 hops for SimBet and $\ell\!\!=\!\!2$ with a limit of 3 hops for BubbleRap. Both C$^2$bR-DF and C$^2$bR-COORD achieve delivery performances similar to BubbleRap and SimBet with $L\!\!=\!\!12$ for all TTL values (Fig. \[DRvsTTL\]). In fact, both C$^2$bR schemes slightly outperform the other two for medium TTL values. Only SimBet with $L\!\!=\!\!8$ lags significantly in delivery efficiency which comes as a trade-off for reducing cost (Fig. \[OHvsTTL\]). C$^2$bR-DF outperforms SimBet with $L\!\!=\!\!12$ and C$^2$bR-COORD performs similarly to BubbleRap ($\ell\!\!=\!\!2$) in terms of cost although, contrary to their counterparts, they do not require any network-dependent customization.
As a final test, we explored the performance of the algorithms in the case of limited node storage (Fig. \[C2bR-SimBet-Bubble-3\]). Reasonably, the delivery efficiency of all algorithms declines as the available storage gets smaller. Both C$^2$bR-DF and C$^2$bR-COORD achieve performances that are competitive with those of SimBet and BubbleRap. This is especially true if we keep in mind that the presented versions of SimBet and BubbleRap are the ones with the optimal replication level for each algorithm. This is critical since controlling replication allows more free storage space and therefore minimizes the packet drop rate. Needless to say, producing the optimal replication level for SimBet and BubbleRap calls for a network-dependent fine-tuning which may not even be possible in a real-life setting. As a last remark, BubbleRap exhibits an increased resilience to limited storage. This is mainly due to the method we implemented for controlling replicas which inherently imposes load balancing since each node is allowed to create the same number of copies.
Conclusion {#sec-concl}
==========
Despite their flexibility in operating effectively in delay-tolerant networks with diverse characteristics, dynamic replication schemes are inclined towards over-replication. To deal with the problem, we first made the observation that the utility values observed by a node through its contacts form clusters. We validated that these clusters can be identified by a node using lightweight clustering algorithms. Then, we delineated a novel forwarding policy that can be used to transform the decision making process of traditional dynamic replication schemes to one that relies on cluster-based decisions. More specifically, the key concept in our approach is to forward a packet through clusters of increasing delivery capability, in contrast to the existing approach which is to create replicas in nodes of increasing utility. We also extended our cluster-based approach to work with two utility functions at the same time. This extension is tailored for routing in mobile social networks. We experimentally demonstrated the significant performance benefits of cluster-based replication when operating either with one or two utility functions. We also validated that our approach is robust in a set of networks with diverse characteristics without the need for a complex and non-trivial pre-configuration.
Updating the Clustering Result {#sec-updating .unnumbered}
==============================
In Section \[subsec-updating\] we discussed the requirement for refreshing the utility values recorded by a node and accordingly its clustering result. Since we observed rather simple and smooth changes in the recorded data, we opted for LVQ as the refreshing function due to its low complexity. However, alternative update methods could be examined. Here, we evaluate the performance of two alternative update methods and justify our choice of using LVQ.
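For context, the LVQ-style refresh that serves as our baseline is essentially a single-step centroid pull: each newly recorded utility value moves the nearest cluster centre a small step towards it. A minimal sketch follows (the learning rate `eta` is an assumed parameter, not a value from the paper):

```python
def lvq_update(centroids, u, eta=0.05):
    """Pull the centroid nearest to the new utility value u towards u.

    This is the unsupervised LVQ-style refresh: O(k) work per new sample,
    with no need to re-run a full clustering pass.
    """
    i = min(range(len(centroids)), key=lambda j: abs(centroids[j] - u))
    centroids[i] += eta * (u - centroids[i])
    return centroids
```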
The first method is *periodic $k$-Means*, i.e., a periodic, window-based execution of the $k$-Means algorithm. More specifically, after completing the training period and detecting utility clusters, a node continues to record new utility values. After collecting $T_{P}$ new samples, the node re-evaluates the utility clusters using the $k$-Means algorithm and the $W$ most recently recorded utility values. We call $T_{P}$ the update period and $W$ the update window. The second update approach extends the periodic one by using a concept known as *weighted $k$-Means* [@weighted-kmeans]. The idea here is to assign to each recorded utility value a weighting factor and then execute the $k$-Means algorithm. The weight for each recorded value decreases with the age of this value, i.e., an older recorded value is assigned a smaller weight, thus providing a node with the ability to adjust its clustering result to more recent utility values. More specifically, we assign to each recorded utility value $u$ a weight: $$w(u) = e^{-i/R}$$ where $i$ is the index of $u$ if all utilities in the window $W$ are ordered by their recording time and $R$ is a constant used to control the weight decaying rate. Now, the $k$-Means objective function is: $$\sum_{i=1}^{k}\sum_{u \in C_{i}} w(u)||u-c_{i}||^{2}$$ where $C_{1},C_{2},\ldots,C_{k}$ are the $k$ clusters of utility values. The value $c_{i}$ is the weighted mean of $C_{i}$: $$c_{i} = \frac{1}{\sum\limits_{u \in C_{i}}w(u)}\sum_{u \in C_{i}} uw(u)$$ Observe that the traditional $k$-Means algorithm can be seen as a special case where $w(u)$ is constant, i.e. $w(u)\!=\!w, \forall u$.
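The weighted variant above can be sketched as a standard Lloyd iteration using the exponentially decaying weights $w(u)=e^{-i/R}$. The initialisation and iteration count below are our own choices, not prescribed by the paper.

```python
import math

def weighted_kmeans(values, k, R=400.0, iters=50):
    """Weighted k-Means over a window of recorded utility values.

    values are ordered oldest to newest; the age index i gives weight
    w = exp(-i / R), so newer samples influence the centroids more.
    Returns the k cluster centres, sorted.
    """
    n = len(values)
    # Age index: the most recently recorded value has i = 0.
    w = [math.exp(-(n - 1 - t) / R) for t in range(n)]
    # Initialise centroids evenly over the observed value range.
    lo, hi = min(values), max(values)
    c = [lo + (hi - lo) * (j + 0.5) / k for j in range(k)]
    for _ in range(iters):
        # Assignment step: each value joins its nearest centroid.
        assign = [min(range(k), key=lambda j: (values[t] - c[j]) ** 2)
                  for t in range(n)]
        # Update step: each centroid becomes the weighted mean of its cluster.
        for j in range(k):
            ws = sum(w[t] for t in range(n) if assign[t] == j)
            if ws > 0:
                c[j] = sum(w[t] * values[t]
                           for t in range(n) if assign[t] == j) / ws
    return sorted(c)
```

Setting all weights equal recovers the traditional $k$-Means update, matching the special case noted above.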
We implemented both updating methods in Adyton [@Adyton] and assessed their performance compared to LVQ. Similar to the traditional implementation of $k$-Means, here, we also use the Silhouette criterion to automatically select the best value of $k$. We assumed unlimited storage at nodes in order to avoid other interfering factors. For periodic $k$-Means, we report results for $T_{P}\!=\!50$ and $W\!=\!50$. Although we also tested various combinations of values for $T_{P}$ and $W$, we witnessed insignificant performance variations. Regarding weighted $k$-Means, we report the results for the same values of $T_{P}$ and $W$ and for $R\!=\!400$. Again, when using different values of $R$, we observed only slight performance variations. We used CbR-DF as the reference algorithm and produced three algorithm versions corresponding to the three updating methods. Then, we captured the performance of those three versions using various utility functions in three representative traces, namely MIT Reality, Sigcomm and Cambridge. Fig. \[UpdatingMethods-results\] illustrates the routing gain of the three CbR-DF versions compared to the simple DF algorithm. For the majority of utility functions and trace combinations the three schemes achieve similar performance. The result is reasonable since we have observed that the number of utility clusters that a node detects rarely changes. Instead, the time evolution mostly concerns the center of the clusters, a type of evolution that LVQ can handle as efficiently as the other two update methods. In fact, for some utility functions such as DestEnc, PRoPHET, SPM and LTS, LVQ performs consistently and noticeably better than the other two schemes, an indication of the smooth adaptation to the changing network conditions. On the other hand, LVQ is slightly lagging in most cases when a destination independent (DI) utility is used, e.g., Enc and LastContact. 
Recall that a DI utility $U_{v}$ aims to capture the generic importance of $v$; it is therefore built from contact information regarding multiple possible destination nodes instead of a single one, as in the case of destination dependent (DD) utilities. This makes a DI utility a more mutable quantity compared to a DD one. In any case, the routing gain lag of LVQ is extremely limited and can be considered an acceptable trade-off for its lower computational complexity. The same picture of minimal performance variations between the three schemes also appears when examining the normalized delivery rate and delay. Regarding the delivery rate change, besides an $\sim\!\!1\%$ improvement in favor of LVQ when LTS is used, we found that the three schemes yield practically the same performance (maximum variation $0.4\%$). The same observation applies to the delay change where the maximum variation was $1.1\%$.
|
---
abstract: 'The Digital Ludeme Project (DLP) aims to reconstruct and analyse over 1000 traditional strategy games using modern techniques. One of the key aspects of this project is the development of Ludii, a general game system that will be able to model and play the complete range of games required by this project. Such an undertaking will create a wide range of possibilities for new AI challenges. In this paper we describe many of the features of Ludii that can be used. This includes designing and modifying games using the Ludii game description language, creating agents capable of playing these games, and several advantages the system has over prior general game software.'
author:
-
bibliography:
- 'References.bib'
title: 'An Overview of the Ludii General Game System'
---
General Game Playing, Artificial Intelligence, Ludii, Ludemes, Board games
Introduction
============
General game research is one of the most popular areas of game-based academia, and is widely viewed as a suitable means of evaluating AI techniques for a variety of applicable domains and problems [@gameaibook]. Several general game systems have been developed to assist with this research field, with the main candidates for such work being the General Game Playing (GGP) system [@genesereth05], the General Video Game AI (GVGAI) framework [@gvgaioverview], and the Arcade Learning Environment (ALE) [@aleoverview]. We propose Ludii as a new general game system, that can facilitate many novel and exciting areas of research.
Within this demo paper, we describe the key aspects of the Ludii system that make it an appealing alternative to other general game platforms. We cover the language used for describing games in Ludii, which provides a simple and human understandable approach. We also provide details on AI techniques that will be included with Ludii at launch. The first version of Ludii is expected to be released in August 2019, but will continue to be improved and updated with new games, functionality and organised events for many years afterwards.
Ludii System
============
Ludii is a new general game system currently under development [@Piette19]. It is based on similar principles to the earlier Ludi system [@browne09], but uses significantly different mechanisms to achieve much greater generality, extensibility and efficiency.
Ludii is being designed and implemented primarily to provide answers to the questions raised by the Digital Ludeme Project [@ludii1], but will stand alone as a platform for general games research in the areas of agent development, content generation, game design, player analysis, and education. Ludii provides many advantages over existing GGP systems, including performance improvements, simpler and clearer game descriptions, as well as a highly evolvable language.
Ludii uses a class grammar approach which automatically generates the game description language from the class hierarchy of its underlying source code [@BrowneB16]. This ensures that there is a 1:1 mapping between the Ludii source code and the game description grammar, as any games which are expressed using this grammar are automatically instantiated back into the corresponding library code for compilation. Ludii can theoretically support any rule, equipment or behaviour that can be programmed in Java. The exact implementation details for each of these definitions are encapsulated within our simplified grammar, which summarises the code to be called. Additional details and examples on the Ludii description language are provided in the next section.
(game "Gomoku"
(mode 2)
(equipment {
(goBoard 15)
(ball Each)
})
(rules
(play (to (mover) (empty)))
(end (line length:5) (result (mover) Win))
)
)
(game "Gomoku"
(mode 2 (addToEmpty))
(equipment {
(goBoard 15)
(ball Each)
}
)
(rules
(play (to (mover) (empty)))
(end (line length:5) (result (mover) Win))
)
)
(game "Amazons"
(mode 2)
(equipment {
(chessBoard 10)
(queen Each (slide (in (to) (empty)) (then (replay))))
(dot None)
})
(rules
(start {
(place "Queen1" {3 6 30 39})
(place "Queen2" {60 69 93 96})
})
(play
(if (even (turn))
(byPiece)
(shoot (in (to) (empty)) "Dot0")
)
)
(end
(stalemated (mover))
(result (next) Win)
)
)
)
A wide range of different game types can be implemented in Ludii, including but not limited to:
- Deterministic, stochastic and hidden information games.
- Board, card, dice and tile games.
- Boardless games (e.g. Dominoes).
- Stacking games (e.g. Lasca).
- Simultaneous move games (e.g. Chinook).
- Graph games (e.g. Dots and Boxes).
- Multi-player / team games and single-player puzzles.
Ludii Game Description Language
===============================
Games are described in the Ludii grammar using [*ludemes*]{}, which are conceptual units of game-related information that encapsulate key game design concepts, essentially forming the building blocks or “DNA” of games. As an example, Figure \[Fig:Gomoku\] shows the game of Gomoku, along with the Ludii game description file that was used to create this game. The equipment section describes the pieces and board that are needed to play. The rules section describes how the game is initially set up (start), how each turn in the game proceeds (play), and under what conditions the game is over (end).
Games described using this approach can be easily modified by adjusting ludemes to suit your needs. Changing the board size or victory conditions can be as simple as editing a single parameter. File sizes for each game are also very small, with each game’s description fitting into a QR code. The language is easy to read and understand compared to more verbose alternatives such as GDL. Recent experimental results revealed that our system tends to be faster than other GGP alternatives. The ability to break games down into their individual ludemes allows for additional analyses, such as phylogenetic analyses, of the similarities and differences between games.
Ludii Agents
============
Implementations of standard game-playing agents are included in Ludii. Due to the large number of (variants of) games that are planned to be included, we focus on techniques that are generally applicable across many games. This means that we focus on techniques such as Monte Carlo tree search (MCTS), which is the most popular approach in prior GGP research. Third parties will also be able to develop their own agents for Ludii.
Ludii will also provide visualisations to gain insight into the “thinking process” of algorithms such as MCTS. Examples of such visualisations are depicted in Figures \[Fig:Gomoku\] and \[Fig:ManyGames\]. Arrows are drawn for every possible movement action (in games like Amazons, Chess, Hnefatafl, etc.), and circles are drawn for placement actions (in games like Go, Hex, Reversi, etc.). The sizes of these drawings scale with the visit counts associated with actions in an MCTS search process, and colours are changed based on the value estimates of MCTS (e.g. blue for winning moves, red for losing moves, purple for moves with neutral value estimates).
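The described mapping from search statistics to visual properties can be sketched as follows. The exact scales and colour interpolation used by Ludii are not specified here, so the numbers below are illustrative assumptions.

```python
def action_style(visits, value, max_visits, min_size=2.0, max_size=20.0):
    """Map an action's MCTS statistics to drawing attributes.

    visits scales the size of the arrow/circle linearly; value is the MCTS
    value estimate in [-1, 1] (losing .. winning) and is mapped to a colour
    that blends red (losing) through purple (neutral) to blue (winning).
    """
    size = min_size + (max_size - min_size) * visits / max(max_visits, 1)
    t = (value + 1.0) / 2.0      # map [-1, 1] to [0, 1]
    colour = (1.0 - t, 0.0, t)   # RGB: red -> purple -> blue
    return size, colour
```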
Conclusion
==========
The Ludii system presents a new and intuitive approach to game design and game playing. The scope and completeness of the games available with Ludii can rival that of any prior general game playing system. The software is easy to use for those who are not technically inclined, while also providing the functionality for integrating existing agents and AI techniques. The language used to describe games also opens up many new avenues of research, particularly in the areas of general game generation and analysis.
Acknowledgment. {#acknowledgment. .unnumbered}
===============
This research is part of the European Research Council-funded Digital Ludeme Project (ERC Consolidator Grant \#771292) run by Cameron Browne at Maastricht University’s Department of Data Science and Knowledge Engineering.
|
---
abstract: 'In this paper, a lower bound is determined in the minimax sense for change point estimators of the first derivative of a regression function in the fractional white noise model. Similar minimax results presented previously in the area focus on change points in the derivatives of a regression function in the white noise model or consider estimation of the regression function in the presence of correlated errors.'
address: 'School of Mathematics & Statistics F07, The University of Sydney, NSW, 2006, Australia.'
author:
- Justin Rory Wishart
title: 'Minimax lower bound for kink location estimators in a nonparametric regression model with long-range dependence'
---
nonparametric regression, long-range dependence, kink, minimax; 62G08, 62G05, 62G20
Introduction {#Intro}
============
Nonparametric estimation of a kink in a regression function has been considered for Gaussian white noise models by @Cheng-Raimondo-2008 [@Goldenshluger-et-al-2008a; @Goldenshluger-et-al-2008b]. Recently, this was extended to the fractional Gaussian noise model by [@Wishart-2009]. The fractional Gaussian noise model assumes the regression structure, $$dY(x) = \mu(x)\,dx + \varepsilon^\alpha dB_H(x), \quad x \in \mathbb{R},
\label{eq:fixednonparareg}$$ where $B_H$ is a fractional Brownian motion (fBm) and $\mu {\,{:}\,\mathbb{R}\!\longrightarrow\!\mathbb{R}}$ is the regression function. The level of error is controlled by $\varepsilon \asymp n^{-1/2}$ where the relation $a_n \asymp b_n$ means the ratio $a_n/b_n$ is bounded above and below by constants. The level of dependence in the error is controlled by the Hurst parameter $H \in (1/2,1)$ and $\alpha {\mathrel{\mathop:}=}2 - 2H$, where the i.i.d. model corresponds to $\alpha = 1$. The fractional Gaussian noise model was used by [@Johnstone-Silverman-1997; @Wishart-2009] among others to model regression problems with long-range dependent errors.
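To make the dependence structure concrete: the unit-lag increments of $B_H$ form fractional Gaussian noise with autocovariance $\gamma(k) = \tfrac{1}{2}\left(|k+1|^{2H} - 2|k|^{2H} + |k-1|^{2H}\right)$ (for unit variance). For $H > 1/2$ these correlations are positive and decay slowly, which is the long-range dependence referred to above, while $H = 1/2$ gives uncorrelated (white noise) increments. A quick check in code (our own illustration):

```python
def fgn_autocov(k, H):
    """Autocovariance at lag k of unit-variance fractional Gaussian noise
    (the stationary increments of fractional Brownian motion with Hurst
    parameter H)."""
    k = abs(k)
    return 0.5 * ((k + 1) ** (2 * H)
                  - 2 * k ** (2 * H)
                  + abs(k - 1) ** (2 * H))
```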
This paper is interested in the performance of estimators of a change-point in the first derivative of $\mu$ observed in model \[eq:fixednonparareg\]. This type of change point is called a kink and its location is denoted by $\theta$. Let $\widehat \theta_n$ denote an estimator of $\theta$ given $n$ observations. A lower bound is established for the minimax rate of kink location estimation using the quadratic loss in the sense that, $$\liminf_{n \to \infty } \inf_{\widehat \theta_n} \sup_{\mu \in \mathscr F_s(\theta)} \rho_n^{-2}\mathbb{E} \left| \widehat \theta_n - \theta\right|^2 \ge C \qquad \text{for some constant $C>0$}.\label{eq:rate}$$ The main quantity of interest in this lower bound is the rate, $\rho_n$. In \[eq:rate\], $\inf_{\widehat \theta_n}$ denotes the infimum over all possible estimators of $\theta$. The class of functions under consideration for $\mu$ is denoted $\mathscr F_s(\theta)$ and defined below.
\[def:functionalclass\] Let $s \geq 2$ be an integer and $a \in \mathbb{R}\setminus \left\{ 0\right\}$. Then, we say that $\mu\in \mathscr F_s(\theta)$ if,
1. The function $\mu $ has a kink at $\theta \in (0,1)$. That is, $$\lim_{x \downarrow \theta}\mu^{(1)}(x) - \lim_{x \uparrow \theta}\mu^{(1)}(x) = a \neq 0.$$
2. The function $\mu \in \mathscr L_2\left(\mathbb{R}\right) \cap \mathscr L_1(\mathbb{R}) $, and satisfies the following condition, $$\int_\mathbb{R} |\widetilde \mu(\omega)||\omega|^s\,d\omega < \infty, \label{sobolev}$$ where $\widetilde \mu(\omega) {\mathrel{\mathop:}=}\int_\mathbb{R} e^{-2 \pi i \omega x}\mu(x)\, dx$ is the Fourier transform of $\mu$.
The minimax rate for the kink estimators has been discussed in the i.i.d. scenario by [@Cheng-Raimondo-2008; @Goldenshluger-et-al-2008a] and was shown to be $n^{-s/(2s+1)}$. An extension of the kink estimators to the long-range dependent scenario was considered in [@Wishart-2009], which built on the work of [@Cheng-Raimondo-2008]. An estimator of kink locations was constructed by [@Wishart-2009] and achieved the rate in the probabilistic sense, $$\left| \widehat \theta_n - \theta\right| = \mathcal O_p (n^{-\alpha s /(2s+\alpha)}),\label{eq:kinkrate}$$ which includes the result of [@Cheng-Raimondo-2008] as a special case with the choice $\alpha = 1$. Both [@Cheng-Raimondo-2008] and [@Wishart-2009] considered a comparable model in the indirect framework and used the results of [-@Goldenshluger-et-al-2006] to infer the minimax optimality of . However, the results of [@Cheng-Raimondo-2008] and [@Wishart-2009] require a slightly more restrictive functional class than $\mathscr F_s(\theta)$. The rate obtained by [@Cheng-Raimondo-2008] of $n^{-s/(2s+1)}$ was confirmed as the minimax rate by the work of [@Goldenshluger-et-al-2008a] who used the i.i.d. framework and a functional class similar to $\mathscr F_s(\theta)$.
The fBm concept is an extension of Brownian motion that can exhibit dependence among its increments which is typically controlled by the Hurst parameter, $H$ (see [-@Beran-1994; -@Doukhan-et-al-2003] for more detailed treatment on long-range dependence and fBm). The fBm process is defined below.
\[fBm\] The fractional Brownian motion $\left\{B_H(t) \right\}_{t \in \mathbb{R}}$ is a Gaussian process with mean zero and covariance structure, $$\mathbb{E} B_H(t)B_H(s) =\frac{1}{2}\left\{ |t|^{2H} + |s|^{2H} - |t-s|^{2H} \right\}.$$
We assume throughout the paper that $H\in (1/2,1)$, whereby the increments of $B_H$ are positively correlated and are long-range dependent.
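The positive correlation and long-range dependence of the increments can be checked numerically: the fGn autocovariance $\gamma(k) = \tfrac12\left(|k+1|^{2H} - 2|k|^{2H} + |k-1|^{2H}\right)$ behaves like $H(2H-1)k^{-\alpha}$ for large $k$, which is not summable when $\alpha \in (0,1)$. A short sanity check (the values of $H$ and $k$ below are arbitrary choices of ours):

```python
import numpy as np

H = 0.8
alpha = 2.0 - 2.0*H
k = np.array([10.0, 100.0, 1000.0])
gamma = 0.5*((k + 1)**(2*H) - 2*k**(2*H) + (k - 1)**(2*H))
# compare with the asymptotic long-range dependent form gamma(k) ~ H(2H-1) k^{-alpha}
ratio = gamma/(H*(2*H - 1)*k**(-alpha))
```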
In this paper a lower bound for the minimax convergence rate of kink estimation using the quadratic loss function will be shown explicitly on model . This is a stronger result in terms of a lower bound than the simple probabilistic result in given by [@Wishart-2009] and is applicable to a broader class of functions.
Lower bound {#lowerbound}
===========
The aim of the paper is to establish the following result.
\[thm:lowerboundK\] Suppose $\mu \in \mathscr F_s \left( \theta \right)$ is observed from the model and $0 < \alpha < 1$. Then, there exists a positive constant $C < \infty$ that does not depend on $n$ such that the lower rate of convergence for an estimator for the kink location $\theta$ with the square loss is of the form, $$\liminf_{n \to \infty} \inf_{\widehat{\theta}_n} \sup_{\mu \in \mathscr F_s(\theta)} n^{ 2\alpha s/(2s + \alpha)} \mathbb{E}\left| \widehat{\theta}_n - \theta\right|^2 \ge C.$$
From one can see that the minimax rate for kink estimation in the i.i.d. case is recovered with the choice $\alpha = 1$ [see @Goldenshluger-et-al-2008a]. Also unsurprisingly, the level of dependence is detrimental to the rate of convergence. For instance as the increments become more correlated, and $\alpha \to 0$, the rate of convergence diminishes.
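The interplay between $\alpha$ and the rate is easy to tabulate. The helper below only illustrates the exponent arithmetic $\alpha s/(2s+\alpha)$ from the theorem:

```python
def rate_exponent(alpha, s):
    # exponent r in the lower-bound rate n^{-r} of the theorem
    return alpha*s/(2*s + alpha)

# alpha = 1 recovers the i.i.d. exponent s/(2s+1)
assert abs(rate_exponent(1.0, 2) - 2/5) < 1e-12
# smaller alpha (stronger dependence) always yields a slower rate
for s in (2, 3, 5):
    r = [rate_exponent(a, s) for a in (0.1, 0.5, 0.9, 1.0)]
    assert all(u < v for u, v in zip(r, r[1:]))
```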
As will become evident in the proof of , the Kullback-Leibler divergence is required between two measures involving modified fractional Brownian motions. To cater for this, some auxiliary definitions preceding the proof of are given in the next section.
Preliminaries
=============
In this paper, the functions under consideration are defined in the Fourier domain (see ). Among others, there are two representations for fBm that satisfy that are used in this paper. The first is the moving average representation of [@Mandelbrot-van-Ness-1968] in the time domain and the second is the spectral representation given by [@Samorodnitsky-Taqqu-1994] in the Fourier domain. Both need to be considered since both are used in the proof of the main result. The two representations have normalisation constants $C_{T,H}$ and $C_{F,H}$, for the time and spectral representations respectively, to ensure the fBm satisfies . Start with the time domain representation.
\[fBmMVN\] The fractional Brownian motion $\left\{B_H(t) \right\}_{t \in \mathbb{R}}$ can be represented by, $$B_H(t) =\frac{1}{C_{T,H}}\int_\mathbb{R} \left((t-s)_+^{H- 1/2} - (-s)_+^{H-1/2}\right)dB(s),$$ where $C_{T,H} = \Gamma(H + 1/2)/\sqrt{2H \sin (\pi H) \Gamma(2H)}$ and $x_+ = x\mathbbm{1}_{\left\{ x > 0\right\}}(x).$
For the spectral representation a complex Gaussian measure $\breve B{\mathrel{\mathop:}=}B^{[1]} + i B^{[2]}$ is used where $B^{[1]}$ and $B^{[2]}$ are independent Gaussian measures such that for $i = 1,2;$ $B^{[i]}(A) = B^{[i]}(-A)$ for any Borel set $A$ of finite Lebesgue measure and $\mathbb{E} (B^{[i]}(A))^2 = \operatorname{mes}(A)/2$, where $\operatorname{mes}(A)$ denotes the Lebesgue measure of $A$.
\[fBmST\] The fractional Brownian motion $\left\{B_H(t) \right\}_{t \in \mathbb{R}}$ can be represented by, $$B_H(t) =\frac{1}{C_{F,H}}\int_\mathbb{R} \frac{e^{i s t} - 1}{is}|s|^{-(H-1/2)}d\breve{B}(s),$$ where $C_{F,H} = \sqrt{\pi/(2H \sin (\pi H) \Gamma(2H))}$.
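Both normalisation constants are elementary to evaluate with the standard library. The sketch below checks two sanity properties: the boundary value $H = 1/2$ (outside the range assumed in this paper) reduces the moving average kernel to that of ordinary Brownian motion with $C_{T,1/2}=1$, and the two constants differ by the factor $\sqrt{\pi}/\Gamma(H+1/2)$.

```python
import math

def C_TH(H):
    # normalisation of the moving average (time domain) representation
    return math.gamma(H + 0.5)/math.sqrt(2*H*math.sin(math.pi*H)*math.gamma(2*H))

def C_FH(H):
    # normalisation of the spectral representation
    return math.sqrt(math.pi/(2*H*math.sin(math.pi*H)*math.gamma(2*H)))

assert abs(C_TH(0.5) - 1.0) < 1e-12          # H = 1/2: ordinary Brownian motion
for H in (0.6, 0.75, 0.9):
    assert math.isclose(C_FH(H), C_TH(H)*math.sqrt(math.pi)/math.gamma(H + 0.5))
```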
As will become evident in , to obtain the lower bound result for the minimax rate, it is crucial to know which functional class to consider for $\mu {\,{:}\,\mathbb{R}\!\longrightarrow\!\mathbb{R}}$ such that the process $ \int_\mathbb{R} \mu(x)\, dB_H(x)$ is a well defined random variable with finite variance. Two such classes of functions will be considered, $\mathcal H$ and $\widetilde{\mathcal{H}}$, which correspond to the time and spectral versions of fBm respectively. Begin with the moving average representation.
\[otherstochasticintegralfBmclass\] Let $H \in \left( 1/2, 1\right)$ be constant. Then the class $\mathcal H$ is defined by, $$\mathcal H = \left\{ \mu {\,{:}\,\mathbb{R}\!\longrightarrow\!\mathbb{R}} \Bigg| \int_\mathbb{R}\int_\mathbb{R} \mu(x) \mu(y) | x-y|^{-\alpha}\, dy \, dx < \infty \right\}.$$
Similar to , there is an inner product on the space $\mathcal H$ that satisfies the following. For all $f,g \in \mathcal H$, $$\mathbb{E} \left\{ \int_\mathbb{R} f(x) \, dB_H(x)\int_\mathbb{R} g(y) \, dB_H(y) \right\} = C_\alpha\int_\mathbb{R}\int_\mathbb{R} f(x) g(y)| x-y|^{-\alpha}\, dy \, dx {=\mathrel{\mathop:}}{ \langle f , g \rangle }_{\mathcal{H}},$$ where the constant $C_\alpha = \tfrac{1}{2} (1-\alpha)(2-\alpha)$. The other functional class for the spectral representation is denoted by $\widetilde{\mathcal H}$ and defined below.
\[stochasticintegralfBmclass\] Let $H \in \left( 1/2, 1\right)$ be constant. Then the class $\widetilde{\mathcal H}$ is defined by, $$\widetilde{\mathcal H} = \left\{ \mu {\,{:}\,\mathbb{R}\!\longrightarrow\!\mathbb{R}} \Bigg| \int_\mathbb{R} |\widetilde\mu(\omega)|^2 \left| \omega \right|^{-(1-\alpha)}\, d\omega < \infty \right\}.$$
On the space $\widetilde{\mathcal H}$, the stochastic integrals with respect to fBm are well defined and satisfy the following. For all $f,g \in \widetilde{\mathcal{H}}$, $$\mathbb{E} \left\{ \int_\mathbb{R} f(x) \, dB_H(x)\int_\mathbb{R} g(y) \, dB_H(y) \right\} = \frac{1}{C_{F,H}^2}\int_\mathbb{R} \widetilde{f}(\omega) \overline{\widetilde{g}(\omega) }| \omega|^{-(1-\alpha)}\, d\omega {=\mathrel{\mathop:}}{ \langle f , g \rangle }_{\widetilde{\mathcal{H}}},
\label{eq:spectralExpectation}$$ where $\overline{\widetilde{g}}$ denotes the complex conjugate of $\widetilde{g}$.
These two classes of integrands were considered extensively in [@Pipiras-Taqqu-2000]. In the context of this paper, the inner products can be used interchangeably because if $\mu \in \mathscr F_s(\theta)$ then $\mu \in \mathscr L_1(\mathbb{R})\cap \mathscr L_2(\mathbb{R})$ and by @Pipiras-Taqqu-2000 [Proposition 3.1] then $\mu \in \mathcal H$. Also, using @Pipiras-Taqqu-2000 [Proposition 3.2] with the isometry @Biagini-et-al-2008 [Lemma 3.1.2] and Parseval’s Theorem then $\mu \in \widetilde{\mathcal H}$ and consequently $\mu \in \mathcal H \cap \widetilde{\mathcal H}$.
Proof of Theorem 1 {#proof}
==================
The lower bound for the minimax rate is constructed by adapting the results of [@Goldenshluger-et-al-2006] to our framework. This requires obtaining the Kullback-Leibler divergence of two suitably chosen functions $\mu_0$ and $\mu_1$ from the functional class $\mathscr F_s(\theta)$. The main hurdle in determining the Kullback-Leibler divergence is the long-range dependent structure in the fBm increments. A summary of Girsanov type theorems for fBm has been given by @Biagini-et-al-2008 [Theorem 3.2.4]. Here however, the Radon-Nikodym derivative is the main focus. Once that is determined, the Kullback-Leibler divergence is linked to the lower rate of convergence using @Tsybakov-2009 [Theorem 2.2 (iii)]. Lastly, before proceeding to the proof, the quantity $C > 0$ denotes a generic constant that could possibly change from line to line.
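As a finite-dimensional analogue of this two-point argument, for Gaussian vectors sharing a (correlated-noise) covariance $\Sigma$ and differing only in their means $0$ and $m$, the Kullback-Leibler divergence is $\tfrac12 m^\top \Sigma^{-1} m$; bounding it links to a lower bound exactly as in the infinite-dimensional case. The dimensions and numbers below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))
Sigma = A @ A.T + 4.0*np.eye(4)               # shared covariance of the errors
m = rng.standard_normal(4)                    # mean shift between the two models
kl = 0.5*m @ np.linalg.solve(Sigma, m)        # KL(P0 || P1) for equal covariances
```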
Without loss of generality, consider a function $\mu_0 \in \mathscr F_s(\theta_0)$ where $\theta_0 \in (0,1/2]$ and define $\theta_1 = \theta_0 + \delta$ where $\delta \in (0, 1/2)$ (a symmetric argument can be set up to accommodate the case when $\theta_0 \in [1/2,1)$). Define the functions $v {\,{:}\,\mathbb{R}\!\longrightarrow\!\mathbb{R}}$ and $v_N {\,{:}\,\mathbb{R}\!\longrightarrow\!\mathbb{R}}$ such that $$\begin{aligned}
v(x) &{\mathrel{\mathop:}=}a((\theta_1\wedge x) - \theta_0) \mathbbm{1}_{ (\theta_0,1] }(x), \qquad v_N(x) {\mathrel{\mathop:}=}\int_{-N}^N \widetilde{v}(\omega) e^{2 \pi i x \omega}\, d\omega,\end{aligned}$$ where $a$ is the size of the jump given in and $\widetilde v$ is the Fourier transform of $v$. Note that $v_N(x)$ is close to $v(x)$ in the sense that it is the inverse Fourier transform of $\widetilde{v}(\omega)\mathbbm{1}_{|\omega| \le N}$ and $\widetilde v_N(\omega) = \widetilde v(\omega)\mathbbm{1}_{|\omega| \le N}$. With these definitions, the derivative takes the form $ v^{(1)}(x) = a \mathbbm{1}_{[\theta_0, \theta_1] }(x)$ and the function $(\mu_0 - v)$ has a single kink at $\theta_1$. Then define $\mu_1 {\mathrel{\mathop:}=}\mu_0 - (v - v_N)$. The function $v_N$ is, for finite $N$, infinitely differentiable on the whole real line, which implies that $\mu_1 = \mu_0 - (v - v_N)$ has a single kink at $\theta_1$. It can be shown that, $$\left|\widetilde{v}(\omega)\right| \le Ca\delta(2 \pi \left|\omega\right|)^{-1}.\label{eq:vomegabound}$$ Further, if $N$ is chosen to be $N = \left( s\pi C/(a \delta) \right)^{1/s}$ then $\int_\mathbb{R} | \widetilde{v_N}(\omega)| | \omega|^s \,d\omega < \infty$ and consequently $\mu_1 \in \mathscr F_s(\theta_1)$.
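The Fourier decay of $v$ can be verified numerically. Integration by parts (the ramp contributes total variation $a\delta$ and the jump at $x=1$ another $a\delta$) suggests the decay $|\widetilde v(\omega)| \le 2a\delta/(2\pi|\omega|)$, i.e. the bound with constant $2$; all numerical values below are arbitrary choices of ours.

```python
import numpy as np

a, theta0, delta = 2.0, 0.3, 0.1
theta1 = theta0 + delta
x = np.linspace(0.0, 1.0, 100001)
dx = x[1] - x[0]
v = a*(np.minimum(x, theta1) - theta0)*(x > theta0)   # v supported on (theta0, 1]
for omega in (5.0, 20.0, 80.0):
    f = v*np.exp(-2j*np.pi*omega*x)
    vt = np.sum(0.5*(f[1:] + f[:-1]))*dx              # trapezoid rule for v~(omega)
    assert abs(vt) <= 2.0*a*delta/(2.0*np.pi*omega) + 1e-4
```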
To be able to determine the Radon-Nikodym derivative, define $\Delta {\mathrel{\mathop:}=}\mu_0 - \mu_1 = v - v_N$ and note that $\Delta {\,{:}\,\mathbb{R}\!\longrightarrow\!\mathbb{R}}$. The Radon-Nikodym derivative also needs a paired function $\underline{\Delta}{\,{:}\,\mathbb{R}\!\longrightarrow\!\mathbb{R}}$. Define such a function via the singular integral equation, $$\Delta(x) {\mathrel{\mathop:}=}\varepsilon^{\alpha}C_\alpha \int_\mathbb{R} |x-y|^{-\alpha} \underline{\Delta}(y) \, dy = \frac{\varepsilon^{\alpha}\Gamma(3-\alpha)}{2} \left( \mathcal D_-^{-(1-\alpha)}\underline\Delta(x) + \mathcal D_+^{-(1-\alpha)}\underline\Delta(x)\right),\label{eq:DeltaDecomp}$$ where, for $\nu \in (0,1) $, $\mathcal D_-^{-\nu}$ and $\mathcal D_+^{-\nu}$ are the left and right fractional Liouville integral operators defined by, $$\mathcal D_-^{-\nu} f(x) {\mathrel{\mathop:}=}\frac{1}{\Gamma(\nu)}\int_{-\infty}^x (x-y)^{\nu -1}f(y)\, dy, \qquad \mathcal D_+^{-\nu} f(x) {\mathrel{\mathop:}=}\frac{1}{\Gamma(\nu)}\int_x^\infty (y-x)^{\nu -1}f(y)\, dy.$$ The function $\underline \Delta$ then has the Fourier-domain representation, $$\widetilde{\underline \Delta} (\omega) \asymp \varepsilon^{-\alpha}|\omega|^{1-\alpha} \widetilde{\Delta}(\omega).$$ Furthermore, $\underline{\Delta} \in \mathcal H \cap \widetilde{\mathcal H}$. Indeed, by definition, $\Delta = \mu_0 - \mu_1$ with $\mu_0 \in \mathscr F_s(\theta_0)$ and $\mu_1 \in \mathscr F_s(\theta_1)$, which implies that $\Delta \in \mathscr L_1(\mathbb{R}) \cap \mathscr L_2(\mathbb{R})$ and $\widetilde \Delta (\omega) = o(\omega^{-s})$ due to . First, it will be shown that $\underline \Delta \in \widetilde{\mathcal H}$. $$\begin{aligned}
{ \langle \underline \Delta , \underline \Delta \rangle }_{\widetilde{\mathcal H}} &\asymp \int_\mathbb{R} | \widetilde \Delta (\omega)|^2 |\omega|^{1-\alpha}\, d\omega\nonumber\\
&\le C\left\{ \|\Delta\|_{1}^2\int_{|\omega|\le 1} |\omega|^{1-\alpha}\, d\omega + \int_{|\omega|\ge 1} |\widetilde \Delta (\omega)|^2 |\omega|^{1-\alpha}\, d\omega\right\},\label{eq:uDeltainnerp}\end{aligned}$$ where $C > 0$ is some constant and $ \|\Delta\|_{1} = \int_\mathbb{R} | \Delta (x)| \, dx $. In , the first integral is finite since $\alpha \in (0,1)$ and the last integral is finite since $\widetilde \Delta (\omega) = o(\omega^{-s}) $ for $s \ge 2$, proving $\underline \Delta \in \widetilde{\mathcal H}$. Then, applying the isometry in @Biagini-et-al-2008 [Lemma 3.1.2] with Plancherel's theorem and , it follows that $\underline \Delta \in \mathcal H$.
Now let $P_0$ and $P_1$ be the probability measures associated with model with $\mu = \mu_0$ and $ \mu = \mu_1$ respectively. Define $\mathring{B}_H(x) {\mathrel{\mathop:}=}\varepsilon^{-\alpha} \int_0^x \Delta(y)\, dy + B_H(x)$. Then under the $P_0$ measure, $$\begin{aligned}
dY_0(x) &= \mu_0(x)\,dx + \varepsilon^\alpha \, dB_H(x)
= \mu_1(x)\,dx + \varepsilon^\alpha \, d\mathring{B}_H(x).\end{aligned}$$ The Radon-Nikodym derivative between these measures takes the form, $$\begin{aligned}
\frac{d P_1}{d P_0} &{\mathrel{\mathop:}=}\exp \left\{ - \int_\mathbb{R} \underline{\Delta}(x) \, dB_H(x) - \frac{1}{2}\mathbb{E}_{P_0} \left( \int_\mathbb{R} \underline{\Delta}(x) \, dB_H(x) \right)^2\right\} . \label{eq:radnikderiv}\end{aligned}$$ Indeed to show is valid, for $\underline{\Delta} \in \mathcal H$ and $\psi \in \mathcal H$, use and apply @Biagini-et-al-2008 [Lemma 3.2.1] with the change of measure formula in to yield, $$\mathbb{E}_{P_1} \left[ \psi(\mathring{B}_H(x)) \right]= \mathbb{E}_{P_0} \left[\psi(\mathring{B}_H(x))\frac{d P_1}{d P_0} \right] = \mathbb{E}_{P_0} \Big[\psi(B_H(x))\Big].$$ So, using in , the Kullback-Leibler divergence between the two models can be evaluated, $$\mathcal K(P_0,P_1) {\mathrel{\mathop:}=}\mathbb{E} \ln \frac{d P_0}{d P_1} = \frac{1}{2 } { \langle \underline{\Delta} , \underline{\Delta} \rangle }_{\widetilde{\mathcal H}}.\label{logstochasticexp}$$ To evaluate , obtain a finer bound on $|\widetilde{\underline{\Delta}}(\omega)|^2$ by recalling that $\Delta = v - v_N$ and using , $$|\widetilde{\underline{\Delta}}(\omega)|^2 \asymp \varepsilon^{-2\alpha} |\widetilde{v}(\omega)|^2 \mathbbm{1}_{\left\{ |\omega| \geq N \right\}} |\omega|^{2 - 2\alpha} \leq \frac{C^2a^2\delta^2}{4\pi^2} \varepsilon^{-2\alpha} |\omega|^{-2\alpha} \mathbbm{1}_{\left\{ |\omega| \geq N \right\}}. \label{eq:DeltaModulus}$$ Apply the bound in to with the chosen $N = \left( s\pi C/(a \delta) \right)^{1/s}$, $$\begin{aligned}
\mathcal{K}(P_0,P_1) &= \frac{1}{2C_{F,H}^2} \int_\mathbb{R} |\widetilde{\underline{\Delta}}(\omega)|^2|\omega|^{-(1-\alpha)}\, d\omega\\
&\leq \frac{ Ca^2 \delta^2}{4 \pi^2} \varepsilon^{-2\alpha} \int_{\left|\omega\right| \geq N } \left|\omega\right|^{-\alpha-1}\, d\omega\\
&= Ca^2 \delta^2\varepsilon^{-2\alpha} \left( s /(a \delta) \right)^{-\alpha/s}\\
&\asymp \delta^{(2s+\alpha)/s} \varepsilon^{-2\alpha} .\end{aligned}$$ Now choose $\delta \asymp \varepsilon^{2\alpha s/(2s + \alpha)}$ which guarantees that $\mathcal{K}(P_0,P_1) \leq K < \infty$ for some finite positive constant $K$. Then by @Tsybakov-2009 [Theorem 2.2 (iii)] combined with the fact that $\varepsilon \asymp n^{-1/2}$ it follows that the lower rate of convergence for the minimax risk is $\varepsilon^{2\alpha s/(2s+\alpha)}\asymp n^{- \alpha s/(2s+\alpha)}$. $\Box$
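The exponent bookkeeping in this last step can be checked exactly with rational arithmetic (the specific $\alpha$ and $s$ values below are arbitrary):

```python
from fractions import Fraction

for alpha in (Fraction(1, 4), Fraction(1, 2), Fraction(3, 4)):
    for s in (2, 3, 5):
        d = 2*alpha*s/(2*s + alpha)            # delta ~ eps^d
        # exponent of eps in K(P0,P1) ~ delta^{(2s+alpha)/s} * eps^{-2 alpha} is zero,
        # so the KL divergence stays bounded
        assert d*(2*s + alpha)/s - 2*alpha == 0
        # eps ~ n^{-1/2} turns delta into the claimed rate n^{-alpha s/(2s+alpha)}
        assert d/2 == alpha*s/(2*s + alpha)
```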
Acknowledgements {#acknowledgements .unnumbered}
================
The author would like to thank the editor and an anonymous referee for their comments and suggestions which led to an improved version of this paper.
Beran, J., 1994. Statistics for long-memory processes. Vol. 61 of Monographs on Statistics and Applied Probability. Chapman and Hall, New York.
Biagini, F., Hu, Y., [Ø]{}ksendal, B., Zhang, T., 2008. Stochastic Calculus for Fractional [B]{}rownian Motion and Applications. Probability and its Applications (New York). Springer-Verlag London Ltd., London.
Cheng, M.-Y., Raimondo, M., 2008. Kernel methods for optimal change-points estimation in derivatives. J. Comput. Graph. Statist. 17 (1), 56–75. <http://dx.doi.org/10.1198/106186008X289164>
Doukhan, P., Oppenheim, G., Taqqu, M. S. (Eds.), 2003. Theory and applications of long-range dependence. Birkhäuser Boston Inc., Boston, MA.
Goldenshluger, A., Juditsky, A., Tsybakov, A. B., Zeevi, A., 2008. Change-point estimation from indirect observations. [I]{}. [M]{}inimax complexity. Ann. Inst. Henri Poincaré Probab. Stat. 44 (5), 787–818. <http://dx.doi.org/10.1214/07-AIHP110>
Goldenshluger, A., Juditsky, A., Tsybakov, A., Zeevi, A., 2008. Change-point estimation from indirect observations. [II]{}. [A]{}daptation. Ann. Inst. Henri Poincaré Probab. Stat. 44 (5), 819–836. <http://dx.doi.org/10.1214/07-AIHP144>
Goldenshluger, A., Tsybakov, A., Zeevi, A., 2006. Optimal change-point estimation from indirect observations. Ann. Statist. 34 (1), 350–372. <http://dx.doi.org/10.1214/009053605000000750>
Johnstone, I. M., Silverman, B. W., 1997. Wavelet threshold estimators for data with correlated noise. J. Roy. Statist. Soc. Ser. B 59 (2), 319–351. <http://dx.doi.org/10.1111/1467-9868.00071>
Mandelbrot, B. B., Van Ness, J. W., 1968. Fractional [B]{}rownian motions, fractional noises and applications. SIAM Rev. 10, 422–437. <http://dx.doi.org/10.1137/1010093>
Pipiras, V., Taqqu, M. S., 2000. Integration questions related to fractional [B]{}rownian motion. Probab. Theory Related Fields 118 (2), 251–291. <http://dx.doi.org/10.1007/s440-000-8016-7>
Samorodnitsky, G., Taqqu, M. S., 1994. Stable non-[G]{}aussian random processes: stochastic models with infinite variance. Stochastic Modeling. Chapman & Hall, New York.
Tsybakov, A. B., 2009. Introduction to Nonparametric Estimation. Springer Publishing Company, Incorporated.
Wishart, J., 2009. [Kink estimation with correlated noise]{}. Journal of the Korean Statistical Society 38 (2), 131–143. <http://dx.doi.org/10.1016/j.jkss.2008.08.001>
---
date: |
R. AVALOS-ZUÑIGA[^1] $\dagger$, M. XU$\dagger$, F. STEFANI$\dagger$, G. GERBETH$\dagger$\
and F. PLUNIAN$\ddagger$
title: '**Cylindrical anisotropic $\alpha^{2}$ dynamos**'
---
Authors:
:
RAUL ALEJANDRO AVALOS-ZUÑIGA
Universidad Autónoma Metropolitana-Iztapalapa. Av. San Rafael Atlixco 186, col. Vicentina, 09340 D.F. México. Tel.: +52 55 5804 4648 ext. 238. Fax: +52 55 5804 4900. E-mail: raaz@xanum.uam.mx.
MINGTIAN XU
Forschungszentrum Dresden-Rossendorf, P.O. Box 510119, 01314 Dresden, Germany. Tel.: +49 351 260 2227. Fax: +49 351 260 2007. E-mail: M.Xu@fzd.de
FRANK STEFANI
Forschungszentrum Dresden-Rossendorf, P.O. Box 510119, 01314 Dresden, Germany. Tel.: +49 351 260 3069. Fax: +49 351 260 2007. E-mail: F.Stefani@fzd.de
GUNTER GERBETH
Forschungszentrum Dresden-Rossendorf, P.O. Box 510119, 01314 Dresden, Germany. Tel.: +49 351 260 2168. Fax: +49 351 260 2007. E-mail: G.Gerbeth@fzd.de
FRANCK PLUNIAN
Laboratoire de Géophysique Interne et de Tectonophysique, BP 53, 38041 Grenoble Cedex 9, France. Tel.: +33 4 76 82 80 37. Fax: +33 4 76 82 81 01. E-mail: Franck.Plunian@ujf-grenoble.fr
$\dagger$[Forschungszentrum Dresden-Rossendorf, P.O. Box 510119, 01314 Dresden, Germany]{}\
$\ddagger$[Lab. de Géophysique Interne et de Tectonophysique, BP 53, 38041 Grenoble Cedex 9, France]{}\
(Received 30 November 2006; in final form 15 June 2007)
[We explore the influence of geometry variations on the structure and the time-dependence of the magnetic field that is induced by kinematic $\alpha^{2}$ dynamos in a finite cylinder. The dynamo action is due to an anisotropic $\alpha$ effect which can be derived from an underlying columnar flow. The investigated geometry variations concern, in particular, the aspect ratio of height to radius of the cylinder, and the thickness of the annular space to which the columnar flow is restricted. Motivated by the quest for laboratory dynamos which exhibit Earth-like features, we start with modifications of the Karlsruhe dynamo facility. Its dynamo action is reasonably described by an $\alpha^{2}$ mechanism with anisotropic $\alpha$ tensor. We find a critical aspect ratio below which the dominant magnetic field structure changes from an equatorial dipole to an axial dipole. Similar results are found for $\alpha^{2}$ dynamos working in an annular space when a radial dependence of $\alpha$ is assumed. Finally, we study the effect of varying aspect ratios of dynamos with an $\alpha$ tensor depending both on radial and axial coordinates. In this case only dominant equatorial dipoles are found and most of the solutions are oscillatory, contrary to all previous cases where the resulting fields are steady.]{}
*Keywords:* [Dynamo; $\alpha$ effect; magnetic field orientation]{}
Introduction
============
It is generally assumed that columnar flows in the Earth’s outer core play an essential role for the generation of the geomagnetic field. At the surface of the Earth, the magnetic field has an almost axial dipole (AD) structure, closely aligned with the Earth’s rotation axis. Direct numerical simulations of the geodynamo have successfully reproduced many observed features like, e.g., the dominance of the axial dipole and the occurrence of reversals (e.g. Olson *et al.* 1999, Ishihara and Kida 2002, Aubert and Wicht 2004, Wicht and Olson 2004 and references therein). The poloidal part of the field is thought to be produced from the toroidal part by the $\alpha$-effect generated by the columnar flows, while the toroidal component of the Earth’s magnetic field is associated with the $\Omega$-effect, but also again with an $\alpha$-effect or even with both mechanisms together. These types of magnetic field generation are usually referred to as $\alpha\Omega$, $\alpha^{2}$ and $\alpha^{2}\Omega$ dynamos, respectively.
It was one of the motivations of the Karlsruhe dynamo experiment to study an Earth-like magnetic field generation process in the laboratory (Gailitis 1967, Busse 1975, Stieglitz and Müller 2001). However, in contrast to the axial dipole (AD) of the Earth, the eigenfield structure of the Karlsruhe dynamo is an equatorial dipole (ED), as had been predicted in terms of the mean-field theory with an anisotropic $\alpha$ effect (Rädler *et al.* 1998). Actually, a general tendency of anisotropic $\alpha^{2}$ dynamos to produce fields with dominant equatorial dipole structure has been known for a long time (Rädler 1975, Rädler 1980, Rüdiger 1980, Rüdiger and Elstner 1994).
It is also well known that a transition from equatorial to axial dipoles can occur if some differential rotation is added (Rädler 1986, Gubbins *et al.* 2000). However, an axial field orientation can also result from $\alpha^{2}$ dynamos if the magnetic diffusion is enhanced by small scales of the flow (Tilgner 2004).
Besides the axial and equatorial dipole, the quadrupole structure also seems to play a certain role in geodynamo models. In many kinematic models one finds a quasi-degeneration with the dipole field (Gubbins *et al.* 2000). This degeneration is also responsible for the appearance of hemispherical dynamos in dynamically coupled models (Grote and Busse 2000). In this case, both quadrupolar and dipolar components contribute nearly equal magnetic energy so that their contributions cancel in one hemisphere and add to each other in the opposite hemisphere. The interplay between the nearly degenerated (Gubbins *et al.* 2000) axial dipole, equatorial dipole, and quadrupole was used in various models to explain the reversal phenomenon of the geodynamo (Melbourne *et al.* 2001).
With the same focus on field reversals, the importance of transitions between steady and oscillatory solutions of kinematic dynamos has been highlighted by several authors (Weisshaar 1982, Yoshimura *et al.* 1984, Sarson and Jones 1999, Phillips 1993, Rüdiger *et al.* 2003). In an extremely reduced reversal model dealing only with the axial dipole it was shown that many features of reversals (typical time scales, asymmetry between slow dipole decay and fast recovery, bimodal field distribution) can be understood by the magnetic field dynamics in the vicinity of transition points between steady and oscillatory solutions (Stefani and Gerbeth 2005, Stefani *et al.* 2006a, Stefani *et al.* 2006b). The main ingredient of this reversal model, as well as of the reversal model of Giesecke et al. (Giesecke *et al.* 2005a), is a sign change of $\alpha$ along the radius which brings into play a coupling between the first two radial eigenfunctions of the axial dipole field. It should be noticed that such a sign change results indeed from simulations of magneto-convection (Giesecke *et al.* 2005b).
With this background, we investigate in the present paper various kinematic dynamo models within cylindrical geometry. Our focus will lie first on the dominant field structure: equatorial (ED) or axial dipoles (AD) or even quadrupoles (Q), and second on the occurrence of oscillatory solutions. The cylindrical geometry, which might seem awkward from the purely geodynamo perspective, is quite natural from an experimentalist’s viewpoint. One could ask, e.g., how the geometry and the arrangement of spin-generators in the Karlsruhe dynamo could be modified in order to make its eigenfield prone to reversals.
After presenting the general framework, we will explore geometrical effects that could lead to dominant AD fields in cylindrical anisotropic $\alpha^{2}$ dynamos. The utilised numerical code, which is based on the integral equation approach to kinematic dynamos (Stefani *et al.* 2000, Xu *et al.* 2004a, Xu *et al.* 2004b, Xu *et al.* 2006), was already used for the simulation of various cylindrical dynamos, including the VKS dynamo experiment in Cadarache (Stefani *et al.* 2006c).
The geometrical variations which are actually considered are the aspect ratio of height to radius of the cylinder and the width of the annular space to which the dynamo source is restricted. First we consider the geometry of the Karlsruhe dynamo experiment. Its steady dynamo field is generated by a bundle of axially invariant helical columns, which is well described within mean-field theory as an $\alpha^{2}$ dynamo with anisotropic $\alpha$-effect. We find that for this experiment a dominant AD field could be achieved below a critical value of the aspect ratio which is not so far from the one of the real facility. In a next step, we explore more complex structures of $\alpha$ which have been derived from a flow described by axially invariant helical columns which are restricted to an annular space. The resulting $\alpha$ coefficients acquire a radial profile which depends on the flow structure. As for the modified Karlsruhe case, dynamo solutions show dominant steady AD fields below a critical value of the aspect ratio. In contrast to this, the reduction of the thickness of the annular space does not lead to a transition from non-axisymmetric to axisymmetric modes, although the critical dynamo numbers for both modes seem to converge. Finally, we have considered axial-radial dependence of $\alpha$. The dynamo action works in a fixed annular space and again the aspect ratio of height to radius of cylinder is the varying geometrical parameter. In this case, non-axisymmetric oscillatory fields are the dominant solutions.
The general concept
===================
We consider an incompressible steadily moving fluid with velocity ${\mathrm{\mathbf{u}}}$, which is confined to a cylinder and surrounded by vacuum. The fluid has homogeneous electrical conductivity $\sigma$ and magnetic permeability $\mu$. The fluid motion induces a magnetic field ${\mathrm{\mathbf{B}}}$ which extends over the whole space. The magnetic field is governed by the induction equation$$\eta{\nabla}^{2}\,{\mathrm{\mathbf{B}}}+{\nabla}\times({\mathrm{\mathbf{u}}}\times{\mathrm{\mathbf{B}}})-\partial_{t}\,{\mathrm{\mathbf{B}}}=\mathbf{0}\,,\quad{\nabla}\cdot{\mathrm{\mathbf{B}}}=0\,,\label{eq:induction}$$ where $\eta$ is the magnetic diffusivity defined by $\eta=1/\mu\sigma$. In the mean field approach, each quantity is decomposed into a mean part (denoted by an overline) and a fluctuating part (denoted by a prime). Referring to a cylindrical coordinate system $(s,\varphi,z)$, we define mean fields by averaging over $\varphi$.
As we are only interested in the induction effects originated by the fluctuating part ${\mathrm{\mathbf{u}}}'$ we assume that the mean motion ${\overline{\mathrm{\mathbf{u}}}}$ is equal to zero. In this case, the mean part of the induction equation (1) reduces to $$\eta{\nabla}^{2}\,{\overline{\mathrm{\mathbf{B}}}}+{\nabla}\times{{{\mbox{\boldmath $\cal{E}$}}}}-\partial_{t}\,{\overline{\mathrm{\mathbf{B}}}}=\mathbf{0}\,,\quad{\nabla}\cdot{\overline{\mathrm{\mathbf{B}}}}=0\,,\label{eq:mean-ind}$$ where ${{{\mbox{\boldmath $\cal{E}$}}}}=\overline{{\mathrm{\mathbf{u}}}'\times{\mathrm{\mathbf{B}}}'}$ is the mean electromotive force (e.m.f.) which is the source of generation of the large scale magnetic field ${\overline{\mathrm{\mathbf{B}}}}$. This e.m.f. results from the interaction of motion and magnetic field at small scales.
Representations of ${{{\mbox{\boldmath $\cal{E}$}}}}$
-----------------------------------------------------
We consider different forms of ${{{\mbox{\boldmath $\cal{E}$}}}}$ which are generated by flows organised in columnar vortices parallel to the vertical axis of the cylinder. In the following, only the $\alpha$ effect that results from these columnar structures is considered as the main contribution to the generation of ${{{\mbox{\boldmath $\cal{E}$}}}}$; all other effects are neglected. In a strict sense, such a reduction of ${{{\mbox{\boldmath $\cal{E}$}}}}$ to an $\alpha$-effect term is only possible if the spatial variations of ${\overline{\mathrm{\mathbf{B}}}}$ are sufficiently weak.
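The anisotropic cylindrical problems of this paper require the numerical code cited in the introduction, but the essence of the $\alpha^{2}$ mechanism can be illustrated with the textbook case of a homogeneous, isotropic $\alpha$ in unbounded space: inserting $\boldsymbol{\mathcal{E}} = \alpha\overline{\mathbf{B}}$ and a force-free mode with $\nabla\times\overline{\mathbf{B}} = k\overline{\mathbf{B}}$ into the mean induction equation gives the growth rate $\gamma = \alpha k - \eta k^{2}$. A minimal sketch, with arbitrary parameter values:

```python
import numpy as np

def growth_rate(alpha, eta, k):
    # gamma = alpha*k - eta*k^2 for a force-free mode with curl B = k B
    return alpha*k - eta*k**2

alpha, eta = 1.0, 0.5
k = np.linspace(0.01, 3.0, 300)
gamma = growth_rate(alpha, eta, k)
k_opt = alpha/(2.0*eta)                        # fastest growing wavenumber
assert np.isclose(growth_rate(alpha, eta, k_opt), alpha**2/(4.0*eta))
assert gamma.max() <= alpha**2/(4.0*eta) + 1e-12
```

Growth requires $k < \alpha/\eta$, so in a bounded container the geometry selects which modes can become supercritical, which is the effect studied below.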
We consider first the mean e.m.f ${{{\mbox{\boldmath $\cal{E}$}}}}$ produced by the flow in the Karlsruhe dynamo experiment (Rädler *et al.* 1998 ). Its most simplified analytical representation is given by $$\begin{array}{ccc}
{{{\mbox{\boldmath $\cal{E}$}}}}& = & -\alpha\left(\,{\overline{\mathrm{\mathbf{B}}}}-(\mathrm{\mathbf{e}}_{z}\cdot{\overline{\mathrm{\mathbf{B}}}}\,)\mathrm{\mathbf{e}}_{z}\right),\end{array}\label{eq:efm-Karlsruhe}$$ where $\alpha$ is constant in the cylindrical volume and $\mathrm{\mathbf{e}}_{z}$ is the unit vector in the axial direction. We point out the anisotropy of the $\alpha$-effect as represented in (\[eq:efm-Karlsruhe\]).
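A direct implementation makes the anisotropy explicit: the e.m.f. acts only on the horizontal part of the mean field and leaves its axial component untouched (the numerical values below are placeholders):

```python
import numpy as np

def emf_karlsruhe(alpha, B):
    # E = -alpha*(B - (e_z . B) e_z): only the horizontal components of B contribute
    ez = np.array([0.0, 0.0, 1.0])
    return -alpha*(B - np.dot(ez, B)*ez)

E = emf_karlsruhe(0.5, np.array([1.0, 2.0, 3.0]))
```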
The next considered example of ${{{\mbox{\boldmath $\cal{E}$}}}}$ results from an axially invariant flow organised in columnar vortices equally distributed in an annular region. Similar flow structures were recently discussed in the context of quasi-geostrophic dynamos (Schaeffer and Cardin 2006). A detailed description of such ”rings of rolls” and the $\alpha$ tensor resulting from them has been derived by Avalos et al. (2007) and is given in Appendix A. The mean e.m.f. ${{{\mbox{\boldmath $\cal{E}$}}}}$ produced by such a flow is given, under the assumptions mentioned in Appendices A and B, by $${\mathcal{{E}}}_{\kappa}=\alpha_{\kappa\lambda}(s)\,{\overline{B}}_{\lambda},\label{eq:fem-zind}$$ with the subscripts $\kappa$ and $\lambda$ standing for $s$, $\varphi$, or $z$.
For some flow configurations, it has been shown (Avalos *et al.* 2007) that the resulting matrix $\alpha_{\kappa\lambda}$ is of the form $$\alpha_{\kappa\lambda}=\left\{ \begin{array}{ccc}
\alpha_{ss}(s) & 0 & 0\\
0 & \alpha_{\varphi\varphi}(s) & 0\\
0 & \alpha_{z\varphi}(s) & 0\end{array}\right\} .\label{eq:matrix-alpha}$$
We have also considered an additional axial dependence of the components in (\[eq:matrix-alpha\]) multiplying them by harmonic functions of $z$ that vanish at the top and the bottom of the cylinder. This was motivated by the fact that for rolls in real rotating bodies a North-South antisymmetry of the axial velocity is expected, while the horizontal velocity components are expected to be symmetric with respect to the equator. Admittedly, the correct treatment of this problem would require a new derivation of the $\alpha$ matrix for such rolls along the lines outlined in the appendices. As a sort of compromise we focus here only on the general symmetry properties of the elements of the $\alpha$ matrix. Since $\alpha_{ss}$ and $\alpha_{\varphi\varphi}$ depend on products of axial and horizontal velocity components, we expect an antisymmetric behaviour. On the other hand, $\alpha_{z\varphi}$ should remain North-South symmetric since it depends on horizontal velocity components only.
The cylinder is assumed to extend over the axial interval $-H/2\leq z\leq H/2$. If we assume $u_{s}$ and $u_{\varphi}$ to be proportional to $\cos(\pi z/H)$ and $u_{z}$ to be proportional to $\sin(2\pi z/H)$, then the new $\alpha$ matrix is given by $$\alpha_{\kappa\lambda}=\left\{ \begin{array}{ccc}
\alpha_{ss}(s)\cos(\pi z/H)\sin(2\pi z/H) & 0 & 0\\
0 & \alpha_{\varphi\varphi}(s)\cos(\pi z/H)\sin(2\pi z/H) & 0\\
0 & \alpha_{z\varphi}(s)\cos^{2}(\pi z/H) & 0\end{array}\right\} .\label{eq:matrix-alpha-z}$$ Evidently, the resulting diagonal elements of $\alpha_{\kappa\lambda}$ are anti-symmetric with respect to $z=0$, whereas the non-diagonal element is symmetric with respect to $z=0$.
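As a quick numerical sanity check (a sketch added here, not part of the original derivation), the stated symmetry and boundary properties of the axial profiles in (\[eq:matrix-alpha-z\]) can be verified directly; the height $H$ is set to an arbitrary value:

```python
import numpy as np

H = 1.0  # cylinder height (arbitrary value for this check)

def f_diag(z):
    # axial profile multiplying the diagonal elements alpha_ss, alpha_phiphi
    return np.cos(np.pi * z / H) * np.sin(2 * np.pi * z / H)

def f_off(z):
    # axial profile multiplying the non-diagonal element alpha_zphi
    return np.cos(np.pi * z / H) ** 2

z = np.linspace(-H / 2, H / 2, 101)
assert np.allclose(f_diag(-z), -f_diag(z))  # antisymmetric about z = 0
assert np.allclose(f_off(-z), f_off(z))     # symmetric about z = 0
assert abs(f_diag(H / 2)) < 1e-12           # vanishes at top and bottom
assert abs(f_off(H / 2)) < 1e-12
```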
Though the representation of ${{{\mbox{\boldmath $\cal{E}$}}}}$ defined above was derived for an infinitely extended conducting fluid, we assume that it also applies to a finite cylinder. This approximation has been used successfully, e.g. in Rädler *et al.* (1998, 2002), to solve the Karlsruhe dynamo problem in a finite cylinder. In that case the symmetry of the most easily excited magnetic field mode was found to be independent of the conductivity outside the cylinder, whereas the other properties of this mode do depend on the conductivity of the outer space.
Dynamo solutions
================
Once we have defined different representations of ${{{\mbox{\boldmath $\cal{E}$}}}}$, we solve the mean-field dynamo problem in a finite cylinder enclosed by vacuum using a numerical code based on the integral equation approach (Stefani *et al.* 2000, Xu *et al.* 2004a, Xu *et al.* 2004b, Xu *et al.* 2006). The magnetic field is determined by a self-consistent solution of the Biot-Savart equation together with a surface integral equation for the electric potential at the vacuum boundaries. For time-dependent solutions, the model has to be completed with an integral equation for the magnetic vector potential. All field quantities are expanded in harmonic modes ($\sim\exp\left({\textrm{i}}\, m\varphi\right)$) in the azimuthal direction and vary in time $t$ according to $\exp\left(pt\right)$ with a constant $p$ that is, in general, complex. There are then two ways to solve the integral equation system. For steady eigenfields (i.e. marginal eigenfields which are non-oscillatory) it is treated as an eigenvalue equation for the critical value of $\alpha$. For unsteady eigenfields (including marginal eigenfields which are oscillatory) the integral equation system is treated as an eigenvalue problem in $p$: dynamo solutions corresponding to exponentially growing magnetic fields are characterised by a positive real part of $p$. More details about this numerical approach are given in Appendix C.
We stress that the $\alpha$ effect has been determined under the assumption of an axisymmetric mean magnetic field with $m=0$. Using the same $\alpha$ effect for other $m$ modes is an approximation which is valid only if $m\ll n$, where $n$ is the number of pairs of rolls. In that case the azimuthal variation of ${\overline{\mathrm{\mathbf{B}}}}$ is weak compared to that of ${\mathrm{\mathbf{u}}}$.
Karlsruhe geometry
------------------
It is well known that the main generation mechanism of the Karlsruhe dynamo experiment is an $\alpha$ effect which maintains, in the marginal case, a steady equatorial dipole (ED) field, i.e. a mode with $m=1$. For numerical studies, a simplified geometry has been assumed in the form of a finite cylinder with height $H$ and radius $R$. We use this simplified geometry and the ${{{\mbox{\boldmath $\cal{E}$}}}}$ given by (\[eq:efm-Karlsruhe\]) to compute dynamo solutions for different ratios $H/R$. In figure 1 we show the threshold $C_{\alpha}^{c}$ of the dynamo number $C_{\alpha}=\mu\sigma R\alpha$ corresponding to $\Re\{ p\}=0$. This is done for the two leading axisymmetric modes with $m=0$, i.e. for the axial dipole (AD) and the quadrupole (Q), as well as for the first non-axisymmetric mode ($m=1$), which represents an equatorial dipole (ED). All these modes are steady at the marginal point.
We have found a critical aspect ratio $H/R=0.75$ which distinguishes between dominant ED and AD fields: above this critical value ED fields are dominant, while below it AD fields are dominant. This critical aspect ratio of 0.75 is not very far from the experimental one, which is 0.83.
Ring of rolls
-------------
In the following, we investigate anisotropic $\alpha^{2}$ dynamos in an annular space defined by a gap width of $2\delta R$ with $\delta<1$. Note that in the following $R$ refers to the radius in the middle of the gap, and not to the outer radius. We consider both the $z$-independent case with $\alpha$ given by (\[eq:matrix-alpha\]) and the $z$-dependent case with $\alpha$ given by (\[eq:matrix-alpha-z\]). In each case we have considered two types of flow distinguished by the radial dependence of their vertical velocities as defined in Appendix A.2. We have called them FW1 and FW2. We computed the critical value $C_{\alpha}^{c}$ of the dynamo number $C_{\alpha}=\mu\sigma R\tilde{\alpha}$ where $\tilde{\alpha}$ stands for $\sqrt{\left\langle \alpha_{\varphi\varphi}^{2}\right\rangle }$ with $\left\langle \cdot\cdot\cdot\right\rangle $ understood as an average over $s$. According to Avalos *et al.* (2007) the relation between $\tilde{\alpha}$ and the real velocity of the flow is given, under the first order smoothing approximation, by $\tilde{\alpha}\approx(\eta\delta/R)R_{m\perp}R_{m\parallel}$. The quantities $R_{m\perp}=u_{0\perp}R/\eta$ and $R_{m\parallel}=u_{0\parallel}R/\eta$ are the magnetic Reynolds numbers expressed in terms of the characteristic velocities in the horizontal (i.e. perpendicular to the $z$-axis) and in the axial (i.e. parallel to the $z$-axis) direction, respectively.
### $z$-independent case
In figure 2 the dynamo threshold is plotted in dependence on $H/R$ for both flows FW1 and FW2 for $\delta=0.5$ and $n=4$. Quite similar to the Karlsruhe dynamo case, a critical aspect ratio $H/R$ is also found here for both flows, which distinguishes between dominant ED and AD fields. Another critical value of $H/R$ is found where the second (i.e. subdominant) eigenmode is switching between AD and Q.
For a given ratio $H/R$ we found that the dynamo threshold increases monotonically when the magnitudes of $\alpha_{ss}$ and $\alpha_{\varphi\varphi}$ are reduced while $\alpha_{z\varphi}$ is kept unchanged. This is related to the impossibility of dynamo action with a $z$-independent horizontal flow alone.
In figure 3, the rescaled dynamo threshold $\delta C_{\alpha}^{c}$ is plotted in dependence on $\delta$ for both flows FW1 and FW2. We introduce here another distinction between the case of “free rolls” (for which the number of pairs of rolls is kept equal to 4 independently of the value of $\delta$) and the case of “compact rolls” (for which the rolls have the same extension in the azimuthal and radial directions, so that the number of pairs of rolls scales like $n=\pi/(2\delta)$). In neither case was there any indication of a critical value of $\delta$ below which the dominant $m=1$ mode is clearly replaced by a dominant $m=0$ mode. However, for small values of $\delta$, the values of $\delta C_{\alpha}^{c}$ for the $m=0$ mode come very close to those of the $m=1$ mode.
In the case of free rolls it is remarkable that $\delta C_{\alpha}^{c}$ decreases with $\delta$ for FW1 and increases for FW2. The magnetic fields produced by FW1 and FW2 for $\delta=0.3$ indeed have different symmetries. This is illustrated in figures 4 and 5, where poloidal vectors and azimuthal contours of the magnetic field are plotted. For $\delta=0.9$, on the other hand, the symmetries are similar.
[Figure 4: poloidal vectors and azimuthal contours of the magnetic field; rows $\delta=0.9$ and $\delta=0.3$, columns $m=0$ and $m=1$.]
[Figure 5: poloidal vectors and azimuthal contours of the magnetic field; rows $\delta=0.9$ and $\delta=0.3$, columns $m=0$ and $m=1$.]
### $z$-dependent case
In figure 6, $C_{\alpha}^{c}$ is plotted in dependence on $H/R$ in the $z$-dependent case for $\delta=0.5$ and $n=4$. We now find that oscillatory non-axisymmetric fields are always the most easily excitable solutions for both flow types FW1 and FW2. However, the axisymmetric solutions come closer to the non-axisymmetric ones as $H/R$ is reduced. For the FW1 flow a transition between steady and oscillatory magnetic fields is observed for the mode $m=0$ at a certain value $H/R\sim1$ (the precise value could not be determined since the numerical solution of the problem is quite time-consuming). In all cases only non-dipolar fields were found.
Conclusions
===========
We have explored the influence of geometrical parameters on the spatial structure and temporal variations of magnetic fields generated by kinematic anisotropic $\alpha^{2}$ dynamos working in a finite cylinder. The $\alpha$ coefficients were calculated for specific flow patterns, following the mean-field concept, and the corresponding dynamo solutions were computed using the integral equation approach. The results show that this kind of dynamo can switch from a dominant equatorial dipole to a dominant axial dipole simply by reducing the aspect ratio of the cylinder. This transition occurs for quite different forms of $\alpha$: constant, as in the Karlsruhe dynamo experiment, or purely radially dependent, as obtained for a flow described by axially invariant helical columns. On the other hand, such a transition does not occur when the relative gap width $\delta$ is reduced (at least not for the considered aspect ratio). When $\alpha$ has an additional axial dependence, the dominant dynamo solutions are exclusively oscillatory $m=1$ modes. In addition, for the $m=0$ mode both steady and oscillatory solutions were obtained.
Acknowledgments {#acknowledgments .unnumbered}
===============
This work was supported by Deutsche Forschungsgemeinschaft in the framework of SFB 609 and under Grant No. GE 682/14-1. We are grateful to Karl-Heinz Rädler for many valuable comments on the paper.
Appendix A: Specification of the velocity field {#appendix-a-specification-of-the-velocity-field .unnumbered}
===============================================
A.1 General assumptions {#a.1-general-assumptions .unnumbered}
-----------------------
We specify the motion ${\mathrm{\mathbf{u}}}$ of an incompressible conducting fluid so that it corresponds to a ring of columnar vortices. The ring is confined to the interval $1-\delta\leq s/R\leq1+\delta$ with $\delta<1$; outside this interval the fluid is assumed to be at rest. It is assumed that ${\mathrm{\mathbf{u}}}$ is steady, $z$-independent and varies with $\varphi$ like $\exp({\textrm{i}}\, n\varphi)$, where $n$ is the number of vortex pairs. We use the representation
$$\begin{aligned}
{\mathrm{\mathbf{u}}}& = & -{\nabla}\times(\mathrm{\mathbf{e}}_{z}\times{\nabla}\Phi)-\mathrm{\mathbf{e}}_{z}\times{\nabla}\Psi,\\
\qquad\qquad\qquad\qquad\Phi & = & u_{0\parallel}R^{2}\phi(s)\cos(n\varphi)\,,\quad\Psi=u_{0\perp}R\,\psi(s)\cos(n\varphi).\qquad\;\;{\rm (A.1)}\end{aligned}$$
The two terms on the right-hand side of ${\mathrm{\mathbf{u}}}$ correspond to the vertical (poloidal) and horizontal (toroidal) parts of the velocity. The constant quantities $u_{0\perp}$ and $u_{0\parallel}$ define the intensity of the considered flow. We further express ${\mathrm{\mathbf{u}}}$ by $$\qquad\qquad u_{s}=\hat{u}_{s}(s)\sin(n\varphi),\quad u_{\varphi}=\hat{u}_{\varphi}(s)\cos(n\varphi),\quad u_{z}=\hat{u}_{z}(s)\cos(n\varphi).\qquad\;{\rm (A.2)}$$ The connection between (A.1) and (A.2) is given by $$\qquad\qquad\hat{u}_{s}=-u_{0\perp}R\,\frac{n}{s}\psi\,,\quad\hat{u}_{\varphi}=-u_{0\perp}R\,\frac{\partial\psi}{\partial s}\,,\quad\hat{u}_{z}=-u_{0\parallel}R^{2}D_{n}\phi\,,\qquad\qquad\,\mathrm{(A.3)}$$ where $D_{n}\phi=s^{-1}\partial_{s}(s\,\partial_{s}\,\phi)-(n/s)^{2}\phi.$
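As a consistency check (a sketch added here, not part of the original text), symbolic algebra confirms that the horizontal velocity components following from (A.2) and (A.3) are divergence-free; since $u_z$ is $z$-independent, only the $s$ and $\varphi$ terms of the cylindrical divergence contribute:

```python
import sympy as sp

s, phi, R, u0 = sp.symbols('s varphi R u_0', positive=True)
n = sp.Symbol('n', positive=True, integer=True)
psi = sp.Function('psi')(s)

# horizontal velocity components built from (A.2) and (A.3)
u_s = -u0 * R * (n / s) * psi * sp.sin(n * phi)
u_phi = -u0 * R * sp.diff(psi, s) * sp.cos(n * phi)

# cylindrical divergence; the d(u_z)/dz term vanishes identically
div = sp.diff(s * u_s, s) / s + sp.diff(u_phi, phi) / s
assert sp.simplify(div) == 0
```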
A.2 Specific examples {#a.2-specific-examples .unnumbered}
---------------------
We consider two flows which differ only in the radial dependence of $u_{z}$. The first flow (FW1) is defined by $$\begin{aligned}
& & \psi=C_{\psi}\left(1-\xi^{2}\right)^{3},\quad\hat{u}_{z}/u_{0\parallel}=C_{z}\left(1-\xi^{2}\right)^{2},\quad\xi=\frac{\left(s/R\right)-1}{\delta}\,,\quad\mbox{if}\quad|\xi|<1\\
\qquad & & \phi=\psi=0\quad\mbox{otherwise}\,.\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\:\mathrm{(A.4)}\end{aligned}$$ The second one (FW2) by $$\begin{aligned}
& & \psi=C_{\psi}\left(1-\xi^{2}\right)^{3},\quad\phi=C_{\phi}\left(1-\xi^{2}\right)^{3},\quad\xi=\frac{\left(s/R\right)-1}{\delta}\,,\quad\mbox{if}\quad|\xi|<1\\
\qquad & & \phi=\psi=0\quad\mbox{otherwise}\,.\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\:\mathrm{(A.5)}\end{aligned}$$ The factors $C_{\psi}$, $C_{\phi}$ and $C_{z}$ were chosen such that the average of $u_{z}/u_{0\parallel}$ over a surface given by $1-\delta\leq s/R\leq1+\delta$, $-\pi/2n\leq\varphi\leq\pi/2n$ and $z/R=\mbox{constant}$, as well as the average of $u_{\varphi}/u_{0\perp}$ at $\varphi=0$ over $1\leq s/R\leq1+\delta$, are equal to unity,
$$C_{\psi}=\delta,\quad C_{z}=\frac{{15\pi}}{16},$$
$$\quad\quad C_{\phi}=\left.{\frac{15\pi\delta^{7}}{n^{2}}}\right/\left[2\delta\left(15-40\delta^{2}+33\delta^{4}\right)+15\left(1-\delta^{2}\right)^{3}\:\log\left(\frac{1-\delta}{1+\delta}\right)\right].\qquad\;\mathrm{(A.6)}$$
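The value of $C_z$ can be reproduced with a short symbolic computation; the average is taken here as an unweighted mean over the $(\xi,\varphi)$ patch, which is an assumption of this sketch:

```python
import sympy as sp

xi, phi = sp.symbols('xi varphi')
n = sp.Symbol('n', positive=True)

# u_z / u_{0||} = C_z * (1 - xi^2)^2 * cos(n*phi); require its mean to be 1
integral = sp.integrate((1 - xi**2)**2, (xi, -1, 1)) \
    * sp.integrate(sp.cos(n * phi), (phi, -sp.pi / (2 * n), sp.pi / (2 * n)))
mean_without_Cz = integral / (2 * (sp.pi / n))  # patch area: 2 * (pi / n)
C_z = sp.simplify(1 / mean_without_Cz)
assert sp.simplify(C_z - 15 * sp.pi / 16) == 0
```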
The flow definitions (A.4) and (A.5) ensure that $u_{s}$, $u_{\varphi}$ and $u_{z}$ are continuous and have continuous derivatives everywhere. In figure 7 we give an example of both flow geometries.
Appendix B: Determination of ${{{\mbox{\boldmath $\cal{E}$}}}}$ {#appendix-b-determination-of-mboxboldmath-cale .unnumbered}
===============================================================
We consider an electromotive force ${{{\mbox{\boldmath $\cal{E}$}}}}$ generated by a flow structured in helical columns between two concentric cylinders that was coined a “ring of rolls”. Assuming that ${\mathrm{\mathbf{u}}}$ and ${\mathrm{\mathbf{B}}}$ do not depend on $z$, we look for representations of ${{{\mbox{\boldmath $\cal{E}$}}}}$ in the general form $$\qquad\qquad\qquad\qquad\quad{\mathcal{{E}}}_{\kappa}(s)=\int_{0}^{\infty}K_{\kappa\lambda}(s,s')\,{\overline{B}}_{\lambda}(s')\, s'\,\dd s'\,,\qquad\qquad\qquad\qquad\quad\:{\rm (B.1)}$$ where $\kappa$ and $\lambda$ stand for $s$, $\varphi$ or $z$. Using a Taylor expansion of ${\overline{B}}_{\lambda}$, we write the last equation as $$\qquad\qquad\qquad\qquad\quad{\mathcal{{E}}}_{\kappa}(s)=\alpha_{\kappa\lambda}(s)\,{\overline{B}}_{\lambda}(s)+\beta_{\kappa\lambda s}(s)\,\frac{1}{R}\frac{\partial{\overline{B}}_{\lambda}}{\partial s}(s)+\cdots\qquad\qquad\;\;\quad\mathrm{(B.2)}$$
with $$\begin{aligned}
\alpha_{\kappa\lambda}(s) & = & \int_{0}^{\infty}K_{\kappa\lambda}(s,s')\, s'\,\dd s',\qquad\qquad\qquad\qquad\qquad\;\:\,\mathrm{(B.3)}\\
\qquad\qquad\qquad\qquad\quad\beta_{\kappa\lambda s}(s) & = & R\int_{0}^{\infty}K_{\kappa\lambda}(s,s')\,(s'-s)\, s'\,\dd s'.\qquad\qquad\qquad\;\;\mathrm{(B.4)}\end{aligned}$$
The first term on the r.h.s. of (B.2) represents the $\alpha$ effect; the second term represents the $\beta$ effect, which will be omitted throughout the paper. The kernel $K_{\kappa\lambda}(s,s')$ depends only on ${\mathrm{\mathbf{u}}}$. Under the first-order smoothing approximation (FOSA), and defining mean fields by $\varphi$ averaging, an analytical expression for $K_{\kappa\lambda}(s,s')$ was found in Avalos *et al.* (2007). The results are: $$\begin{aligned}
2K_{ss}(s,s') & = & -\frac{R^{2}}{\eta}\,\left(\frac{\partial h_{n}}{\partial s'}(s,s')\,\hat{u}_{\varphi}(s)\,\hat{u}_{z}(s')+\frac{\partial h_{n}}{\partial s}(s,s')\,\hat{u}_{z}(s)\,\hat{u}_{\varphi}(s')\right),\end{aligned}$$ $$\begin{aligned}
2K_{ss}(s,s') & = & -\frac{R^{2}}{\eta}\,\left(\frac{\partial h_{n}}{\partial s'}(s,s')\,\hat{u}_{\varphi}(s)\,\hat{u}_{z}(s')+\frac{\partial h_{n}}{\partial s}(s,s')\,\hat{u}_{z}(s)\,\hat{u}_{\varphi}(s')\right),\end{aligned}$$ $$\begin{aligned}
2K_{\varphi\varphi}(s,s') & = & \frac{R^{2}}{\eta}\, n\,\left(\frac{h_{n}(s,s')}{s}\,\hat{u}_{z}(s)\,\hat{u}_{s}(s')+\frac{h_{n}(s,s')}{s'}\,\hat{u}_{s}(s)\,\hat{u}_{z}(s')\right),\end{aligned}$$ $$\begin{aligned}
2K_{z\varphi}(s,s') & = & -\frac{R^{2}}{\eta}\,\left(\frac{\partial h_{n}}{\partial s}(s,s')\,\hat{u}_{s}(s)\,\hat{u}_{s}(s')+n\frac{h_{n}(s,s')}{s}\,\hat{u}_{\varphi}(s)\,\hat{u}_{s}(s')\right),\end{aligned}$$
$$\begin{aligned}
\qquad\qquad\qquad\qquad\quad K_{s\varphi} & = & K_{\varphi s}=K_{zs}=K_{zz}=0\,.\qquad\qquad\qquad\qquad\quad\;\;\quad{\rm (B.5)}\end{aligned}$$
The coefficients $K_{sz}$ and $K_{\varphi z}$ are not zero, but the integrals $\int_{0}^{\infty}K_{sz}(s,s')\, s'\,\dd s'$ and $\int_{0}^{\infty}K_{\varphi z}(s,s')\, s'\,\dd s'$ can be shown to vanish.
The Green’s function $h_{n}$ is defined by
$$\begin{aligned}
h_{n}(s,s')=\frac{1}{2n}\left(\frac{s'}{s}\right)^{2} & \mbox{for} & s'\leq s\\
\qquad\qquad\qquad\qquad\quad h_{n}(s,s')=\frac{1}{2n}\left(\frac{s}{s'}\right)^{2} & \mbox{for} & s\leq s'\,.\qquad\qquad\qquad\qquad\qquad\mathrm{(B.6)}\end{aligned}$$
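For numerical work, the Green’s function translates into a short routine; this sketch follows (B.6) literally:

```python
def h_n(s, s_prime, n):
    """Green's function h_n(s, s') of (B.6): symmetric in its arguments
    and continuous across s = s'."""
    ratio = s_prime / s if s_prime <= s else s / s_prime
    return ratio ** 2 / (2.0 * n)

assert h_n(1.0, 2.0, 4) == h_n(2.0, 1.0, 4)       # symmetry in (s, s')
assert abs(h_n(1.0, 1.0, 4) - 1.0 / 8.0) < 1e-15  # value 1/(2n) at s = s'
```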
As in Avalos *et al.* (2007), we can further represent $$\alpha_{\kappa\lambda}=\frac{\eta}{R}\, R_{m\perp}\left\{ \begin{array}{c}
R_{m\perp}\\
R_{m\parallel}\end{array}\right\} \widetilde{\alpha}_{\kappa\lambda}\quad\textrm{if}\quad(\kappa\lambda)=\left\{ \begin{array}{cc}
(z\varphi)\\
(ss), & (\varphi\varphi)\end{array},\right.\;\widetilde{\alpha}_{\kappa\lambda}=0\quad\textrm{otherwise.}$$ $$\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\quad\,\mathrm{(B.7)}$$ where $\widetilde{\alpha}_{\kappa\lambda}$ is a dimensionless quantity independent of the magnetic Reynolds numbers $R_{m\perp}=u_{0\perp}R/\eta$ and $R_{m\parallel}=u_{0\parallel}R/\eta$. In figure 8, the $s/R$ profiles of the three non-zero dimensionless $\widetilde{\alpha}_{\kappa\lambda}$ coefficients are shown for both flows FW1 and FW2.
Appendix C: Numerical approach {#appendix-c-numerical-approach .unnumbered}
==============================
The correct handling of the non-local boundary conditions for the magnetic field is a notorious problem for the simulation of dynamos in non-spherical domains. Here, the kinematic eigenvalue problem in finite cylinders is solved by the integral equation approach (Stefani *et al.* 2000, Xu *et al.* 2004a, Xu *et al.* 2004b, Xu *et al.* 2006). Basically, we use the following three integral equations: $$\begin{aligned}
{\mathrm{\mathbf{B}}}({\mathbf{r}}) & = & \frac{\mu\sigma}{4\pi}\int_{V}\frac{(\alpha\circ{\mathrm{\mathbf{B}}}({\mathbf{r}}'))\times{(\mathbf{r}-\mathbf{r}')}}{|{\mathbf{r}}-{\mathbf{r}}'|^{3}}{\rm d}V'-\frac{\mu\sigma p}{4\pi}\int_{V}\frac{{\mathbf{A}}({\mathbf{r}}')\times({\mathbf{r}}-{\mathbf{r}}')}{|{\mathbf{r}}-{\mathbf{r}}'|^{3}}{\rm d}V'\\
\qquad\qquad\qquad & & -\frac{\mu\sigma}{4\pi}\int_{S}\phi({{{{\mbox{\boldmath $\zeta$}}}}}')\:{\mathbf{n}}({{{{\mbox{\boldmath $\zeta$}}}}}')\times\frac{{\mathbf{r}}-{{{{\mbox{\boldmath $\zeta$}}}}}'}{|{\mathbf{r}}-{{{{\mbox{\boldmath $\zeta$}}}}}'|^{3}}{\rm d}S',\qquad\qquad\qquad\qquad\qquad\mathrm{(C.1)}\end{aligned}$$ $$\begin{aligned}
\frac{1}{2}\phi({{{{\mbox{\boldmath $\zeta$}}}}}) & = & \frac{1}{4\pi}\int_{V}\frac{(\alpha\circ{\mathrm{\mathbf{B}}}({\mathbf{r}}'))\cdot({{{{\mbox{\boldmath $\zeta$}}}}}-{\mathbf{r}}')}{|{{{{\mbox{\boldmath $\zeta$}}}}}-{\mathbf{r}}'|^{3}}{\rm d}V'-\frac{p}{4\pi}\int_{V}\frac{{\mathbf{A}}({\mathbf{r}}')\cdot({{{{\mbox{\boldmath $\zeta$}}}}}-{\mathbf{r}}')}{|{{{{\mbox{\boldmath $\zeta$}}}}}-{\mathbf{r}}'|^{3}}{\rm d}V'\\
\qquad\qquad\qquad & & -\frac{1}{4\pi}\int_{S}\phi({{{{\mbox{\boldmath $\zeta$}}}}}')\:{\mathbf{n}}({{{{\mbox{\boldmath $\zeta$}}}}}')\cdot\frac{{{{{\mbox{\boldmath $\zeta$}}}}}-{{{{\mbox{\boldmath $\zeta$}}}}}'}{|{{{{\mbox{\boldmath $\zeta$}}}}}-{{{{\mbox{\boldmath $\zeta$}}}}}'|^{3}}{\rm d}S',\qquad\qquad\qquad\quad\;\;\quad\quad\quad\mathrm{(C.2)}\end{aligned}$$ $$\begin{aligned}
\qquad\qquad{\mathbf{A}}({\mathbf{r}}) & = & \frac{1}{4\pi}\int_{V}\frac{{\mathrm{\mathbf{B}}}({\mathbf{r}}')\times({\mathbf{r}}-{\mathbf{r}}')}{|{\mathbf{r}}-{\mathbf{r}}'|^{3}}{\rm d}V'+\frac{1}{4\pi}\int_{S}{\mathbf{n}}({{{{\mbox{\boldmath $\zeta$}}}}}')\times\frac{{\mathrm{\mathbf{B}}}({{{{\mbox{\boldmath $\zeta$}}}}}')}{|{\mathbf{r}}-{{{{\mbox{\boldmath $\zeta$}}}}}'|}{\rm d}S',\quad{\rm (C.3)}\end{aligned}$$ where ${\mathrm{\mathbf{B}}}$ is the magnetic field, ${\mathbf{A}}$ the vector potential, $\phi$ the electric potential, and ${\mathbf{n}}$ the outward-directed unit vector at the boundary $S$. The complex constant $p$ contains as its real part the growth rate and as its imaginary part the frequency of the eigenfield. The matrix $\alpha$ represents the $\alpha$-effect defined by (\[eq:matrix-alpha\]) or by (\[eq:matrix-alpha-z\]).
The reduction of the problem to cylindrical problems with azimuthal waves $\exp\left({\textrm{i}}\, m\varphi\right)$ was described in Xu *et al.* (2006). Finally, we end up with a generalised eigenvalue problem for the critical dynamo number $C_{\alpha}^{c}$ (in the steady case) or for the complex constant $p$ (in the unsteady case). The QR method is employed to solve this eigenvalue problem, which also yields the eigenmodes of the magnetic field.
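The last step can be illustrated with a toy computation; the matrix below is a random stand-in for the actual discretised integral operators, and for simplicity a standard rather than a generalised eigenvalue problem is solved:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6))  # hypothetical stand-in system matrix

# numpy's eigvals is based on the QR algorithm, as mentioned in the text
p = np.linalg.eigvals(A)

growing = p[p.real > 0]                  # dynamo action: Re(p) > 0
oscillatory = p[np.abs(p.imag) > 1e-12]  # oscillatory eigenfields: Im(p) != 0
```

The eigenvalues with $\Re\{p\}>0$ correspond to growing (dynamo) solutions, those with $\Im\{p\}\neq 0$ to oscillatory eigenfields.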
References {#references .unnumbered}
==========
Aubert, J. and Wicht, J., Axial versus equatorial dipolar dynamo models with implications for planetary magnetic fields. *Earth. Plan. Sci. Lett.*, 2004, **221**, 409-419.
Avalos-Zúñiga, R., Plunian, F., and Rädler, K.-H., Rossby waves and $\alpha$-effect. 2007, to be submitted.
Busse, F.H., Model of geodynamo. *Geophys. J. R. Astron. Soc.*, 1975, **42**, 437-459.
Gailitis, A., Self-excitation conditions for a laboratory model of geomagnetic dynamo. *Magnetohydrodynamics*, 1967, **3**, 23-29.
Giesecke, A., Rüdiger, G. and Elstner, D., Oscillating $\alpha^{2}$-dynamos and the reversal phenomenon of the global geodynamo. *Astron. Nachr.*, 2005a, **326**, 693-700.
Giesecke, A., Ziegler, U. and Rüdiger, G., Geodynamo $\alpha$-effect derived from box simulations of rotating magnetoconvection. *Phys. Earth Planet. Inter.*, 2005b, **152**, 90-102.
Grote, E. and Busse, F.H., Hemispherical dynamos generated by convection in rotating spherical shells. *Phys. Rev. E*, 2000, **62**, 4457-4460.
Gubbins, D., Barber, C.N., Gibbons, S. and Love, J.J., Kinematic dynamo action in a sphere. II Symmetry selection. *Proc. R. Soc. Lond. A*, 2000, **456**, 1669-1683.
Ishihara, N. and Kida, S., Dynamo mechanism in a rotating spherical shell: competition between magnetic field and convection vortices. *J. Fluid Mech.*, 2002, **465**, 1-32.
Melbourne, I., Proctor, M.R.E., & Rucklidge, A.M., A heteroclinic model of geodynamo reversals and excursions, in: *Dynamo and Dynamics, a Mathematical Challenge* (eds. P. Chossat, D. Armbruster and I. Oprea), Kluwer, Dordrecht, 2001, pp. 363-370.
Olson, P., Christensen, U. and Glatzmaier, G.A., Numerical modelling of the geodynamo: Mechanisms of field generation and equilibration. *J. Geophys. Res.*, 1999, **104**, 10383-10404.
Phillips, C.G., Mean dynamos. Ph.D. Thesis, Sydney University, 1993.
Rädler, K.-H., Some new results on the generation of magnetic fields by dynamo action. *Mem. Soc. Roy. Sci. Liege, Ser. 6*, 1975, **VIII**, 109-116.
Rädler, K.-H., Mean-Field Approach to Spherical Dynamo Models, *Astron. Nachr.*, 1980, **301**, 101-129.
Rädler, K.-H, Investigations of spherical kinematic mean-field dynamos. *Astron. Nachr.*, 1986, **307**, 89-113.
Rädler, K.-H., Apstein, E., Rheinhardt, M. and Schüler, M., The Karlsruhe dynamo experiment. A mean field approach, *Stud. Geophys. Geodaet.*, 1998, **42**, 224-231.
Rädler, K.-H., Rheinhardt, M., Apstein, E. and Fuchs, H., On the mean-field theory of the Karlsruhe dynamo experiment. *Nonlin. Proc. Geophys.*, 2002, **9**, 171-187.
Rüdiger, G., Rapidly rotating $\alpha^{2}$-dynamos models. *Astron. Nachr.*, 1980, **301**, 181-187.
Rüdiger, G. and Elstner, D., Non-axisymmetry vs. axi-symmetry in dynamo-excited stellar magnetic fields. *Astron. Astrophys.*, 1994, **281**, 46-50.
Rüdiger, G., Elstner, D. and Ossendrijver M., Do spherical $\alpha^{2}$-dynamos oscillate? *Astron. Astrophys.*, 2003, **406**, 15-21.
Sarson, G.R. and Jones, C.A., A convection driven geodynamo reversal model. *Phys. Earth Planet. Inter.*, 1999, **111**, 3-20.
Schaeffer N. and Cardin, P., Quasi-geostrophic kinematic dynamos at low magnetic Prandtl number. *Earth Planet. Sci. Lett.*, 2006, **245**, 595-604.
Stefani, F., Gerbeth, G. and Rädler, K.-H., Steady dynamos in finite domains: an integral equation approach. *Astron. Nachr.*, 2000, **321**, 65-73.
Stefani, F. and Gerbeth, G., Asymmetric polarity reversals, bimodal field distribution, and coherence resonance in a spherically symmetric mean-field dynamo model. *Phys. Rev. Lett.*, 2005, **94**, Art. No. 184506.
Stefani, F., Gerbeth, G., Günther, U. and Xu, M., Why dynamos are prone to reversals. *Earth Planet. Sci. Lett.*, 2006a, **143**, 828-840.
Stefani, F., Gerbeth, G. and Günther, U., A paradigmatic model of Earth’s magnetic field reversals. *Magnetohydrodynamics*, 2006b, **42**, 123-130.
Stefani, F., Xu, M., Gerbeth, G., Ravelet, F., Chiffaudel, A., Daviaud, F. and Leorat, J., Ambivalent effects of added layers on steady kinematic dynamos in cylindrical geometry: application to the VKS experiment. *Eur. J. Mech. B/Fluids*, 2006c, **25**, 894-908.
Stieglitz R. and Müller U., Experimental demonstration of the homogeneous two-scale dynamo. *Phys. Fluids*, 2001, **13**, 561-564.
Tilgner, A., Small scale kinematic dynamos: beyond the $\alpha$-effect, *Geophys. Astrophys. Fluid Dyn.*, 2004, **98**, 225-234.
Weisshaar, E., A numerical study of $\alpha^{2}$-dynamos with anisotropic $\alpha$-effect. *Geophys. Astrophys. Fluid Dyn.*, 1982, **21**, 285-301.
Wicht, J. and Olson, P., A detailed study of the polarity reversal mechanism in a numerical dynamo model. *Geochem. Geophys. Geosys.*, 2004, **5**, Art. No. Q03H10.
Xu, M., Stefani, F. and Gerbeth, G., The integral equation method for a steady kinematic dynamo problem. *J. Comp. Phys.*, 2004a, **196**, 102-125.
Xu, M., Stefani, F. and Gerbeth, G. Integral equation approach to time-dependent kinematic dynamos in finite domains. *Phys. Rev. E*, 2004b, **70**, Art. No. 056305.
Xu, M., Stefani, F. and Gerbeth, G., The integral equation approach to kinematic dynamo theory and its application to dynamo experiments in cylindrical geometry. in: *Proceedings of ECCOMAS CFD 2006*, (eds: P. Wesseling, E. Onate, J. Periaux), TU Delft, paper 497 (CD).
Yoshimura, H., Wang, Z. and Wu, F., Linear astrophysical dynamos in rotating spheres: mode transition between steady and oscillatory dynamos as a function of dynamo strength and anisotropic turbulent diffusivity. *Astrophys. J.*, 1984, **283**, 870-878.
[^1]: Email: raaz@xanum.uam.mx. Current Address: Universidad Autónoma Metropolitana-Iztapalapa. Av. San Rafael Atlixco 186, col. Vicentina, 09340, D.F., México.
---
abstract: 'We study the $L_2$-approximation of functions from a Hilbert space and compare the sampling numbers with the approximation numbers. The sampling number $e_n$ is the minimal worst case error that can be achieved with $n$ function values, whereas the approximation number $a_n$ is the minimal worst case error that can be achieved with $n$ pieces of arbitrary linear information (like derivatives or Fourier coefficients). We show that $$e_n \,\lesssim\, \sqrt{\frac{1}{k_n} \sum_{j\geq k_n} a_j^2},$$ where $k_n \asymp n/\log(n)$. This proves that the sampling numbers decay with the same polynomial rate as the approximation numbers and therefore that function values are basically as powerful as arbitrary linear information if the approximation numbers are square-summable. Our result applies, in particular, to Sobolev spaces $H^s_{\rm mix}(\mathbb{T}^d)$ with dominating mixed smoothness $s>1/2$ and we obtain $$e_n \,\lesssim\, n^{-s} \log^{sd}(n).$$ For $d>2s+1$, this improves upon all previous bounds and disproves the prevalent conjecture that Smolyak’s (sparse grid) algorithm is optimal.'
address: 'Institut für Analysis, Johannes Kepler Universität Linz, Austria'
author:
- David Krieg
- Mario Ullrich
title: |
Function values are enough\
for $L_2$-approximation
---
Let $H$ be a *reproducing kernel Hilbert space*, i.e., a Hilbert space of real-valued functions on a set $D$ such that point evaluation $$\delta_x\colon H \to {\ensuremath{\mathbb{R}}},\quad f\mapsto f(x)$$ is a continuous functional for all $x\in D$. We consider numerical approximation of functions from such spaces, using only function values. We measure the error in the space $L_2=L_2(D,\mathcal{A},\mu)$ of square-integrable functions with respect to an arbitrary measure $\mu$ such that $H$ is embedded into $L_2$. This means that the functions in $H$ are square-integrable and two functions from $H$ that are equal $\mu$-almost everywhere are also equal point-wise.
We are interested in the *$n$-th minimal worst-case error* $$e_n \,:=\, e_n(H) \,:=\,
\inf_{\substack{x_1,\dots,x_n\in D\\ \varphi_1,\dots,\varphi_n\in L_2}}\,
\sup_{f\in H\colon \|f\|_H\le1}\,
\Big\|f - \sum_{i=1}^n f(x_i)\, \varphi_i\Big\|_{L_2},$$ which is the worst-case error of an optimal algorithm that uses at most $n$ function values. These numbers are sometimes called *sampling numbers*. We want to compare $e_n$ with the *$n$-th approximation number* $$a_n \,:=\, a_n(H) \,:=\,
\inf_{\substack{L_1,\dots,L_n\in H'\\ \varphi_1,\dots,\varphi_n\in L_2}}\,
\sup_{f\in H\colon \|f\|_H\le1}\,
\Big\|f - \sum_{i=1}^n L_i(f)\, \varphi_i\Big\|_{L_2},$$ where $H'$ is the space of all bounded, linear functionals on $H$. This is the worst-case error of an optimal algorithm that uses at most $n$ linear functionals as information. Clearly, we have $a_n\leq e_n$ since the point evaluations form a subset of $H'$.
The approximation numbers are quite well understood in many cases because they are equal to the singular values of the embedding operator ${\rm id}\colon H\to L_2$. However, the sampling numbers still resist a precise analysis. For an exposition of such approximation problems we refer to [@NW08; @NW10; @NW12], especially [@NW12 Chapter 26 & 29], and references therein. One of the fundamental questions in the area asks for the relation of $e_n$ and $a_n$ for specific Hilbert spaces $H$. The minimal assumption on $H$ is the compactness of the embedding ${\rm id}\colon H\to L_2$. It is known that $$\lim_{n\to\infty} e_n = 0
\quad \Leftrightarrow \quad
\lim_{n\to\infty} a_n = 0 \quad
\quad \Leftrightarrow \quad
H \hookrightarrow L_2 \text{ compactly},$$ see [@NW12 Section 26.2]. However, the compactness of the embedding is not enough for a reasonable comparison of the speed of this convergence, see [@HNV08]. If $(a_n^*)$ and $(e_n^*)$ are decreasing sequences that converge to zero and $(a_n^*)\not\in\ell_2$, one may construct $H$ and $L_2$ such that $a_n=a_n^*$ for all $n\in{\ensuremath{\mathbb{N}}}$ and $e_n\geq e_n^*$ for infinitely many $n\in{\ensuremath{\mathbb{N}}}$. In particular, if $$\operatorname{ord}(c_n)= \sup{\left\{s\geq 0 \colon \lim_{n\to\infty} c_n n^{s}=0\right\}}$$ denotes the (polynomial) order of convergence of a positive sequence $(c_n)$, it may happen that $\operatorname{ord}(e_n)=0$ even if $\operatorname{ord}(a_n)=1/2$.
It thus seems necessary to assume that $(a_n)$ is in $\ell_2$, i.e., that ${\rm id}\colon H\to L_2$ is a Hilbert-Schmidt operator. This is fulfilled, e.g., for *Sobolev spaces* defined on the unit cube, see Corollary \[cor:sob\]. Under this assumption, it is proven in [@KWW09] that $$\operatorname{ord}(e_n) \geq \frac{2 \operatorname{ord}(a_n)}{2\operatorname{ord}(a_n) + 1}\, \operatorname{ord}(a_n).$$ In fact, the authors of [@KWW09] conjecture that the order of convergence is the same for both sequences. We give an affirmative answer to this question. Our main result can be stated as follows.
\[thm:main\] There are absolute constants $C,c>0$ and a sequence of natural numbers $(k_n)$ with $k_n\ge c n/\log(n+1)$ such that the following holds. For any $n\in{\ensuremath{\mathbb{N}}}$, any measure space $(D,\mathcal A,\mu)$ and any reproducing kernel Hilbert space $H$ of real-valued functions on $D$ that is embedded into $L_2(D,\mathcal A,\mu)$, we have $$e_n(H)^2 \,\le\, \frac{C}{k_n} \sum_{j\geq k_n} a_j(H)^2.$$
In particular, we obtain the following result on the order of convergence. This solves Open Problem 126 in [@NW12 p. 333], see also [@NW12 Open Problems 140 & 141].
\[cor:order-of-convergence\] Consider the setting of Theorem \[thm:main\]. If $a_n(H)\lesssim n^{-s}\log^\alpha(n)$ for some $s>1/2$ and $\alpha\in{\ensuremath{\mathbb{R}}}$, then we obtain $$e_n(H) \,\lesssim\, n^{-s}\log^{\alpha+s}(n).$$ In particular, we always have $\operatorname{ord}(e_n)=\operatorname{ord}(a_n)$.
Let us now consider a specific example. Namely, we consider *Sobolev spaces with (dominating) mixed smoothness* defined on the $d$-dimensional torus $\mathbb{T}^d \cong [0,1)^d$. These spaces have attracted quite a lot of attention in various areas of mathematics due to their intriguing attributes in high dimensions. For history and the state of the art (from a numerical analysis point of view) see [@DTU16; @Tem18; @Tri10].
Let us first define a one-dimensional and real-valued orthonormal basis of $L_2({\mathbb{T}})$ by $b_0^{(1)}=1$, $b_{2k}^{(1)}=\sqrt{2}\cos(2\pi k x)$ and $b_{2k-1}^{(1)}=\sqrt{2}\sin(2\pi k x)$ for $k\in{\ensuremath{\mathbb{N}}}$. From this we define a basis of $L_2({\mathbb{T}}^d)$ using $d$-fold tensor products: We set $\mathbf{b}_{\bf k}:=\bigotimes_{j=1}^d b_{k_j}^{(1)}$ for ${\bf k}=(k_1,\dots,k_d)\in{\ensuremath{\mathbb{N}}}_0^d$. The Sobolev space with dominating mixed smoothness $s>0$ can be defined as $$H=
H^s_{\rm mix}({\mathbb{T}}^d)=\Big\{ f \in L_2({\mathbb{T}}^d)
\,\Big|\, \|f\|_H^2 := \sum_{\bf k \in {\ensuremath{\mathbb{N}}}_0^d} \prod_{j=1}^d(1+|k_j|^{2s}) {\left\langle f,\mathbf{b}_{\bf k}\right\rangle}_{L_2}^2 <\infty \Big\}.$$ This is a Hilbert space. It satisfies our assumptions whenever $s>1/2$. It is not hard to prove that an equivalent norm in $H^s_{\rm mix}({\mathbb{T}}^d)$ for $s\in{\ensuremath{\mathbb{N}}}$ is given by $$\|f\|_{H^s_{\rm mix}({\mathbb{T}}^d)}^2
\,=\, \sum_{\alpha\in\{0,s\}^d} \|D^\alpha f\|_{L_2}^2.$$ The approximation numbers $a_n = a_n(H)$ are known for some time to satisfy $$a_n \,\asymp\, n^{-s} \log^{s(d-1)}(n)$$ for all $s>0$, see e.g. [@DTU16 Theorem 4.13]. The sampling numbers $e_n = e_n(H)$, however, seem to be harder to tackle. The best bounds so far are $$n^{-s} \log^{s(d-1)}(n) \,\lesssim\; e_n
\;\lesssim\, n^{-s} \log^{(s+1/2)(d-1)}(n)$$ for $s>1/2$. The lower bound easily follows from $e_n\ge a_n$, and the upper bound was proven in [@SU07], see also [@DTU16 Chapter 5]. For earlier results on this prominent problem, see [@Si03; @Si06; @Tem93; @Ul08]. Note that finding the right order of $e_n$ in this case is posed as *Outstanding Open Problem 1.4* in [@DTU16]. From Theorem \[thm:main\], setting $\alpha=s(d-1)$ in the second part, we easily obtain the following.
\[cor:sob\] Let $H^s_{\rm mix}({\mathbb{T}}^d)$ be the Sobolev space with mixed smoothness as defined above. Then, for $s>1/2$, we have $$e_n\big(H^s_{\rm mix}({\mathbb{T}}^d)\big) \,\lesssim\, n^{-s} \log^{sd}(n).$$
The bound in Corollary \[cor:sob\] improves on the previous bounds if $d>2s+1$, or equivalently $s<(d-1)/2$. With this, we disprove Conjecture 5.26 from [@DTU16] and show, in particular, that Smolyak’s algorithm is not optimal in these cases. Although our techniques do not lead to an explicit deterministic algorithm that achieves the above bounds, it is interesting that $n$ i.i.d. random points are suitable with positive probability (independent of $n$).
While this paper was under review, Theorem \[thm:main\] has been extended to the case of complex-valued functions and non-injective operators ${\rm id}\colon H\to L_2$ in [@KUV19], including explicit values for the constants $c$ and $C$ and several applications.
It is quite a different question whether the sampling numbers and the approximation numbers also behave similarly with respect to the dimension of the domain $D$. This is a subject of tractability studies. We refer to [@NW12 Chapter 26] and especially [@NW16 Corollary 8]. Here, we only note that the constants of Theorem \[thm:main\] are, in particular, independent of the domain, and that this may be utilized for these studies, see also [@KUV19].
The Proof {#the-proof .unnumbered}
=========
The result follows from a combination of the general technique to assess the quality of *random information* as developed in [@HKNPU19b; @HKNPU19a], together with bounds on the singular values of random matrices with independent rows from [@MP06].
Before we consider algorithms that only use function values, let us briefly recall the situation for arbitrary linear functionals. In this case, the minimal worst-case error $a_n$ is given via the singular value decomposition of ${\rm id}: H\to L_2$ in the following way. Since $W={\rm id}^*{\rm id}$ is positive, compact and injective, there is an orthogonal basis $\mathcal B={\left\{b_k \colon k\in{\ensuremath{\mathbb{N}}}\right\}}$ of $H$ that consists of eigenfunctions of $W$. Without loss of generality, we may assume that $H$ is infinite-dimensional. It is easy to verify that $\mathcal B$ is also orthogonal in $L_2$. We may assume that the eigenfunctions are normalized in $L_2$ and that $\|b_1\|_H \leq \|b_2\|_H \leq \dots$. From these properties, it is clear that the Fourier series $$f\,=\,\sum_{j=1}^\infty f_j b_j,
\qquad \text{ where } \quad f_j={\left\langle f,b_j\right\rangle}_{L_2},$$ converges in $H$ for every $f\in H$, and therefore also point-wise. The optimal algorithm based on $n$ linear functionals is given by $$P_n: H \to L_2, \quad P_n(f)=\sum_{j\leq n} f_j b_j,$$ which is the $L_2$-orthogonal projection onto $V_n:={\rm span}\{b_1,\hdots,b_n\}$. We refer to [@NW08 Section 4.2] for details. We obtain that $$a_n(H)=\sup_{f\in H\colon \|f\|_H\le1} \big\Vert f - P_n(f) \big\Vert_{L_2} =\|b_{n+1}\|_H^{-1}.$$
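In a finite-dimensional analogue this characterization is easy to check numerically. The following sketch is our own illustration (not part of the paper): we take a hypothetical diagonal embedding whose singular values are $1/j$, so that $a_n = 1/(n+1)$, and recover the approximation numbers from a singular value decomposition.

```python
import numpy as np

# Finite-dimensional analogue: represent id: H -> L2 by a matrix T acting
# on coefficient vectors. Here T is diagonal with entries 1/j, an assumed
# toy example, so the n-th approximation number is a_n = 1/(n+1).
m = 50
T = np.diag([1.0 / j for j in range(1, m + 1)])

sigma = np.linalg.svd(T, compute_uv=False)  # singular values, decreasing
a = sigma[1:]                               # a_n = sigma_{n+1} for n = 1, 2, ...
# In the notation of the text, a_n = ||b_{n+1}||_H^{-1}; here simply 1/(n+1).
```

The optimal algorithm $P_n$ then corresponds to truncating the coefficient vector after its $n$ largest singular directions.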
We now turn to algorithms using only function values. In order to bound the minimal worst-case error $e_n$ from above, we employ the *probabilistic method* in the following way. Let $x_1,\dots,x_n\in D$ be i.i.d. random variables with $\mu$-density $$\varrho: D\to {\ensuremath{\mathbb{R}}}, \quad \varrho(x) = \frac12 \left(
\frac1k \sum_{j< k} b_{j+1}(x)^2 + \frac{1}{\sum_{j\geq k} a_j^2} \sum_{j\geq k} a_j^2 b_{j+1}(x)^2
\right),$$ where $k\leq n$ will be specified later. Given these sampling points, we consider the algorithm $$A_n: H\to L_2, \quad A_n(f)=\sum_{j=1}^k (G^+ N f)_j b_j,$$ where $N:H\to {\ensuremath{\mathbb{R}}}^n$ with $N(f)=(\varrho(x_i)^{-1/2}f(x_i))_{i\leq n}$ is the weighted *information mapping* and $G^+\in {\ensuremath{\mathbb{R}}}^{k\times n}$ is the Moore-Penrose inverse of the matrix $$G=(\varrho(x_i)^{-1/2} b_j(x_i))_{i\leq n, j\leq k} \in {\ensuremath{\mathbb{R}}}^{n\times k}.$$ This algorithm is a weighted least squares estimator: If $G$ has full rank, then $$A_n(f)=\underset{g\in V_k}{\rm argmin}\, \sum_{i=1}^n \frac{\vert g(x_i) - f(x_i) \vert^2}{\varrho(x_i)}. $$ In particular, we have $A_n(f)=f$ whenever $f\in V_k$. The *worst-case error* of $A_n$ is defined as $$e(A_n) = \sup_{f\in H\colon \|f\|_H\le1}\, \big\|f - A_n(f)\big\|_{L_2}.$$ Clearly, we have $e_n\leq e(A_n)$ for every realization of $x_1,\dots,x_n$. Thus, it is enough to show that $e(A_n)$ obeys the desired upper bound with positive probability.
If $\mu$ is a probability measure and if the basis is uniformly bounded, i.e., if $\sup_{j\in{\ensuremath{\mathbb{N}}}}\, \Vert b_j\Vert_\infty < \infty$, we may also choose $\varrho\equiv 1$ and consider i.i.d. sampling points with distribution $\mu$.
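As an illustration of this simplified case (our own sketch, not part of the original analysis), the estimator $A_n$ reduces to an ordinary least squares fit in the first $k$ basis functions when $\varrho\equiv 1$. The trigonometric basis, sample sizes and test function below are our own choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def b(j, x):
    """Trigonometric basis on [0,1): b_0 = 1, b_{2k} = sqrt(2) cos(2 pi k x),
    b_{2k-1} = sqrt(2) sin(2 pi k x)."""
    if j == 0:
        return np.ones_like(x)
    k = (j + 1) // 2
    return (np.sqrt(2) * np.cos(2 * np.pi * k * x) if j % 2 == 0
            else np.sqrt(2) * np.sin(2 * np.pi * k * x))

n, k = 200, 5                          # n samples, first k basis functions
x = rng.random(n)                      # i.i.d. uniform points (rho = 1)

G = np.column_stack([b(j, x) for j in range(k)])   # the n x k matrix G
f = lambda t: np.sin(2 * np.pi * t) + 0.5 * np.cos(2 * np.pi * t)

coeff = np.linalg.pinv(G) @ f(x)       # G^+ N f: coefficients of A_n f in V_k
```

Since this $f$ lies in $V_k$, the estimator reproduces it exactly (up to numerical error), consistent with $A_n(f)=f$ for $f\in V_k$.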
Weighted least squares estimators are widely studied in the literature. We refer to [@Bj96; @CM17]. In contrast to previous work, we show that we can choose a fixed set of weights and sampling points that work *simultaneously* for all $f\in H$. We do not need additional assumptions on the function $f$, the basis $(b_j)$ or the measure $\mu$. For this, we think that our modification of the weights is important.
The worst-case error $e(A_n)$ of the randomly chosen algorithm $A_n$ is not to be confused with the Monte Carlo error of a randomized algorithm, which can be defined by $$e^{\rm ran}(A_n) \,:=\, \sup_{f\in H\colon \|f\|_H\le1}\,
\left({\ensuremath{\mathbb{E}}}\left\|f - A_n(f)\right\|_{L_2}^2 \right)^{1/2}.$$ Clearly, the latter is a weaker error criterion. A small Monte Carlo error does not imply that the worst-case error $e(A_n)$ is small for any realization of $A_n$. For the $n$-th minimal Monte Carlo error it is known that the additional logarithmic factors in Corollaries \[cor:order-of-convergence\] and \[cor:sob\] are not needed [@Kr19; @WW06].
To give an upper bound on $e(A_n)$, let us assume that $G$ has full rank. For any $f\in H$ with $\Vert f\Vert_H\leq 1$, we have $$\begin{split}
{\left\Vert f-A_n f\right\Vert}_{L_2} \,&\le\, a_k + {\left\Vert P_k f - A_n f\right\Vert}_{L_2}
\,=\, a_k + {\left\Vert A_n(f- P_k f)\right\Vert}_{L_2} \\
&=\, a_k + {\left\Vert G^+ N(f- P_k f)\right\Vert}_{\ell_2^k} \\
&\le\, a_k +{\left\Vert G^+\colon \ell_2^n \to \ell_2^k\right\Vert}\, {\left\Vert N\colon P_k(H)^\perp \to \ell_2^n\right\Vert}.
\end{split}$$ The norm of $G^+$ is the inverse of the $k$th largest (and therefore the smallest) singular value of the matrix $G$. The norm of $N$ is the largest singular value of the matrix $$\Gamma =\big(\varrho(x_i)^{-1/2} a_j b_{j+1}(x_i) \big)_{1\leq i \leq n, j\geq k} \in {\ensuremath{\mathbb{R}}}^{n\times \infty}.$$ To see this, note that $N=\Gamma \Delta$ on $P_k(H)^\perp$, where the mapping $\Delta\colon P_k(H)^\perp\to\ell_2$ with $\Delta g=(g_{j+1}/a_j)_{j\ge k}$ is an isomorphism. This yields $$\label{eq:basic}
e(A_n) \leq a_k + \frac{s_{\rm max}(\Gamma)}{s_{\rm min}(G)}.$$ It remains to bound $s_{\rm min}(G)$ from below and $s_{\rm max}(\Gamma)$ from above. Clearly, any nontrivial lower bound on $s_{\rm min}(G)$ automatically yields that the matrix $G$ has full rank. To state our results, let $$\beta_k \,:=\, {\left(\frac{1}{k} \sum_{j\geq k} a_j^2\right)}^{1/2}
\qquad\text{ and }\qquad
\gamma_k\,:=\,\max\Big\{a_k,\,\beta_k\Big\}.$$ Note that $a_{2k}^2\le\frac1k(a_k^2+\hdots+a_{2k}^2)\le \beta_{k}^2$ for all $k$ and thus $\gamma_{k} \leq \beta_ {\lfloor k/2 \rfloor}$. Before we continue with the proof of Theorem \[thm:main\], we show that Corollary \[cor:order-of-convergence\] follows from Theorem \[thm:main\] by providing the order of $\beta_k$ in the following special case. The proof is an easy exercise.
\[lem:beta\] Let $a_n\asymp n^{-s}\log^{\alpha}(n)$ for some $s,\alpha\in{\ensuremath{\mathbb{R}}}$. Then, $$\beta_k \,\asymp\, \begin{cases}
a_k, & \text{if } s>1/2, \\
a_k \sqrt{\log(k)}, & \text{if } s=1/2 \,\text{ and }\, \alpha<-1/2,\\
\end{cases}$$ and $\beta_k=\infty$ in all other cases.
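The quantities $\beta_k$ and $\gamma_k$ are also easy to evaluate numerically. The following sketch is our own example (with $a_j=j^{-1}$, i.e., $s=1>1/2$, and a truncated tail); it checks the elementary bound $\gamma_k\le\beta_{\lfloor k/2\rfloor}$ from the text and the lemma's prediction $\beta_k\asymp a_k$.

```python
import numpy as np

# Our own illustrative sequence: a_j = j^{-1}, truncated at J (an assumption;
# the true sums run over an infinite tail).
J = 10**6
a = np.arange(1, J + 1, dtype=float) ** -1.0

def beta(k):
    """beta_k = ((1/k) * sum_{j >= k} a_j^2)^(1/2)."""
    return np.sqrt(np.sum(a[k - 1:] ** 2) / k)

def gamma(k):
    """gamma_k = max(a_k, beta_k)."""
    return max(a[k - 1], beta(k))
```

For $k=100$ one finds $\beta_k/a_k\approx 1$, as the lemma predicts for $s>1/2$, and $\gamma_{100}\le\beta_{50}$ as guaranteed by the elementary bound.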
The rest of the paper is devoted to the proof of the following two claims: There exist constants $c,C>0$ such that, for all $n\in {\ensuremath{\mathbb{N}}}$ and $k= \lfloor c\,n/\log n\rfloor$, we have\
[**Claim 1:**]{} $${\ensuremath{\mathbb{P}}}\Big(s_{\rm max}(\Gamma) \,\leq\, C\, \gamma_{k}\, n^{1/2} \Big) > 1/2.$$
[**Claim 2:**]{} $${\ensuremath{\mathbb{P}}}\Big(s_{\rm min}(G) \,\geq\, n^{1/2}/2 \Big) > 1/2.$$
Together with \[eq:basic\], this will yield with positive probability that $$e(A_n) \,\le\, a_k + 2C\,\gamma_{k}
\leq (2C+1)\, \gamma_{k} \leq (2C+1)\, \beta_{\lfloor k/2 \rfloor},$$ which is the statement of Theorem \[thm:main\].
Both claims are based on [@MP06 Theorem 2.1], which we state here in a special case. Recall that, for $X\in \ell_2$, the operator $X\otimes X$ is defined on $\ell_2$ by $X\otimes X(v)=\langle X,v\rangle X$. By ${\left\Vert M\right\Vert}$ we denote the spectral norm of a matrix $M$.
\[prop:MP\] There exists an absolute constant $c>0$ for which the following holds. Let $X$ be a random vector in ${\ensuremath{\mathbb{R}}}^k$ or $\ell_2$ with $\|X\|_2\le R$ with probability 1, and let $X_1,X_2,\dots$ be independent copies of $X$. If $D={\ensuremath{\mathbb{E}}}(X\otimes X)$ is a diagonal matrix, $$A \,:=\, R^2\, \frac{\log n}{n}
\qquad\text{ and }\qquad
B \,:=\, R\, \|D\|^{1/2} \sqrt{\frac{\log n}{n}},$$ then, for any $t>0$, $${\ensuremath{\mathbb{P}}}\left(\bigg\|\sum_{i=1}^n X_i\otimes X_i - nD\bigg\|
\,\ge\, c\, t\, \max\{A, B\}\, n\right)
\,\le\, 2e^{-t}.$$
For this formulation we just employ that $${\ensuremath{\mathbb{E}}}{\left\langle X,\theta\right\rangle}_2^4
\le \sup {\left\langle X,\theta\right\rangle}_2^2 \cdot {\ensuremath{\mathbb{E}}}{\left\langle X,\theta\right\rangle}_2^2
\le R^2 \cdot \|D\|$$ for any $\theta\in{\ensuremath{\mathbb{R}}}^k$ (or $\ell_2$) with $\|\theta\|_2=1$ (this “trick” leads to an improvement over [@MP06 Corollary 2.6]). Here, we used that $D$ is diagonal. Moreover, $\|X\|_2\le R$ implies that $\|Z\|_{\psi_\alpha}\le 2R$ for $Z=\|X\|_2$ and all $\alpha\ge1$. Therefore, we can take the limit $\alpha\to\infty$ in [@MP06 Theorem 2.1].
Consider independent copies $X_1,\hdots,X_n$ of the vector $$X=\varrho(x)^{-1/2} (a_k b_{k+1}(x), a_{k+1} b_{k+2}(x), \hdots),$$ where $x$ is a random variable on $D$ with density $\varrho$. Clearly, $\sum_{i=1}^n X_i\otimes X_i = \Gamma^* \Gamma$ with $\Gamma$ from above. First observe $${\left\Vert X\right\Vert}_2^2 \,=\, \varrho(x)^{-1} \sum_{j\geq k} a_j^2\, b_{j+1}(x)^2
\,\leq\, 2 \sum_{j\geq k} a_j^2
\,=\, 2 k\, \beta_{k}^2 \,=:\, R^2.$$ Since $D={\ensuremath{\mathbb{E}}}(X\otimes X)={\mathop{\mathrm{diag}}}(a_k^2, a_{k+1}^2, \hdots)$ we have $\|D\|=a_k^2$. This implies, with $A$ and $B$ defined as in Proposition \[prop:MP\], that $$A \,\le\, 2 k\, \beta_{k}^2\, \frac{\log n}{n}$$ and $$B \,\le\, (2 k\, \beta_{k}^2\,)^{1/2} a_k\, \sqrt{\frac{\log n}{n}}.$$ Choosing $k= \lfloor c\,n/\log n\rfloor$ for $c$ small enough, we obtain $${\ensuremath{\mathbb{P}}}\Big({\left\Vert\Gamma^*\Gamma - nD\right\Vert} \geq t\,\gamma_{k}^2\, n\Big)
\leq 2\exp{\left(-t\right)}.$$ By choosing $t=2$, we obtain with probability greater than $1/2$ that $$s_{\rm max}(\Gamma)^2 = {\left\Vert\Gamma^*\Gamma\right\Vert} \leq {\left\Vert nD\right\Vert} + {\left\Vert\Gamma^*\Gamma - nD\right\Vert}
\leq n\, a_k^2 + 2 \gamma_{k}^2 n
\leq 3\, \gamma_{k}^2\, n.$$ This yields Claim 1.
Consider $X=\varrho(x)^{-1/2}(b_1(x), \hdots, b_k(x))$ with $x$ distributed according to $\varrho$. Clearly, $\sum_{i=1}^n X_i\otimes X_i = G^*G$ with $G$ from above. First observe $${\left\Vert X\right\Vert}_2^2 \,=\, \varrho(x)^{-1} \sum_{j\le k} b_j(x)^2
\,\leq\, 2 k \,=:\, R^2.$$ Since $D={\ensuremath{\mathbb{E}}}(X\otimes X)={\mathop{\mathrm{diag}}}(1, \hdots,1)$ we have $\|D\|=1$. This implies, with $A$ and $B$ defined as in Proposition \[prop:MP\], that $$A \,\le\, 2 k\, \frac{\log n}{n}$$ and $$B \,\le\, (2 k)^{1/2} \sqrt{\frac{\log n}{n}}.$$ Again, choosing $k= \lfloor c\,n/\log n\rfloor$ for $c$ small enough, we obtain $${\ensuremath{\mathbb{P}}}\left({\left\Vert G^*G - nD\right\Vert} \geq \frac{t\, n}{4}\right)
\leq 2\exp{\left(-t\right)}.$$ By choosing $t=2$, we obtain with probability greater than $1/2$ that $$s_{\rm min}(G)^2 = s_{\rm min}(G^*G) \,\geq\, s_{\rm min}(nD) - \|G^*G - nD\|
\,\geq\, n/2.$$ This yields Claim 2.
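Claim 2 can be illustrated empirically (our own sketch, not a proof): for the uniformly bounded trigonometric basis on $[0,1)$ with uniform $\mu$ and $\varrho\equiv 1$, the smallest singular value of $G$ built from i.i.d. uniform points typically exceeds $n^{1/2}/2$ when $k$ is of order $n/\log n$. The constant $c=0.1$ and the seed below are arbitrary choices of ours.

```python
import numpy as np

rng = np.random.default_rng(1)

def b(j, x):
    """Trigonometric basis on [0,1), 0-indexed as in the introduction."""
    if j == 0:
        return np.ones_like(x)
    k = (j + 1) // 2
    return (np.sqrt(2) * np.cos(2 * np.pi * k * x) if j % 2 == 0
            else np.sqrt(2) * np.sin(2 * np.pi * k * x))

n = 2000
k = int(0.1 * n / np.log(n))          # k of order n / log n, c = 0.1 assumed
x = rng.random(n)                     # uniform mu, bounded basis, rho = 1

G = np.column_stack([b(j, x) for j in range(k)])
s_min = np.linalg.svd(G, compute_uv=False).min()
# Claim 2 predicts s_min >= sqrt(n)/2 with probability > 1/2.
```

Since ${\ensuremath{\mathbb{E}}}\,G^*G = nD = nI$, the smallest singular value concentrates near $n^{1/2}$, comfortably above the threshold $n^{1/2}/2$.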
Å. Björk. *Numerical methods for least squares problems*. Society for Industrial and Applied Mathematics (SIAM), Philadelphia, 1996.
A. Cohen, G. Migliorati. Optimal weighted least-squares methods. , 3:181–203, 2017.
D. Dũng, V.N. Temlyakov, and T. Ullrich. Hyperbolic Cross Approximation. Springer International Publishing, 2018.
A. Hinrichs, D. Krieg, E. Novak, J. Prochno, and M. Ullrich. On the power of random information. , 2019.
A. Hinrichs, D. Krieg, E. Novak, J. Prochno, and M. Ullrich. Random sections of ellipsoids and the power of random information. , 2019.
A. Hinrichs, E. Novak, and J. Vybíral. Linear information versus function evaluations for $\ell_2$-approximation. , 153:97–107, 07 2008.
L. Kämmerer, T. Ullrich, T. Volkmer. Worst case recovery guarantees for least squares approximation using random samples. , 2019.
D. Krieg. Optimal Monte Carlo methods for $L_2$-approximation. , 49:385–403, 2019.
F. Kuo, G. W. Wasilkowski, and H. Woźniakowski. On the power of standard information for multivariate approximation in the worst case setting. , 158:97–125, 05 2009.
S. Mendelson and A. Pajor. On singular values of matrices with independent rows. , 12(5):761–773, 2006.
E. Novak and H. Woźniakowski. , volume 6 of [*EMS Tracts in Mathematics*]{}. European Mathematical Society (EMS), Zürich, 2008.
E. Novak and H. Woźniakowski. , volume 12 of [*EMS Tracts in Mathematics*]{}. European Mathematical Society (EMS), Zürich, 2010.
E. Novak and H. Woźniakowski. , volume 18 of [*EMS Tracts in Mathematics*]{}. European Mathematical Society (EMS), Zürich, 2012.
E. Novak and H. Woźniakowski. Tractability of multivariate problems for standard and linear information in the worst case setting: Part I. , 207:177–192, 2016.
W. Sickel. Approximate recovery of functions and Besov spaces of dominating mixed smoothness. , 404–411, DARBA, Sofia, 2003.
W. Sickel. Approximation from sparse grids and function spaces of dominating mixed smoothness. , 271–283, Banach Center Publ., 72, Polish Acad. Sci. Inst. Math., Warsaw, 2006.
W. Sickel and T. Ullrich. The Smolyak algorithm, sampling on sparse grids and function spaces of dominating mixed smoothness. , 13(4):387–425, 2007.
V. N. Temlyakov. , Computational Mathematics and Analysis Series, Nova Science Publishers, Inc., Commack, NY, 1993.
V. N. Temlyakov. , volume 32 of [*Cambridge Monographs on Applied and Computational Mathematics*]{}. Cambridge University Press, 2018.
H. Triebel. , volume 11 of [*EMS Tracts in Mathematics*]{}. European Mathematical Society (EMS), Zürich, 2010.
T. Ullrich. Smolyak’s algorithm, sampling on sparse grids and function spaces of dominating mixed smoothness. , 14(1):1–38, 2008.
G. W. Wasilkowski and H. Woźniakowski. The power of standard information for multivariate approximation in the randomized setting. , 76:965–988, 2006.
---
abstract: 'Scenarios for the emergence or bootstrap of a lexicon involve the repeated interaction between at least two agents who must reach a consensus on how to name $N$ objects using $H$ words. Here we consider minimal models of two types of learning algorithms: cross-situational learning, in which the individuals determine the meaning of a word by looking for something in common across all observed uses of that word, and supervised operant conditioning learning, in which there is strong feedback between individuals about the intended meaning of the words. Despite the stark differences between these learning schemes, we show that they yield the same communication accuracy in the realistic limits of large $N$ and $H$, which coincides with the result of the classical occupancy problem of randomly assigning $N$ objects to $H$ words.'
author:
- 'José F. Fontanari'
- Angelo Cangelosi
title: 'Cross-situational and supervised learning in the emergence of communication'
---
Introduction
============
How a coherent lexicon can emerge in a group of interacting agents is a major open issue in the language evolution and acquisition research area (Hurford, 1989; Nowak & Krakauer, 1999; Steels, 2002; Kirby, 2002; Smith, Kirby, & Brighton, 2003). In addition, the dynamics in the self-organization of shared lexicons is one of the issues to which computational and mathematical modeling can contribute the most, as the emergence of a lexicon from scratch implies some type of self-organization and, possibly, threshold phenomenon. This cannot be completely understood without a thorough exploration of the parameter space of the models (Baronchelli, Felici, Loreto, Caglioli, & Steels, 2006).
There are two main research avenues to investigate the emergence or bootstrapping of a lexicon. The first approach, inspired by the seminal work of Pinker and Bloom (1990), who argued that natural selection is the main design principle to explain the emergence and complex structure of language, resorts to evolutionary algorithms to evolve the shared lexicon. The key element here is that an improvement in the communication ability of an individual results, on average, in an increase in the number of offspring it produces (Hurford, 1989; Nowak & Krakauer, 1999; Cangelosi, 2001; Fontanari & Perlovsky, 2007, 2008). The second research avenue, which we will follow in this paper, argues for a culturally based view of language evolution and so it assumes that the lexicons are acquired and modified solely through learning during the individual’s lifetime (Steels, 2002; Smith, Kirby, & Brighton, 2003).
Of course, if there is a fact about language which is uncontroversial, it is that the lexicon must be learned from the active or passive interaction between children and language-proficient adults. The issue of whether this ability to learn the lexicon is due to some domain-general learning mechanism, or is an innate ability, unique to humans, is still on the table (Bates & Elman, 1996). In the problem we address here, there is simply no language-proficient individuals, so it is not so far-fetched to put forward a biological rather than a cultural explanation for the emergence of a self-organized lexicon. Nevertheless, in this contribution we will use many insights produced by research on language acquisition by children (see, e.g., Gleitman, 1990; Bloom, 2000) to study different learning strategies.
From a developmental perspective, there are basically two competing schemes for lexicon acquisition by children (Rosenthal & Zimmerman, 1978). The first scheme, termed cross-situational or observational learning, is based on the intuitive idea that one way that a learner can determine the meaning of a word is to find something in common across all observed uses of that word (Pinker, 1984; Gleitman, 1990; Siskind, 1996). Hence learning takes place through the statistical sampling of the contexts in which a word appears. Since the learner receives no feedback about its inferences, we refer to this scheme as unsupervised learning. The second scheme, known generally as operant conditioning, involves the active participation of the agents in the learning process, with exchange of non-linguistic cues to provide feedback on the hearer's inferences. This supervised learning scheme has been applied to the design of a system for communication by autonomous robots – the so-called language game in the Talking Heads experiments (Steels, 2003). Despite the technological appeal, the empirical evidence is that most of the lexicon is acquired by children as a product of unsupervised learning (Pinker, 1984; Gleitman, 1990; Bloom, 2000).
Interestingly, from the perspective of evolving or bootstrapping a lexicon, the unsupervised scheme is very attractive too, since it eliminates altogether the issue of honest signaling (Dawkins & Krebs, 1978), as no signaling is involved in the learning process, which requires only observation and some elements of intuitive psychology (e.g. Theory of Mind).
Many different computational implementations and variants of these two schemes for bootstrapping a lexicon have been proposed in the literature. For example, Smith (2003a, 2003b), Smith, Smith, Blythe, & Vogt (2006), and De Beule, De Vylder, & Belpaeme (2006) have addressed the unsupervised learning scheme, whereas Steels & Kaplan (1999), Ke, Minett, Au, & Wang (2002), Smith, Kirby, & Brighton (2003), and Lenaerts, Jansen, Tuyls, & De Vylder (2005), the supervised scheme. However, except for the extensive statistical analysis of a variant of the supervised learning algorithm which reduces the problem to that of naming a single object (Baronchelli, Felici, Loreto, Caglioli, & Steels, 2006), the study of the effects of changing the parameters of those models has usually been limited to displaying the time evolution of some measure of the communication accuracy of the population. Although at first sight the supervised learning scheme may seem to be clearly superior to the unsupervised one (albeit less realistic in the context of language acquisition by children), we are not aware of any thorough comparison between the performances of these two learning scenarios. In fact, in this contribution we show that in a realistic limit of very large lexicon sizes the supervised and unsupervised learning performances are essentially identical.
In this paper we study minimal models of the supervised and unsupervised learning schemes which preserve the main ingredients of these two classical language acquisition paradigms. For the sake of simplicity, here we interpret the lexicon as a mapping between objects and words (or sounds) rather than as a mapping between meanings (conceptual structures) and sounds. A more complete scenario would involve first the creation of meanings, i.e., the bootstrapping of an object-meaning mapping (Steels, 1996; Fontanari, 2006) and then the emergence of a meaning-sound mapping (see, e.g., Smith, 2003a, 2003b; Fontanari & Perlovsky, 2006).
Model
=====
Following a common assumption in lexicon bootstrapping models, such as the popular iterated learning model (Smith, Kirby, & Brighton, 2003; Brighton, Smith, & Kirby, 2005), we consider here only two agents who take turns playing the roles of speaker and hearer. The agents live in a fixed environment composed of $N$ objects and have $H$ words available to name these objects. As we are interested in the limit where $N$ and $H$ are very large with the ratio $\alpha \equiv H/N$ finite, we do not need to account for the possibility of creation of new words as in some variants of the supervised learning scheme (Baronchelli, Felici, Loreto, Caglioli, & Steels, 2006).
We assume that each agent is characterized by an $N \times H$ verbalization matrix $P$, the entries of which, $p_{nh} \in \left [ 0,1 \right ]$ with $\sum_{h=1}^{H} p_{nh} = 1$ for all values of $n=1,\ldots,N$, are interpreted as the probability that object $n$ is associated with word $h$. This assumption rules out the existence of objects without names, but it allows for words which are never used to name objects. To describe the communicative behavior of the agents through the verbalization matrix (i.e., the associations between objects and words for use both in production and interpretation) we need to specify how the speaker chooses a word for any given object as well as how the hearer infers the object the speaker intended to name by that word.
To name an object, say object $n$, the speaker simply chooses the word $h^*$ which is associated with the largest entry of row $n$ of the matrix $P$, i.e., $h^* = \mathrm{argmax}_{h \in \{1, \ldots, H\}} \, p_{nh}$. In addition, to guess which object the speaker named by word $h$, the hearer selects the object that corresponds to the largest of the $N$ entries $p_{nh}$, $n=1, \ldots, N$. In other words, the hearer chooses the object that it itself would be most likely to associate with word $h$ (Smith, 2003a, 2003b). This amounts to assuming that the agents are endowed with a ‘Theory of Mind’ (ToM), i.e., that the hearer is somehow able to understand that the speaker thinks similarly to itself and hence would behave likewise when facing the same situation (Donald, 1991). We note that the original inference scheme, termed “obverter” (Oliphant & Batali, 1997), assumed that the hearer has access to the verbalization matrix of the speaker (through mind reading, as the critics were ready to point out). Here we follow the more reasonable scheme, dubbed “introspective obverter” (Smith, 2003a), which requires endowing the agents with a Theory of Mind rather than with telepathic abilities.
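The production and interpretation rules above can be sketched in a few lines (a minimal illustration of ours; the function names and the small sizes $N$ and $H$ are our own choices).

```python
import numpy as np

rng = np.random.default_rng(0)

# A row-normalized N x H verbalization matrix P (illustrative sizes).
N, H = 4, 6
P = rng.random((N, H))
P /= P.sum(axis=1, keepdims=True)       # each row sums to 1

def speak(P, n):
    """Speaker: the word for object n is the largest entry of row n."""
    return int(np.argmax(P[n]))

def hear(P, h):
    """Hearer ('introspective obverter'): the object it would itself most
    likely name with word h, i.e., the largest entry of column h."""
    return int(np.argmax(P[:, h]))
```

Note that `hear` consults the hearer's own matrix, not the speaker's, which is exactly the Theory-of-Mind assumption described above.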
Effective communication takes place when the two agents reach a consensus on which word must be assigned to each object. To achieve this, we must provide a prescription to modify their initially random verbalization matrices. Here we will consider two learning procedures that differ basically on whether the agents receive feedback (supervised learning) or not (unsupervised learning) about the success of a communication episode. But before doing this we need to set up the language game scenario where the agents interact.
From the list of $N$ objects, the agent who plays the speaker role chooses randomly $C$ objects without replacement. This set of $C$ objects forms the context. Then the speaker chooses randomly one object in the context and produces the word associated to that object, according to the procedure sketched before. The hearer has access to that word as well as to the $C$ objects that comprise the context. Its task is to guess which object in the context is named by that word. This is then an ambiguous language acquisition scenario in which there are multiple object candidates for any word. Once the verbalization matrices are updated the two agents interchange the roles of speaker and hearer and a new context is generated following the same procedure.
To control the convergence properties of the learning algorithms described next we assume that the entries $p_{nh}$ are discrete variables that can take on the values $0,1/M,2/M,\ldots,1-1/M,1$. In our simulations we choose $M=10^4$. The reciprocal of $M$ can be interpreted as the algorithm learning rate. In addition, as there are two agents who alternate in the roles of speaker and hearer, henceforth we will add the superscripts I or J to the verbalization matrix in order to identify the agent it corresponds to. At the beginning of the language game each agent has a different, randomly generated verbalization matrix. More pointedly, to generate the row $n$ of $P^I$ we distribute with equal probability $M$ balls among $H$ slots and set the value of entry $p_{nh}^I$ as the ratio between the number of balls in slot $h$ and the total number of balls $M$. An analogous procedure is used to set the initial value of $P^J$.
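The initialization just described amounts to drawing each row of $P$ from a symmetric multinomial distribution and dividing the ball counts by $M$. A sketch with illustrative sizes (our own choices):

```python
import numpy as np

rng = np.random.default_rng(0)

# Each row: distribute M balls with equal probability among H slots;
# the entry p_{nh} is the ball count of slot h divided by M.
N, H, M = 5, 8, 10**4
P = rng.multinomial(M, np.full(H, 1.0 / H), size=N) / M
```

Every entry is a multiple of $1/M$ and every row sums to one, as required.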
Unsupervised learning
---------------------
In this scheme, the list of objects in the context $n_1, \ldots, n_C$ and the accompanying word $h^*$ is the only information fed to the learning algorithm. Hence, in the unsupervised scheme, only the hearer’s verbalization matrix is updated. Of course, since the agents change roles at each learning episode, the verbalization matrices of both agents are updated during the learning stage. For concreteness, let us assume that agent $I$ is the speaker and so agent $J$ is the hearer in a particular learning episode. As pointed out before, the idea here is to model the cross-situational learning scenario (Siskind, 1996) in which the agents infer the meaning of a given word by monitoring its occurrence in a variety of contexts. Accordingly, the learning procedure increases the entries $p_{n_1 h^*}^J, \ldots, p_{n_C h^*}^J$ by the amount $1/M$. In addition, for each object in the context, say $n_1$, a word, say $h$, is chosen randomly and the entry $p_{n_1 h}^J$ is decreased by the same amount $1/M$, thus keeping the correct normalization of the rows of the verbalization matrix. (The possibility that $h=h^*$ is not ruled out.) This procedure, which is inspired by Moran’s model of population genetics (Ewens, 2004), guarantees a minimum disturbance in the verbalization matrix and can be interpreted as the lateral inhibition of the competing word-object associations. We note that during the learning stage the agent playing the hearer role does not need to guess which object in the context is named by word $h^*$.
An extra rule is needed to keep the entries $p_{nh}^J$ within the unit interval $\left [ 0,1 \right ]$: we assume that once an entry reaches the values $p_{nh}^J = 1$ or $p_{nh}^J = 0$ it becomes fixed, so the extremes of the unit interval act as absorbing barriers for the stochastic dynamics of the learning algorithm.
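The update rule and the absorbing-barrier convention above can be summarized in code (a minimal NumPy sketch under our reading of the rules; the function name and the skip-on-barrier treatment are ours):

```python
import numpy as np

def unsupervised_update(P, context, h_star, M, rng):
    """One cross-situational episode for the hearer's matrix P (in place).

    For each object n in the context, the association (n, h_star) is
    reinforced by 1/M and a randomly chosen word (possibly h_star itself)
    is weakened by 1/M, preserving row normalization. Entries that have
    reached 0 or 1 are frozen, so such updates are skipped.
    """
    H = P.shape[1]
    for n in context:
        h = rng.integers(H)              # competing word; h == h_star allowed
        if P[n, h_star] >= 1.0 or P[n, h] <= 0.0:
            continue                     # absorbing barrier reached
        P[n, h_star] += 1.0 / M
        P[n, h] -= 1.0 / M

rng = np.random.default_rng(1)
M = 10**4
P = np.full((5, 4), 0.25)                # hearer's verbalization matrix
unsupervised_update(P, context=[0, 2], h_star=3, M=M, rng=rng)
assert np.allclose(P.sum(axis=1), 1.0)   # rows stay normalized
```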
Supervised learning
-------------------
The setting is identical to that described before except that now the hearer must guess which object in the context the speaker named by $h^*$ and then communicate its choice to the speaker (using some nonlinguistic means, such as pointing to the chosen object). In turn, the speaker must provide another nonlinguistic hint to indicate which object in the context it named by word $h^*$. Let us assume that the speaker associates word $h^*$ to object $n_1$. If the hearer’s guess happens to be the correct one, then both entries $p_{n_1 h^*}^I$ and $p_{n_1 h^*}^J$ are incremented by the amount $1/M$. Furthermore, two words, say $h_s$ and $h_h$, are chosen randomly and the entries $p_{n_1 h_s}^I$ and $p_{n_1 h_h}^J$ are decreased by $1/M$ so the normalization of row $n_1$ is preserved in both verbalization matrices. Suppose now the hearer’s guess is wrong, say, object $n_2$ instead of $n_1$. Then both entries $p_{n_1 h^*}^I$ and $p_{n_2 h^*}^J$ are decreased by the amount $1/M$ and, as before, two words $h_s$ and $h_h$ are chosen randomly and the entries $p_{n_1 h_s}^I$ and $p_{n_2 h_h}^J$ are increased by $1/M$. As in the unsupervised case, the extremes $p_{nh}^{I,J} = 1$ and $p_{nh}^{I,J} = 0$ are absorbing barriers.
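The supervised update can be sketched analogously (again an illustrative NumPy fragment with names of our choosing; for brevity the absorbing-barrier bookkeeping is omitted here):

```python
import numpy as np

def supervised_update(P_spk, P_hea, n_spk, n_hea, h_star, M, rng):
    """One supervised episode, updating both matrices in place.

    n_spk is the object the speaker named by h_star and n_hea is the
    hearer's guess. A correct guess (n_hea == n_spk) reinforces the named
    entries by 1/M; a wrong guess weakens them instead. In either case a
    randomly chosen word per matrix compensates, preserving the row sums.
    (The absorbing barriers at 0 and 1 are not enforced in this sketch.)
    """
    H = P_spk.shape[1]
    sign = 1.0 if n_hea == n_spk else -1.0
    for P, n in ((P_spk, n_spk), (P_hea, n_hea)):
        h_rand = rng.integers(H)
        P[n, h_star] += sign / M
        P[n, h_rand] -= sign / M

rng = np.random.default_rng(2)
M = 10**4
P_I = np.full((5, 4), 0.25)              # speaker's matrix
P_J = np.full((5, 4), 0.25)              # hearer's matrix
supervised_update(P_I, P_J, n_spk=1, n_hea=3, h_star=0, M=M, rng=rng)  # wrong guess
assert np.allclose(P_I.sum(axis=1), 1.0) and np.allclose(P_J.sum(axis=1), 1.0)
```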
The weak point of this learning scheme is the need for nonlinguistic hints to communicate the success or failure of the communication episode. This implies that, prior to learning, the agents are already capable of communicating (and understanding) sophisticated meanings such as success and failure, and of behaving (by updating their verbalization matrices) accordingly. In fact, feedback about the outcome of the communication episode may be seen as a form of telepathic meaning transfer.
Results
=======
Simulation experiments of the two learning algorithms described above show, not surprisingly, that after a transient the two agents become identical, in the sense that they are described by the same verbalization matrix. In addition, in the case of unsupervised learning the stochastic dynamics always leads to binary verbalization matrices, i.e., matrices whose entries $p_{nh}$ can take on the values 1 or 0 only. Of course, once the dynamics produces a binary matrix it becomes frozen. This same outcome characterizes the supervised case as well, except when the lexicon size $H$ is of the same order as the context size $C$. However, as we focus on the regime where $C$ is finite and $N$ and $H$ are large, we can guarantee that the stochastic dynamics leads to binary verbalization matrices regardless of the learning procedure.
Once the dynamics becomes frozen (and so the learning stage is over) we measure the average communication error $\epsilon$ as follows. The speaker chooses object $n$ from the list of $N$ objects and emits the corresponding word (there is a unique word assigned to any given object, i.e., there is a single entry 1 in any row of the verbalization matrix). The hearer must then infer which object is named by that word. Since the same word can name many objects (i.e., there may be many entries 1 in a given column), the probability $\phi_n$ that the hearer’s guess is correct is simply the reciprocal of the number of objects named by that word. This probability is the communication accuracy regarding object $n$. The procedure is repeated for the $N$ objects, so the average communication error is defined as $\epsilon = 1 - \phi$ where $\phi = \sum_n \phi_n/N$ is the average communication accuracy of the algorithm.
As already pointed out, the normalization condition on the rows of the verbalization matrix $P$ allows for the possibility that a certain number of words are not used by the lexicon acquisition algorithms. Let $H_u \leq H$ stand for the actual number of words used by those algorithms. Then we can easily convince ourselves that $H_u = \sum_{n=1}^N \phi_n$ simply by noting that $\sum_n' \phi_n = 1$ when the sum is restricted to objects that are associated with the same word. Finally, we note that in the definitions of these communication measures the context plays no role at all; indeed, the context is relevant only during the learning stage.
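For a frozen binary verbalization matrix the measurement above reduces to a few lines (an illustrative implementation; the function name is ours):

```python
import numpy as np

def communication_error(P_binary):
    """Average communication error for a frozen binary verbalization matrix.

    phi_n is the reciprocal of the number of objects sharing object n's
    unique word, so the average accuracy is phi = H_u / N.
    """
    H = P_binary.shape[1]
    words = P_binary.argmax(axis=1)           # the single word per object
    counts = np.bincount(words, minlength=H)  # objects per word
    phi_n = 1.0 / counts[words]
    return 1.0 - phi_n.mean()

# Three objects on two words: phi = (1/2 + 1/2 + 1)/3 = 2/3, error = 1/3,
# and H_u = sum_n phi_n = 2 words in use.
P = np.array([[1, 0], [1, 0], [0, 1]], dtype=float)
assert abs(communication_error(P) - 1.0 / 3.0) < 1e-12
```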
It is important to estimate the optimal (minimum) communication error $\epsilon_m$ in our learning scenario since, in addition to being a lower bound to the communication error produced by the learning algorithms, it allows us to rate their absolute performances. For $H \leq N$ the optimal communication error is obtained by making a one-to-one assignment between $H-1$ words and $H-1$ objects, and then assigning the single remaining word to the remaining $N-H+1$ objects. This procedure yields $\epsilon_m = 1 - H/N = 1 - \alpha$. For $ H > N$ we can obtain $\epsilon_m =0$ simply by discarding $H-N$ words and making a one-to-one word-object assignment with the other $N$ words. In fact, using our finding that $\phi = H_u/N$ we see that, as expected, the optimal performance is obtained by setting $H_u = H$ if $H \leq N$ and $H_u = N$ if $H > N$.
Figure \[fig:1\] shows the comparison between the optimal performance and the actual performances of the two learning algorithms as a function of the ratio $\alpha = H/N$. In this, as well as in the other figures of this paper, each symbol stands for the average over $10^4$ independent samples or language games. The performance of the supervised algorithm deteriorates as the number of objects $N$ increases, in contrast to that of the unsupervised algorithm, which actually shows a slight improvement in this case. For $N \to \infty$, both algorithms produce the same communication error (see Fig. \[fig:2\]), which is shown by the solid line in Fig. \[fig:1\]. We note that a preliminary comparative analysis of these algorithms for $N=8$ led to an incorrect claim about the general superiority of the supervised learning scheme (Fontanari & Perlovsky, 2006). For small values of $\alpha$ the performances of the two learning algorithms are practically indistinguishable from the optimal performance, but as we will argue below the algorithms actually never achieve that performance, except for $\alpha=0$.
It is instructive to calculate the communication error in the case that the $N$ objects are assigned randomly to the $H$ words. This is a classical occupancy problem discussed at length in the celebrated book by Feller (1968). In this occupancy problem, the probability $P_m$ that exactly $m$ words are not used in the assignment of the $N$ objects to the $H$ words (i.e., $m = H - H_u$) is $$\label{m}
P_m = \left ( \begin{array}{c} H \\ m \end{array} \right )
\sum_{\nu =0}^{H-m} \left ( \begin{array}{c} H-m \\ \nu \end{array} \right ) \left ( -1 \right )^{\nu}
\left ( 1 - \frac{m + \nu}{H} \right )^N ,$$ which in the limits $N \to \infty $ and $H \to \infty$ reduces to the Poisson distribution $$\label{poisson}
p \left (m; \lambda \right ) = \mbox{e}^{-\lambda} \frac{\lambda^m}{m!}$$ where $\lambda = H \exp \left ( - N/H \right )$ remains bounded (Feller, 1968). Hence the average communication accuracy resulting from the random assignment of objects to words is simply $ \left ( H - \left \langle m \right \rangle \right )/N$, which yields the communication error $$\label{Er}
\epsilon_r = 1 - \alpha + \alpha \mbox{e}^{-1/\alpha} .$$ Surprisingly, this equation perfectly describes the communication error of the two learning algorithms in the limit $N \to \infty$ (solid line in Fig. \[fig:1\]). We note that the (small) discrepancy observed in Fig. \[fig:2\] between the extrapolated data of the unsupervised algorithm and the analytical prediction can be reduced to zero by decreasing the learning rate $1/M$. Equation (\[Er\]) also explains why the performances of the algorithms are practically indistinguishable from the optimal performance for small $\alpha$, since the difference between them vanishes as $\exp \left ( -1/\alpha \right )$. In addition, Eq. (\[Er\]) shows that in the limit of large $\alpha$, the communication error vanishes as $1/\alpha$.
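Equation (\[Er\]) is easy to check directly by simulating the random assignment of objects to words (a quick Monte Carlo sketch; the sample sizes below are arbitrary choices of ours):

```python
import numpy as np

# Random assignment of N objects to H words; compare the resulting average
# communication error with eps_r = 1 - alpha + alpha*exp(-1/alpha).
rng = np.random.default_rng(3)
N, H = 4000, 2000                        # alpha = H/N = 1/2
alpha = H / N
samples = 200
H_u = np.array([np.unique(rng.integers(H, size=N)).size
                for _ in range(samples)])   # words actually used, per sample
eps_sim = 1.0 - H_u.mean() / N           # since phi = H_u / N
eps_theory = 1.0 - alpha + alpha * np.exp(-1.0 / alpha)
assert abs(eps_sim - eps_theory) < 5e-3
```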
A word is in order about the effect of the context size $C$ on the performance of the two learning algorithms, since Figs. \[fig:1\] and \[fig:2\] exhibit the results for $C=2$ only. Simulations for larger values of $C$ show that this parameter is completely irrelevant for the performance of the supervised algorithm. Of course, this is expected since, regardless of the context size, at most two rows (object labels) of the verbalization matrices are updated. But the situation is far from obvious for the unsupervised algorithm since $C$ determines the number of rows to be updated in each round of the game. However, the results summarized in Fig. \[fig:3\] for $C=4$ indicate that, despite strong finite-size effects, particularly for small $\alpha$, the communication error ultimately tends to $\epsilon_r$ in the limit of large $N$.
Conclusion
==========
In this paper we have unveiled two remarkable results. First, the supervised and unsupervised schemes for bootstrapping a lexicon yield the same communication accuracy in the limit of very large lexicon sizes. For finite lexicon sizes the supervised scheme always outperforms the unsupervised one, but its performance degrades as the lexicon size increases, whereas the performance of the unsupervised learning algorithm improves slightly with increasing lexicon size (see Fig. \[fig:1\]). Second, those performances tend to the communication accuracy obtained in a random occupancy problem in which the $N$ objects are assigned randomly to the $H$ words. These findings reveal a surprising inefficiency of traditional lexicon bootstrapping scenarios when evaluated in the realistic regime of very large lexicon sizes. It would be most interesting to devise sensible scenarios that reproduce the optimal communication performance or, at least, that exhibit a communication error that decays faster than the random occupancy result, $1/\alpha=N/H$, when the number of available words is much greater than the number of objects ($H \gg N$).
The scenarios studied here are easily adapted to model the problem of lexicon acquisition (rather than bootstrapping): we need only assume that one of the agents, named the master in this case, knows the correct lexicon and so its verbalization matrix is kept fixed during the entire learning procedure; the verbalization matrix of the other agent – the pupil – is allowed to change following the update algorithms described before (see, e.g., Fontanari, Tikhanoff, Cangelosi, Ilin, & Perlovsky, 2009). Most interestingly, in this context, statistical word learning has been observed in controlled experiments involving infants (Smith & Yu, 2008) and adults (Yu & Smith, 2007). Similar experiments, but now aiming at bootstrapping a lexicon, could be easily carried out by replacing our virtual agents by two adults, who would then resort to some conscious or unconscious mechanism to track the co-occurrence of words and objects. Of course, the very emergence of pidgin - a means of communication between two or more groups which lack a common language (Thomason & Kaufman, 1988) - can be seen as a realization of such an experiment and serves as additional justification for the study of lexicon bootstrapping.
Acknowledgments {#acknowledgments .unnumbered}
===============
The research at São Carlos was supported in part by CNPq, FAPESP and SOARD grant FA9550-10-1-0006. J.F.F. thanks the hospitality of the Adaptive Behaviour & Cognition Research Group, University of Plymouth, where this research was initiated. The visit was supported by euCognition.org travel grant NA-097-6. Cangelosi also acknowledges the contribution of the ITALK project from the European Commission (FP7 ICT Cognitive Systems and Robotics).
References {#references .unnumbered}
==========
Baronchelli, A., Felici, M., Loreto, V., Caglioti, E., & Steels, L. (2006). Sharp transition towards shared vocabularies in multi-agent systems. [*Journal of Statistical Mechanics*]{}, P06014.
Bates, E., & Elman, J. (1996). Learning rediscovered. [*Science*]{}, 274, 1849-1850.
Bloom, P. (2000). [*How children learn the meaning of words*]{}. Cambridge, MA: MIT Press.
Brighton, H., Smith, K., & Kirby, S. (2005). Language as an evolutionary system. [*Physics of Life Reviews*]{}, 2, 177-226.
De Beule, J., De Vylder, B., & Belpaeme, T. (2006). A cross-situational learning algorithm for damping homonymy in the guessing game. In L.M. Rocha, M. Bedau, D. Floreano, R. Goldstone, A. Vespignani, & L. Yaeger (Eds.), [*Proceedings of the Xth Conference on Artificial Life*]{} (pp. 466-472). Cambridge, MA: MIT Press.
Cangelosi, A. (2001). Evolution of Communication and Language using Signals, Symbols and Words. [*IEEE Transactions on Evolutionary Computation*]{}, 5, 93-101.
Dawkins, R., & Krebs, J.R. (1978). Animal signals: information or manipulation? In: J.R. Krebs, & N. B. Davies (Eds.), [*Behavioural ecology: an evolutionary approach*]{} (pp. 282-309). Oxford, UK: Blackwell Scientific Publications.
Donald, M. (1991). [*Origins of the Modern Mind*]{}. Cambridge, MA: Harvard University Press.
Ewens, W.J. (2004). [*Mathematical Population Genetics*]{}. New York: Springer-Verlag.
Feller, W. (1968). [*An Introduction to Probability Theory and Its Applications*]{}. Vol. I, 3rd Edition. New York: Wiley.
Fontanari, J.F. (2006). Statistical analysis of discrimination games. [*European Physical Journal B*]{}, 54, 127-130.
Fontanari, J.F., & Perlovsky, L.I. (2006). Meaning creation and communication in a community of agents. In [*Proceedings of the 2006 International Joint Conference on Neural Networks*]{} (pp. 2892-2897). Piscataway, NJ: IEEE Press.
Fontanari, J.F., & Perlovsky, L.I. (2007). Evolving compositionality in evolutionary language games. [*IEEE Transactions on Evolutionary Computation*]{}, 11, 758-769.
Fontanari, J.F., & Perlovsky, L.I. (2008). A game theoretical approach to the evolution of structured communication codes. [*Theory in Biosciences*]{}, 127, 205-214.
Fontanari, J.F., Tikhanoff, V., Cangelosi, A., Ilin, R., & Perlovsky, L.I. (2009). Cross-situational learning of object-word mapping using Neural Modeling Fields. [*Neural Networks*]{}, 22, 579-585.
Gleitman, L. (1990). The structural sources of verb meanings. [*Language Acquisition*]{}, 1, 1-55.
Hurford, J.R. (1989). Biological evolution of the Saussurean sign as a component of the language acquisition device. [*Lingua*]{}, 77, 187-222.
Ke, J., Minett, J.W., Au, C.-P., & Wang, W.S.-Y. (2002). Self-organization and Selection in the Emergence of Vocabulary. [*Complexity*]{}, 7, 41-54.
Kirby, S. (2002). Natural language from artificial life. [*Artificial Life*]{}, 8, 185-215.
Lenaerts, T., Jansen, B., Tuyls, K., & De Vylder, B. (2005). The evolutionary language game: An orthogonal approach. [*Journal of Theoretical Biology*]{}, 235, 566-582.
Nowak, M.A., & Krakauer, D.C. (1999). The evolution of language. [*Proceedings of the National Academy of Sciences USA*]{}, 96, 8028-8033.
Oliphant, M., & Batali, J. (1997). Learning and the emergence of coordinated communication. [*Center for Research on Language Newsletter*]{}, 11.
Pinker, S. (1984). [*Language learnability and language development*]{}. Cambridge, MA: Harvard University Press.
Pinker, S., & Bloom, P. (1990). Natural language and natural selection. [*Behavioral and Brain Sciences*]{}, 13, 707-784.
Rosenthal, T., & Zimmerman, B. (1978). [*Social Learning and Cognition*]{}. New York: Academic Press.
Siskind, J.M. (1996). A computational study of cross-situational techniques for learning word-to-meaning mappings. [*Cognition*]{}, 61, 39-91.
Smith, A.D.M. (2003a). Semantic generalization and the inference of meaning. [*Lecture Notes in Artificial Intelligence*]{}, 2801, 499-506.
Smith, A.D.M. (2003b). Intelligent meaning creation in a clumpy world helps communication. [*Artificial Life*]{}, 9, 557-574.
Smith, K., Kirby, S., & Brighton, H. (2003). Iterated Learning: a framework for the emergence of language. [*Artificial Life*]{}, 9, 371-386.
Smith, K., Smith, A.D.M, Blythe, R.A., & Vogt, P. (2006). Cross-Situational Learning: A Mathematical Approach. [*Lecture Notes in Computer Science*]{}, 4211, 31-44.
Smith, L.B., & Yu, C. (2008). Infants rapidly learn word-referent mappings via cross-situational statistics. [*Cognition*]{}, 106, 1558-1568.
Steels, L. (1996). Perceptually grounded meaning creation. In M. Tokoro (Ed.), [*Proceedings of the Second International Conference on Multi-Agent Systems*]{} (pp. 338-344). Menlo Park, CA: AAAI Press.
Steels, L., & Kaplan, F. (1999). Situated grounded word semantics. In [*Proceedings of the Sixteenth International Joint Conference on Artificial Intelligence*]{} (pp. 862-867). San Francisco, CA: Morgan Kaufmann.
Steels, L. (2002). Grounding symbols through evolutionary language games. In A. Cangelosi, & D. Parisi (Eds.), [*Simulating the Evolution of Language*]{} (pp. 211-226). London: Springer-Verlag.
Steels, L. (2003). Evolving Grounded Communication for Robots. [*Trends in Cognitive Sciences*]{}, 7, 308-312.
Thomason, S.G., & Kaufman, T. (1988). [*Language contact, creolization, and genetic linguistics*]{}. Berkeley: University of California Press.
Yu, C., & Smith, L.B. (2007). Rapid word learning under uncertainty via cross-situational statistics. [*Psychological Science*]{}, 18, 414-420.
---
abstract: 'We present a general procedure to solve numerically the general relativistic magnetohydrodynamics (GRMHD) equations within the framework of the $3+1$ formalism. The work reported here extends our previous investigation in general relativistic hydrodynamics [@banyuls:97] where magnetic fields were not considered. The GRMHD equations are written in conservative form to exploit their hyperbolic character in the solution procedure. All theoretical ingredients necessary to build up high-resolution shock-capturing schemes based on the solution of local Riemann problems (i.e. Godunov-type schemes) are described. In particular, we use a renormalized set of regular eigenvectors of the flux Jacobians of the relativistic magnetohydrodynamics equations. In addition, the paper describes a procedure based on the equivalence principle of general relativity that allows the use of Riemann solvers designed for special relativistic magnetohydrodynamics in GRMHD. Our formulation and numerical methodology are assessed by performing various test simulations recently considered by different authors. These include magnetized shock tubes, spherical accretion onto a Schwarzschild black hole, equatorial accretion onto a Kerr black hole, and magnetized thick accretion disks around a black hole prone to the magnetorotational instability.'
author:
- |
Luis Antón, Olindo Zanotti, Juan A. Miralles, José M$^{\underline{\mbox{a}}}$ Martí,\
José M$^{\underline{\mbox{a}}}$ Ibáñez, José A. Font, José A. Pons
bibliography:
- 'ms.bib'
title: 'Numerical $3+1$ General Relativistic Magnetohydrodynamics: A local characteristic approach '
---
Introduction {#intro}
============
In several astrophysical scenarios both magnetic and gravitational fields play an important role in determining the evolution of the matter. A common feature of these scenarios is the presence of compact objects such as neutron stars, most of which have intense magnetic fields of order $10^{12}-10^{13}$G, or even larger at birth, $\sim
10^{14}-10^{15}$G, as inferred from studies of anomalous X-ray pulsars and soft gamma-ray repeaters [@kouveliotou]. In some cases, i.e. in the so-called magnetars, the magnetic fields can be so strong as to affect the internal structure of the star [@bocquet]. In a different context, the most promising mechanisms for producing relativistic jets like those observed in AGNs and microquasars, and the ones conjectured to explain gamma-ray bursts, involve the hydromagnetic centrifugal acceleration of material from an accretion disk, or the extraction of rotational energy from the ergosphere of a Kerr black hole [@penrose; @blandford:77; @blandford:82]. In addition, the differential rotation of the magnetized plasma in the disk is responsible for the magnetorotational instability, which plays an important role in transporting angular momentum outward [@balbus].
If the gravitational field is strong enough, as in the vicinity of a compact object, the Newtonian description of gravity is only a rough approximation and general relativity becomes necessary. In such a theory, the so-called 3+1 formalism [@ADM] has proved particularly useful for numerical simulations involving time-dependent computations of hydrodynamical flows in curved spacetimes, either static or dynamic. The interested reader is referred to @fontlr and references therein for an up-to-date overview of the different approaches that have been introduced over the years for solving the general relativistic hydrodynamics equations.
On the other hand, the inclusion of magnetic fields and the development of mathematical formulations of the magnetohydrodynamic (MHD) equations in a form suitable for efficient numerical implementations is still in an exploratory phase, although considerable progress has already been achieved in the last few years.
Numerical studies in special relativistic magnetohydrodynamics (SRMHD) have been undertaken by a growing number of authors [@komissarov99; @balsara01; @koldoba; @delzanna; @tobias]. In particular, @komissarov99, @balsara01, and @koldoba developed independent [*upwind*]{} high-resolution shock-capturing (HRSC) schemes (also referred to as Godunov-type schemes), providing the characteristic information of the corresponding system of equations, which is the crucial building block in such type of schemes. In addition, @komissarov99 and @balsara01 proposed a comprehensive sample of tests to validate numerical MHD codes in special relativity (SR). Recently, @delzanna have developed a third order shock-capturing [*central*]{} scheme for SRMHD which sidesteps the use of Riemann solvers in the solution procedure (see, e.g. @toro:97 for general definitions on HRSC schemes). Simulations of the morphology and dynamics of magnetized relativistic jets with Godunov-type schemes have been reported by @tobias. In addition, the exact solution of the Riemann problem in SRMHD, for some particular orientation of the magnetic field and the fluid velocity field, has been obtained by @romero.
Correspondingly, 3+1 representations of GRMHD were first analyzed by @sloan85, @evans88, @zhang, @yokosawa93, and, more recently, by @koide98, @devilliers1, @baumgarte1, @gammie:03, @komissarov05, @duez05 and @shibata:05. Most of the existing applications to date are in the field of black hole accretion and jet formation. In [@yokosawa93; @yokosawa95] the transport of energy and angular momentum in magneto-hydrodynamical accretion onto a rotating black hole was studied adopting Wilson’s formulation for the hydrodynamic equations [@wilson79], conveniently modified to account for the magnetic terms. The magnetic induction equation was solved using the constrained transport method of [@evans88]. Later on, Koide and coworkers performed the first MHD simulations of jet formation in general relativity [@koide98; @koide00] in the context of the Blandford-Payne mechanism. These authors solved the MHD equations in the test-fluid approximation (in the background geometry of Schwarzschild/Kerr spacetimes) using a second-order finite difference central scheme with nonlinear dissipation. Employing the same numerical approach @koide02a and @koide03 studied the validity of the so-called MHD Penrose process to extract rotational energy from a Kerr black hole by simulating the evolution of a rarefied plasma with a uniform magnetic field. @komissarov05 has also recently investigated this topic finding evidence in favour of the extraction of rotational energy of the black hole by the Blandford-Znajek mechanism [@blandford:77] but against the development of strong relativistic outflows or jets. The long term solution found by @komissarov05 shows properties which are significantly different from those of the short initial (transient) phase studied by @koide03. 
An additional astrophysical application in the context of electromagnetic extraction of energy from a Kerr black hole is represented by the analysis of [@mckinney:04], who have compared the analytic prediction of [@blandford:77] with time evolution calculations. Finally, two different groups [@devilliers1; @devilliers:03; @gammie:03] have started programs to investigate the time-varying behaviour of magnetized accretion flows onto Kerr black holes, with great emphasis on the issue of the development of the magnetorotational instability in thick accretion disks (see also @yokosawa05). While [@devilliers1; @devilliers:03] adopt a nonconservative (ZEUS-like) scheme, the approach followed by [@gammie:03] is based on a conservative HRSC scheme, namely the so-called HLL scheme of @harten:83.
In light of the existing literature on the subject, it is clear that astrophysical applications of Godunov-type schemes in general relativistic MHD have only very recently been reported [@gammie:03; @komissarov05; @duez05]. Our goal in this paper is to present the evolution equations for the magnetic field and for the fluid within the 3+1 formalism, formulated in a suitable way to apply Godunov-type schemes based on (approximate) Riemann solvers. Our numerical procedure uses two original ingredients. On the one hand, the code incorporates a local coordinate transformation to Minkowskian coordinates, similar to the one developed for relativistic hydrodynamics in @pons:98, prior to the computation of the numerical fluxes. In this way, Riemann solvers designed for SRMHD can be straightforwardly used in GRMHD calculations. We note that @komissarov05 applies the same approach, using a HRSC scheme based on the SR Riemann solver described in @komissarov99 and adapted to general relativity following the procedure laid out in @pons:98. We present here, however, a number of tests assessing the feasibility of the approach. As a second novel ingredient, we use a [*renormalized*]{} set of right and left eigenvectors of the flux vector Jacobians of the SRMHD equations, which are regular and span a complete basis in any physical state, including degenerate states.
The organization of the paper is as follows. We start by introducing the mathematical framework in §2, including the essentials of the 3+1 formalism, the description of the magnetic field, the induction equation and the conservation equations of particle number, and stress-energy tensor in conservative form. A brief analysis of the hyperbolic structure of the GRMHD system of equations is given in §3. The numerical procedure to solve the equations is described in §4. Finally, in §5 we present the results of some numerical tests and applications in order to assess our formulation and methodology. The summary of our work is given in §6. Throughout the paper Latin indices run from 1 to 3 and Greek indices from 0 to 3. Four-vectors are indistinctly denoted using index notation or boldface letters, e.g. $u^{\mu}$, ${\bf u}$. We adopt geometrized units by setting $c=G=1$.
Mathematical framework {#I}
======================
The Eulerian observer in the 3+1 formalism
------------------------------------------
In the 3+1 formalism the line element of the spacetime can be written as $$ds^{2} = -(\alpha^{2}-\beta_{i}\beta^{i}) dt^{2}+
2 \beta_{i} dx^{i} dt + \gamma_{ij} dx^{i}dx^{j},$$ where $\alpha$ (lapse function), $\beta^i$ (shift vector) and $\gamma_{ij}$ (spatial metric) are functions of the coordinates $t$, $x^i$. A natural observer associated with the 3+1 splitting is the one with four-velocity ${\bf n}$ perpendicular to the hypersurfaces of constant $t$ at each event in the spacetime. This is the so-called [*Eulerian observer*]{}[^1]. The contravariant and covariant components of ${\bf n}$ are given by $$n^\mu=\frac{1}{\alpha}(1,-\beta^i),$$ and $$n_\mu=(-\alpha,0,0,0),$$ respectively. In spacetimes containing matter an additional natural observer is the one that follows the fluid during its motion, also called the [*comoving observer*]{}, with four-velocity ${\bf u}$. With the standard definition, the three-velocity of the fluid as measured by the Eulerian observer can be expressed as $$\label{3vel}
v^i\equiv \frac{h^i_\mu u^\mu}{-{\bf u}\cdot {\bf n}},$$ where $-{\bf u}\cdot {\bf n}\equiv W$ is the relative Lorentz factor between ${\bf u}$ and ${\bf n}$, while $h_{\mu\nu}=g_{\mu\nu} +
n_\mu n_\nu$ is the projector onto the hypersurface orthogonal to ${\bf n}$, whose spatial terms are given by $h_{ij}=\gamma_{ij}$. From Eq. (\[3vel\]) it follows that $$v^i=\frac{u^i}{\alpha u^t}+\frac{\beta^i}{\alpha} \ ,$$ while $v_i=u_i/W$. Note that the Lorentz factor satisfies the relation $W=1/\sqrt{(1-v^2)}=\alpha u^t$, where $v^2=\gamma_{ij}v^iv^j$ is the squared modulus of the three-velocity of the fluid with respect to the Eulerian observer.
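The kinematic relations above are easy to verify numerically (a consistency check of ours, with arbitrary made-up values of the lapse, shift, spatial metric and three-velocity): setting $u^t = W/\alpha$ and $u^i = u^t(\alpha v^i - \beta^i)$ should give $g_{\mu\nu}u^\mu u^\nu = -1$.

```python
import numpy as np

# Arbitrary (made-up) 3+1 data for a single spacetime point.
alpha = 0.8
beta_up = np.array([0.1, -0.05, 0.02])       # shift beta^i
gamma = np.diag([1.2, 1.1, 1.0])             # spatial metric gamma_ij
beta_dn = gamma @ beta_up                    # beta_i = gamma_ij beta^j

v = np.array([0.3, 0.2, -0.1])               # three-velocity v^i
v2 = v @ gamma @ v                           # v^2 = gamma_ij v^i v^j
W = 1.0 / np.sqrt(1.0 - v2)                  # Lorentz factor

u_t = W / alpha                              # from W = alpha u^t
u_sp = u_t * (alpha * v - beta_up)           # inverting v^i = u^i/(alpha u^t) + beta^i/alpha

# Assemble the 4-metric of the 3+1 line element quoted above.
g = np.zeros((4, 4))
g[0, 0] = -(alpha**2 - beta_dn @ beta_up)
g[0, 1:] = g[1:, 0] = beta_dn
g[1:, 1:] = gamma
u = np.concatenate(([u_t], u_sp))
assert abs(u @ g @ u + 1.0) < 1e-12          # four-velocity normalization
```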
Magnetic field evolution
------------------------
A complete description of the electromagnetic field in general relativity is provided by the Faraday electromagnetic tensor field $F^{\mu\nu}$. This tensor is related to the electric and magnetic field, $E^\mu$ and $B^\mu$, measured by a generic observer with four-velocity $U^\mu$, as follows, $$F^{\mu\nu}=U^\mu E^\nu- U^\nu E^\mu -
\eta^{\mu\nu\lambda\delta} U_\lambda B_\delta,
\label{eq:faraday}$$ $\eta^{\mu\nu\lambda\delta}$ being the volume element, $$\eta^{\mu\nu\lambda\delta}=\frac{1}{\sqrt{-g}}
[\mu\nu\lambda\delta],$$ where $g$ is the determinant of the 4-metric ($g=\det{g_{\mu\nu}}$) and $[\mu\nu\lambda\delta]$ is the completely antisymmetric Levi-Civita symbol. Both, ${\bf E}$ and ${\bf B}$ are orthogonal to ${\bf U}$, ${\bf E}\cdot {\bf U}={\bf B}\cdot{\bf U}=0$. The dual of the electromagnetic tensor $^*F^{\mu\nu}$ is defined as $$^*F^{\mu\nu}=\frac{1}{2}\eta^{\mu\nu\lambda\delta}
F_{\lambda\delta},$$ and in terms of the electric and magnetic field measured by the observer ${\bf U}$ is given by $$^*F^{\mu\nu}=U^\mu B^\nu- U^\nu B^\mu +
\eta^{\mu\nu\lambda\delta} U_\lambda E_\delta.$$ From these equations, ${\bf E}$ and ${\bf B}$ can be expressed in terms of the electromagnetic tensor and the four-velocity ${\bf U}$ as follows $$\begin{aligned}
\label{emu}
E^\mu&=&F^{\mu\nu}U_\nu, \\
\label{bmu}
B^\mu&=&^*F^{\mu\nu}U_\nu.\end{aligned}$$ In terms of the electromagnetic tensor, Maxwell’s equations are written as follows, $$\begin{aligned}
\label{max_1}
\nabla_\nu\, ^*F^{\mu\nu}&=&0, \\
\label{max_2}
\nabla_\nu F^{\mu\nu}&=&4\pi{\cal J}^\mu,\end{aligned}$$ where $\nabla_\nu$ stands for the covariant derivative and ${\cal
J}^\mu$ is the electric four-current. According to Ohm’s law, the latter can be in general expressed as $${\cal J}^\mu=\rho_q u^\mu + \sigma F^{\mu\nu}u_\nu,$$ where $\rho_q$ is the proper charge density measured by the comoving observer and $\sigma$ is the electric conductivity. Maxwell’s equations can be further simplified if one assumes that the fluid is a perfect conductor. In this case the fluid has infinite conductivity and, in order to keep the current finite, the term proportional to the conduction current, $F^{\mu\nu}u_\nu$, must vanish, which means that the electric field measured by the comoving observer is zero. This case corresponds to the so-called ideal MHD condition. We can take advantage of this condition to express the electric field measured by the observer ${\bf U}$ as a function of the magnetic field ${\bf B}$ measured by the same observer and of the four-velocities $U^\mu$ and $u^\mu$. Straightforward calculations give $$E^\mu=\frac{1}{W}\eta^{\mu\nu\lambda\delta}u_\nu U_\lambda B_\delta.
\label{EbyU}$$ If we choose ${\bf U}$ as the four-velocity of the Eulerian observer, ${\bf U}={\bf n}$, Eq. (\[EbyU\]) provides $$\begin{aligned}
E^0&=&0, \\
E^i&=&-\alpha \eta^{0ijk} v_j B_k,\end{aligned}$$ or, in terms of three-vectors, $\vec{E}=- \vec{v}\times\vec{B}$, where the arrow means that the vector lies in the ‘absolute space’ and the cross product is defined using the induced volume element in the absolute space $\eta^{ijk}= \alpha \eta^{0ijk}$. Using the above relations, the dual of the electromagnetic field can be written in terms of the magnetic field only $$^*F^{\mu\nu}=\frac{u^\mu B^\nu-u^\nu B^\mu}{W},$$ and Maxwell’s equations $\nabla_\nu ^*F^{\mu\nu}=0$ reduce to the divergence-free condition plus the induction equation for the evolution of the magnetic field $$\begin{aligned}
\frac{\partial (\sqrt{\gamma} B^i)}{\partial x^i} &=&0,
\label{divfree} \\
\frac{1}{\sqrt{\gamma}}\frac{\partial}{\partial t} (\sqrt{\gamma}
B^i)&=&\frac{1}{\sqrt{\gamma}} \frac{\partial}{\partial
x^j}\{\sqrt{\gamma} [(\alpha v^i-\beta^i)B^j \nonumber \\
&&-(\alpha
v^j-\beta^j)B^i]\},
\label{eq:evB}\end{aligned}$$ or, in terms of three-vectors, $$\begin{aligned}
\vec{\nabla}\cdot\vec{B}&=&0 \\
\frac{1}{\sqrt{\gamma}}\frac{\partial
}{\partial t} \left(\sqrt{\gamma}\vec{B}\right)& =
&\vec{\nabla}\times\left[\left(\alpha\vec{v}-
\vec{\beta}\right)\times\vec{B}\right].
\label{evolB}\end{aligned}$$
Conservation Equations {#conservation}
----------------------
Once we have established the magnetic field evolution equation in the ideal MHD case, we need to obtain the evolution equations for the matter fields. These equations can be expressed as the local conservation laws of baryon number and energy-momentum. For the baryon number we have $$\nabla_\nu J^\nu=0,
\label{eq:evrho}$$ ${\bf J}$ being the rest-mass current, $J^\mu=\rho u^\mu$, where $\rho$ denotes the rest-mass density. The conservation of the energy-momentum is given by $$\nabla_\nu T^{\mu\nu}=0,
\label{eq:evtmunu}$$ where $T^{\mu\nu}$ is the energy-momentum tensor. For a fluid endowed with a magnetic field, this tensor is obtained by adding the energy-momentum tensor of the fluid to that of the electromagnetic field: $$T^{\mu\nu}=T_{\rm Fluid}^{\mu\nu}+T_{\rm EM}^{\mu\nu} \ .$$ When the fluid is assumed to be perfect, $T_{\rm Fluid}^{\mu\nu}$ is given by $$T_{\rm Fluid}^{\mu\nu}=\rho h u^\mu u^\nu + p g^{\mu\nu},$$ where $g_{\mu\nu}$ is the metric, $p$ is the pressure, and $h$ is the specific enthalpy, defined by $h=1+\varepsilon +p/\rho$, $\varepsilon$ being the specific internal energy. The fluid is further assumed to be in local thermodynamic equilibrium, and there exists an equation of state of the form $p=p(\rho,\varepsilon)$ which relates the pressure with $\rho$ and $\varepsilon$. On the other hand, the energy-momentum tensor $T_{\rm EM}^{\mu\nu}$ of the electromagnetic field can be obtained from the electromagnetic tensor, ${\bf F}$, as follows $$\label{T_em1}
T_{\rm EM}^{\mu\nu}=\frac{1}{4\pi}\left(F^{\mu\lambda}F^\nu_{\hspace{0.2cm}\lambda} -
\frac{1}{4}g^{\mu\nu} F^{\lambda\delta}F_{\lambda\delta}\right).$$ Furthermore, from Eq. (\[eq:faraday\]) and exploiting the ideal MHD condition, the electromagnetic tensor can be expressed in terms of the magnetic field $b^\mu$ measured by the comoving observer as $$F^{\mu\nu} = -\eta^{\mu\nu\lambda\delta}u_\lambda b_\delta,$$ and Eq. (\[T\_em1\]) can be rewritten as $$T_{\rm EM}^{\mu\nu}=\left(u^\mu u^\nu+\frac{1}{2}
g^{\mu\nu}\right)b^2 - b^\mu b^\nu,$$ where $b^2=b^\nu b_\nu$ and where the magnetic field four vector has been redefined by dividing it by the factor $\sqrt{4\pi}$. As a result, the total energy-momentum tensor, fluid plus electromagnetic field, is given by $$T^{\mu\nu}=\rho h^* u^\mu
u^\nu+p^* g^{\mu\nu}- b^\mu b^\nu,$$ where we have introduced the definitions $p^*=p+b^2/2$ and $h^*=h+b^2/\rho$. Note that if we consistently define $\varepsilon^*=\varepsilon+b^2/(2\rho)$, the following relation, $h^*=1+\varepsilon^*+p^*/\rho$, is fulfilled.
In order to write the evolution equations (\[eq:evrho\]), (\[eq:evtmunu\]) in a conservation form suitable for numerical applications, let us define a basis adapted to the Eulerian observer, $${\bf e}_{(\lambda)}=\{{\bf n},\partial_i\},$$ where $\partial_i$ are the coordinate vectors that are tangent to the hypersurface $t$=const, and, therefore, ${\bf n}\cdot
\partial_i=0$. This allows us to define the following five 4-vectors ${\cal D}_{(A)}$: $${\cal D}_{(A)}=\{{\bf T}({\bf e}_{(\lambda)}, \cdot),{\bf J}\},\hspace {1 cm}
A=0,\dots,4.$$ Hence the above system of equations (\[eq:evrho\]), (\[eq:evtmunu\]) can be written as $$\nabla_\nu{\cal D}_{(A)}^\nu=s_{(A)},$$ where the five quantities $s_{(A)}$ on the right-hand side ([*the sources*]{}) are $$s_{(A)}=\{T^{\mu\nu}\nabla_\mu e_{(\lambda)\nu},0\}
\ .$$ The covariant derivatives of the basis vectors, $\nabla_\mu e_{(\lambda)\nu}$, are obtained in the usual manner as $$\nabla_\mu e_{(\lambda)\nu}=\frac{\partial e_{(\lambda)\nu}}{\partial x^\mu}-
\Gamma^\delta_{\nu\mu}e_{(\lambda)\delta},$$ where $\Gamma^\delta_{\nu\mu}$ are the Christoffel symbols, and $$e_{(0)\nu}=-\alpha \delta_{0\nu}, \hspace{0.5 cm} e_{(k)\nu}=g_{k\nu}=
(\beta_k,\gamma_{kj}).$$ In a similar way to the pure hydrodynamics case [@banyuls:97], if we now define the following quantities measured by an Eulerian observer, $$\begin{aligned}
\label{conv_1}
D&\equiv& - J_\nu n^\nu=\rho W \\
\label{conv_2}
S_j &\equiv& - {\bf T}({\bf n},{\bf
e}_{(j)})=\rho h^* W^2 v_j - \alpha b^0 b_j \\
\label{conv_3}
\tau&\equiv& {\bf T}({\bf n}, {\bf n})=\rho h^* W^2-p^* - \alpha^2(b^0)^2 - D\end{aligned}$$ i.e. the rest-mass density, the momentum density of the magnetized fluid in the $j$-direction, and its total energy density (subtracting the rest-mass density in order to consistently recover the Newtonian limit), respectively, the system of GRMHD equations can be written explicitly in conservative form. Together with the equation for the evolution of the magnetic field as measured by the Eulerian observer, Eq. (\[eq:evB\]), the fundamental GRMHD system of equations can be written in the following general form $$\frac{1}{\sqrt{-g}} \left(
\frac {\partial \sqrt{\gamma}{\bf F}^{0}}
{\partial x^{0}} +
\frac {\partial \sqrt{-g}{\bf F}^{i}}
{\partial x^{i}} \right)
= {\bf S},
\label{eq:fundsystem}$$ where the quantities ${\bf F}^{\mu}$ (${\bf F}^0$ being the state vector and ${\bf F}^i$ being the fluxes) are $$\begin{aligned}
{\bf F}^0 & =& \left[\begin{array}{c}
D \\
S_j \\
\tau \\
B^k
\end{array}\right],
\label{state_vector}\end{aligned}$$ $$\begin{aligned}
{\mathbf F}^i & =& \left[\begin{array}{c}
D \tilde{v}^i\\
S_j \tilde{v}^i + p^{*} \delta^i_j - b_j B^i/W \\
\tau \tilde{v}^i + p^{*} v^i - \alpha b^0 B^i/W \\
\tilde{v}^i B^k-\tilde{v}^k B^i
\end{array}\right]
\label{flux2}\end{aligned}$$ with $\tilde{v}^i=v^{i}-\frac{\beta^i}{\alpha}$. The corresponding sources ${\bf S}$ are given by $$\begin{aligned}
{\mathbf S} & =& \left[\begin{array}{c}
0 \\
T^{\mu \nu} \left(
\frac {\partial g_{\nu j}}{\partial x^{\mu}} -
\Gamma^{\delta}_{\nu \mu} g_{\delta j} \right) \\
\alpha \left(T^{\mu 0} \frac {\partial {\rm ln} \alpha}{\partial x^{\mu}} -
T^{\mu \nu} \Gamma^0_{\nu \mu} \right) \\
0^k
\end{array}\right],\end{aligned}$$ where $0^k \equiv(0,0,0)^T$. Note that the following fundamental relations hold between the four components of the magnetic field in the comoving frame, $b^\mu$, and the three vector components $B^i$ measured by the Eulerian observer: $$\begin{aligned}
\label{b0}
b^0 &=& \frac{WB^iv_i}{\alpha} \\
\label{bi}
b^i &=& \frac{B^i + \alpha b^0 u^i}{W} \ .\end{aligned}$$ Finally, the modulus of the magnetic field can be written as $$b^2 = \frac{B^2 + \alpha^2 (b^0)^2}{W^2} \ ,$$ where $B^2 = B^iB_i$.
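In the special relativistic limit ($\alpha=1$, $\beta^i=0$, flat 3-metric) these relations are easy to exercise numerically. The following sketch (function names are ours and purely illustrative) builds $b^\mu$ from $B^i$ and $v^i$ and checks both the orthogonality $u^\mu b_\mu=0$ and the closed-form expression for $b^2$:

```python
import numpy as np

def comoving_b(B, v):
    """Comoving-frame magnetic field b^mu from the Eulerian B^i and v^i,
    in the special relativistic limit (alpha = 1, beta^i = 0, flat 3-metric).
    Implements Eqs. (b0)-(bi): b^0 = W B.v, b^i = (B^i + b^0 u^i)/W."""
    W = 1.0 / np.sqrt(1.0 - np.dot(v, v))   # Lorentz factor
    u = W * v                               # spatial four-velocity components u^i
    b0 = W * np.dot(B, v)
    bi = (B + b0 * u) / W
    return b0, bi

# Consistency check: the Minkowski contraction b^mu b_mu must reproduce
# the closed-form expression b^2 = (B^2 + (b^0)^2) / W^2 quoted in the text.
v = np.array([0.3, -0.1, 0.2])
B = np.array([1.0, 0.5, -0.7])
b0, bi = comoving_b(B, v)
W = 1.0 / np.sqrt(1.0 - np.dot(v, v))
b2_contraction = np.dot(bi, bi) - b0**2        # b^mu b_mu in Minkowski space
b2_formula = (np.dot(B, B) + b0**2) / W**2     # formula from the text
```

Both evaluations of $b^2$ agree to machine precision, and $u^\mu b_\mu$ vanishes identically, as required for the comoving field.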
Hyperbolic structure {#II}
====================
In Section \[conservation\] we have written the GRMHD equations in conservative form anticipating the use of numerical methods specifically designed to solve conservation equations, as will be explained in the next Section. These methods strongly rely on the hyperbolic character of the equations and on the associated wave structure. Following @anile, in order to analyze the hyperbolicity of the equations it is convenient to write them in a more suitable form. If we take the following set of variables, ${\bf V} = (u^\mu,b^\mu,p,s)$, where $s$ is the specific entropy, the system of equations can be written as a quasi-linear system of the form $${\cal A}^{\mu A}_B\nabla_\mu V^B=0,
\label{amuab}$$ where the indices $A$ and $B$ run from 0 to 9, matching the number of variables, and the $10\times 10$ matrices ${\cal A}^{\mu}$ are given by $$\begin{aligned}
{\cal A}^{\mu} = \left( \begin{array}{cccc} {\cal C}
u^\mu \delta^{\alpha}_{ \beta}\; & -b^{\mu}\delta^{\alpha}_{\beta} +
P^{\alpha\mu}b_\beta & l^{\alpha\mu} & 0^{\alpha\mu} \\
b^\mu \delta^{\alpha}_{ \beta} & -u^\mu\delta^{\alpha}_{\beta} &
f^{\mu\alpha} & 0^{\alpha\mu} \\
\rho h \delta^{\mu}_\beta & 0^{\mu}_\beta & u^\mu/c_s^2 & 0^\mu \\
0^{\mu}_\beta & 0^{\mu}_\beta & 0^\mu & u^\mu \\
\end{array} \right)
\label{amu}\end{aligned}$$ where $c_s$ stands for the speed of sound $$\begin{aligned}
c_s^2=\left(\frac{\partial p}{\partial e}\right)_s,\end{aligned}$$ $e$ being the mass-energy density of the fluid $e=\rho(1+\varepsilon)$. In Eq. (\[amu\]) the following definitions are introduced: $$\begin{aligned}
{\cal C}&=&\rho h + b^2,
\\
P^{\alpha\mu}&=&g^{\alpha\mu}+2 u^\alpha u^\mu,
\\
l^{\mu\alpha}&=&(\rho h g^{\mu\alpha}+(\rho h -b^2/c_s^2) u^\mu u^\alpha)/
\rho h, \\
f^{\mu\alpha}&=&(u^\alpha b^\mu/c_s^2- u^\mu b^\alpha)/\rho h,\end{aligned}$$ as well as the notation $$0^\mu \equiv 0, \,\,\,\, 0^{\alpha \mu} \equiv (0,0,0,0)^{\rm T}, \,\,\,\,
0^\mu_\beta \equiv (0,0,0,0).$$ If $\phi(x^\mu)=0$ defines a characteristic hypersurface of the above system (\[amuab\]), the characteristic matrix, given by ${\cal A}^{\epsilon}\phi_{\epsilon}$, can be written as $$\begin{aligned}
\label{ch_matrix}
{\cal A}^{\epsilon} \phi_{\epsilon} = \left( \begin{array}{cccc}
{\cal C} a \delta^{\mu}_{ \nu} & m^{\mu}_{\nu} & l^{\mu} & 0^{\mu} \\
\mathcal{B} \delta^{\mu}_{ \nu} & -a \delta^{\mu}_{\nu} & f^{\mu} & 0^{\mu} \\
\rho h \phi_{\nu} & 0_{\nu} & a/c_s^2 & 0 \\
0_{\nu} & 0_{\nu} & 0 & a \\
\end{array}\right)\end{aligned}$$ where $ \phi_\mu = \nabla_\mu \phi$, $a = u^{\mu} \phi_{\mu}$, $\mathcal{B}= b^{\mu} \phi_{\mu}$, $l^{\mu}= l^{\mu\nu}
\phi_\nu=\phi^\mu+(\rho h - b^2/c_s^2) a u^\mu /\rho h+
\mathcal{B} b^\mu/\rho h$, $f^{\mu}= f^{\mu\nu}\phi_\nu
=(a b^\mu/c_s^2-\mathcal{B} u^\mu)/\rho h$, and $m^\mu_\nu=(\phi^\mu+2au^\mu)b_\nu-\mathcal{B}\delta^\mu_\nu$. The determinant of the matrix (\[ch\_matrix\]) must vanish, i.e. $${\rm det}({\cal A}^{\mu} \phi_{\mu})={\cal C}\,a^2
\mathcal{A}^2 {\cal N}_4 = 0 \ ,
\label{eq:det}$$ where $$\begin{aligned}
{\cal A} &=& {\cal C} a^2 -\mathcal{B}^2, \\
\label{N4}
{\cal N}_4 &=& \rho h \left( \frac{1}{c_s^2} -1 \right) a^4 -
\left(\rho h +\frac{b^2}{c_s^2} \right) a^2 G
+\mathcal{B}^2 G \ ,\end{aligned}$$ and $G = \phi^{\mu}\phi_{\mu}$. If we now consider a wave propagating in an arbitrary direction $x$ with a speed $\lambda$, the normal to the characteristic hypersurface is given by the four-vector $$\label{ppp}
\phi_\mu=(-\lambda,1,0,0),$$ and by substituting Eq. (\[ppp\]) in Eq. (\[eq:det\]) we obtain the so-called [*characteristic polynomial*]{}, whose zeroes give the characteristic speeds of the waves propagating in the $x$-direction. Three different kinds of waves can be obtained according to which factor in equation (\[eq:det\]) becomes zero. For entropic waves $a=0$, for Alfvén waves $\mathcal{A}=0$, and for magnetosonic waves ${\cal N}_4=0$.
Let us next analyze in more detail the characteristic equation. First of all, since the four-vector $\phi_\mu$ must be spacelike (this is a property of the RMHD system of equations [@anile]), it follows that $\phi^\mu\phi_\mu>0$. In terms of the wave speed $\lambda$ we obtain $$-\alpha\sqrt{\gamma^{xx}}-\beta^x< \lambda <
\alpha \sqrt{\gamma^{xx}} - \beta^x.$$ The characteristic speed $\lambda$ of the entropic waves propagating in the $x$-direction, given by the solution of the equation $a=0$, is the following $$\lambda=\alpha v^x - \beta^x.$$ For Alfvén waves, given by $\mathcal{A}=0$, there are two solutions corresponding, in general, to different speeds of the waves, $$\lambda=\frac{b^x\pm\sqrt{{\cal C}}u^x}
{b^0\pm\sqrt{{\cal C}}u^t}.$$ In the case of magnetosonic waves it is however not possible, in general, to obtain explicit expressions for their speeds since they are given by the solutions of the quartic equation ${\cal N}_4=0$ with $a$, ${\cal B}$ and $G$ explicitly written in terms of $\lambda$ as $$\begin{aligned}
a &=& \frac{W}{\alpha}(-\lambda+\alpha v^x-\beta^x), \\
{\cal B} &=& b^x - b^0\lambda, \\
G &=& \frac{1}{\alpha^2}(-(\lambda+\beta^x)^2+\alpha^2 \gamma^{xx}).\end{aligned}$$ Let us note that in the previous discussion about the roots of the characteristic polynomial we have omitted the fact that the entropy waves as well as the Alfvén waves appear as double roots. These superfluous eigenvalues appear associated with unphysical waves and are the result of working with the unconstrained, $10 \times 10$ system of equations. We note that @vanputten91 derived a different augmented system of RMHD equations in constraint-free form with different nonphysical waves. Any attempt to develop a numerical procedure based on the wave structure of the RMHD equations must remove these nonphysical waves (and the corresponding eigenvectors) from the wave decomposition. In the case of SRMHD @komissarov99 and @koldoba eliminate the nonphysical eigenvectors by demanding that the waves preserve the values of the invariants $u^\mu u_\mu = -1$ and $u^\mu b_\mu = 0$ as suggested by @anile. Correspondingly, @balsara01 selects the physical eigenvectors by comparing with the equivalent expressions in the nonrelativistic limit.
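The quartic ${\cal N}_4=0$ itself is straightforward to assemble and solve numerically for waves along $x$, since $a$, ${\cal B}$ and $G$ are polynomials in $\lambda$ of degree 1, 1 and 2. A minimal sketch (our names, not the code's actual implementation): for an unmagnetized fluid at rest in flat spacetime the quartic factorizes as $a^2[(1/c_s^2-1)a^2-G]$, so the roots must reduce to the entropy double root $\lambda=0$ plus $\lambda=\pm c_s$.

```python
import numpy as np

def magnetosonic_speeds(rho_h, cs2, W, vx, b0, bx, b2,
                        alpha=1.0, betax=0.0, gxx=1.0):
    """Roots of the quartic N_4(lambda) = 0 (Eq. N4) for propagation along x.
    a(lambda), cal-B(lambda), G(lambda) are built as numpy polynomials and
    N_4 is assembled by polynomial products, then solved with np.roots."""
    a = np.poly1d([-W / alpha, W / alpha * (alpha * vx - betax)])
    Bs = np.poly1d([-b0, bx])
    G = np.poly1d([-1.0 / alpha**2, -2.0 * betax / alpha**2,
                   gxx - (betax / alpha)**2])
    N4 = (rho_h * (1.0 / cs2 - 1.0) * a**4
          - (rho_h + b2 / cs2) * a**2 * G
          + Bs**2 * G)
    # .real discards the negligible imaginary round-off of real roots
    return np.sort(np.roots(N4.coeffs).real)

# Unmagnetized fluid at rest, flat metric, c_s = 0.5:
roots = magnetosonic_speeds(rho_h=1.0, cs2=0.25, W=1.0, vx=0.0,
                            b0=0.0, bx=0.0, b2=0.0)
```

In genuinely magnetized, boosted states the four real roots bracket the slow and fast magnetosonic speeds, and no closed form is attempted here.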
It is worth noticing that just as in the classical case, the relativistic MHD equations have degenerate states in which two or more wavespeeds coincide, which breaks the strict hyperbolicity of the system. @komissarov99 has reviewed the properties of these degeneracies. In the fluid rest frame, the degeneracies in both classical and relativistic MHD are the same: either the slow and Alfvén waves have the same speed as the entropy wave when propagating perpendicularly to the magnetic field (Degeneracy I), or the slow or the fast wave (or both) have the same speed as the Alfvén wave when propagating in a direction aligned with the magnetic field (Degeneracy II). @anton05 have characterized these degeneracies in terms of the components of the magnetic field four-vector normal and tangential to the Alfvén wavefront, ${\bf b}_n$, ${\bf b}_t$. When ${\bf b}_n = 0$, the system falls within Degeneracy I, while Degeneracy II is reached when ${\bf b}_t = 0$. Let us note that the previous characterization is covariant (i.e. defined in terms of four-vectors) and hence can be checked in any reference frame. In addition, @anton05 have also worked out a single set of right and left eigenvectors which are regular and span a complete basis in any physical state, including degenerate states. The [*renormalization*]{} procedure can be understood as a relativistic generalization of the work performed by @brio in classical MHD. This procedure avoids the ambiguity inherent to a change of basis when approaching a degeneracy, as done e.g. by @komissarov99. The renormalized eigenvectors have been used in all the tests reported in the present paper using the [*full-wave decomposition*]{} Riemann solver.
Numerical Approach {#III}
==================
Writing the GRMHD equations as a first-order, flux-conservative, hyperbolic system allows us to use numerical methods specifically designed to solve such systems. Among these methods, high-resolution shock-capturing (HRSC) schemes are recognized as the most efficient schemes to evolve complex flows accurately, capturing the discontinuities which appear when dealing with nonlinear hyperbolic equations.
Integral form of the GRMHD equations
------------------------------------
To apply HRSC techniques to the present GRMHD system we use Eq. (\[eq:fundsystem\]) in integral form. Let $\Omega$ be a simply connected region of the four-dimensional manifold bounded by a closed three-dimensional surface $\partial\Omega$. We take $\partial\Omega$ as the standard-oriented hyperparallelepiped made up of the two spacelike surfaces ${\Sigma_t, \Sigma_{t+\Delta t}}$ plus timelike surfaces ${\Sigma_{x^i},\Sigma_{x^i+\Delta x^i}}$ that connect the two temporal slices. Then, the integral form of Eq. (\[eq:fundsystem\]) is $$\int_\Omega \frac{1}{\sqrt{-g}}
\frac {\partial \sqrt{\gamma}{\bf F}^{0}}
{\partial x^{0}} d\Omega +
\int_\Omega\frac{1}{\sqrt{-g}} \frac{\partial \sqrt{-g}{\bf F}^{i}}
{\partial x^{i}} d\Omega
= \int_\Omega {\bf S} d\Omega,$$ which can be written, for numerical purposes, as follows
$$\begin{aligned}
(\bar{\bf F}^{0})_{t+\Delta t}-(\bar{\bf F}^{0})_{t} &=&
-\left(\int_{\Sigma_{x^1+\Delta x^1}}\sqrt{-g}\hat{\bf F}^{1} dx^0 dx^2 dx^3
-\int_{\Sigma_{x^1}} \sqrt{-g}\hat{\bf F}^{1} dx^0
dx^2 dx^3\right) \nonumber \\
&& -\left(\int_{\Sigma_{x^2+\Delta x^2}}\sqrt{-g}\hat{\bf F}^{2} dx^0 dx^1 dx^3
-\int_{\Sigma_{x^2}} \sqrt{-g}\hat{\bf F}^{2} dx^0
dx^1 dx^3\right) \nonumber \\
&& -\left(\int_{\Sigma_{x^3+\Delta x^3}}\sqrt{-g}\hat{\bf F}^{3} dx^0 dx^1 dx^2
-\int_{\Sigma_{x^3}} \sqrt{-g}\hat{\bf F}^{3} dx^0
dx^1 dx^2\right) + \int_\Omega {\bf S} d\Omega ,
\label{eq:system}\end{aligned}$$
where $$\bar{\bf F}^{0}=
\frac{1}{\Delta V}\int_{x^1}^{x^1+\Delta x^1} \int_{x^2}^{x^2+\Delta x^2}
\int_{x^3}^{x^3+\Delta x^3} \sqrt{\gamma}{\bf F}^{0} dx^1dx^2dx^3$$ and $$\Delta V= \int_{x^1}^{x^1+\Delta x^1} \int_{x^2}^{x^2+\Delta x^2}
\int_{x^3}^{x^3+\Delta x^3} \sqrt{\gamma} dx^1dx^2dx^3.$$ The carets appearing on the fluxes denote that these fluxes, which are calculated at cell interfaces where the flow conditions can be discontinuous, are obtained by solving Riemann problems between the corresponding numerical cells. These numerical fluxes are further discussed in Section \[numflux\].
We note that in order to increase the spatial accuracy of the numerical solution, the primitive variables (see Sect. \[recovery\]) are reconstructed at the cell interfaces before the actual computation of the numerical fluxes. We use a standard second order [*minmod*]{} reconstruction procedure to compute the values of $p$, $\rho$, $v_i$ and $B^i$ ($i= 1,2,3$) at both sides of each numerical interface. However, when computing the numerical fluxes along a certain direction, we do not allow for discontinuities in the magnetic field component along that direction. Furthermore, the equations in integral form are advanced in time using the method of lines in conjunction with a second order, conservative Runge-Kutta method [@shu:88].
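The minmod reconstruction can be sketched in a few lines (function names are ours; this is a schematic of the standard second-order TVD limiter, not the code's actual implementation). Slopes are limited cell by cell, so linear profiles are reproduced exactly while local extrema are assigned zero slope:

```python
import numpy as np

def minmod(a, b):
    """minmod slope limiter: the smaller slope where signs agree,
    zero at extrema (where the one-sided slopes disagree in sign)."""
    return np.where(a * b > 0.0, np.where(np.abs(a) < np.abs(b), a, b), 0.0)

def reconstruct(q):
    """Second-order minmod reconstruction of a cell-centered variable q.
    Returns the left and right states at the interior interfaces i+1/2
    (one layer of cells at each end is consumed as ghost cells)."""
    dq = np.diff(q)               # one-sided differences dq[i] = q[i+1] - q[i]
    s = minmod(dq[:-1], dq[1:])   # limited slope in cells 1 .. N-2
    qL = q[1:-2] + 0.5 * s[:-1]   # cell-i side of interface i+1/2
    qR = q[2:-1] - 0.5 * s[1:]    # cell-(i+1) side of interface i+1/2
    return qL, qR
```

For smooth monotone data `qL` and `qR` coincide and the scheme is second-order; at a discontinuity the limiter reverts to first order, which is what prevents spurious oscillations.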
Induction equation
------------------
The main advantage of the above numerical procedure, Eq. (\[eq:system\]), to advance in time the system of equations, is that those variables which obey a conservation law are, by construction, conserved during the evolution as long as the balance between the fluxes at the boundaries of the computational domain and the source terms is zero. This is an important property that any hydrodynamics code should fulfill.
However, as far as the magnetic field components are concerned, the system of equations (\[eq:fundsystem\]) only includes the induction equation Eq. (\[evolB\]), expressed by (\[eq:fundsystem\]) in conservation form, while the divergence-free condition, Eq. (\[divfree\]), remains as an additional constraint to be imposed. Therefore, the numerical advantage of using Eq. (\[eq:system\]) for the conserved variables does not apply straightforwardly to the magnetic field components. Indeed, there is no guarantee that the divergence is conserved numerically when updating the magnetic field if we were to use the same numerical procedure we employ for the rest of the components of the state vector. Among the methods designed to preserve the divergence of the magnetic field we use the constrained transport method designed by @evans88 and first extended to HRSC methods by [@ryu:98] (see also [@londrillo:04] for a recent discussion). This scheme is based on the use of Stokes theorem after the integration of the induction equation on surfaces of constant $t$ and $x^i$, $\Sigma_{t,x^i}$. Let us write Eq. (\[evolB\]) as $$\frac{1}{\sqrt{\gamma}}\frac{\partial {\vec{\cal B}}}{\partial
t}=\vec{\nabla} \times \vec{\Omega},
\label{omegaeq}$$ where we have defined the vector density $\vec{\cal
B}=\sqrt{\gamma}\vec{B}$ and $\vec{\Omega}=(\alpha\vec{v}-\vec\beta)\times\vec{B}$.
To obtain a discretized version of Eq. (\[omegaeq\]), we proceed as follows. At a given time, each numerical cell is bounded by 6 two-surfaces. Consider, for concreteness, the two-surface $\Sigma_{t,x^3}$, defined by $t={\rm const.}$ and $x^3={\rm const.}$, and the remaining two coordinates spanning the intervals from $x^1$ to $x^1+\Delta x^1$, and from $x^2$ to $x^2+\Delta x^2$. The magnetic flux through this two-surface is given by $$\Phi_{\Sigma_{t,x^3}}=\int_{\Sigma_{t,x^3}} \vec{B} \cdot d\vec{\Sigma}.$$ Furthermore, the electromotive force ${\cal E}$ around the contour $\partial(\Sigma_{t,x^3})$ is defined as $${\cal E}(t)=-\int_{\partial(\Sigma_{t,x^3})} \Omega_i dx^i.$$ Integrating Eq. (\[omegaeq\]) on the two-surface $\Sigma_{t,x^3}$, and applying Stokes theorem to the right hand side we obtain the equation $$\frac{d\Phi_{\Sigma_{t,x^3}}}{dt}=-{\cal E}=\int_{\partial(\Sigma_{t,x^3})}
\Omega_i dx^i,$$ which can be integrated to give $$\Phi^{t+\Delta t}_{\Sigma_{t,x^3}}-\Phi^{t}_{\Sigma_{t,x^3}}=
\int_t^{t+\Delta t} \int_{\partial(\Sigma_{t,x^3})}
\hat{\Omega}_i dx^i\;\; dt,
\label{eq:mflux}$$ where the caret denotes again that quantities $\hat{\Omega}_i$ are calculated at the edges of the numerical cells, where they can be discontinuous. At each edge, as we will describe below, these quantities are calculated using the solution of four Riemann problems between the corresponding faces whose intersection defines the edge. However, irrespective of the expression we use for calculating $\hat{\Omega}_i$, the method to advance the magnetic fluxes at the faces of the numerical cells satisfies, by construction, the divergence constraint. To see this we can integrate over a computational cell the divergence of the magnetic field at a given time. After applying Gauss theorem, we obtain $$\int_{\Delta V} \nabla \cdot \vec{B} dV=\int_{\Sigma}\vec{B}\cdot
d\vec{\Sigma}=\sum_{{\rm faces}, i=1}^6 \Phi_i.
\label{gauss}$$ In the previous expression, $\Delta V$ stands for the volume of a computational cell, whereas $\Sigma$ denotes the closed surface bounding that cell. The summation is extended to the six faces (coordinate surfaces) shaping $\Sigma$. Now, taking the time derivative of Eq. (\[gauss\]) yields $$\begin{aligned}
\frac{d}{dt}\int_{\Delta V} \nabla \cdot \vec{B} dV&=&\sum_{{\rm faces}, i=1}^6
\frac{d}{dt}\Phi_i \nonumber \\
&=& -\sum_{{\rm faces}, i=1}^6\sum_{{\rm edges}, j=1}^4
{\cal E}_{ij},\end{aligned}$$ where ${\cal E}_{ij}$ is the contribution from edge $j$ to the total electromotive force around the contour defined by the boundary of face $i$. It turns out that the above summation cancels exactly since the value of ${\cal E}$ for the common edge of two adjacent faces has a different sign for each face. Therefore, if the initial fluxes through each face of a numerical cell verify $\sum_{{\rm
faces}, i=1}^6\Phi_i=0$, this condition will be fulfilled during the evolution.
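The cancellation argument can be verified directly in a flat-space, two-dimensional sketch: updating face-centered field components with arbitrary edge EMFs changes the flux through each face by the circulation of the EMF along its edges, and the discrete divergence of every cell is left unchanged to machine precision. All names and the random data below are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(0)
nx, ny, dx, dy, dt = 16, 16, 0.1, 0.1, 0.01

# Face-centered field components and edge-centered EMF on a flat 2D grid.
Bx = rng.normal(size=(nx + 1, ny))      # B^x on x-faces
By = rng.normal(size=(nx, ny + 1))      # B^y on y-faces
Om = rng.normal(size=(nx + 1, ny + 1))  # EMF Omega_z on cell corners

def divB(Bx, By):
    """Discrete divergence per cell from the face-centered components."""
    return (Bx[1:, :] - Bx[:-1, :]) / dx + (By[:, 1:] - By[:, :-1]) / dy

div_before = divB(Bx, By)

# Constrained-transport update: each face changes by the circulation of the
# EMF along the edges bounding that face (dB^x/dt = dOmega/dy, etc.).
Bx_new = Bx + dt * (Om[:, 1:] - Om[:, :-1]) / dy
By_new = By - dt * (Om[1:, :] - Om[:-1, :]) / dx
div_after = divB(Bx_new, By_new)
```

The shared-corner contributions telescope exactly, which is the discrete counterpart of the sign cancellation between adjacent faces described above.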
Numerical fluxes and divergence-free condition {#numflux}
----------------------------------------------
The numerical integration of the GRMHD equations, Eqs. (\[eq:fundsystem\]) or (\[eq:system\]), is done using a HRSC scheme. Such schemes are specifically designed to solve nonlinear hyperbolic systems of conservation laws [@leveque; @toro:97]. They are written in conservation form and use approximate or exact Riemann solvers to compute the numerical fluxes between neighbouring grid zones. This fact guarantees the proper capturing of all discontinuities which may arise naturally in the solution space of a nonlinear hyperbolic system. Applications of HRSC schemes in relativistic hydrodynamics can be found in [@martilr:03; @fontlr]. Incidentally, we note that a detailed description of linearized Riemann solvers based on the spectral decomposition can be found in [@font:94] for special relativistic hydrodynamics, and in [@banyuls:97] (diagonal metrics) and [@font:00], [@ibanez:01] (general metrics) for general relativistic hydrodynamics. For HRSC methods in classical MHD, on the other hand, we refer the reader to [@ryu:95; @ryu:98].
As discussed in Section \[II\], the existence of degeneracies in the eigenvectors of the RMHD system of equations makes it hazardous to implement linearized Riemann solvers based on the full spectral decomposition of the flux vector Jacobians. Nevertheless, we have succeeded in developing and implementing in the code a full-wave decomposition (Roe-type) Riemann solver based on a single, renormalized set of right and left eigenvectors, as discussed in detail in @anton05, which is regular for any physical state, including degeneracies. This Riemann solver is invoked in the code after a (local) linear coordinate transformation based on the procedure developed by @pons:98 that allows one to use special relativistic Riemann solvers in general relativity, and which has been properly extended to include magnetic fields (see Sect. \[SRRS\]).
In addition to the Roe-type Riemann solver we also use two simpler alternative approaches to compute the numerical fluxes, namely the HLL single-state Riemann solver of @harten:83 and the second order central (symmetric) scheme of @tadmor (KT hereafter). The KT scheme has recently proved to yield results with an accuracy comparable to that of full-wave decomposition Riemann solvers in simulations involving purely hydrodynamical special relativistic flows [@arturo] and general relativistic flows in dynamical neutron star spacetimes [@shibata]. The interested reader is referred to @tadmor [@arturo] for specific details on the KT central scheme.
Correspondingly, the HLL Riemann solver is based on the calculation of the maximum and the minimum left and right propagating wave speeds emanating at the interface between the two initial states, and the resulting flux is given by $$\begin{aligned}
&& \hat{\bf F}({\bf U}_L,{\bf U}_R)= \nonumber \\
&& \frac{\tilde{\lambda}_{+}{\bf F}({\bf
U}_L)-\tilde{\lambda}_-{\bf F}({\bf U}_R) +
\tilde{\lambda}_ + \tilde{\lambda}_- ({\bf U}_R-{\bf
U}_L)}{\tilde{\lambda}_+ - \tilde{\lambda}_-} \ ,\end{aligned}$$ where $\tilde{\lambda}_{\pm}=\lambda_{\pm}/\alpha$. Quantities $\hat{\bf F}$ stand for the numerical fluxes along each of the three spatial coordinate directions, namely $\hat{\bf F}^i$ ($i=
1,2,3$) in Eq. (\[eq:fundsystem\]), whereas ${\bf U}\equiv{\bf F}^0$ denotes the state vector. Subscripts $L$ and $R$ stand for the left and right states defining the Riemann problems at each numerical interface. Moreover, $\lambda_-$ and $\lambda_+$ are bounds on the speeds of the left- and right-propagating waves emanating from the cell interface, $$\begin{aligned}
\lambda_+ & = & \text{max}(0, \lambda^+_{{\rm fms},L},
\lambda^+_{{\rm fms},R}),
\\
\lambda_- & = & \text{min}(0, \lambda^-_{{\rm fms},L},
\lambda^-_{{\rm fms},R}),\end{aligned}$$ where $\lambda^s_{{\rm fms},I}$ stands for the wavespeed of the fast magnetosonic wave propagating to the left ($s= -$) or to the right ($s=+$) computed at state $I$ ($=L,R$). These speeds are obtained by looking for the smallest and largest solution of the quartic equation ${\cal N}_4=0$ and can be effectively computed with a Newton-Raphson iteration scheme starting from $\lambda = \pm \alpha \sqrt{\gamma^{ii}}- \beta^i$ ($i= 1,2,3$).
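A schematic implementation of the standard single-state HLL flux with these wave-speed bounds (function and argument names are ours, not the code's actual interface):

```python
import numpy as np

def hll_flux(FL, FR, UL, UR, lam_L, lam_R):
    """Single-state HLL numerical flux (cf. Harten, Lax & van Leer 1983).
    lam_L, lam_R: (lambda^-_fms, lambda^+_fms) pairs of fast magnetosonic
    bounds computed at the left/right states (already divided by alpha)."""
    lp = max(0.0, lam_L[1], lam_R[1])   # fastest right-going signal, >= 0
    lm = min(0.0, lam_L[0], lam_R[0])   # fastest left-going signal, <= 0
    return (lp * FL - lm * FR + lp * lm * (UR - UL)) / (lp - lm)
```

By construction the formula is consistent: for identical left and right states it returns the exact flux, and when all waves move to the right it reduces to ${\bf F}({\bf U}_L)$ (pure upwinding).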
Any of the flux formulae we have discussed can be used to advance the hydrodynamic variables according to Eq. (\[eq:system\]) and also to calculate the quantities $\hat{\Omega}_i$ needed to advance in time the magnetic fluxes following Eq. (\[eq:mflux\]). At each edge of the numerical cell, $\hat{\Omega}_i$ is written as an average of the numerical fluxes calculated at the interfaces between the faces whose intersection define the edge. Let us consider, for illustrative purposes, $\hat{\Omega}_x$. If the indices $(j,k,l)$ denote the center of a numerical cell, an $x-$edge is defined by the indices $(j,k+1/2,l+1/2)$. By definition, $\Omega_x = \alpha(\tilde{v}^yB^z - \tilde{v}^zB^y)$. Since $$\label{f1}
F^y(B^z) = \tilde{v}^yB^z - \tilde{v}^zB^y$$ and $$\label{f2}
F^z(B^y) = \tilde{v}^zB^y - \tilde{v}^yB^z,$$ we can express $\hat{\Omega}_x$ in terms of the fluxes as follows $$\begin{aligned}
\label{oom}
\hat{\Omega}_{x\,j,k+1/2,l+1/2} &=& \frac{1}{4}
[\hat{F}^y_{j,k+1/2,l}+\hat{F}^y_{j,k+1/2,l+1} \nonumber \\
&& -\hat{F}^z_{j,k,l+1/2}-\hat{F}^z_{j,k+1,l+1/2}],\end{aligned}$$ where $\hat{F}^y$ ($\hat{F}^z$) refers to the numerical flux in the $y$ ($z$) direction corresponding to the equation for $B^z$ ($B^y$) and multiplied by $\alpha$ to account for the correct definition of $\Omega$. Also note that in the numerical implementation of the constrained transport method, a slightly different procedure can be followed [@ryu:98]. According to this procedure, in the computation of the numerical fluxes (\[f1\]) and (\[f2\]), only the terms advecting the magnetic field are considered (i.e. the first term on the rhs of (\[f1\])-(\[f2\])), while the average in Eq. (\[oom\]) is obtained by dividing by a factor 2 instead of 4. Both of these procedures, the one described through Eqs. (\[f1\])-(\[oom\]) and its modification provided by @ryu:98, allow us to advance the magnetic flux at the faces of the numerical cells in the correct way. We have also noted, however, that for 2D numerical tests our implementation of this modified scheme is generally more robust. We refer the interested reader to [@toth:00] for additional properties of the [@ryu:98] scheme.
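Eq. (\[oom\]) and its two analogues amount to simple array averages once the face-centered fluxes are stored. A sketch for $\hat{\Omega}_x$ at the interior $x$-edges, under an array layout of our own choosing (purely illustrative):

```python
import numpy as np

def omega_x_edges(Fy_Bz, Fz_By):
    """Four-flux average of Eq. (oom) at interior x-edges (j, k+1/2, l+1/2).
    Assumed layout (ours, illustrative):
      Fy_Bz[j, K, l]: alpha times the y-flux of B^z at y-face K = k+1/2,
                      shape (nj, nk+1, nl);
      Fz_By[j, k, L]: alpha times the z-flux of B^y at z-face L = l+1/2,
                      shape (nj, nk, nl+1)."""
    return 0.25 * (Fy_Bz[:, 1:-1, :-1] + Fy_Bz[:, 1:-1, 1:]
                   - Fz_By[:, :-1, 1:-1] - Fz_By[:, 1:, 1:-1])
```

For a uniform flow the two $y$-fluxes equal $v^yB^z-v^zB^y$ and the two $z$-fluxes equal its negative, so the average correctly reduces to the pointwise EMF.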
However, we also need to know the value of the magnetic field at the center of the cells in order to obtain the primitive variables after each time step (cf. Sect. \[recovery\]) and to compute again the numerical fluxes of the other conserved variables for the next time step. If $\hat{B}^x_{j \pm 1/2,k,l}$ is the $x$-component of the magnetic field at the interface $(j\pm 1/2,k,l)$, then the $x$-component of the magnetic field at the center of the $(j,k,l)$ cell, $B^x_{j,k,l}$, is obtained by taking the arithmetic average of the corresponding magnetic fluxes, i.e. $$\begin{aligned}
B^x_{j,k,l} =&& \frac{1}{2} (\hat{B}^x_{j-1/2,k,l}
\Delta S^x_{j-1/2,k,l} + \nonumber \\
&&\hat{B}^x_{j+1/2,k,l}\Delta S^x_{j+1/2,k,l})/\Delta
S^x_{j,k,l} ,\end{aligned}$$ where $\Delta S^x_{j\pm1/2,k,l}$ is the area of the interface surface between two adjacent cells, located at $x_{j\pm1/2}$ and bounded between $[y_{k-1/2},y_{k+1/2}]$ and $[z_{l-1/2},z_{l+1/2}]$. Analogous expressions for $\hat{\Omega}_{y\,j+1/2,k,l+1/2}$ and $\hat{\Omega}_{z\,j+1/2,k+1/2,l}$, and $B^y_{j,k,l}$ and $B^z_{j,k,l}$ can be easily derived.
Special relativistic Riemann solvers in GRMHD {#SRRS}
---------------------------------------------
In @pons:98 we presented a general procedure to use any Riemann solver designed for the special relativistic hydrodynamics equations in a general relativistic framework. In this section we describe a generalization of this approach to account for the magnetic field. It will be used to compute the numerical fluxes from the special relativistic full-wave decomposition Riemann solver discussed above. The procedure is based on performing linear transformations to locally flat (or geodesic) systems of coordinates at each numerical cell interface, in which the metric becomes locally Minkowskian (plus second order terms). Notice that this approach is equivalent to the usual approach in classical fluid dynamics where one uses the solution of Riemann problems in slab symmetry for problems in cylindrical or spherical coordinates. In order to generalize this procedure to the GRMHD case one must start by recalling that, in the pure hydrodynamical case, the components of the shift vector transversal to the cell interface play the role of a [*grid*]{} velocity, i.e., as if we had a moving interface. As discussed in detail in @pons:98, this can be easily understood by noticing that the fluxes through the moving interface for the local observer can be written as $\bar{F}^i - \frac{\beta^i}{\alpha} F^0$, where $\bar{F}^i$ are the fluxes when $\beta^i=0$ and $F^0$ the corresponding state vector. In terms of $D$, $S_j$, $\tau$ and $p^{*}$, the structure of the first five flux components (\[flux2\]) in the magnetic case follows the previous discussion, with the conserved quantities advected with $\tilde{v}^i$ (which includes the correction term for the moving grid) and extra terms in the fluxes of momentum and energy (which do not depend explicitly on the shift vector).
This allows one to proceed along the same steps as in @pons:98: i) introduce the locally Minkowskian coordinate system at each interface; ii) solve the Riemann problem to obtain the numerical fluxes through the moving grid as seen by the locally Minkowskian observer; iii) invert the transformation to obtain the numerical fluxes in the original coordinates.
Let us now concentrate on the last three components of the fluxes (\[flux2\]), namely $\tilde{v}^i B^k-\tilde{v}^k B^i$, corresponding to the evolution of the magnetic field. The terms $\tilde{v}^i B^k - v^k B^i$ also follow the discussion for the non-magnetic case, and the same numerical procedure can then be applied. However, the term $\beta^k B^i/\alpha$ couples the components of the shift vector parallel to the cell interface to the perpendicular magnetic field. This term has to be interpreted as a correction to the total electromotive force caused by the movement of the surface with respect to the Eulerian observer, and has to be added to the final expression for the flux.
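The splitting just described can be made explicit. Since $\tilde{v}^k = v^k - \beta^k/\alpha$, the induction flux decomposes identically as

```latex
\tilde{v}^i B^k - \tilde{v}^k B^i
  = \underbrace{\left(\tilde{v}^i B^k - v^k B^i\right)}_{\text{locally Minkowskian Riemann solver}}
  + \underbrace{\frac{\beta^k}{\alpha}\,B^i}_{\text{moving-surface correction}} \, ,
```

where the first group is evaluated through the local special relativistic Riemann problem and the second term is added afterwards to the final flux.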
In Section \[results\] the validity of this approach with a full-wave decomposition Roe-type Riemann solver is assessed in a series of tests including discontinuous initial value problems, steady flows, and dynamical accretion disks. As a result of this assessment we conclude that the generalized procedure to use SR Riemann solvers in multidimensional GRMHD is an efficient and robust alternative to developing specific solvers that require knowledge of the full spectral decomposition (eigenvalues and eigenvectors) in general relativity. Since each local change of coordinates is linear and only involves a few arithmetical operations, the additional computational cost of the approach is negligible.
Primitive variable recovery {#recovery}
---------------------------
The numerical procedure used to solve the GRMHD equations allows us to obtain the values of the conserved variables ${\bf F}^{0}$ at time $t+\Delta t$ from their values at time $t$. However, the values of the physical variables (i.e., $\rho$, $\epsilon$, etc.) are also needed at each time step in order to compute the fluxes. It is therefore necessary to solve the algebraic equations relating the conserved and the physical variables. For the classical MHD equations and an ideal gas equation of state the physical variables can be expressed as explicit functions of the conserved ones. Unfortunately, this cannot be done in GRMHD, a feature shared by the special and general relativistic versions of the purely hydrodynamical equations within the 3+1 approach (see @papadopoulos for an alternative formulation without this shortcoming). Therefore, the resulting nonlinear algebraic system of equations has to be solved numerically. The procedure we describe below is an extension to full general relativity of that developed by @komissarov99 in the special relativistic case.
The basic idea of this procedure relies on the fact that it is not necessary to solve the system (\[conv\_1\])-(\[conv\_3\]) for the three components of the momentum, but instead for its modulus $S^2
= S^iS_i$. The next step is to eliminate the components of $b^\alpha$ through Eqs. (\[b0\])-(\[bi\]). After some algebra it is possible to write $S^2$ as $$S^2 = (Z + B^2)^2 \frac{W^2-1}{W^2} - (2Z + B^2) \frac{(B^iS_i)^2}{Z^2},
\label{eq:s2}$$ where $Z = \rho h W^2$.
The equation for the total energy can be worked out in a similar way $$\tau = Z + B^2 - p - \frac{B^2}{2W^2} - \frac{(B^iS_i)^2}{2Z^2} - D.
\label{eq:tau}$$ Equations (\[conv\_1\]), (\[eq:s2\]) and (\[eq:tau\]), together with the definition of $Z$, form a closed system for the unknowns $\rho$, $p$ and $W$, provided the function $h = h(\rho, p)$ is supplied. In our calculations we restrict ourselves to an ideal gas equation of state (EOS), $p=\rho\epsilon(\gamma-1)$, for which $h = 1 + \gamma p/\rho(\gamma-1)$, where $\gamma$ is the adiabatic index, and to a polytropic EOS (valid for isoentropic flows), $p=K\rho^\gamma$, where $K$ is the polytropic constant. In the latter case the integration of the total energy equation can be avoided and the specific enthalpy is given by $$h = 1 + \frac{\gamma K}{\gamma - 1} \rho^{\gamma - 1}.$$ Then Eqs. (\[conv\_1\]) and (\[eq:s2\]) are solved to obtain $\rho$ and $W$.
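To make the recovery step concrete, the following minimal sketch (our own illustration, not the code described in the paper; all function and variable names are ours) performs a one-dimensional Newton iteration on $Z=\rho h W^2$ for the ideal-gas EOS: given a trial $Z$, Eq. (\[eq:s2\]) is inverted for $W^2$, the EOS yields $p$, and Eq. (\[eq:tau\]) provides the residual that drives the iteration.

```python
import math

def recover_primitives(D, S2, tau, B2, BdotS, gamma, tol=1e-12):
    """Recover (rho, p, W) from the conserved D, S^2 = S_i S^i and tau,
    plus B^2 and B^i S_i, via Newton iteration on Z = rho*h*W^2 (ideal gas)."""
    def W2_of_Z(Z):
        # invert S^2 = (Z+B^2)^2 (W^2-1)/W^2 - (2Z+B^2)(B.S)^2/Z^2 for W^2
        x = (S2 + (2.0*Z + B2)*BdotS**2/Z**2)/(Z + B2)**2
        return 1.0/(1.0 - x)
    def pressure(Z, W2):
        rho = D/math.sqrt(W2)
        # from Z = (rho + gamma*p/(gamma-1)) * W^2
        return (gamma - 1.0)/gamma*(Z/W2 - rho)
    def residual(Z):
        W2 = W2_of_Z(Z)
        # tau = Z + B^2 - p - B^2/(2W^2) - (B.S)^2/(2Z^2) - D
        return Z + B2 - pressure(Z, W2) - B2/(2.0*W2) - BdotS**2/(2.0*Z**2) - D - tau
    Z = tau + D                                   # initial guess
    for _ in range(100):
        f = residual(Z)
        if abs(f) < tol:
            break
        dZ = 1e-8*Z
        Z -= f*dZ/(residual(Z + dZ) - f)          # Newton step, numerical derivative
    W2 = W2_of_Z(Z)
    return D/math.sqrt(W2), pressure(Z, W2), math.sqrt(W2)  # rho, p, W
```

In practice the iteration converges in a handful of steps for moderately magnetized flows; $Z_0=\tau+D$ is a convenient starting point.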
Results
=======
We now turn to assessing the formulation of the GRMHD equations we have presented, as well as the numerical techniques we employ to solve them. The simulations reported in this section are introduced in a way which gradually increases the level of complexity of the flow to be solved, starting with shock tube tests, both in purely Minkowski spacetime and in flat spacetimes suitably modified by the presence of artificial gauge terms. Next we consider one-dimensional tests of magnetized flows accreting onto Schwarzschild and Kerr black holes, and finally discuss two-dimensional simulations of thick accretion disks orbiting around black holes. This collection of tests allows us to validate our approach by comparing the numerical simulations with analytic solutions (in the cases where such a comparison is possible), by investigating the ability of the code to preserve stationary solutions in the strong gravitational field regime, and by comparing with available numerical results reported in the literature.
For those tests which involve (background) black hole spacetimes we adopt Boyer-Lindquist coordinates and we fix the unit of length to $r_g\equiv M$, $M$ being the mass of the black hole.
Relativistic Brio-Wu shock tube test
------------------------------------
The first test is the relativistic analog of the classical Brio-Wu shock tube problem [@brio; @balsara01], as adapted to the relativistic MHD case by [@vanputten93]. The computational setup consists of two constant states which are initially at rest and separated through a discontinuity placed at the middle point of a unit length domain. The two states are characterized by the following initial conditions: Left state: $\rho = 1.0$, $v^x = 0.0$, $v^y = 0.0$, $p =
1.0$, and $B^y = 1.0$. Right state: $\rho = 0.125$, $v^x = 0.0$, $v^y
= 0.0$, $p = 0.10$, $B^y = -1.0$. The adiabatic index of the ideal gas EOS is $\gamma = 2 $, and the $x$ component of the magnetic field is equal for both left and right states, $B^x = 0.5$. The test is performed using a Cartesian grid with 1600 cells. Results are reported for the HLL Riemann solver (as the other two schemes yield similar results) and for a CFL parameter equal to 0.5.
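For reference, the initial data of this test are straightforward to set up (a plain-Python sketch; the array names are ours):

```python
N = 1600                                   # Cartesian cells on a unit-length domain
x = [(i + 0.5)/N for i in range(N)]        # cell centres; membrane at x = 0.5
gamma = 2.0                                # adiabatic index of the ideal-gas EOS

rho = [1.0 if xi < 0.5 else 0.125 for xi in x]
p   = [1.0 if xi < 0.5 else 0.10  for xi in x]
vx  = [0.0]*N                              # both states initially at rest
vy  = [0.0]*N
By  = [1.0 if xi < 0.5 else -1.0 for xi in x]
Bx  = [0.5]*N                              # continuous across the discontinuity
```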
The results of the simulation are shown in Fig. \[fig1\], which displays the wave structure for various quantities after the removal of the membrane. This wave structure comprises a fast rarefaction wave and a slow compound wave (both moving to the left), a contact discontinuity, and, moving to the right, a slow shock wave and a fast rarefaction wave. The short dashed line in the six panels of Fig. \[fig1\] shows the wave pattern produced in purely Minkowski spacetime at time $t=0.4$. It is in good overall agreement with the results obtained by @balsara01, in particular regarding the location of the different waves, the maximum value achieved by the Lorentz factor ($W=1.457$), and the smearing of the numerical solution. In addition to this solution we use open circles to denote the results of this test in flat spacetime but incorporating [*gauge*]{} effects by selecting a value of the lapse function different from unity, namely $\alpha=2$. The solution, which is shown at $t=0.2$, matches, as expected, the one represented by the short dashed line, obtained in flat spacetime at time $t=0.4$. Finally, the open squares refer to a third version of this test carried out in a flat spacetime with a nonvanishing shift vector, namely $\beta^x=0.4$. The numerical displacement thus produced is in perfect agreement with the expected one. This is emphasized in the figure by translating the short dashed line into the long dashed one by the predicted amount, $\beta^x t=0.16$.
Magnetized spherical accretion {#michel}
------------------------------
In the second test we check the ability of the code to maintain numerically, with a time-dependent system of equations, the stationarity of the spherically symmetric accretion solution of a perfect fluid onto a Schwarzschild black hole in the presence of a radial magnetic field. It is worth emphasizing that a consistent solution for magnetized spherical accretion with a force-free magnetic field satisfying the whole set of Maxwell equations does not exist (see Appendix \[app\_A\] for a proof). However, it is easy to show that any magnetic field of the type $b^\alpha=(b^t,b^r,0,0)$ does not affect the spherically symmetric hydrodynamical solution. Therefore, although the resulting configuration is nonphysical, it provides a useful numerical test and has been used in the literature for this purpose [@gammie:03; @devilliers1; @duez05].
The initial setup consists of a perfect isoentropic fluid obeying a polytropic EOS with $\gamma=4/3$. The critical radius of the solution is located at $r_c=8.0$ and the rest mass density at the critical radius is $\rho_c=6.25
\times 10^{-2}$. These parameters suffice to provide the full description of the spherical accretion onto a Schwarzschild black hole as described in detail by @michel:72. The radial magnetic field component, which can in principle follow any radial dependence, is chosen to satisfy the divergence-free condition. Moreover, its strength is characterized by the ratio $\beta=b^2/2p$ between the magnetic pressure and the gas pressure, computed at the critical radius of the flow. These initial conditions are evolved in time using the Roe-type Riemann solver described in Sec. \[SRRS\] on a uniform radial grid covering the region between $r_{\rm {min}}=r_{\rm {horizon}}+ \delta$ and $r_{\rm {max}} =10.0$, where $\delta$ varies from $0.1$ to $0.3$.
Figure \[fig2\] shows the comparison between the analytic solution (solid lines) and the numerical solution (circles) for one representative case with pressure ratio $\beta=1.0$ and $\delta=0.3$. These results are obtained with a numerical grid of $N=100$ radial zones, for which convergence is reached at time $t=250 M$. The order of accuracy of the code is computed by monitoring the error ${\rm L}\equiv\sum_{i=1}^N|Q_i - Q_{a,i}|/\sum_{i=1}^N Q_{a,i}$ for quantity $Q=\rho$ as the number of grid points $N$ is increased, where $Q_a$ represents the analytic solution. This procedure is repeated for different values of the ratio $\beta$, namely for $\beta=0, 1, 10, 100$, and $1000$ and the results, which are reported in Fig. \[fig3\], show that the global order of convergence of the code is $2$, irrespective of the parameter $\beta$.
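The error monitoring described above amounts to the following (a small sketch; the helper names are ours):

```python
import math

def global_error(q_num, q_ana):
    """L = sum_i |Q_i - Q_{a,i}| / sum_i Q_{a,i}, as defined in the text."""
    return sum(abs(n - a) for n, a in zip(q_num, q_ana))/sum(q_ana)

def convergence_order(err_coarse, err_fine, refinement=2.0):
    """Observed order p, assuming err ~ h^p and spacings differing by `refinement`."""
    return math.log(err_coarse/err_fine)/math.log(refinement)
```

Since the errors of a second-order scheme scale as $h^2$, halving the grid spacing reduces $\rm L$ by a factor of four.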
A comparison of the accuracy of the three methods we use to compute the numerical fluxes is reported in Table \[table1\], for $\beta=10.0$ and $N=70$ radial zones. The results for the magnetized spherical accretion test appear in the upper half of the table, which reports the global error of some representative quantities when numerical convergence is reached. For the particular test discussed in this section we find that no single method provides the smallest error in all of the quantities; the Roe-type solver, which is the most accurate in the computation of the hydrodynamic variables, is the least accurate in the computation of the magnetic field.
Equatorial Kerr accretion
-------------------------
A further one-dimensional test of the code is provided by the stationary magnetized inflow solution in the Kerr metric derived by @takahashi. This solution was subsequently adapted to the case of equatorial inflow in the region between the black hole horizon and the marginally stable orbit by [@gammie]. This test has been used by @devilliers1 and @gammie:03 in the validation of their GRMHD codes. It represents a step forward in the level of complexity of the equations to solve with respect to those used in the previous two sections, since the test involves the Kerr metric, albeit specialized to the equatorial plane. As a result, additional terms due to the increased number of nonvanishing Christoffel symbols appear in the equations.
As described by @gammie and adopting his notation, the inflow solution is determined once four conserved quantities are specified, namely the accretion rate $F_M$, the angular momentum flux $F_L$, the energy flux $F_E$ and the component $F_{\theta\phi}$ of the electromagnetic tensor, which is related to the magnetic flux through the inner edge of the disk. For the sake of comparison we consider an initial setup with the same numerical values used by @gammie:03, namely a Kerr black hole with spin parameter $a=0.5$, $F_M=-1.0$, $F_L=-2.815344$, $F_E=-0.908382$, $F_{\theta\phi}=0.5$.
The numerical grid consists of $N_r\times N_\theta$ gridpoints in the radial and angular directions, respectively. The radial grid covers the region between $r_{\rm min}=r_{\rm horizon} + 0.2 $ and $r_{\rm max}=4.0$, while the angular grid consists of $N_\theta=3$ gridpoints subtending a small angle of $10^{-5}\pi$ across the equatorial plane. The radial profiles of some significant variables, obtained with the Roe-type Riemann solver, are reported in Fig. \[fig4\] for a radial grid of $N_r=100$ zones. The open circles indicate the numerical results while the underlying solid lines correspond to the analytic solution. It is found that the stationarity of the solution is preserved to high accuracy by the numerical code. For the long-term evolutions considered there are no significant deviations from the analytic profiles.
As we did for the magnetized spherical accretion test, we use the current test to compute again the order of convergence of the code as the grid is refined. The global order of convergence for some representative quantities is reported in Fig. \[fig5\], which shows that the code is second order accurate. As already noted by [@gammie:03], the worsening of the order of convergence for $B^{\phi}$ at high grid resolution is due to the fact that the initial condition is “semi-analytic”, requiring the solution of an algebraic equation. Thus, the inaccuracies produced at time $t=0$ become more pronounced for large numbers of radial zones $N_r$.
The performance of the code using the HLL and KT solvers has also been checked with this test. While the order of convergence is preserved irrespective of the numerical schemes used to compute the fluxes, the actual accuracy can vary significantly. The results of this comparison for the equatorial Kerr accretion solution are summarized in the lower half of Table \[table1\], which reports the global error of representative quantities, when convergence is reached, on a numerical grid with $N_r=60$ radial points. It is worth stressing that the HLL scheme, at least in our implementation, turns out to be the most accurate in the computation of the magnetic field.
  ------------- ---------------------- ---------------------- ----------------------- -----------------------
  *Michel test*                                                                       
  HLL           $3.76\times 10^{-3}$   $3.92\times 10^{-3}$   $7.64\times 10^{-17}$   $-$
  Roe-type      $2.97\times 10^{-3}$   $3.45\times 10^{-3}$   $1.09\times 10^{-12}$   $-$
  KT            $3.36\times 10^{-3}$   $3.54\times 10^{-3}$   $1.94\times 10^{-18}$   $-$
  *Gammie test*                                                                       
  HLL           $1.92\times 10^{-2}$   $2.54\times 10^{-3}$   $2.28\times 10^{-9}$    $1.48\times 10^{-3}$
  Roe-type      $6.90\times 10^{-3}$   $3.01\times 10^{-3}$   $3.96\times 10^{-3}$    $2.14\times 10^{-3}$
  KT            $1.63\times 10^{-2}$   $9.72\times 10^{-4}$   $2.30\times 10^{-9}$    $9.89\times 10^{-3}$
  ------------- ---------------------- ---------------------- ----------------------- -----------------------
Thick accretion disks around black holes
----------------------------------------
An intrinsic two-dimensional test for the code is provided by the stationary solution of a thick disk (or torus) orbiting around a black hole, described by @fish:76, @kow:78, and more recently by @font:02a. The resulting configuration consists of a perfect barotropic fluid in circular non-Keplerian motion around a Schwarzschild or Kerr black hole, with pressure gradients in the vertical direction accounting for the disk thickness. These thick disks may possess a cusp on the equatorial plane through which matter can accrete onto the black hole.
In the following two subsections we describe our numerical tests for unmagnetized and magnetized thick disks, respectively. In both cases the effective potential at the inner edge of the disk is smaller than that at the cusp, thus providing initial conditions which are strictly stationary. For simplicity we limit our simulations to models with constant distribution of specific angular momentum $\ell=-u_\phi/u_t$, although the same qualitative results have been obtained with more general rotation laws.
### Unmagnetized disk {#hydro_torus}
In testing the evolution of a purely hydrodynamical torus we consider a model similar to the one used by [@devilliers1] for the Schwarzschild metric, namely a torus with specific angular momentum $\ell=4.5$, position of the maximum density at $r_{\rm center}=15.3$, and an effective potential at the inner edge such that the inner and outer radii on the equatorial plane are $r_{\rm in}=9.34$ and $r_{\rm out}=39.52$, respectively. We choose a polytropic EOS with $\gamma=4/3$ and a polytropic constant $K$ such that the torus-to-hole mass ratio is $M_t/M\sim
0.07$.
We have checked that the code can keep the stationarity of the initial equilibrium torus when evolved in time. Figure \[fig6\] shows the global order of convergence as computed from the rest mass density $\rho$. The corresponding global error $\rm{L}$ reported in the figure, and defined as ${\rm L}\equiv\sum_{i,j=1}^N|\rho_{ij} -
\rho_{a,ij}|/\sum_{i,j=1}^N\rho_{a,ij}$, is computed after $10$ orbital periods for each model, using a uniform numerical grid consisting of $N\times N$ gridpoints, whose specific values can be read off from the figure. As is apparent from Fig. \[fig6\], the code reaches second order of convergence for reasonably high values of $N$ ($>200$).
We note that in addition to the model just discussed we have also analyzed the performance of the code by comparing the evolution of additional hydrodynamical models which were studied by @font:02a and @zanotti:03 using independent codes based on HRSC schemes. In all the cases considered, corresponding to a number of different generalizations such as disks with power-law distributions of the specific angular momentum, disks in Kerr spacetime, and disks subject to the so-called runaway instability, the GRMHD code reproduced the same quantitative results of the independent hydrodynamical codes with negligible differences.
### Magnetized disk {#mag_torus}
As a final test we consider the evolution of a magnetized torus around a Schwarzschild black hole. In this case, however, a stationary solution which might provide self-consistent initial data for such magnetized disks is not available. Indeed, it can be proved (see Appendix \[app\_B\] for a proof) that the hydrodynamical isoentropic type of models that we have used in the previous section for unmagnetized disks cannot be “dressed” with a magnetic field, to produce a force-free magnetized torus that satisfies the whole set of Maxwell’s equations. Therefore, we follow the same pragmatic approach adopted by @devilliers1 and @gammie:03, and simply add an ad-hoc poloidal magnetic field to the hydrodynamical thick disk model. The magnetic field is generated by a vector potential $A_\phi\propto
\max(\rho/\rho_c - C, 0)$, where $\rho_c$ is the maximum rest mass density of the torus and $C$ is a free parameter which determines the confinement of the field inside the torus. The hydrodynamical torus is the same as the one considered in Section \[hydro\_torus\], but endowed with a magnetic field characterized by a confinement parameter $C=0.5$ and such that the average ratio of magnetic-to-gas pressure inside the torus is $\beta=1.5\times 10^{-3}$.
The four panels of Fig. \[fig8\] display isocontours of the rest mass density, logarithmically spaced, during the first few orbital periods of the evolution. These results correspond to a simulation employing the HLL solver with a computational grid of 200 radial zones and 100 angular zones. It was first shown by [@balbus] that the dynamics of such magnetized thick disks is governed by the so-called magnetorotational instability (MRI), which generates turbulence in the disk and helps explain the outward transport of angular momentum. In axisymmetry the development of the MRI is much less significant than in full three dimensions and manifests itself through the appearance of the so-called “channel solution” [@devilliers:03]. This feature of the solution becomes visible in our simulation after about three orbital periods, as shown in Fig. \[fig8\], in the form of a high density elongated structure near the equatorial plane. We report in Fig. \[fig9\] two additional distinctive features that can be unambiguously attributed to the MRI. The first one, shown in the top panel, is the outward transport of the (initially constant) angular momentum, which acquires a Keplerian profile (indicated by a thick solid line) as the evolution proceeds. Correspondingly, the bottom panel shows the rapid increase of the (mean) magnetic pressure (dashed line) with respect to the gas pressure (solid line) during the first two orbital periods, due to the MRI driven turbulence.
We note, however, that the present status of the numerical code does not allow us to run efficiently additional simulations with higher resolutions and with increasingly larger values of the magnetization parameter. As a result, the typical distortion of the isodensity contours produced by the MRI is not visible in Fig. \[fig8\]. A parallel version of the code is currently under development, which will allow for higher resolution simulations of magnetized disks in astrophysical contexts.
Conclusions
===========
In this paper we have presented a procedure to solve numerically the general relativistic magnetohydrodynamic equations within the framework of the $3+1$ formalism. The work reported here represents the extension of our previous investigation [@banyuls:97], where magnetic fields were not considered. The GRMHD equations have been explicitly written in conservation form to exploit their hyperbolic character in the solution procedure using Riemann solvers. Most of the theoretical ingredients necessary to build up high-resolution shock-capturing schemes based on the solution of local Riemann problems have been discussed. In particular, we have described and implemented three alternative HRSC schemes, either upwind (HLL and Roe) or symmetric (KT). Our implementation of the Roe-type Riemann solver has made use of the equivalence principle of general relativity, which allows us to use, locally, the characteristic information of the system of equations in the special relativistic limit, following a slight modification of the procedure first presented in @pons:98. Further information regarding the renormalization of the eigenvectors of the GRMHD flux-vector Jacobians has been deferred to an accompanying paper [@anton05]. The work reported in this paper, hence, joins the recent surge of activity in the ongoing efforts to develop robust numerical codes for the GRMHD system of equations, as exemplified by the investigations presented in the last few years by a number of groups [@devilliers1; @gammie:03; @duez05; @komissarov05].
Our formulation of the equations and numerical procedure have been assessed by performing the various test simulations discussed in earlier works in the literature, including magnetized shock tubes in flat spacetimes, spherical accretion onto a Schwarzschild black hole, equatorial magnetized accretion in the Kerr spacetime, as well as the evolution of thick accretion disks subject to the development of the magnetorotational instability. The code has proved to be second order accurate and has successfully passed all considered tests. In the near future we plan to apply this code to a number of astrophysical scenarios involving compact objects where both strong gravitational fields and magnetic fields need to be taken into account.
Acknowledgments {#acknowledgments .unnumbered}
===============
This research has been supported by the Spanish Ministerio de Educación y Ciencia (grant AYA2004-08067-C03-01, AYA2004-08067-C03-02 and SB2002-0128). The computations were performed on the Beowulf Cluster for Numerical Relativity [*“Albert100”*]{} at the University of Parma and on the SGI/Altix3000 computer [*“CERCA”*]{} at the Servicio de Informática de la Universidad de Valencia.
Magnetized Michel accretion {#app_A}
===========================
In this Appendix we prove that there is no consistent solution for a force-free magnetic field added to the spherically symmetric accretion of a perfect fluid onto a Schwarzschild black hole. In general, it is not at all obvious that a hydrodynamical solution can be “dressed” with a force-free magnetic field. [@oron:02] has shown that the form of the four-current compatible with a force-free magnetic field is given by $$\label{current}
{\cal J}^\mu=\rho_q u^\mu + \eta b^\mu$$ where $\rho_q$ is the proper charge density. Note that when $\eta=0$, i.e. when the current is due only to the convective term, the force-free condition is automatically guaranteed by the ideal MHD condition. However, we will consider here the more general expression given by Eq. (\[current\]). If we write explicitly the four vanishing components of the electric field in the comoving frame of the accreting fluid, $F_{\mu\nu}u^\nu=0$, recalling that the velocity field is given by $u^\mu=(u^0,u^1,u^2,u^3)=(u^t,u^r,0,0)$, we find $$\begin{aligned}
\label{FF1}
F_{01}&=&0, \\
\label{FF2}
F_{02}u^0+F_{12}u^1&=&0, \\
\label{FF3}
F_{31}&=&0,\end{aligned}$$ where we have also used the fact that $F_{03}=\partial_0 A_3 - \partial_3 A_0=0$. Let us next consider the first couple of Maxwell equations $$\label{max1}
F_{[\alpha\beta,\gamma]}=0,$$ where the comma denotes partial differentiation. After writing them explicitly for all possible combinations we obtain $$\begin{aligned}
F_{01,2} + F_{12,0} + F_{20,1} &=& 0, \\
F_{01,3} + F_{13,0} + F_{30,1} &=& 0, \\
F_{02,3} + F_{23,0} + F_{30,2} &=& 0, \\
F_{12,3} + F_{23,1} + F_{31,2} &=& 0 \ .\end{aligned}$$ By the symmetries of the spacetime and by relations (\[FF1\])-(\[FF3\]) this system reduces to $$\begin{aligned}
\label{thetadependence1}
F_{02,1}&=&0, \\
\label{thetadependence2}
F_{23,1}&=&0. \end{aligned}$$ Summarizing, among the 6 components of the antisymmetric electromagnetic tensor $F_{\mu\nu}$, 3 of them vanish, namely $F_{01}=F_{03}=F_{13}=0$. Among the remaining 3, only two are independent, since the constraint (\[FF2\]) has to be fulfilled. Furthermore, according to Eqs. (\[thetadependence1\]) and (\[thetadependence2\]), $F_{02}$ and $F_{23}$ are functions of the angle $\theta$ only, $F_{02}=F_{02}(\theta)$ and $F_{23}=F_{23}(\theta)$, and are therefore constants along fluid lines. Taking all this into account we can write the components of the magnetic field explicitly, using definition (\[bmu\]) in the main text $$\begin{aligned}
\label{BB0}
b^0 &=&\frac{1}{\sqrt{-g}} F_{23}u_1,\\
\label{BB1}
b^1 &=&-\frac{1}{\sqrt{-g}} F_{23}u_0=\frac{b^0 u_0}{u_1} \\
\label{BB2}
b^2 &=& 0, \\
\label{BB3}
b^3 &=&\frac{1}{\sqrt{-g}} ( F_{02}u_1 - F_{12}u_0)=-\frac{F_{02}}{\sqrt{-g}u^1}.\end{aligned}$$ Note that Eq. (\[BB3\]) can be alternatively computed from the condition $b^\mu u_\mu=0$.
Up to this point we have shown that the magnetic field is completely determined by two constants, $F_{23}$ and $F_{02}$. We now consider the second couple of Maxwell equations, namely $\nabla_\nu
F^{\mu\nu}=4\pi {\cal J}^\mu$. According to the assumption on the four-current, Eq. (\[current\]), and on the four-velocity in the case of spherical accretion, these equations become $$\begin{aligned}
\label{mm0}
\partial_2(\sqrt{-g}F^{02})&=&4\pi\sqrt{-g}(\rho_q u^0 + \eta b^0), \\
\label{mm1}
\partial_2(\sqrt{-g}F^{12})&=&4\pi\sqrt{-g}(\rho_q u^1 + \eta b^1), \\
\label{mm2}
\partial_1(\sqrt{-g}F^{21})&=& 0, \\
\label{mm3}
\partial_2(\sqrt{-g}F^{32})&=&4\pi\sqrt{-g}\eta b^3 \ , \end{aligned}$$ where $F^{02}=F_{02}/g_{00} g_{22}$ and $F^{12}=F_{12}/g_{11} g_{22}$. From (\[mm2\]) it follows that the term $F_{12}/g_{11}$ must be a function of the angular coordinate $\theta$ only which, recalling (\[FF2\]) and the fact that both $u^0$ and $u^1$ are functions of $r$, implies that $F_{12}=F_{02}=0$. As a result, the toroidal component of the magnetic field $b^3$ vanishes. Moreover, according to Eq. (\[mm3\]), the term $F_{23}/(r^2 \sin\theta)$ must be a function of $r$ only. Given that $F_{23}= F_{23}(\theta)$, it must be $F_{23}= A \sin\theta$, with $A$ a constant. Finally, (\[mm0\]) and (\[mm1\]) are now reduced to the following homogeneous system in the unknowns $\rho_q$ and $\eta$ $$\begin{aligned}
u^0 \rho_q + b^0 \eta = 0, \\
u^1 \rho_q + b^1 \eta = 0. \end{aligned}$$ Imposing the vanishing of the determinant gives $b^0/b^1=u^0/u^1$, which cannot be satisfied since it violates the constraint coming from the combination of the orthogonality condition $b^\mu u_\mu=0$ and the normalization condition $u^\mu u_\mu=-1$. This concludes the proof that it is not possible to add a force-free magnetic field to the hydrodynamic solution of spherical accretion in a Schwarzschild spacetime that satisfies the full set of Maxwell equations.
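The contradiction invoked in the last step can be made explicit. With $b^2=b^3=0$, the orthogonality condition $b^\mu u_\mu=0$ gives $b^0 u_0 + b^1 u_1 = 0$, i.e. $b^0/b^1 = -u_1/u_0$, while the vanishing of the determinant requires $b^0/b^1 = u^0/u^1$. Since the metric is diagonal, $u^0 = g^{00}u_0$ and $u^1 = g^{11}u_1$, and the two conditions together imply

```latex
g^{00} u_0^2 + g^{11} u_1^2 = 0 \, ,
```

which is incompatible with the normalization $u^\mu u_\mu = g^{00}u_0^2 + g^{11}u_1^2 = -1$ (recall that $u^2=u^3=0$).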
Magnetized thick accretion disk {#app_B}
===============================
In this Appendix we show that it is not possible to build a consistent stationary and axisymmetric solution for a magnetized torus by simply adding a force-free magnetic field to the hydrodynamic equilibrium model of an isoentropic thick accretion disk [@kow:78; @font:02a]. The proof, which for simplicity we limit to the case of Schwarzschild spacetime but which can be extended to a Kerr black hole as well, could follow the same reasoning as in the previous Appendix. However, the demonstration is more direct if one exploits some topological properties of the expected solution. In fact, from the Maxwell equations it is possible to show that the magnetic field of a perfectly conducting medium endowed with a purely toroidal motion has to be purely poloidal, i.e. $ b^r \neq 0$, $b^\theta \neq 0$, while $b^t=b^\phi=0$. Under these conditions the magnetic field lines lie on the surfaces of constant magnetic potential $A_\phi$ (magnetic surfaces), which coincide with the surfaces of constant angular velocity $\Omega=u^\phi/u^t$. This property prevents the generation of a toroidal component of the magnetic field, even in the presence of differential rotation (Ferraro’s theorem), and allows one to introduce a new coordinate system $(x_1,x_2)$ such that $x_1$ varies along the poloidal field lines and $x_2$ is constant along them [@oron:02]. In this new coordinate system the magnetic field has only one non-vanishing component, $b^{1}$, while $b^{2}=0$.
According to @bekenstein, for a force-free magnetic field in an isoentropic flow the quantity $U=u^t u_\phi$ is constant along the magnetic surfaces, and it can be used to define the new coordinate $x_2$. In the case of circular motion in Schwarzschild spacetime this quantity reads $$U=-\frac{\Omega g_{\phi\phi}}{g_{tt}(1-\Omega \ell)}$$ where $\ell=-u_\phi/u_t$ is the specific angular momentum. According to von Zeipel’s theorem [@vonzeipel] $\ell$ is constant along surfaces of constant $\Omega$ for the class of barotropic hydrodynamic models that we are considering. Therefore, both $\Omega$ and $\ell$ are constant along magnetic surfaces and the new coordinate $x_2$ can be defined as $$\begin{aligned}
\label{x2}
x_2 = \left(\frac{U}{\Omega}(1-\Omega \ell)\right)^{1/2} =
\left(-\frac{g_{\phi\phi}}{g_{tt}}\right)^{1/2}=\frac{r
\sin\theta}{(1-\frac{2M}{r})^{1/2}} , \ \end{aligned}$$ which is the so-called von Zeipel parameter [@chak:85]. The other coordinate $x_1$ can be chosen such that orthogonality between $x_1$ and $x_2$ is preserved, i.e. $g_{12}=0$. After some calculations involving straightforward metric coefficient transformations, this choice yields $$\begin{aligned}
\label{x1}
x_1 = (r-3M)\cos\theta \ .\end{aligned}$$ In computing Eq. (\[x1\]) we have made the reasonable ansatz that $x_1$ can be factorized as $x_1=p(r)q(\theta)$. [@oron:02] has shown that, in order to satisfy the second pair of Maxwell’s equations and the scalar equation $\nabla_\mu(h b^\mu)=0$, which can be proved to hold for any isoentropic magnetized flow, the following factorization in terms of generic functions of $x_1$ and $x_2$ must exist $$\label{cond}
\frac{g_{11}}{g_{22}\Delta (u^t)^4} = f(x_1) h(x_2) ,$$ where $\Delta=-g_{tt}g_{\phi\phi}$ in the Schwarzschild metric. From the normalization condition $u^\mu u_\mu=-1$ it follows that $(u^t)^2=1/[g_{tt}(1-x_2^2\Omega^2)]$, and Eq. (\[cond\]) becomes $$\label{qui}
\left(1-\frac{2M}{r}\right)^2 (x_2)^2 (1-\Omega^2 (x_2)^2)^{-2} = f(x_1) h(x_2) .$$ Since $\Omega=\Omega(x_2)$, Eq. (\[qui\]) requires that the term $1-2M/r$ is factorizable as $f(x_1) h(x_2)$, which can be shown not to be possible. Hence, the constraint (\[cond\]) cannot be met, and a force-free magnetized torus built from the isoentropic hydrodynamic model of a thick accretion disk cannot be obtained.
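The orthogonality requirement $g_{12}=0$ behind the choice (\[x1\]) can be verified symbolically: with the Schwarzschild poloidal metric $g_{rr}=(1-2M/r)^{-1}$, $g_{\theta\theta}=r^2$, the gradients of $x_1$ and $x_2$ given by Eqs. (\[x1\]) and (\[x2\]) are indeed orthogonal. A minimal sketch, assuming `sympy` is available:

```python
import sympy as sp

r, th, M = sp.symbols('r theta M', positive=True)
f = 1 - 2*M/r                       # Schwarzschild lapse factor 1 - 2M/r

# von Zeipel coordinates, Eqs. (x2) and (x1)
x2 = r*sp.sin(th)/sp.sqrt(f)
x1 = (r - 3*M)*sp.cos(th)

# contracted with the inverse poloidal metric: g^{rr} = f, g^{theta theta} = 1/r^2
g12_up = f*sp.diff(x1, r)*sp.diff(x2, r) \
         + sp.diff(x1, th)*sp.diff(x2, th)/r**2

assert sp.simplify(g12_up) == 0     # x1 and x2 are orthogonal, i.e. g_{12} = 0
```

The cancellation relies on the factor $r-3M$ in $x_1$, which is why no simpler radial profile works.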
[^1]: In the Kerr metric this Eulerian observer is indeed the observer with zero azimuthal angular momentum (ZAMO) as measured from infinity.
---
abstract: 'Data are given for sixteen extragalactic objects (predominantly AGN) behind the Magellanic Clouds and for 146 quasar candidates behind the nearby galaxies NGC 45, 185, 253, 2366, 2403 and 6822, IC 1613, M31 and M33. The Magellanic Cloud objects were identified by their X-ray emission, and precise optical and X-ray positions and optical photometry and spectra are presented for all of these. The quasar candidates surrounding the other nearby galaxies were identified through a CFHT slitless spectral survey. Although redshifts for only eight of these candidates have been obtained, previous observations indicate that the majority are likely to be quasars. A subsample of 49 of the brighter objects could confidently be used, in addition to the Magellanic Cloud sources, as probes of the gas in nearby galaxies for rotation curve studies, for studies of their halos, for comparison with higher redshift QSO absorption lines, or as references for proper motion studies.'
author:
- 'David Crampton and G. Gussie'
- 'A.P. Cowley and P.C. Schmidtke'
title: Probes for Nearby Galaxies
---
INTRODUCTION
============
The UV spectroscopic capabilities of HST allow nearby galaxies to be probed via absorption lines in the same way that QSO absorption line systems probe galaxies at high redshift. Although recent studies have shown that Mg II absorption lines at z $<$ 1 appear to be associated with luminous, massive galaxies (Steidel, Dickinson & Persson 1994), there is still debate about the nature of the absorbing material. For example, it is unclear whether the absorption occurs in an extended disk, a halo, or even satellite galaxies. Studying absorption lines in AGN and QSOs behind nearby galaxies allows these galaxies to be probed along multiple sightlines, thereby yielding information on how the absorption correlates with various parameters of the foreground galaxy. Accurate measurements of velocities of the interstellar material can also be used to extend rotation curves to larger galactocentric distances, providing improved estimates of the mass distributions. Furthermore, the background quasars can be used for precise proper motion studies of the foreground objects. Unfortunately, however, most surveys for quasars have avoided directions towards nearby galaxies. Monk et al. (1986) give a list of the brightest (m $<$ 17.5) quasars located within $\sim$200 kpc of nearby galaxies. In this paper we report results from an X-ray survey of the Magellanic Clouds and an optical slitless-spectra survey of nearby northern hemisphere galaxies.
For several years we have been undertaking a census of the X-ray sources in the Magellanic Clouds, first with data from [*Einstein*]{} Observatory (e.g., Cowley et al. 1984) and, more recently, with $ROSAT$ (Schmidtke et al. 1994, hereafter Paper I; Cowley et al. 1997, hereafter Paper II). Several background galaxies, AGN, and QSOs were detected in addition to X-ray bright objects within the Clouds. Some of these were reported in Papers I and II, but the identifications and redshifts of most have only recently been determined. In this paper we bring together the data for all of these objects. We give photometry and redshifts for sixteen X-ray selected extragalactic objects, five in the SMC field and eleven in the vicinity of the LMC. We also present finding charts for those that are not already in the literature.
In the course of another project, searches for quasar candidates were carried out on a series of CFHT grens plates of nearby galaxies. These plates are ideally suited for the detection of QSO candidates (e.g., Crampton, Schade & Cowley 1985). Although we have not been able to confirm the identifications or measure the redshifts for most of these candidates, our previous high success rate in identifying QSOs (Crampton, Cowley & Hartwick 1987) demonstrates that most of them can confidently be expected to be AGN. For this reason we present these data now since, apart from the faintest low-quality identifications, most will be quasars and hence useful as probes regardless of their redshift. However, a low S/N spectrum confirming the QSO identification is recommended before investing large amounts of telescope time for, say, absorption line studies.
X-RAY SELECTED AGN BEHIND THE MAGELLANIC CLOUDS
===============================================
Nearly 200 point X-ray sources in the direction of the Magellanic Clouds were detected by [*Einstein*]{} Observatory (Long, Helfand, & Grabelsky 1981; Wang et al. 1991; Seward & Mitchell 1981; Wang & Wu 1992; Cowley et al. 1984). Details of recent $ROSAT$ X-ray observations of many of these Magellanic Cloud sources are given in Papers I and II. Identifications of the optical counterparts have been carried out through extensive photometric and spectroscopic observations at CTIO. This program reveals that many of these sources are associated with foreground stars or background extragalactic objects (e.g., Papers I and II). The optical data which we have obtained are described below.
Our $ROSAT$-HRI survey of the Magellanic Clouds was aimed at obtaining improved positions of $Einstein$ sources which had not already been identified, in order to enable detection of new optical counterparts. Therefore, our material does not cover the entire fields of the LMC and SMC and thus does not comprise a complete sample of AGN behind these galaxies. The objects which are included in our X-ray sample are listed in Table 1 together with their $ROSAT$ positions and count rates. We have carefully investigated the positional accuracy delivered by the $ROSAT$ detectors (see discussion in Papers I and II) and the positions reported here are accurate to about $\pm5^{\prime\prime}$, making the search for optical counterparts relatively straightforward. Figures 1 and 2 show the distribution of X-ray bright extragalactic objects we have found near these two galaxies. Finding charts are given in Figures 3 and 4 for those which are not already in the literature. The X-ray positions are indicated by a ‘[**$+$**]{}’ and the optical counterpart is marked with a dash. Three of the X-ray sources are spatially extended, and for those we have overlaid their X-ray contours on the optical finding charts. Each of these appears to be associated with a cluster of galaxies.
Photometry
----------
All the photometric data were obtained from CCD photometry carried out at CTIO during observing runs between 1992 and 1996 using the 0.9-m telescope. The $V$ magnitudes presented in Table 1 are based on aperture photometry, calibrated using observations of Landolt (1992) standard stars. The accuracy of the magnitudes is about $\pm0.02$ mag. A few special cases are mentioned below where the object was extended or complicated in some way. The astrometry of each CCD frame used for the finding charts has been tied to the coordinate system of the [*HST Guide Star Catalogue*]{} (Lasker et al. 1990) by measuring the positions of $\sim$6–8 stars on the digitized $GSC$.
Spectroscopy
------------
Our highest-resolution AGN spectra were obtained with the CTIO 4-m telescope in November 1996 with the KPGL1 grating and Loral 3K detector. These spectra cover the wavelength range 3700–6700Å and have a resolution of $\sim$1.0Å per pixel. With a 1.5$^{\prime\prime}$ slit, corresponding to three pixels, the spectral resolution is 3Å. The spectra of the optical counterparts of CAL 21 and RX J0532.0$-$6920 were taken with the CTIO ARGUS fiber system in December 1995. These spectra cover the range 3650–5800Å with a resolution of 1.8Å per pixel. The spectrum of RX J0547.8$-$6745 has a wavelength range of 3720–5850Å and also has a resolution of 1.8Å per pixel. One-dimensional spectra were extracted and processed following standard techniques with [IRAF]{} to yield wavelength-calibrated spectra. The spectra are shown in Figure 5, shifted to restframe wavelengths according to their redshifts (Table 1).
Some of the earlier spectra were taken with the SIT vidicon detector on the CTIO 4-m RC spectrograph. These are not shown in Figure 5, but the objects observed are included in Table 1.
Individual Sources
------------------
Some of the newly identified individual sources deserve comment beyond just listing them in Table 1. Of the ones previously published (see references in Table 1), we point out that RX J0534.8$-$6739 was only listed as a “note added in proof” by Cowley et al. (1997) so it would be very easy to overlook it in that paper. The AGN is “star” 2 in the finding chart for this source in Paper II.
RX J0005.3$-$7427 (SMC 1): This X-ray point source falls nearly on the galaxy shown in the finding chart in Fig. 3. Its spectrum, shown in Fig. 5, indicates the object is a Seyfert 1 galaxy with a redshift of z $=$ 0.1316. The optical image shows the galaxy is just resolved.
RX J0033.3$-$6915 (SMC 70): The X-ray contours for this source are very extended, as shown in Fig. 3. The X-ray position falls on a large cD galaxy in the center of the cluster Abell 2789. Thus, the X-rays appear to result from hot gas in the cluster. The redshift of the cD galaxy is z $=$ 0.0975.
RX J0119.5$-$7301 (SMC 66): The extended X-ray contours suggest that this source also arises from hot gas, but this is a previously unknown cluster. Many galaxies are visible in the field (see finding chart in Fig. 3). A spectrum of the bright cD galaxy to the west of the central X-ray contour shows it to have a redshift of z $=$ 0.0658. The optical image of this galaxy has three parts, consisting of a foreground star and two non-stellar condensations. The magnitude given in Table 1 refers to the brighter (south-western) of the two non-stellar parts.
RX J0135.4$-$7048: This weak, extended source also appears to be associated with an unknown cluster of galaxies (visible on our original image, but not easily seen in Figure 3). The center of the extended X-ray contours is not coincident with any bright galaxy, but we have observed the nearest (bright) one, which is south-east of the third X-ray contour (as marked on the finding chart in Fig. 3). Its spectrum gives a redshift of z $=$ 0.0647. The identification of this X-ray source with the cluster, and whether the galaxy we observed is a member, should be verified by obtaining redshifts of some of the fainter galaxies.
RX J0136.4$-$7105 (SMC 68), RX J0454.2$-$6643, and RX J0550.5$-$7110: These three sources are all associated with AGN, as shown by their redshifts in Table 1 and their spectra in Fig. 5.
RX J0534.0$-$7145: This X-ray point source appears to be located in or near the nucleus of the very large optical galaxy, Up 053448$-$7147.3, which is listed as an S0(r). The $V$ magnitude was measured in an 80$^{\prime\prime}$ aperture. The optical spectrum shows narrow \[O II\] and \[O III\] emission but otherwise is relatively normal. It thus appears to be one of the narrow-emission-line galaxies that comprise $\sim$15% of the extragalactic X-ray galaxy population (e.g., Griffiths et al. 1996). The large extent of the optical disk can be seen in Fig. 4.
RX J0547.8$-$6745: This point X-ray source is identified with an AGN with redshift z $=$ 0.3905. It was also recently found to be a compact radio source, MDM 100 (Marx et al. 1997).
QSOS AND QSO CANDIDATES AROUND NEARBY GALAXIES
==============================================
Quasars with z $<$ 3.4 and $B <$ 21 can be easily recognized from CFHT blue grens images. The blue grens, a grating-prism-lens combination designed by E.H. Richardson, produces spectra with a dispersion of 945 Å mm$^{-1}$ and a wavelength range of 3500–5300Å when recorded on IIIaJ emulsion. The grens plates cover a 55$\arcmin\times$ 55$\arcmin$ field, although some parts of this field may be vignetted by the guide probe. Grens exposures were obtained of eight fields centered on northern nearby galaxies and two “halo” M31 fields, centered on field C29 of Sargent et al. (1977) and a field to the SW, primarily to study objects in the galaxies themselves. In some cases, observations with different grens orientations were taken to alleviate problems of overlapping images. A list of the plates is given in Table 2. Unfortunately, the seeing was not very good ($> 1\arcsec$) during most of these observations.
As in previous surveys, four visual searches for objects with a UV excess and/or emission lines were made of each plate by at least two of the authors (in this case, by DC, GG and APC). Objects satisfying these criteria but which were likely to be associated with the nearby galaxy were ignored in this survey. Subsequently, PDS scans were made of the spectra of all candidates with a 50 micron square aperture. These spectra were converted to intensity versus wavelength with a software package written by Graham Hill. As in previous quasar surveys (e.g., Crampton, Schade & Cowley 1985), the candidates were assigned a class based on these tracings and a final visual inspection of the original image. Class 1 candidates are certain quasars with strong emission lines, class 4 objects show UV excess with no definite spectral features, and classes 2 and 3 are intermediate between these extremes. Recognizable white dwarfs are included as class 5 since they might otherwise be selected as quasar candidates on the basis of their colors. Emission-line galaxies with no significant spatial extension or “extragalactic H II regions” are assigned class 6. Spectroscopic follow-up observations with the MMT of candidates with m $<$ 20.5 indicate that, on average, 100% of class 1, 93% of class 2, 76% of class 3, and 42% of class 4 candidates are quasars. Further details of the observational and identification procedures are given by Crampton, Schade & Cowley (1985). Due to the low galactic latitude of many of these fields, and to internal absorption in the galaxies themselves, fewer candidates than typical were identified in these fields.
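The quoted per-class confirmation rates translate directly into an expected quasar yield for any tally of candidates. A minimal sketch (the class counts below are hypothetical, for illustration only):

```python
# MMT confirmation rates quoted above for candidates with m < 20.5
success_rate = {1: 1.00, 2: 0.93, 3: 0.76, 4: 0.42}

def expected_quasars(class_counts):
    """Expected number of true quasars given per-class candidate tallies."""
    return sum(success_rate[c]*n for c, n in class_counts.items())

# hypothetical tallies, for illustration: ~43 of these 59 candidates
# would be expected to be confirmed as quasars
print(expected_quasars({1: 10, 2: 15, 3: 14, 4: 20}))
```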
Positions and magnitudes of all candidates were measured from glass copies of the Palomar Sky Survey O plates using the method and software described by Stetson (1979). Scans of 100$\times$100 pixel boxes (10$\micron$ square pixels) were made of each candidate with the PDS, and the positions were related to nearby SAO stars. Subsequently, the new [SKYCAT]{} software was used in conjunction with the digitized Palomar Observatory Sky Survey to double-check the coordinates and charts and, in some cases, to correct for errors. The resulting positions are accurate to $\sim2\arcsec$ and magnitudes to $\sim$0.3 mag.
A list of all candidates is given in Table 3. The first column gives the candidate name derived from truncated 2000 coordinates in the form HHMM.M+DDMM. An internal identification symbol is given in the second column, followed by the 2000 coordinates, the magnitude as estimated from the POSS plates, and the class or certainty of the identification. In the notes column we give: (1) the estimated wavelengths of any emission features visible on the grens spectra, listed in order of decreasing intensity, (2) redshifts, listed to one-digit accuracy if they were estimated from the grens observations, (3) other comments or remarks (for explanation of the abbreviations and any measured redshifts, see the Notes to the table). Rather than give a table listing projected distances from the centers of the nearby galaxies, the locations of the candidates (marked with their ID as given in Table 3) are shown in Figures 6–15 so that their distances relative to the optical extent of the galaxies are obvious.
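The HHMM.M+DDMM naming convention can be sketched as a small helper. This is a hypothetical function, assuming integer RA seconds and truncation (not rounding) of the RA minutes to tenths:

```python
def candidate_name(ra_h, ra_m, ra_s, dec_sign, dec_d, dec_m):
    """Build an HHMM.M+DDMM designation from J2000 coordinates.

    Hypothetical helper: RA is truncated to tenths of a minute using
    integer arithmetic, so no floating-point rounding can creep in.
    """
    tenths = (ra_m*600 + ra_s*10)//60        # RA minutes in tenths, truncated
    return f"{ra_h:02d}{tenths//10:02d}.{tenths % 10}{dec_sign}{dec_d:02d}{dec_m:02d}"

# the same style as the designation of, e.g., RX J0005.3-7427:
print(candidate_name(0, 5, 18, '-', 74, 27))   # prints 0005.3-7427
```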
Spectra of nine of the candidates in NGC 2366 and NGC 2403 were obtained in rather poor weather conditions with the MMT in 1987 February with the photon-counting spectrograph. The spectra cover the 3000–8000Å region with a resolution of $\sim$7Å. Identified features and redshifts for these candidates are given in the notes to Table 3.
SUMMARY
=======
Sixteen galaxies, clusters, and AGN behind the Large and Small Magellanic Clouds have been identified through their X-ray emission. Redshifts of most of these indicate that they are relatively nearby, with only three having z $>$ 0.3. Quasar candidates in ten fields in the direction of nearby galaxies have been identified on the basis of their colors and emission lines. Of these, twenty-two have magnitudes brighter than m$=$19 and thus are excellent targets for high spectral resolution studies with HST or 10-m class telescopes. Forty-nine candidates with magnitudes brighter than m$=$20.5 and classes 1 – 3 have an extremely high probability of being quasars, based on similar surveys.
We thank Dr. Martha Hazen of the Harvard College Observatory who located the photographs of the Magellanic Clouds and kindly sent them to us, and Dr. Sidney van den Bergh for permission to use his plate of M 33. We also thank Y. Yuan for her careful checking of the material in Table 3 and for assistance with the diagrams. The excellent new [SKYCAT]{} tool developed jointly by ESO and the CADC was extremely useful in verifying and confirming positions and magnitudes of all our objects. We thank D. Durand for his support in its use and help in installing additional catalogs. A.P.C. and P.C.S. gratefully acknowledge support from NSF for this work.
Chaffee, F.H. et al. 1991, AJ, 102, 461
Clowes, R.G. & Savage, A. 1983, MNRAS, 204, 365
Cowley, A.P., Crampton, D., Hutchings, J.B., Helfand, D.J., Hamilton, T.T., Thorstensen, J.R., & Charles, P.A. 1984, , 286, 196
Cowley, A.P., Schmidtke, P.C., McGrath, T.K., Ponder, A.L., Fertig, M.R., Hutchings, J.B., & Crampton, D. 1997, , 109, 21 (Paper II)
Crampton, D., Cowley, A.P., & Hartwick, F.D.A. 1987, , 314, 129
Crampton, D., Schade, D., & Cowley, A.P. 1985, , 90, 987
Cristiani, S. & Tarenghi, M. 1984, A&A, 132, 351
Garilli, B., Bottini, D., Maccagni, D., Vettolani, G., & Maccacaro, T. 1992, AJ, 104
Griffiths, R.E., Della Ceca, R., Georgantopoulos, I., Boyle, B.J., Stewart, G.C., Shanks, T. & Fruscione, A. 1996, MNRAS, 281, 71
Landolt, A.U. 1992, , 104, 340
Lasker, B.M., Sturch, C.R., McLean, B.J., Russell, J.L., Jenkner, H., & Shara, M.M. 1990, , 99, 2019
Long, K.S., Helfand, D.J., & Grabelsky, D.A. 1981, , 248, 925
Marx, M., Dickey, J.M., & Mebold, U. 1997, A&A, in press
Monk, A.S., Penston, M.V., Pettini, M., & Blades, J.C. 1986, MNRAS, 222, 787
Sargent, W.L.W., Kowal, C.T., Hartwick, F.D.A., & van den Bergh, S. 1977, , 82, 947
Schmidtke, P.C., Cowley, A.P., Frattare, L.M., McGrath, T.K., Hutchings, J.B., & Crampton, D. 1994, , 106, 843 (Paper I)
Seward, F.D. & Mitchell, M. 1981, , 243, 736
Steidel, C.C., Dickinson, M., & Persson, E. 1994, , 437, L75
Stetson, P.B. 1979, , 84, 1056
Tytler, D. & Fan, X.-M. 1992, , 79, 1
Wang, Q., Hamilton, T., Helfand, D.J., & Wu, X. 1991, , 374, 475
Wang, Q. & Wu, X. 1992, , 78, 391
---
abstract: 'The behavior of an ideal $D$-dimensional boson gas in the presence of a uniform gravitational field is analyzed. It is explicitly shown that, contrary to long-standing folklore, the three-dimensional gas does not undergo Bose-Einstein condensation at finite temperature. On the other hand, Bose-Einstein condensation occurs at $T\neq 0$ for $D=1,2,3$ if there is a point-like impurity at the bottom of the vessel containing the gas.'
address:
- |
Instituto de Física, Universidade Federal do Rio de Janeiro\
Caixa Postal 68528, 21945-970 Rio de Janeiro, RJ, Brazil
- |
Dipartimento di Fisica, Universitá di Bologna and Istituto Nazionale\
di Fisica Nucleare, Sezione di Bologna, 40126 Bologna, Italia
author:
- 'R. M. Cavalcanti[^1]'
- 'P. Giacconi,[^2] G. Pupillo,[^3] and R. Soldati[^4]'
title: 'BOSE-EINSTEIN CONDENSATION IN THE PRESENCE OF A UNIFORM FIELD AND A POINT-LIKE IMPURITY'
---
[*Accepted for publication in Physical Review A*]{}
DFUB/14/01 November 2001
Introduction
============
The response of quantum systems to the influence of external background fields is of utmost importance in a wide range of physical applications. Likewise, the role of disorder, i.e., the presence of impurities in condensed matter systems, is often crucial in the occurrence of remarkable physical effects. It is the aim of the present paper to investigate the behavior of an ideal boson gas in the presence of a uniform (i.e., constant and homogeneous) gravitational field and of extremely localized (actually point-like) impurities affecting the quantum dynamics of the bosonic particles.
It has long been well known [@Hua; @Pat] that an ideal three-dimensional boson gas in free space undergoes a phase transition called [*Bose-Einstein condensation*]{} (BEC), in which a finite fraction of its constituent molecules condenses in the single-particle ground state. Such a condensation differs from the usual condensation of a vapor into a liquid in that there is no phase separation. For this reason, BEC is commonly described as a phase transition in momentum space — the particles condense into the $|{\bf p}={\bf 0}\rangle$ state, which has a uniform spatial distribution. It is also well known [@Pat] that such a phase transition is no longer possible, for free bosons, in one and two dimensions — although in both cases it does occur in the presence of a point-like attractive potential [@IGH; @GMS2]. A long-standing popular belief [@Hua; @Gol; @Lam; @Groot; @Halpern; @Gersch; @Bagnato1] is that if the particles of a 3D ideal boson gas were placed in a (uniform) gravitational field, then BEC would still occur, but in the condensation region there would be a spatial separation of the two phases, just as in a gas-liquid condensation.
In the present paper we study the exactly solvable quantum mechanical model of an ideal boson gas in $D=1,2,3$ dimensions in the presence of a uniform gravitational field and of a point-like impurity formally described by a $\delta$-function potential. In order to make the Hamiltonian bounded from below, so that the system may attain a state of thermodynamic equilibrium, we shall enclose the gas in a container with impenetrable walls. Concerning the mathematical description of a point-like impurity, it should be remarked that a $\delta$-potential is generally ill-defined when $D>1$, and some renormalization procedure is mandatory. Actually, the rigorous mathematical procedure to deal with point-like interactions involves the analysis of the self-adjoint extensions of the symmetric Hamiltonian operator [@Alb]. In the present work, however, we prefer to follow a more informal approach [@Jac] which is closer to the physical intuition, but reaches the same final result as the rigorous though more involved method of self-adjoint extensions [@ReS]. To be specific, we formally treat the contact interaction as a $D$-dimensional $\delta$-potential, then proceed to the renormalization procedure in physical terms, and finally obtain the so-called Krein’s formula for the Green’s function, from which it is possible to extract the energy spectrum of the single-particle Hamiltonian.
In Section \[no-imp\] we prove that an ideal boson gas in the presence of a uniform gravitational field does not undergo BEC at finite temperature, except in the one-dimensional case. This implies, in particular, that in the three-dimensional case no phase separation occurs in the thermodynamic limit, at variance with the above quoted conventional wisdom. We also provide a rather general [*sufficient*]{} condition for the occurrence of BEC in a trapped ideal gas, which generalizes some results obtained by other authors [@Bagnato2; @Li; @Yan; @Yan2; @Salasnich] for power-law potentials. In Section \[Floor\] we show that the onset of BEC in a uniform gravitational field is made possible in $D=2,3$ if a point-like impurity (i.e., a $\delta$-potential) is placed at the bottom of the vessel containing the gas. The reason is that the presence of the impurity entails the existence of a bound state, whose energy gap with respect to the continuous spectrum is what is needed for the ideal gas to undergo BEC. In Section \[Conclusions\] we draw our conclusions, whereas some technical details are presented in two Appendices.
$D$-dimensional boson gas in a uniform field {#no-imp}
============================================
It is convenient to first analyze and discuss the impurity-free case, which turns out to exhibit, as we shall see below, rather surprising features. Thus, in this Section we shall study the quantum mechanical behavior of an ideal boson gas in the presence of a uniform gravitational field. The existence of a (single-particle) ground state is guaranteed by the presence of an impenetrable wall at the bottom of the vessel containing the gas. The single-particle Hamiltonian is given by $$\label{4.1}
H_0^{(D)}(g)={{\bf p}^2\over 2m}+mgx,$$ in which we have set $${\bf x}=(x_1,\ldots,x_D)\equiv ({\bf r},x),\qquad
{\bf p}=(p_1,\ldots,p_D)\equiv ({\bf k},p).$$ The gas is supposed to be enclosed in a rectangular box of sides $L_1,L_2,\ldots,L_D$, with its bottom fixed at the plane $x=0$. Since we are interested in the thermodynamic limit, we can, without loss of generality, impose periodic boundary conditions in the $x_1,\ldots,x_{D-1}$ directions and Neumann boundary condition[^5] at $x=0$ and $x=L_D$, i.e., $$\psi(x_1,\ldots,x_j+L_j,\ldots,x_D)=\psi(x_1,\ldots,x_j,\ldots,x_D),
\qquad j=1,\ldots,D-1,$$ $$%\partial_x\psi({\bf r},x)|_{x=0,L_D}=0,
\partial_x\psi({\bf r},x=0)=\partial_x\psi({\bf r},x=L_D)=0,$$ and then take the limits $L_j\to\infty$, $j=1,\ldots,D$. After these limits are taken, the eigenfunctions and eigenvalues of $H_0^{(D)}(g)$ read $$\psi_{n,{\bf k}}({\bf r})=
{\exp\left\{(i/\hbar)\,{\bf k}\cdot{\bf r}
\right\} \over (2\pi\hbar)^{(D-1)/2}}\,
\sqrt{-\frac{\kappa}{a_n'}}\,
{{\rm Ai}(\kappa x+a_n')\over{\rm Ai}(a_n')},
\label{4.2}$$ $$E_{n,{\bf k}}={{\bf k}^2\over 2m}-E_g a_n'\,,\qquad
n\in{\mathbb N},\,\,{\bf k}\in{\mathbb R}^{D-1},
\label{4.3}$$ where ${\rm Ai}(x)$ is the Airy function [@AbS], $a_n'$ are the zeros of ${\rm Ai}'(x)$, and the parameters $\kappa$ and $E_g$ are defined as $$\kappa\equiv\left({2m^2 g \over \hbar^2}\right)^{1/3},\qquad
E_g\equiv {mg\over \kappa}={\hbar^2\kappa^2\over 2m}.
\label{2.3}$$ All the zeros of ${\rm Ai}'(x)$ are negative, hence the energy levels $E_{n,{\bf k}}$ are positive.
If $D>1$ the spectrum is purely continuous and the corresponding improper eigenfunctions are normalized according to $$\langle\psi_{n^\prime,{\bf k}^\prime}|
\psi_{n,{\bf k}}\rangle
=\delta_{n,n^\prime}\,
\delta^{(D-1)}({\bf k} -{\bf k}^\prime).
\label{4.5}$$ On the other hand, in the one-dimensional case the spectrum is purely discrete, the normalized eigenfunctions and eigenvalues being respectively $$\psi_{n}(x)=\sqrt{-{\kappa\over a_n'}}\,
{{\rm Ai}(\kappa x+a_n')\over {\rm Ai}(a_n')},
\label{4.6}$$ $$E_{n}=-E_g a_n'\,, \qquad n\in{\mathbb N}.
\label{4.8}$$
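The discrete spectrum (\[4.6\])–(\[4.8\]) can be checked numerically: SciPy tabulates the zeros $a_n'$ of ${\rm Ai}'(x)$ directly. A sketch, assuming `scipy` is available:

```python
import numpy as np
from scipy.special import ai_zeros

# ai_zeros(nt) returns: zeros of Ai, zeros of Ai', and the function
# values at those points; only the zeros of Ai' are needed here
_, ap, _, _ = ai_zeros(50)

assert np.all(ap < 0)                 # all zeros of Ai'(x) are negative ...
E = -ap                               # ... so E_n = -E_g a_n' > 0 (units of E_g)
assert abs(ap[0] + 1.018793) < 1e-5   # first zero a_1' ~ -1.018793
assert np.all(np.diff(E) > 0)         # levels ordered and nondegenerate
```

The positivity of the levels asserted above is exactly the statement following Eq. (\[2.3\]).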
Let us first analyze in detail the Bose-Einstein condensation (BEC) for such a one-dimensional system. In the grand canonical ensemble the average number of particles $N$ at temperature $T$ and chemical potential $\mu$ reads $$N = \sum_{n=1}^\infty {1 \over
\exp\left[\beta (E_n-\mu)\right]-1},
\label{4.9a}$$ where, as usual, $\beta=1/k_BT$. The criterion for the occurrence of BEC is that the average population of the excited states remains finite as the chemical potential approaches the ground state energy from below, i.e., $$\lim_{\mu\uparrow E_1}\,N_{\rm ex}=
\lim_{\mu\uparrow E_1}\,\sum_{n=2}^\infty {1\over
\exp\left[\beta (E_n-\mu)\right]-1}<\infty.
\label{4.9c}$$ Notice that the ground state population has been split off, that being the reason why the above sum begins at $n=2$. The sequence of eigenvalues (\[4.8\]) is such that the above-mentioned BEC criterion is satisfied. Consequently, Bose-Einstein condensation is expected to occur, although, in order to specify the critical temperature, it would be necessary to sum up the series, which, to the best of our knowledge, cannot be done analytically. Nonetheless, one can estimate the critical quantities using the asymptotic behavior of $E_n$ for large $n$ [@AbS]: $$E_n=-E_g a_n' \sim E_g\left[3\pi(4n-3)/8\right]^{2/3}
,
\qquad n\gg 1.
\label{4.10}$$ This corresponds to a density of states of the form $$\label{rho}
\rho(E)\approx{dn \over dE}
\sim{1 \over \pi}\,E_g^{-3/2}\,E^{1/2},\qquad E\gg E_g\,.$$ Since $E_g\propto g^{2/3}$, as $g\to 0$ the energy spectrum becomes denser and denser and the ground state energy approaches zero. Thus, in a weak gravitational field it is reasonable to extrapolate the continuum density of states (\[rho\]) down to $E=0$. We can then approximate the series in Eq. (\[4.9c\]) by an integral, and eventually obtain $$N_{\rm ex} \sim
\int_0^\infty {dE \over \pi}\,{E_g^{-3/2}E^{1/2} \over
\exp\left[\beta(E-\mu)\right]-1}=
4\pi\left(\kappa\lambda_T\right)^{-3}
g_{3/2}(e^{\beta\mu}),
\label{4.12}$$ where $\lambda_T\equiv h/\sqrt{2\pi mk_BT}$ is the thermal wavelength and $g_s(x)\equiv\sum_{n=1}^{\infty}n^{-s}\,x^n$ is the Bose-Einstein function [@Hua]. To obtain the critical temperature, we take the limit $\mu\to 0$ in Eq. (\[4.12\]) and equate $N_{\rm ex}$ to the total number of particles in the gas; solving for $T$ then yields the approximate critical temperature $$T_{\rm c}\sim {E_g \over k_B}\,(4\pi)^{1/3}
\left({N \over g_{3/2}(1)}\right)^{2/3}.
\label{4.13}$$ Below $T_{\rm c}$ the fraction of particles occupying the ground state is given by $${N_0\over N}
=1-{N_{\rm ex} \over N}
=1-\left({T \over T_{\rm c}}\right)^{3/2}.
\label{4.14}$$
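The continuum approximation (\[4.12\]) can be tested against a direct summation of the discrete spectrum. In units $E_g=k_B=1$ one can check that $4\pi(\kappa\lambda_T)^{-3}=(k_BT/E_g)^{3/2}/(2\sqrt{\pi})$, so for $\mu\to0$ the integral gives $\zeta(3/2)\,T^{3/2}/(2\sqrt{\pi})$. A numerical sketch, using the asymptotic levels (\[4.10\]) in place of the exact Airy zeros and assuming `numpy`/`scipy`:

```python
import numpy as np
from scipy.special import zeta

T = 500.0                                   # temperature in units of E_g/k_B
n = np.arange(1, 400_000)
E = (3*np.pi*(4*n - 3)/8)**(2.0/3.0)        # asymptotic levels, Eq. (4.10)

# population of the spectrum at mu -> 0, summed level by level
N_sum = np.sum(1.0/np.expm1(E/T))

# continuum result, Eq. (4.12): 4 pi (kappa lambda_T)^{-3} g_{3/2}(1)
N_int = T**1.5*zeta(1.5)/(2*np.sqrt(np.pi))

assert abs(N_sum/N_int - 1) < 0.05          # agreement at the few-percent level
```

The residual discrepancy comes from the discreteness of the low-lying levels and from using the asymptotic zeros at small $n$; it shrinks as $T/E_g$ grows.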
The reasoning which led us to the conclusion that a one-dimensional ideal boson gas in a uniform gravitational field displays BEC can be easily generalized to higher dimensions and other types of potential. This is the content of the following theorem.
Suppose the single-particle energy spectrum of an ideal boson gas satisfies the following conditions: (i) there is a gap between the fundamental and the first excited energy levels, i.e., $E_1-E_0=\Delta>0$; (ii) the single-particle partition function is finite, i.e., $Z\equiv\sum_{n=0}^{\infty}d_n\exp(-\beta E_n)<\infty$, $d_n$ being the finite degeneracy of the $n$-th eigenvalue of the single-particle Hamiltonian. Then this gas displays Bose-Einstein condensation at finite temperature.
[*Proof.*]{} If $\mu<E_0$, the number of particles in the excited states is bounded from above by $$N_{\rm ex}=\sum_{n=1}^{\infty}\frac{d_n\exp[-\beta(E_n-\mu)]}
{1-\exp[-\beta(E_n-\mu)]}
\le\frac{\exp(\beta\mu)}{1-\exp[-\beta(E_1-\mu)]}
\sum_{n=1}^{\infty}d_n\exp(-\beta E_n).$$ Therefore $$\lim_{\mu\to E_0}\,N_{\rm ex}\le\frac{\exp(\beta E_0)}
{1-\exp(-\beta\Delta)}\left[Z-d_0\exp(-\beta E_0)\right] < \infty ,$$ since, by hypothesis, $Z$ and $d_0$ are finite and $\Delta > 0$.
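The saturation mechanism in the proof can be observed numerically for the one-dimensional gravitational spectrum $E_n=-E_g a_n'$, with $a_n'$ the zeros of ${\rm Ai}'$; the following sketch works in units $\beta E_g=1$ (an illustrative choice):

```python
import numpy as np
from scipy import special

# 1D spectrum in the gravitational field: E_n = -E_g a_n', with a_n' the
# zeros of Ai'; work in units beta*E_g = 1 (illustrative choice)
ap = special.ai_zeros(100)[1]          # first 100 zeros of Ai' (all negative)
E = -ap                                # E_1 < E_2 < ... in units of E_g

def n_excited(nlevels):
    # occupation of the excited states at the saturation point mu -> E_1
    dE = E[1:nlevels] - E[0]
    return np.sum(1.0 / np.expm1(dE))

print(n_excited(50), n_excited(100))   # finite, and already converged
```

Since $-a_n'$ grows like $n^{2/3}$, the occupations decay exponentially and the series saturates after a handful of levels.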
We note that the above statement can be generalized to some cases in which part of the spectrum is continuous or some energy levels are infinitely degenerate. This is achieved by suitably introducing the density of particles in the excited states and the single-particle partition function per unit volume. Some explicit examples of this generalization are discussed in Ref. [@GMS2] and in Section \[Floor\] of the present paper.
Many papers discuss the problem of Bose-Einstein condensation of an ideal gas confined in a power-law potential [@Bagnato2; @Li; @Yan; @Yan2; @Salasnich], mainly using some kind of semiclassical approximation. In particular, they predict that a one-dimensional gas displays BEC iff the power-law potential is [*less*]{} confining than the parabolic one, i.e., $V(x)\propto x^{\eta}$, $\eta<2$. Theorem 1 shows that this condition is too strong: BEC occurs for any positive $\eta$. It should be clear that the reason for such a discrepancy is not the semiclassical approximation [*per se*]{}, but the replacement of the discrete spectrum by a smooth density of states, which may miss some relevant features of the energy spectrum.
Let us return to the problem of an ideal boson gas in a uniform gravitational field. We shall now consider the two- and three-dimensional cases. Due to the translation invariance along the transverse direction(s), the proper quantity to be discussed is the number of particles per unit area $n^{(D)}\equiv\lim_{L_j\to\infty}N/L_1\cdots L_{D-1}$. The density of particles in the excited states is then given by $$\begin{aligned}
n_{\rm ex}^{(D)} &=& \sum_{j=1}^{\infty}\int
{d^{D-1}k \over
(2\pi\hbar)^{D-1}}\left\{\exp\left[\beta\left({{\bf k}^2 \over 2m}
-E_g a_j'-\mu\right)\right]-1\right\}^{-1}
\nonumber \\
&=& \lambda_T^{1-D}
\sum_{j=1}^{\infty}g_{(D-1)/2}\left[\exp\beta(E_g a_j'+\mu)
\right],\qquad \mu<-E_g a_1'\,.
\label{nex}\end{aligned}$$ The integral in Eq. (\[nex\]) is well defined for arbitrary $D>1$ due to the condition $\mu<-E_g a_1'$. Now, since $\lim_{x\to 1}g_s(x)=\infty$ if $s\le 1$, the first term of the series on the r.h.s. of Eq. (\[nex\]) diverges for $D\le 3$ as $\mu\to -E_g a_1'$. Therefore, a two- or three-dimensional ideal boson gas in a uniform gravitational field [*does not*]{} display Bose-Einstein condensation at $T\neq 0$.
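The divergence responsible for the absence of BEC in $D=2,3$ is easily exhibited numerically: the $j=1$ term of the series in Eq. (\[nex\]) involves $g_1(x)=-\ln(1-x)$ for $D=3$ (and the even more divergent $g_{1/2}$ for $D=2$), which blows up as its argument approaches 1, whereas $g_{5/2}$, which appears in the continuum approximation of remark (c) below, stays bounded by $\zeta(5/2)$. An illustrative check:

```python
import numpy as np
from scipy import special

# Bose-Einstein function for s = 1 in closed form: g_1(x) = -ln(1 - x).
# This is the j = 1 term of the series in Eq. (nex) for D = 3 (for D = 2
# the relevant g_{1/2} diverges even faster).
g1 = lambda x: -np.log1p(-x)

for eps in (1e-2, 1e-4, 1e-8):
    print(g1(1.0 - eps))               # 4.6, 9.2, 18.4: diverges as x -> 1

# The continuum approximation replaces this by g_{5/2}, which stays
# bounded by zeta(5/2), hence the spurious prediction of BEC
print(special.zeta(2.5))               # ~1.3415
```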
Some remarks are in order here:
\(a) At first sight, Eq. (\[nex\]) seems to imply absence of BEC in $D=1$ too. It should be noted, however, that in one dimension there is no integration over transverse momenta. Hence, in order to remove the contribution of the ground state from the sum over states in Eq. (\[nex\]), one has to begin it at $j=2$. Then $n_{\rm ex}^{(1)}$ $(=N_{\rm ex})$ has a finite limit as $\mu\to -E_g a_1'$.
\(b) It is easy to see that the absence of BEC in a two- or three-dimensional ideal boson gas in a uniform gravitational field in the $x$-direction is due to the quantization of the motion in that direction. Thus, any potential $V$ that depends only on $x$, and such that the one-dimensional Hamiltonian $$H_x=\frac{p_x^2}{2m}+V(x)$$ has a discrete spectrum, will do the job of hindering BEC in $D=2,3$.
\(c) There are claims in the literature [@Hua; @Gol; @Lam; @Groot; @Halpern; @Gersch; @Bagnato1] that a three-dimensional ideal boson gas in a uniform field may undergo BEC at $T\neq 0$. This is an artifact of approximating the sum in Eq. (\[nex\]) by an integral (recall that Eq. (\[nex\]) holds for $D>1$). Indeed, using the density of states given by Eq. (\[rho\]) we obtain $$\begin{aligned}
\sum_{j=1}^{\infty}g_{(D-1)/2}\left[\exp\beta(E_ga_j'+\mu)\right]
&\approx&\frac{1}{\pi}\,E_g^{-3/2}\int_0^{\infty}dE\,E^{1/2}
\sum_{n=1}^{\infty}\frac{e^{-n\beta(E-\mu)}}{n^{(D-1)/2}}
\nonumber \\
&=&\frac{1}{\pi}\,(\beta E_g)^{-3/2}\,\Gamma(3/2)
\sum_{n=1}^{\infty}\frac{e^{n\beta\mu}}{n^{(D+2)/2}}
\nonumber \\
&=&4\pi\left(\kappa\lambda_T\right)^{-3}
g_{(D+2)/2}(e^{\beta\mu}).
\label{approxsum}\end{aligned}$$ Inserting this result into Eq. (\[nex\]), one would be led to the incorrect conclusion that BEC occurs at finite temperature in $D=2$ and $D=3$ in the presence of a uniform field, because $\lim_{\mu\to 0}\,g_{(D+2)/2}(e^{\beta\mu})<\infty$ if $D>0$.
\(d) It should be clear by now that none of our conclusions so far depends crucially on the use of Neumann boundary condition. They would remain correct, at least qualitatively, had we used Dirichlet or Robin boundary condition instead.
$D$-dimensional boson gas interacting with a point-like impurity at the bottom of the container {#Floor}
===============================================================================================
In this Section we finally come to the most interesting physical case in which, in addition to the gravitational field, there is a point-like impurity at the bottom of the vessel containing the gas. As we shall show here, such an impurity is enough to restore BEC in the three-dimensional case — and to allow its existence in the two-dimensional case, in which it is absent with or without the gravitational field. The single-particle Hamiltonian takes now the form $$H^{(D)}(g,\lambda_D)={{\bf p}^2\over 2m}+mgx
+\lambda_D\,\delta^{(D)}({\bf x})
\equiv H_0^{(D)}(g)+\lambda_D\,\delta^{(D)}({\bf x}).
\label{5.1}$$ Our main task will be to show that the $\delta$-potential creates a bound state in the two- and three-dimensional cases, thus paving the way for the occurrence of Bose-Einstein condensation, at variance with the impurity-free situation discussed in the previous Section.
Our basic tool to tackle this problem is the Green’s function $$G^{(D)}(z;{\bf x},{\bf x}')=\left<{\bf x}\left|
\left[H^{(D)}(g,\lambda_D)-z\right]^{-1}\right|{\bf x}'\right>,
\qquad z\in\mathbb{C},$$ from which it is possible to extract the energy spectrum. A formal expression for $G^{(D)}(z;{\bf x},{\bf x}')$ can be obtained by solving the Lippmann-Schwinger integral equation, $$\label{LS}
G^{(D)}(z;{\bf x},{\bf x}')=G_0^{(D)}(z;{\bf x},{\bf x}')-
\int d^Dy\,G_0^{(D)}(z;{\bf x},{\bf y})\,V({\bf y})\,
G^{(D)}(z;{\bf y},{\bf x}'),$$ where $G_0^{(D)}$ and $G^{(D)}$ are the Green’s functions associated to $H_0^{(D)}$ and $H^{(D)}=H_0^{(D)}+V({\bf x})$, respectively. For $V({\bf x})=\lambda_D\,\delta^{(D)}({\bf x})$ the integral in Eq. (\[LS\]) can be done trivially, resulting in $$\label{L-S}
G^{(D)}(z;{\bf x},{\bf x}')=G_0^{(D)}(z;{\bf x},{\bf x}')-
\lambda_D\,G_0^{(D)}(z;{\bf x},{\bf 0})\,G^{(D)}(z;{\bf 0},{\bf x}').$$ If we now set ${\bf x}={\bf 0}$, we obtain an algebraic equation for $G^{(D)}(z;{\bf 0},{\bf x}')$. Solving that equation and inserting the result into Eq. (\[L-S\]), we end up with $$\label{Krein}
G^{(D)}(z;{\bf x},{\bf x}')=G_0^{(D)}(z;{\bf x},{\bf x}')-
\frac{G_0^{(D)}(z;{\bf x},{\bf 0})\,G_0^{(D)}(z;{\bf 0},{\bf x}')}
{\frac{1}{\lambda_D}+G_0^{(D)}(z;{\bf 0},{\bf 0})}\,.$$ As we shall see below, $G_0^{(D)}(z;{\bf 0},{\bf 0})$ is formally divergent for $D\ge 2$, but one can still give a well-defined meaning to Eq. (\[Krein\]) by renormalizing the coupling parameter $\lambda_D$. The resulting expression, which then makes sense also for $D=2,3$, is known as Krein’s formula [@Alb] and encodes the one-parameter family of self-adjoint extensions of the symmetric Hamiltonian operator $H_0^{(D)}(g)$. This precisely corresponds to the mathematically rigorous description of the $\delta$-potential.
To complete the construction of $G^{(D)}$ we still have to obtain the Green’s function in the absence of the impurity. This is done in Appendix \[Green\], with the result $$\label{G0D}
G_0^{(D)}(z;{\bf x},{\bf x}')=-{\pi\kappa \over E_g}
\int\frac{d^{D-1}k}{(2\pi\hbar)^{D-1}}\,
\exp\left\{{i\over\hbar}\,{\bf k}\cdot
({\bf r}^{}-{\bf r}')\right\}
{u[\xi(x_<)]\,v[\xi(x_>)] \over {\rm Ai}'[\xi(0)]}\,,$$ where the functions $u(\xi)$ and $v(\xi)$ are defined in Eq. (\[uv\]), $\xi(x)$ is defined in Eq. (\[xi\]), and $x_<(x_>)={\rm min}({\rm max})\{x,x'\}$. Setting ${\bf x}={\bf x}'={\bf 0}$ in Eq. (\[G0D\]) we formally obtain $$\begin{aligned}
G_0^{(D)}(z;{\bf 0},{\bf 0})
&=&-{\kappa\over E_g}\int {d^{D-1}k \over (2\pi\hbar)^{D-1}}\,
{{\rm Ai}\left[\left({\bf k}^2/2mE_g\right)-\left(z/E_g\right)\right]\over
{\rm Ai}'\left[\left({\bf k}^2/2mE_g\right)-\left(z/E_g\right)\right]}
\nonumber \\
&=&-C_D\int_0^\infty dy
\,
y^{(D-3)/2}\, {{\rm Ai}(y-\zeta)\over {\rm Ai}^\prime(y-\zeta)}\,,
\label{5.A}\end{aligned}$$ where $$C_D\equiv{\kappa^D\,(4\pi)^{(1-D)/2}\over E_g\,\Gamma[(D-1)/2]}\,,
\qquad\zeta\equiv \frac{z}{E_g}\,.$$ It follows from the asymptotic behavior of the Airy function ${\rm Ai}(x)$ for large $x$ [@AbS], $$\label{asympAiry}
{\rm Ai}(x)\stackrel{x\to\infty}{\sim}\frac{1}{2\sqrt{\pi}x^{1/4}}\,
\exp\left(-\frac{2}{3}\,x^{3/2}\right)\left[1+O(x^{-3/2})\right],$$ that the integral in Eq. (\[5.A\]) diverges in the UV region for $D\ge 2$, as anticipated. (The integral is finite in the IR for $D>1$.)
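The leading term of Eq. (\[asympAiry\]) is already quite accurate at moderate $x$, as a quick check with SciPy shows (the relative error is $O(x^{-3/2})$, consistent with the correction term):

```python
import numpy as np
from scipy import special

# Leading term of the asymptotic expansion (asympAiry)
ai_asym = lambda x: np.exp(-2.0/3.0 * x**1.5) / (2.0 * np.sqrt(np.pi) * x**0.25)

rel_err = [abs(ai_asym(x) / special.airy(x)[0] - 1.0) for x in (5.0, 10.0, 30.0)]
print(rel_err)   # O(x^{-3/2}): already below 1% at x = 5
```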
Before we show how to make sense of Eq. (\[Krein\]) for $D=2,3$, let us discuss the one-dimensional case, which does not need renormalization. In this case, the energy spectrum can be obtained by solving[^6] $$\label{pole-1D}
\frac{1}{\lambda_1}+G_0^{(1)}(z;0,0)=0,$$ or, more explicitly (see Appendix \[Green\]), $$\label{K1D}
{1 \over \lambda_1}-{\kappa \over E_g}\,
{{\rm Ai}\left(-z/E_g\right) \over
{\rm Ai}'\left(-z/E_g\right)}=0.
\label{5.C}$$ This equation is equivalent to the imposition of Robin boundary condition at the origin, i.e., $\psi'(0)+c\,\psi(0)=0$. It interpolates between the Neumann boundary condition, for $\lambda_1\to 0$, and the Dirichlet one, for $\lambda_1\to\infty$. Any of these boundary conditions prevents the flow of particles across the origin, so any of them can be used to represent an impenetrable wall at the bottom of the container. Nevertheless, it is more convenient to impose Neumann boundary condition in the impurity-free case, because it is then possible to model an impurity at the bottom of the container by a $\delta$-potential. This would not be possible had we imposed Dirichlet boundary condition instead. In any case, the energy spectrum obtained by solving Eq. (\[K1D\]) will be purely discrete and bounded from below. As a consequence, we can say that in the one-dimensional case the Bose-Einstein condensation actually occurs at the lowest discrete energy level, although the ground state energy itself as well as the critical quantities are shifted with respect to the previously discussed impurity-free case.
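For a concrete picture, Eq. (\[K1D\]) can be solved numerically; in units where $\kappa/E_g=1$ (an illustrative choice), the ground state $\zeta_0=z_0/E_g$ moves monotonically from the Neumann value $-a_1'\simeq 1.019$ toward the Dirichlet value $-a_1\simeq 2.338$ as $\lambda_1$ grows:

```python
import numpy as np
from scipy import special, optimize

# Eq. (K1D) in units kappa/E_g = 1 (illustrative): solve
#   1/lambda_1 = Ai(-zeta)/Ai'(-zeta),   zeta = z/E_g
a1 = special.ai_zeros(1)[0][0]     # first zero of Ai :  a_1  ~ -2.3381 (Dirichlet)
ap1 = special.ai_zeros(1)[1][0]    # first zero of Ai':  a_1' ~ -1.0188 (Neumann)

def ground_state(lam):
    f = lambda zeta: 1.0/lam - special.airy(-zeta)[0] / special.airy(-zeta)[1]
    # for lam > 0 the lowest level lies between |a_1'| and |a_1|
    return optimize.brentq(f, -ap1 + 1e-9, -a1 - 1e-9)

roots = [ground_state(lam) for lam in (0.1, 1.0, 10.0)]
print(roots)   # increases from |a_1'| toward |a_1| with lambda_1
```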
Let us now discuss the two- and three-dimensional cases. In order to make sense of the denominator in Eq. (\[Krein\]), we first have to regularize $G_0^{(D)}(z;{\bf 0},{\bf 0})$. We shall do this by introducing a UV cutoff in Eq. (\[5.A\]), namely, $$G_{0}^{(D)}(z;{\bf 0},{\bf 0})\to
G_{0}^{(D)}(\Lambda,z;{\bf 0},{\bf 0})
=-C_D\int_0^{\Lambda}dy\,
y^{(D-3)/2}\, {{\rm Ai}(y-\zeta)\over {\rm Ai}^\prime(y-\zeta)}\,.$$ We now add to $G_{0}^{(D)}(\Lambda,z;{\bf 0},{\bf 0})$ the integral $$I_D(\Lambda,z,\alpha)\equiv -C_D\int_0^{\Lambda}dy\,
y^{(D-3)/2}\left(y+\alpha\right)^{-1/2},\qquad\alpha>0.$$ It follows from Eq. (\[asympAiry\]) that $$\begin{aligned}
\frac{{\rm Ai}(y-\zeta)}{{\rm Ai}'(y-\zeta)}&\stackrel{y\to\infty}{\sim}&
-(y-\zeta)^{-1/2}+O\left[(y-\zeta)^{-2}\right]
\nonumber \\
&\sim& -y^{-1/2}+O\left(\zeta y^{-3/2}\right);
\label{asymp2}\end{aligned}$$ hence, the integrand of $G_0^{(D)}(\Lambda,z;{\bf 0},{\bf 0})+I_D(\Lambda,z,\alpha)$ behaves like $y^{(D-6)/2}$ for large $y$. This allows us to remove the UV regulator (i.e., to take the limit $\Lambda\to\infty$) for $D<4$. At the same time, since we have added $I_D$ to $G_0^{(D)}$, we must subtract it from $\lambda_D^{-1}$ in order to keep the combination $\lambda_D^{-1}+G_0^{(D)}(z;{\bf 0},{\bf 0})$ unaltered. We may then define the renormalized coupling parameter $\lambda_D^R$ as $$\frac{1}{\lambda_D^R}=\lim_{\Lambda\to\infty}\left[
\frac{1}{\lambda_D}-I_D(\Lambda,z,\alpha)\right],$$ where it is understood that $\lambda_D$ depends on $\Lambda$ in such a way that the limit exists. We then finally arrive at a meaningful expression for the Green’s function $G^{(D)}(z;{\bf x},{\bf x}')$ for $D=2,3$, in which the denominator of Eq. (\[Krein\]) is replaced by the finite expression $$\label{gD}
\texttt{g}_D(\zeta,\alpha,\lambda_D^R)\equiv\frac{1}{\lambda_D^R}
-C_D\int_0^{\infty}dy\,y^{(D-3)/2}
\left[\frac{{\rm Ai}(y-\zeta)}{{\rm Ai}'(y-\zeta)}+
\left(y+\alpha\right)^{-1/2}\right].$$
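A numerical sketch of the zero of $\texttt{g}_D$ discussed below (see Appendix \[g=0\]) for $D=3$, with all constants $C_3$, $\lambda_3^R$ and $\alpha$ set to unity for illustration; exponentially scaled Airy functions are used so that the integrand can be evaluated without underflow at large $y$:

```python
import numpy as np
from scipy import special, integrate, optimize

def airy_ratio(x):
    """Ai(x)/Ai'(x); scaled Airy functions for large positive x avoid
    underflow (the exponential scaling factors cancel in the ratio)."""
    if x > 5.0:
        eai, eaip, _, _ = special.airye(x)
        return eai / eaip
    ai, aip, _, _ = special.airy(x)
    return ai / aip

def g3(zeta, lam=1.0, alpha=1.0):
    # Eq. (gD) for D = 3 with C_3 = 1 (illustrative units)
    integrand = lambda y: airy_ratio(y - zeta) + 1.0 / np.sqrt(y + alpha)
    v1, _ = integrate.quad(integrand, 0.0, 1.0, limit=200)
    v2, _ = integrate.quad(integrand, 1.0, np.inf, limit=200)
    return 1.0 / lam - (v1 + v2)

ap1 = special.ai_zeros(1)[1][0]              # a_1' ~ -1.0188
zeta0 = optimize.brentq(g3, -30.0, -ap1 - 1e-3)
print(zeta0)   # bound state strictly below the continuum threshold -a_1'
```

The two limits derived in Appendix \[g=0\] are visible here: $\texttt{g}_3\to-\infty$ as $\zeta\to-\infty$ and $\texttt{g}_3\to+\infty$ as $\zeta\uparrow -a_1'$, which is what makes the bracketing in `brentq` legitimate.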
It is possible to show (see Appendix \[g=0\]) that, for any finite value of $\lambda_D^R$, $\texttt{g}_D(\zeta,\alpha,\lambda_D^R)$ has a single zero $\zeta_0$ in the interval $-\infty<\zeta_0<-a_1'$. In physical terms, this means the existence of a bound state with energy $E_0=E_g\,\zeta_0$. The rest of the energy spectrum forms a continuum starting at $E=-E_g a_1'$. The presence of this gap in the energy spectrum is enough to guarantee the occurrence of BEC. The proof of this fact is similar to that of Theorem 1, the only difference being that what saturates in the limit $\mu\to E_0$ is not $N_{\rm ex}$, but $n_{\rm ex}^{(D)}$. Some examples of this phenomenon are discussed in detail in Ref. [@GMS2], where it is also shown how to obtain the critical quantities. Working in close analogy, one can obtain an estimate of the critical quantities in the present situation, taking Eq. (\[approxsum\]) suitably into account. If the energy gap created by the impurity is much greater than the energy splitting due to the gravitational field, i.e., $\Delta\equiv-E_g a_1'-E_0\gg -E_g a_2'+E_g a_1'$, one can obtain a good approximation to the critical temperature $T_{\rm c}$ by solving the equation $$\label{Tc}
\lambda_{T_{\rm c}}^{D-1}n^{(D)}=
4\pi\left(\kappa\lambda_{T_{\rm c}}\right)^{-3}
g_{(D+2)/2}\left[\exp(-\Delta/k_BT_{\rm c})\right].$$ It is worthwhile to stress that now, because the bound state energy $E_0$ is strictly below the continuum threshold $(-E_g a_1')$, we can safely use Eq. (\[approxsum\]) to estimate the critical quantities in $D=2,3$.
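To get a feel for Eq. (\[Tc\]), it can be solved numerically in $D=3$; setting every dimensional prefactor to unity ($n^{(3)}=\kappa=1$ and $\lambda_T=\tau^{-1/2}$ with $\tau=k_BT/\Delta$, a purely illustrative normalization), the equation reduces to $\tau^{-1}=4\pi\,\tau^{3/2}g_{5/2}(e^{-1/\tau})$, whose left-hand side decreases and right-hand side increases in $\tau$, so the root is unique:

```python
import numpy as np
from scipy import optimize

def g(s, x, nmax=400):
    # Bose-Einstein function g_s(x) = sum_{n>=1} x^n / n^s, 0 <= x < 1
    n = np.arange(1, nmax + 1)
    return np.sum(x**n / n**s)

# Eq. (Tc) in D = 3 with every prefactor set to unity (illustrative):
#   1/tau = 4 pi tau^{3/2} g_{5/2}(exp(-1/tau)),  tau = k_B T / Delta
f = lambda tau: 1.0/tau - 4.0*np.pi * tau**1.5 * g(2.5, np.exp(-1.0/tau))
tau_c = optimize.brentq(f, 0.05, 10.0)
print(tau_c)   # dimensionless critical temperature k_B T_c / Delta
```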
We close this section with a somewhat technical remark. Aside from being positive, the parameter $\alpha$ in Eq. (\[gD\]) is arbitrary, and has to be fixed by some renormalization prescription. One possibility is the so-called Bergmann-Manuel-Tarrach [@BMT] renormalization prescription, in which the bound state energy $E_0$ labels the one-parameter family of self-adjoint extensions of the symmetric Hamiltonian $H_0^{(D)}(g)$. Then Eq. (\[gD\]) becomes equivalent to the pair of equations $$\texttt{g}_D(\zeta,\zeta_0)|_{\rm BMT}
=C_D\int_0^{\infty}dy\,y^{(D-3)/2}\left[\frac{{\rm Ai}(y-\zeta_0)}
{{\rm Ai}'(y-\zeta_0)}-\frac{{\rm Ai}(y-\zeta)}{{\rm Ai}'(y-\zeta)}
\right],$$ $$\frac{1}{\lambda_D^R(\alpha)}=C_D\int_0^{\infty}dy\,y^{(D-3)/2}
\left[\frac{{\rm Ai}(y-\zeta_0)}{{\rm Ai}'(y-\zeta_0)}
+(y+\alpha)^{-1/2}\right],$$ where $\zeta_0=E_0/E_g<-a_1'$. The parameter $\alpha>0$ is thus the subtraction point at which the “running” coupling parameter $\lambda_D^R$ is defined.
Conclusions {#Conclusions}
===========
In this paper we have explicitly solved the quantum dynamics and studied the thermodynamic equilibrium of an ideal $D$-dimensional boson gas in the presence of a uniform gravitational field and a point-like impurity at the bottom of the vessel containing the gas. For convenience, in the present analysis we have imposed the Neumann boundary condition at the bottom of the container, but our results can be generalized to Dirichlet or Robin boundary conditions without any substantial modification of the physical behavior. In the impurity-free case it has been shown that Bose-Einstein condensation at finite temperature is possible only in one dimension, and an estimate of the critical temperature in this case has been obtained. It has also been elucidated why the conventional wisdom that BEC (with a phase separation) might occur in the three-dimensional case actually fails: the reason ultimately lies in the illegitimate use of a continuous approximation to the density of states in the computation of the average number of particles in the excited states.
On the other hand, it has been proved that the presence of a point-like impurity is enough to allow BEC at $T\neq 0$ also in two and three dimensions. The reason is that the impurity creates a bound state in the single-particle spectrum, where particles can now accumulate. It should also be emphasized that a $\delta$-potential in the presence of a uniform field is always attractive in two and three dimensions, irrespective of the sign of the renormalized coupling parameter.
The main interest of the present model lies in its exact solvability. Nonetheless, it is evident that the key physical features exhibited here will persist even if more realistic impurity potentials are used. The situation is less clear if one considers an interacting boson gas (for the general definition of BEC, applicable to this case, see Ref. [@Leggett]). It is reasonable to assume that our results still hold if the mean field interaction between the particles in the gas is smaller than (i) the energy splitting due to the gravitational field, and (ii) the energy gap created by the impurity (if the latter is present). This condition, however, is likely to be violated as more and more particles accumulate in the lowest energy level, until the interaction between the particles cannot be neglected anymore. What happens then awaits further investigation.
R.M.C. acknowledges the kind hospitality of Università di Bologna and the financial support from FAPERJ.
Derivation of the impurity-free Green’s function {#Green}
================================================
The Green’s function $G_0^{(D)}(z;{\bf x},{\bf x}')$ satisfies the partial differential equation $$\label{G}
\left[H_0^{(D)}(g)-z\right]G_0^{(D)}(z;{\bf x},{\bf x}')
=\delta^{(D)}({\bf x}-{\bf x}').$$ We can reduce Eq. (\[G\]) to an ordinary differential equation by Fourier transforming in the transverse coordinates: $$\label{cG}
\left(-\frac{\hbar^2}{2m}\,\frac{\partial^2}{\partial x^2}
+\frac{{\bf k}^2}{2m}+mgx-z\right)
{\cal G}(z,{\bf k};x,x')=\delta(x-x');$$ the Green’s function $G_0^{(D)}$ will then be given by[^7] $$\label{int}
G_0^{(D)}(z;{\bf x},{\bf x}')=\int\frac{d^{D-1}k}
{(2\pi\hbar)^{D-1}}\,
\exp\left\{{i\over\hbar}\,{\bf k}\cdot
({\bf r}-{\bf r}')\right\}
{\cal G}(z,{\bf k};x,x').$$ Upon the change of variable $$\xi=\kappa x+E_g^{-1}\left({{\bf k}^2 \over 2m}-z\right)
,
\label{xi}$$ Eq. (\[cG\]) becomes $$\label{Airy}
\left(\frac{\partial^2}{\partial\xi^2}-\xi\right)
{\cal G}(\xi,\xi')=-{\kappa\over E_g}\,\delta(\xi-\xi').$$
When $\xi\ne\xi'$, Eq. (\[Airy\]) reduces to the Airy differential equation. Its solution must satisfy Neumann boundary condition at $x=0$, i.e., $\partial_{\xi}{\cal G}(\xi,\xi')|_{x=0}=0$, and it must vanish at infinity, $\lim_{\xi\to\infty}\,{\cal G}(\xi,\xi')=0$. Thus, $$\label{sol1}
{\cal G}(\xi,\xi')=C_1\,u(\xi)\,\theta(\xi'-\xi)
+C_2\,v(\xi)\,\theta(\xi-\xi'),$$ where $\theta(x)$ is the Heaviside step function and $$u(\xi)\equiv{\rm Bi}'(\xi_0)\,{\rm Ai}(\xi)
-{\rm Ai}'(\xi_0)\,{\rm Bi}(\xi),\qquad
v(\xi)\equiv{\rm Ai}(\xi),
\label{uv}$$ with $\xi_0\equiv\xi(x=0)$. To fix the constants $C_1$ and $C_2$, one imposes continuity of ${\cal G}(\xi,\xi')$ at $\xi=\xi'$, $$\label{cond1}
{\cal G}(\xi'+0,\xi')={\cal G}(\xi'-0,\xi'),$$ and a jump in $\partial_{\xi}{\cal G}(\xi,\xi')$ at the same point, $$\label{cond2}
\partial_{\xi}{\cal G}(\xi'+0,\xi')
-\partial_{\xi}{\cal G}(\xi'-0,\xi')=-{\kappa\over E_g},$$ obtained by integrating Eq. (\[Airy\]) from $\xi'-\epsilon$ to $\xi'+\epsilon$ and letting $\epsilon\downarrow 0$. Applying conditions (\[cond1\]) and (\[cond2\]) to the solution (\[sol1\]), and using the fact that the Wronskian of $u(\xi)$ and $v(\xi)$ is given by $$W\{u(\xi),v(\xi)\}=-{\rm Ai}'(\xi_0)\,W\{{\rm Bi}(\xi),{\rm Ai}(\xi)\}
=\frac{1}{\pi}\,{\rm Ai}'(\xi_0),$$ we finally obtain $$\label{cG2}
{\cal G}(\xi,\xi')=-\frac{\pi\kappa\,u(\xi_<)\,v(\xi_>)}
{E_g\,{\rm Ai}'(\xi_0)}\,
,$$ where $\xi_<(\xi_>)={\rm min}\,({\rm max})\{\xi,\xi'\}$. Substituting (\[cG2\]) into Eq. (\[int\]) gives us the desired integral representation of Eq. (\[G0D\]) for $G_0^{(D)}(z;{\bf x},{\bf x}')$.
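The Wronskian identity above, together with the Neumann condition $u'(\xi_0)=0$ built into Eq. (\[uv\]), is easily verified numerically (the value of $\xi_0$ below is illustrative; its actual value depends on ${\bf k}$ and $z$):

```python
import numpy as np
from scipy import special

xi0 = -0.7                         # illustrative value of xi(0)
_, aip0, _, bip0 = special.airy(xi0)

def u_and_du(xi):
    ai, aip, bi, bip = special.airy(xi)
    return bip0*ai - aip0*bi, bip0*aip - aip0*bip

print(u_and_du(xi0)[1])            # Neumann condition: u'(xi_0) = 0
for xi in (-2.0, 0.0, 1.5):
    ai, aip, _, _ = special.airy(xi)
    uu, du = u_and_du(xi)
    print(uu*aip - du*ai, aip0/np.pi)   # Wronskian u v' - u' v, with v = Ai
```

As expected for two solutions of the Airy equation, the Wronskian is independent of $\xi$ and equals ${\rm Ai}'(\xi_0)/\pi$.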
Existence and uniqueness of the zero of $\texttt{g}_D$ {#g=0}
======================================================
Here we show that $\texttt{g}_D(\zeta,\alpha,\lambda_D^R)$ has one (and only one) zero in the interval $-\infty<\zeta<-a_1'$. Indeed, for $\zeta$ large and negative we may use the first line of Eq. (\[asymp2\]) to evaluate the integral in Eq. (\[gD\]), obtaining $$\texttt{g}_2(\zeta,\alpha,\lambda_2^R)\stackrel{\zeta\to-\infty}{\sim}
\frac{1}{\lambda_2^R}-C_2\,\ln\left(-\frac{\zeta}{\alpha}\right),$$ $$\texttt{g}_3(\zeta,\alpha,\lambda_3^R)\stackrel{\zeta\to-\infty}{\sim}
\frac{1}{\lambda_3^R}-2\,C_3\left(\sqrt{-\zeta}-\sqrt{\alpha}\right).$$ In both cases, $\lim_{\zeta\to-\infty}\texttt{g}_D(\zeta,\alpha,\lambda_D^R)=-\infty$. On the other hand, the integral in Eq. (\[gD\]) becomes divergent at the origin for $D\le 3$ if $\zeta\uparrow-a_1'$, as $$\frac{{\rm Ai}(y+a_1')}{{\rm Ai}'(y+a_1')}\stackrel{y\to 0}{\sim}
\frac{{\rm Ai}(a_1')}{{\rm Ai}''(a_1')\,y}
=\frac{1}{a_1' y}\,.$$ (The last equality is a consequence of Airy differential equation.) Since $a_1'<0$, it follows that $\lim_{\zeta\uparrow-a_1'}\texttt{g}_D(\zeta,\alpha,\lambda_D^R)=+\infty$ ($D=2,3$). By continuity, we may conclude that $\texttt{g}_D(\zeta,\alpha,\lambda_D^R)$ vanishes at least once in the interval $-\infty<\zeta<-a_1'$. To show that it vanishes only once, it suffices to prove that $\texttt{g}_D(\zeta,\alpha,\lambda_D^R)$ is a monotonically increasing function of $\zeta$ in that interval. This follows from the identity $$\begin{aligned}
\frac{\partial}{\partial\zeta}\,\texttt{g}_D(\zeta,\alpha,\lambda_D^R)
&=&E_g\,\frac{\partial}{\partial z}\left[\frac{1}{\lambda_D}
+G_0^{(D)}(z;{\bf 0},{\bf 0})\right]
\nonumber \\
&=&E_g\left<{\bf 0}\left|\left[H_0^{(D)}(g)-z\right]^{-2}\right|{\bf 0}
\right>.\end{aligned}$$ It shows that $\partial_{\zeta}\texttt{g}_D(\zeta,\alpha,\lambda_D^R)>0$ if $z$ is real and does not belong to the spectrum of $H_0^{(D)}(g)$. This occurs, as we have seen in Section \[no-imp\], for $z<-E_g a_1'$, or $\zeta<-a_1'$.
K. Huang, [*Statistical Mechanics*]{} (Wiley, New York, 1987).
R. K. Pathria, [*Statistical Mechanics*]{} (Pergamon Press, Oxford, 1972).
L. C. Ioriatti, Jr., S. Goulart Rosa, Jr. and O. Hipólito, Am. J. Phys. [**44**]{}, 744 (1976).
P. Giacconi, F. Maltoni and R. Soldati, Phys. Lett. A [**279**]{}, 12 (2001).
L. Goldstein, J. Chem. Phys. [**9**]{}, 273 (1941).
W. Lamb and A. Nordsieck, Phys. Rev. [**59**]{}, 677 (1941).
S. R. de Groot, G. J. Hooyman and C. A. ten Seldam, Proc. R. Soc. London A [**203**]{}, 266 (1950).
O. Halpern, Phys. Rev. [**86**]{}, 126 (1952); [**87**]{}, 520 (1952).
H. A. Gersch, J. Chem. Phys. [**27**]{}, 928 (1957).
V. Bagnato, D. E. Pritchard and D. Kleppner, Phys. Rev. A [**35**]{}, 4354 (1987).
S. Albeverio, F. Gesztesy, R. H[ø]{}egh-Krohn and H. Holden, [*Solvable Models in Quantum Mechanics*]{} (Springer-Verlag, New York, 1988) pp. 109–110, 357–358.
R. Jackiw, in [*M. A. B. Bég Memorial Volume*]{}, edited by A. Ali and P. Hoodbhoy (World Scientific, Singapore, 1991).
M. Reed and B. Simon, [*Methods of Modern Mathematical Physics*]{} (Academic Press, Orlando, 1987), Vol. 2.
V. Bagnato and D. Kleppner, Phys. Rev. A [**44**]{}, 7439 (1991).
M. Li, L. Chen and C. Chen, Phys. Rev. A [**59**]{}, 3109 (1999).
Z. Yan, Phys. Rev. A [**59**]{}, 4657 (1999).
Z. Yan, M. Li, L. Chen, C. Chen and J. Chen, J. Phys. A [**32**]{}, 4069 (1999).
L. Salasnich, J. Math. Phys. [**41**]{}, 8016 (2000).
[*Handbook of Mathematical Functions*]{}, edited by M. Abramowitz and I. A. Stegun (Dover, New York, 1972) pp. 446–452.
O. Bergmann, Phys. Rev. D [**46**]{}, 5474 (1992); C. Manuel and R. Tarrach, Phys. Lett. B [**268**]{}, 222 (1991).
A. J. Leggett, Rev. Mod. Phys. [**73**]{}, 307 (2001).
[^1]: E-mail: rmoritz@if.ufrj.br
[^2]: E-mail: Paola.Giacconi@bo.infn.it
[^3]: Present address: Laboratory for Physical Sciences, 8050 Greenmead Drive, College Park, MD 20740; E-mail: Guido.Pupillo@physics.umd.edu
[^4]: E-mail: Roberto.Soldati@bo.infn.it
[^5]: The reason why we impose Neumann boundary condition, instead of the seemingly more natural Dirichlet one, will be explained in Section \[Floor\].
[^6]: One can easily check that the residue of $G_0^{(1)}(z;x,x')$ at $z=-E_g a_n'$ cancels against the residue of the second term on the r.h.s. of Eq. (\[Krein\]) at the same pole. Therefore, all the poles of $G^{(1)}(z;x,x')$ are given by the solutions to Eq. (\[pole-1D\]).
[^7]: In the one-dimensional case we have instead $G_0^{(1)}(z;x,x')={\cal G}(z,{\bf k}={\bf 0};x,x')$.
---
abstract: 'This article studies the limiting behavior of a class of robust population covariance matrix estimators, originally due to Maronna in 1976, in the regime where both the number of available samples and the population size grow large. Using tools from random matrix theory, we prove that, for sample vectors made of independent entries satisfying some moment conditions, the difference between the sample covariance matrix and (a scaled version of) such a robust estimator tends to zero in spectral norm, almost surely. This result can be applied to various statistical methods arising from random matrix theory, which can thus be made robust without altering their first-order behavior.'
author:
- |
Romain Couillet$^{1}$, Frédéric Pascal$^2$, and Jack W. Silverstein$^3$[^1]\
[*$^1$ Telecommunication department, Supélec, Gif sur Yvette, France.*]{}\
[*$^2$ SONDRA Laboratory, Supélec, Gif sur Yvette, France.*]{}\
[*$^3$ Department of Mathematics, North Carolina State University, NC, USA.*]{}
bibliography:
- '/home/romano/phd-group/papers/rcouillet/tutorial\_RMT/book\_final/IEEEabrv.bib'
- '/home/romano/phd-group/papers/rcouillet/tutorial\_RMT/book\_final/IEEEconf.bib'
- '/home/romano/phd-group/papers/rcouillet/tutorial\_RMT/book\_final/tutorial\_RMT.bib'
- 'robust\_est.bib'
title: Robust Estimates of Covariance Matrices in the Large Dimensional Regime
---
Introduction {#sec:intro}
============
Many multi-variate signal processing detection and estimation techniques are based on the empirical covariance matrix of a sequence of samples $x_1,\ldots,x_n$ from a random population vector $x\in{{\mathbb{C}}}^N$. Assuming ${{\rm E}}[x]=0$ and ${{\rm E}}[xx^*]=C_N$, the strong law of large numbers ensures that, for independent and identically distributed (i.i.d.) samples, $$\begin{aligned}
\hat{S}_N=\frac1n\sum_{i=1}^nx_ix_i^*\to C_N\end{aligned}$$ almost surely (a.s.), as the number $n$ of samples increases. Many subspace methods, such as the multiple signal classifier (MUSIC) algorithm and its derivatives [@SCH86; @SCH91], heavily rely on this property by identifying $C_N$ with $\hat{S}_N$, leading to appropriate approximations of functionals of $C_N$ in the large $n$ regime. However, this standard approach has two major limitations: the inherent inadequacy for small sample sizes (when $n$ is not too large compared to $N$) and the lack of robustness to outliers or to heavy-tailed distributions of $x$. Although the former issue was probably the first to be recognized historically, it is only recently that significant advances have been made using random matrix theory [@MES08]. As for the latter, it spurred a strong wave of interest in the seventies, starting with the work of Huber [@HUB64] on robust M-estimation. The objective of this article is to provide a first bridge between the two disciplines by introducing new fundamental results on robust M-estimates in the random matrix regime where both $N$ and $n$ grow large at the same rate.
Aside from its obvious simplicity of analysis, the [*sample covariance matrix*]{} (SCM) $\hat{S}_N$ is an object of prime interest since it is the maximum likelihood estimator of $C_N$ for $x$ Gaussian. When $x$ is not Gaussian, the SCM as an approximation of $C_N$ may however perform very poorly. This problem was identified in multiple areas such as multivariate signal processing and financial asset management, but was particularly recognized in adaptive radar and sonar processing, where the signals under study are characterized by impulsive noise and outlying data. Robust estimation theory aims at tackling this problem [@MAR06]. Among other solutions, the so-called robust M-estimators of the population covariance matrix, originally introduced by Huber [@HUB64] and investigated in the seminal work of Maronna [@MAR76], have emerged as an appealing alternative to the SCM. This estimator, which we denote $\hat{C}_N$, is defined implicitly as a solution of[^2] $$\begin{aligned}
\label{def:hatCN}
\hat{C}_N = \frac1n\sum_{i=1}^n u\left( \frac1Nx_i^*\hat{C}_N^{-1}x_i\right) x_ix_i^*\end{aligned}$$ for $u$ a nonnegative function with specific properties. These estimators are particularly appropriate as they are the maximum likelihood estimates of $C_N$ for specific distributions of $x$ and corresponding choices of $u$, such as the family of elliptical distributions [@KEL70]. For any such $u$, $\hat{C}_N$ is, up to a scalar, a consistent estimate for $C_N$ for $N$ fixed and $n\to \infty$, see e.g. [@OLI12]. The robust estimators are also used to cope with distributions of $x$ with heavy tails or with a tendency to produce outliers, such as when $\Vert x\Vert^2$ follows a K-distribution, often met in the context of adaptive radar processing with impulsive clutter [@WAT85]. In this article, the concept of robustness is to be understood in the sense of this general theory.
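In practice $\hat{C}_N$ is computed by iterating the fixed-point equation (\[def:hatCN\]). The following Python sketch does so for one admissible Maronna-type choice $u(s)=(1+t)/(t+s)$ (an assumption made for illustration), with real Gaussian data and $C_N=I_N$ for simplicity, and then compares $\hat{C}_N$ with a rescaled SCM, in the spirit of the result announced below:

```python
import numpy as np

rng = np.random.default_rng(0)
N, n = 50, 1000
X = rng.standard_normal((N, n))     # real Gaussian data, C_N = I_N (illustration)

t = 0.5
u = lambda s: (1.0 + t) / (t + s)   # one admissible Maronna-type u (assumption)

# Fixed-point iteration for Eq. (def:hatCN)
C = np.eye(N)
for _ in range(500):
    q = np.einsum('ki,ki->i', X, np.linalg.solve(C, X)) / N  # (1/N) x_i^* C^{-1} x_i
    C_next = (X * u(q)) @ X.T / n
    delta = np.linalg.norm(C_next - C) / np.linalg.norm(C)
    C = C_next
    if delta < 1e-11:
        break

# Compare with a rescaled SCM: || C_hat - alpha S_hat || should be small
S = X @ X.T / n
alpha = np.trace(C) / np.trace(S)   # empirical scale (in theory alpha depends on u)
rel = np.linalg.norm(C - alpha * S, 2) / np.linalg.norm(S, 2)
print(delta, rel)
```

The scale $\alpha$ is estimated here by a trace ratio purely for illustration; the theoretical value is determined by $u$ alone.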
A second angle of improvement of subspace methods has recently emerged due to advances in random matrix theory. The latter aims at studying the statistical properties of matrices in the regime where both $N$ and $n$ grow large. It is known in particular that, if $x=A_N y$ with $y\in{{\mathbb{C}}}^M$, $M\geq N$, a vector of independent entries with zero mean and unit variance, then, under some conditions on $C_N=A_NA_N^*$ and $y$, in the large $N,n$ (and $M$) regime, the eigenvalue distribution of (almost every) $\hat{S}_N$ converges weakly to a limiting distribution described implicitly by its Stieltjes transform [@SIL95b]. When $C_N$ is the identity matrix for all $N$, this distribution takes an explicit form known as the Marcenko-Pastur law [@MAR67]. Under some additional moment conditions on the entries of $y$, it has also been shown that the eigenvalues of $\hat{S}_N$ cannot lie infinitely often away from the support of the limiting distribution [@SIL98]. In the past ten years, these two results and subsequent works have been applied to revisit classical signal processing techniques such as signal detection schemes [@BIA10] or subspace methods [@MES08b; @COU10b]. In these works, traditional [*$n$-consistent*]{} detection and estimation methods were improved into [*$(N,n)$-consistent*]{} approaches, i.e. they provide estimates that are consistent in the large $N,n$ regime rather than in the fixed $N$ and large $n$ regime. These improved estimators are often referred to as G-estimators.
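As a quick numerical illustration of the results quoted above, the extreme eigenvalues of $\hat{S}_N$ for $C_N=I_N$ indeed concentrate near the edges $(1\pm\sqrt{c})^2$ of the Marcenko-Pastur support (sizes below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
N, n = 500, 1000                       # c = N/n = 1/2 (illustrative sizes)
X = rng.standard_normal((N, n))
eig = np.linalg.eigvalsh(X @ X.T / n)

c = N / n
edges = ((1 - np.sqrt(c))**2, (1 + np.sqrt(c))**2)
print(eig.min(), eig.max(), edges)     # extreme eigenvalues stick to the edges
```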
In this article, we study the asymptotic first order properties of the robust M-estimate $\hat{C}_N$ of $C_N$, given by , in the regime where $N$, $n$ (and $M$) grow large simultaneously, hereafter referred to as the random matrix regime. Although the study of the SCM $\hat{S}_N$ for vectors $x$ with rather general distributions is accessible to random matrix theory, as in e.g. the case of elliptical distributions [@ELK09], the equivalent analysis for $\hat{C}_N$ is often very challenging. In the present article, we restrict ourselves to vectors $x$ of the type $x=A_Ny$ with $y$ having independent zero-mean entries. One important technical challenge brought by the matrix $\hat{C}_N$, usually not met in random matrix theory, lies in the dependence structure between the vectors $\{u(\frac1Nx_i^*\hat{C}_N^{-1}x_i)^\frac12x_i\}_{i=1}^n$ (as opposed to the independent vectors $\{x_i\}_{i=1}^n$ for the matrix $\hat{S}_N$). We fundamentally rely on the set of assumptions on the function $u$ taken by Maronna in [@MAR76] to overcome this difficulty. Our main contribution consists in showing that, in the large $N,n$ regime, and under some mild assumptions, $\Vert \hat{C}_N-\alpha\hat{S}_N\Vert \to 0$, a.s., for some constant $\alpha>0$ dependent only on $u$. This result is in particular in line with the conjecture made in [@FRA08] according to which $\Vert \hat{C}_N-\alpha\hat{S}_N\Vert {\overset{\rm a.s.}{\longrightarrow}}0$ for the function $u(s)=1/s$ studied extensively by Tyler [@TYL88; @KEN91]; however, the function $u(s)=1/s$ does not enter our present scheme as it creates additional difficulties which leave the conjecture open.
A major practical consequence of our result is that the matrix $\hat{S}_N$, at the core of many random matrix-based estimators, can be straightforwardly replaced by $\hat{C}_N$ without altering the first order properties of these estimators. We generically call the induced estimators [*robust G-estimators*]{}. As an application example, we shall briefly introduce an application to robust direction-of-arrival estimation accounting for large $N,n$ based on the earlier estimator [@MES08c].
The remainder of the article is structured as follows. Section \[sec:results\] provides our theoretical results along with an application to direction-of-arrival estimation. Section \[sec:conclusion\] then concludes the article. All technical proofs are detailed in the appendices.
[*Notations:*]{} The arrow ‘${\overset{\rm a.s.}{\longrightarrow}}$’ denotes almost sure convergence. For $A\in{{\mathbb{C}}}^{N\times N}$ Hermitian, $\lambda_1(A)\leq \ldots \leq \lambda_N(A)$ are its ordered eigenvalues. The norm $\Vert \cdot \Vert$ is the spectral norm for matrices and the Euclidean norm for vectors. For $A,B$ Hermitian, $A\succeq B$ means that $A-B$ is nonnegative definite. The notation $A^*$ denotes the Hermitian transpose of $A$. We also write $\imath=\sqrt{-1}$.
Main results {#sec:results}
============
Theoretical results
-------------------
Let $X=[x_1,\ldots,x_n]\in{{\mathbb{C}}}^{N\times n}$, where $x_i=A_Ny_i\in{{\mathbb{C}}}^N$, with $y_i=[y_{i1},\ldots,y_{iM}]^{{\sf T}}\in{{\mathbb{C}}}^M$ having independent entries with zero mean and unit variance, $A_N\in{{\mathbb{C}}}^{N\times M}$, and $C_N\triangleq A_NA_N^*\in{{\mathbb{C}}}^{N\times N}$ be a positive definite matrix. We denote $c_N\triangleq N/n$, $\bar{c}_N\triangleq M/N$, and define the sample covariance matrix $\hat{S}_N$ of the sequence $x_1,\ldots,x_n$ by $$\hat{S}_N\triangleq \frac1nXX^*=\frac1n\sum_{i=1}^nx_ix_i^*.$$
Let $u:{{\mathbb{R}}}^+\to{{\mathbb{R}}}^+$ (${{\mathbb{R}}}^+=[0,\infty)$) be a function fulfilling the following conditions:
- $u$ is nonnegative, nonincreasing, and continuous on ${{\mathbb{R}}}^+$;
- the function $\phi: {{\mathbb{R}}}^+\to {{\mathbb{R}}}^+,~s\mapsto su(s)$ is nondecreasing and bounded, with $\sup_{s\geq 0}\phi(s) = \phi_\infty>1$. Moreover, $\phi$ is increasing on the interval where $\phi(s)<\phi_\infty$.
Classical M-estimators $\hat{C}_N$ defined by for such a function $u$ include the Huber estimator, with $\phi(s)=\frac{\phi_\infty}{\phi_\infty-1}s$ for $s\in[0,\phi_\infty-1]$, $\phi_\infty>1$, and $\phi(s)=\phi_\infty$ for $s\geq \phi_\infty-1$. Since $u(s)$ is constant for $s\leq \phi_\infty-1$ and decreases for $s\geq \phi_\infty-1$, this estimator weights the majority of the samples $x_1,\ldots,x_n$ by a common factor and reduces the impact of the outliers. The widely used function $u(s)=(1+t)(t+s)^{-1}$ for some $t>0$ exhibits similar properties, here with $\phi_\infty=1+t$.[^3] Other classical $u$ functions, adapted to specific distributions of the samples, can be found in the survey [@OLI12]. In any of these scenarios, robustness can be controlled by properly setting $\phi_\infty$.
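For concreteness, the two weight functions just described can be written down directly. The following sketch (our own code, with illustrative parameter values; the function names are ours) checks numerically that both satisfy the two conditions above, namely that $u$ is nonincreasing and that $\phi(s)=su(s)$ is nondecreasing and bounded with $\phi_\infty>1$:

```python
import numpy as np

def u_huber(s, phi_inf=2.0):
    """Huber-type weight: constant up to s = phi_inf - 1, then decaying as 1/s."""
    s = np.asarray(s, dtype=float)
    c = phi_inf - 1.0
    return np.where(s <= c, phi_inf / (phi_inf - 1.0), phi_inf / np.maximum(s, c))

def u_student(s, t=0.5):
    """u(s) = (1+t)/(t+s); here phi(s) = s(1+t)/(t+s) increases to phi_inf = 1+t.
    Note phi(1) = 1 for every t, so phi^{-1}(1) = 1 for this family."""
    return (1.0 + t) / (t + np.asarray(s, dtype=float))

s = np.linspace(1e-6, 50.0, 100_000)
for u in (u_huber, u_student):
    w = u(s)
    phi = s * w
    assert np.all(np.diff(w) <= 1e-12)     # u nonincreasing
    assert np.all(np.diff(phi) >= -1e-12)  # phi nondecreasing
    assert phi.max() > 1.0                 # phi_inf > 1
```

The flat region of the Huber weight corresponds exactly to the region where $\phi(s)=\phi_\infty$, which is why the strict-monotonicity assumptions of [@MAR76] exclude it.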
To pursue, we need the following statistical assumptions on the large dimensional random matrices under study.
The random variables $y_{ij}$, $i\leq n$, $j\leq M$, are independent either real or circularly symmetric complex (i.e. ${{\rm E}}[y_{ij}^2]=0$) with ${{\rm E}}[y_{ij}]=0$ and ${{\rm E}}[|y_{ij}|^2]=1$. Also, there exist $\eta>0$ and $\alpha>0$ such that, for all $i,j$, ${{\rm E}}[|y_{ij}|^{8+\eta}]<\alpha$.
$\bar{c}_N\geq 1$ and, as $n\to\infty$, $$\begin{aligned}
0<\lim\inf_n c_N\leq \lim\sup_n c_N<1, \quad \lim\sup_n \bar{c}_N<\infty.\end{aligned}$$ There exist $C_-,C_+>0$ such that $$\begin{aligned}
C_-<\lim\inf_N \{\lambda_1(C_N)\}\leq \lim\sup_N \{\lambda_N(C_N)\}< C_+.\end{aligned}$$ Note that the assumptions neither request the entries of $y$ to be identically distributed nor impose the existence of a continuous density. This assumption is adequate for a large range of application scenarios such as factor models in finance or general signal processing models with independent entry-wise non-Gaussian noise (e.g. distributed antenna array processing), although the requirement of independence in the entries of $y$ is somewhat uncommon in the classical applications of robust estimation theory. The entry-wise independence is however central in this article for the emergence of a concentration of the quadratic forms $\frac1Nx_i^*\hat{C}_N^{-1}x_i$, $i=1,\ldots,n$. Further generalizations, e.g. to elliptical distributions for $x$, would break this effect and would certainly entail a much different asymptotic behavior of $\hat{C}_N$. These important considerations are left to future work.
Technically, [**A1**]{}–[**A3**]{} mainly ensure that the eigenvalues of $\hat{S}_N$ and $\hat{C}_N$ lie within a compact set away from zero, a.s., for all $N,n$ large, which is a consequence (although not immediate) of [@SIL98; @COU10b]. Note also that [**A2**]{} demands $\lim\inf_N c_N>0$, so that the following results [*do not*]{} contain the results from [@MAR76; @KEN91], in which $N$ is fixed and $n\to\infty$, as special cases. With these assumptions, we are now in a position to provide the main technical result of this article.
\[th:1\] Assume [**A1**]{}–[**A3**]{} and consider the following matrix-valued fixed-point equation in $Z\in{{\mathbb{C}}}^{N\times N}$, $$\begin{aligned}
\label{eq:hatCN}
Z = \frac1n\sum_{i=1}^n u\left(\frac1N x_i^*Z^{-1}x_i \right)x_ix_i^*.
\end{aligned}$$ Then, we have the following results.
- There exists a unique solution to \[eq:hatCN\] for all large $N$ a.s. We denote this solution by $\hat{C}_N$, defined as $$\begin{aligned}
\hat{C}_N = \lim_{t\to\infty} Z^{(t)}
\end{aligned}$$ where $Z^{(0)}=I_N$ and, for $t\in{{\mathbb{N}}}$, $$\begin{aligned}
Z^{(t+1)} = \frac1n\sum_{i=1}^n u\left(\frac1N x_i^*(Z^{(t)})^{-1}x_i \right)x_ix_i^*.
\end{aligned}$$
- Defining $\hat{C}_N$ arbitrarily when \[eq:hatCN\] does not have a unique solution, we also have $$\begin{aligned}
\left\Vert \phi^{-1}(1)\hat{C}_N - \hat{S}_N \right\Vert {\overset{\rm a.s.}{\longrightarrow}}0.
\end{aligned}$$
The proof is provided in Appendix \[app:th1\].
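The iteration in Theorem \[th:1\]–(I) is straightforward to implement. Below is a minimal numerical sketch (our own setup, not from the paper: real Gaussian data, $C_N=I_N$, and $u(s)=(1+t)(t+s)^{-1}$, for which $\phi^{-1}(1)=1$) illustrating Part (II): after convergence of the fixed-point iteration started at $Z^{(0)}=I_N$, $\hat{C}_N$ is close to $\hat{S}_N$ in spectral norm.

```python
import numpy as np

rng = np.random.default_rng(0)
N, n, t = 100, 500, 0.5              # our choices; c_N = N/n = 0.2 < 1 as in A2
u = lambda s: (1.0 + t) / (t + s)    # phi(1) = 1, hence phi^{-1}(1) = 1

X = rng.standard_normal((N, n))      # x_i = A_N y_i with A_N = I_N (C_N = I_N)
S = X @ X.T / n                      # sample covariance matrix S_N

Z = np.eye(N)
for _ in range(50):                  # Z^{(t+1)} built from Z^{(t)}
    d = np.einsum('ij,ij->j', X, np.linalg.solve(Z, X)) / N  # (1/N) x_j^T Z^{-1} x_j
    Z = (X * u(d)) @ X.T / n
C_hat = Z

err = np.linalg.norm(C_hat - S, 2)   # spectral norm, as in Theorem (II)
print("relative error:", err / np.linalg.norm(S, 2))
```

The relative error shrinks as $N,n$ grow with $c_N$ fixed, consistently with $\Vert\hat{C}_N-\hat{S}_N\Vert\to 0$ a.s. for this choice of $u$.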
An immediate corollary of Theorem \[th:1\] is the asymptotic closeness of the ordered eigenvalues of ${\phi^{-1}(1)}\hat{C}_N$ and $\hat{S}_N$.
\[co:spacing\] Under the assumptions of Theorem \[th:1\], $$\begin{aligned}
\max_{i\leq N} \left| {\phi^{-1}(1)}\lambda_i(\hat{C}_N) - \lambda_i(\hat{S}_N) \right| &{\overset{\rm a.s.}{\longrightarrow}}0.
\end{aligned}$$
The proof is provided in Appendix \[app:th1\].
Some comments are called for to understand Theorem \[th:1\] in the context of robust M-estimation.
Theorem \[th:1\]–(I) can be first compared to the result from Maronna [@MAR76 Theorem 1], which states that a solution to \[eq:hatCN\] exists for each set $\{x_1,\ldots,x_n\}$ under certain conditions on the dimension of the space spanned by the $n$ vectors, as well as on $u(s)$, $N$, and $n$ (in particular $u(s)$ must satisfy $\phi_\infty>n/(n-N)$ in [@MAR76]). Our result may be considered more interesting in practice in the sense that the system sizes $N$ and $n$ no longer condition $\phi_\infty$ and therefore do not constrain the definition of $u(s)$. Theorem \[th:1\]–(I) can also be compared to the results on uniqueness [@MAR76; @KEN91], which hold for all $N,n$ under some further conditions on $u(s)$, such as $\phi(s)$ being strictly increasing [@MAR76]. The latter assumption is particularly demanding as it may reject some M-estimators such as the Huber M-estimator, for which $\phi(s)$ is constant for large $s$. Theorem \[th:1\]–(I) trades these assumptions against a requirement for $N$ and $n$ to be “sufficiently large” and for $\{x_1,\ldots,x_n\}$ to belong to a probability one sequence. Precisely, we demand that there exists an integer $n_0$, depending on the random sequence $\{(x_1,\ldots,x_n)\}_{n=1}^\infty$, such that for all $n\geq n_0$, existence and uniqueness are established under no further condition than the definition (i)–(ii) of $u(s)$ and [**A1**]{}–[**A3**]{}. Theorem \[th:1\]–(II), which is our main result, states that, as $N$ and $n$ grow large with a nontrivial limiting ratio, the fixed-point solution $\hat{C}_N$ (either always defined under the assumptions of [@MAR76; @KEN91] or defined a.s. for large enough $N$) becomes asymptotically close to the sample covariance matrix, up to a scaling factor. This implies in particular that, while $\hat{C}_N$ is an $n$-consistent estimator of (a scaled version of) $C_N$ for $n\to\infty$ and $N$ fixed, in the large $N,n$ regime it shares many of the first order statistics of $\hat{S}_N$.
This suggests that many results holding for $\hat{S}_N$ in the large $N,n$ regime should also hold for $\hat{C}_N$, at least concerning first order convergence. For instance, as will be seen through Corollary \[co:RG-MUSIC\], one expects consistent estimators (in the large $N,n$ regime) based on functionals of $\hat{S}_N$ to remain consistent when using $\phi^{-1}(1)\hat{C}_N$ in place of $\hat{S}_N$ in the expression of the estimator. However, it is important to note that, in general, one cannot say much on second order statistics, i.e. regarding the comparison of the asymptotic performance of both estimators. The matrices $\hat{C}_N$, parametrizable through $u$, should then be seen as a class of alternatives for $\hat{S}_N$ which may possibly improve estimators based on $\hat{S}_N$ in the large (but finite) $N,n$ regime. Note also that Theorem \[th:1\] is independent of the choice of the distribution of the entries of $y$ (as long as the moment conditions are satisfied) or of the choice of the function $u$, which is in this sense similar to the equivalent result in the classical fixed-$N$ large-$n$ regime [@OLI12].
In a similar context, it is shown in [@SIL98] and [@YIN88b] that the eigenvalues of $\hat{S}_N$ are asymptotically contained in the support of their limiting compactly supported distribution if and only if the entries of $y$ have finite fourth order moment. This first suggests that the technical assumption [**A1**]{}, which requires $y$ to have uniformly bounded moments of order $8+\eta$, may be relaxed to $y_{ij}$ having only finite fourth order moments for Theorem \[th:1\] to hold. This being said, since most of the aforementioned $(N,n)$-consistent estimators involving $\hat{C}_N$ or $\hat{S}_N$ rely on a non-degenerate behavior of these eigenvalues (see e.g. [@COUbook Chapters 16–17] for details), the finite fourth order moment condition cannot possibly be further relaxed for these estimators to be usable. As a consequence, although [**A1**]{} might seem very restrictive in a robust estimation framework, as it discards the possibility to consider distributions of $x$ with heavy-tail behavior, it is close to a necessary condition for robust estimation in the random matrix regime to be meaningful.
In terms of applications to signal processing, recall first that the $n$-consistency results on robust estimation [@MAR76; @KEN91] imply that many metrics based on functionals of $C_N$ can be consistently estimated by replacing $C_N$ by $\hat{C}_N$. The inconsistency of the sample covariance matrix as an estimator of the population covariance matrix in the random matrix regime, along with Theorem \[th:1\], suggests instead that this approach will lead in general to inconsistent estimators in the large $N,n$ regime, and therefore to inaccurate estimates for moderate values of $N,n,M$. However, any metric based on $C_N$, and for which an $(N,n)$-consistent estimator involving $\hat{S}_N$ exists, is very likely to be $(N,n)$-consistently estimated by replacing $\hat{S}_N$ by $\phi^{-1}(1)\hat{C}_N$. The interest of this replacement obviously lies in the possibility to improve the metric through an appropriate choice of $u$, in particular when $y$ exhibits outlier behavior or has heavy tails.
Application example
-------------------
A specific example can be found in the context of MUSIC-like estimation methods for array processing. In this example, $K$ signal sources impinge on a collection of $N$ collocated sensors with angles of arrival $\theta_1,\ldots,\theta_K$. The data $x_i\in{{\mathbb{C}}}^N$ received at time $i$ at the array is modeled as $$\begin{aligned}
x_i = \sum_{k=1}^K \sqrt{p_k} s(\theta_k) z_{k,i} + \sigma w_i\end{aligned}$$ where $s(\theta)\in{{\mathbb{C}}}^N$ is the deterministic unit norm steering vector for signals impinging on the sensors at angle $\theta$, $z_{k,i}\in{{\mathbb{C}}}$ is the signal of source $k$, modeled as a zero mean, unit variance, and finite $8+\eta$ order moment random variable, i.i.d. across $i$ and independent across $k$, $p_k>0$ is the transmit power of source $k$ ($p_k<p_{\rm max}$ for some $p_{\rm max}>0$), and $\sigma w_i\in{{\mathbb{C}}}^N$ is the noise received at time $i$, independent across $i$, with i.i.d. zero mean, variance $\sigma^2>0$, and finite $8+\eta$ order moment entries. Write $x_i=A_N y_i$, with $A_N \triangleq [S(\Theta)P^\frac12,\sigma I_N]$, $S(\Theta)=[s(\theta_1),\ldots,s(\theta_K)]$, $P=\operatorname{diag}(p_1,\ldots,p_K)$, and $y_i=(z_{1,i},\ldots,z_{K,i},w_i^{{\sf T}})^{{\sf T}}\in{{\mathbb{C}}}^{N+K}$. Then, with $N,n$ large and $K$ finite, Assumptions [**A1**]{}–[**A3**]{} are met and Theorem \[th:1\] can be applied. This yields the following corollary of Theorem \[th:1\].
\[co:RG-MUSIC\] Denote $E_W\in{{\mathbb{C}}}^{N\times (N-K)}$ a matrix containing in columns the eigenvectors of $C_N$ with eigenvalue $\sigma^2$ and $\hat{e}_k$ the eigenvector of $\hat{C}_N$ with eigenvalue $\hat\lambda_k\triangleq \lambda_k(\hat{C}_N)$ (recall that $\hat\lambda_1\leq\ldots\leq \hat\lambda_N$), with $\hat{C}_N$ defined as in Theorem \[th:1\]. Then, as $N,n\to \infty$ in the regime of Assumption [**A2**]{}, and $K$ fixed, $$\begin{aligned}
\gamma(\theta) - \hat{\gamma}(\theta) {\overset{\rm a.s.}{\longrightarrow}}0
\end{aligned}$$ where $$\begin{aligned}
\gamma(\theta) &= s(\theta)^* E_WE_W^* s(\theta) \\
\hat{\gamma}(\theta) &= \sum_{i=1}^N \beta_i s(\theta)^* \hat{e}_i \hat{e}_i^* s(\theta)\end{aligned}$$ and $$\begin{aligned}
\beta_i &= \left\{
\begin{array}{ll}
1+\sum_{k=N-K+1}^N \left( \frac{\hat\lambda_k}{\hat\lambda_i - \hat\lambda_k} - \frac{\hat\mu_k}{\hat\lambda_i-\hat\mu_k} \right) &,~i\leq N-K \\
- \sum_{k=1}^{N-K} \left( \frac{\hat\lambda_k}{\hat\lambda_i - \hat\lambda_k} - \frac{\hat\mu_k}{\hat\lambda_i-\hat\mu_k} \right) &,~i>N-K
\end{array}
\right.\end{aligned}$$ with $\hat\mu_1\leq\ldots\leq \hat\mu_N$ the eigenvalues of $\operatorname{diag}(\hat{{\bm \lambda}})-\frac1n \sqrt{\hat{{\bm \lambda}}}\sqrt{\hat{{\bm \lambda}}}^{{\sf T}}$, $\hat{{\bm \lambda}}=(\hat\lambda_1,\ldots,\hat\lambda_N)^{{\sf T}}$.
The corollary is exactly the algorithm of [@MES08b] with $\hat{S}_N$ replaced by $\hat{C}_N$. The validity of this operation is proved in Appendix \[app:RG-MUSIC\].
The function $\gamma(\theta)$ is the defining metric for the MUSIC algorithm [@SCH86], whose zeros contain the angles $\theta_i$, $i\in\{1,\ldots,K\}$. Corollary \[co:RG-MUSIC\] proves that the $(N,n)$-consistent G-MUSIC estimator of $\gamma(\theta)$ proposed by Mestre in [@MES08b] can be extended into a robust G-MUSIC method. The latter merely consists in replacing the sample covariance matrix $\hat{S}_N$ used in [@MES08b] by the robust estimator $\hat{C}_N$. The angles $\theta_i$ are then estimated as the deepest minima of $\hat{\gamma}(\theta)$. Simulations show this technique to perform better than either MUSIC or G-MUSIC in the finite $(N,n)$ regime in the case of impulsive noise in the sense of [**A1**]{}, for an appropriate choice of the function $u$. However, proving so requires the study of the second order statistics of $\hat{\gamma}(\theta)$, which goes beyond the reach of the present article and is left to future work.
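The robust G-MUSIC pipeline of Corollary \[co:RG-MUSIC\] can be sketched end-to-end. The code below is our own illustrative setup (a uniform linear array, $K=1$ Gaussian source, and $u(s)=(1+t)(t+s)^{-1}$, for which $\phi^{-1}(1)=1$; all sizes and parameter values are our choices): it computes $\hat{C}_N$ by the fixed-point iteration of Theorem \[th:1\], forms the weights $\beta_i$ of the corollary, and evaluates $\hat{\gamma}(\theta)$, which should be small at the true angle.

```python
import numpy as np

rng = np.random.default_rng(1)
N, n, K = 40, 200, 1                 # illustrative sizes; K sources, N sensors
theta0, p0, sigma = 0.0, 5.0, 1.0    # one source at broadside (our choice)

def steer(theta):                    # unit-norm ULA steering vector (assumed geometry)
    return np.exp(1j * np.pi * np.arange(N) * np.sin(theta)) / np.sqrt(N)

# x_i = sqrt(p) s(theta0) z_i + sigma w_i, circularly symmetric Gaussian entries
z = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
W = (rng.standard_normal((N, n)) + 1j * rng.standard_normal((N, n))) / np.sqrt(2)
X = np.sqrt(p0) * np.outer(steer(theta0), z) + sigma * W

t = 0.5
u = lambda s: (1.0 + t) / (t + s)    # phi^{-1}(1) = 1, no rescaling needed
Z = np.eye(N, dtype=complex)
for _ in range(50):                  # fixed-point iteration for C_hat
    d = np.real(np.einsum('ij,ij->j', X.conj(), np.linalg.solve(Z, X))) / N
    Z = (X * u(d)) @ X.conj().T / n
lam, E = np.linalg.eigh(Z)           # ascending eigenvalues, as in the corollary

# mu_k: eigenvalues of diag(lam) - (1/n) sqrt(lam) sqrt(lam)^T
mu = np.linalg.eigvalsh(np.diag(lam) - np.outer(np.sqrt(lam), np.sqrt(lam)) / n)

def gamma_hat(theta):
    proj = np.abs(E.conj().T @ steer(theta)) ** 2
    g = 0.0
    for i in range(N):
        if i < N - K:                # weights on noise-type eigenvectors
            ks = range(N - K, N)
            beta = 1.0 + sum(lam[k] / (lam[i] - lam[k]) - mu[k] / (lam[i] - mu[k]) for k in ks)
        else:                        # weights on signal-type eigenvectors
            ks = range(N - K)
            beta = -sum(lam[k] / (lam[i] - lam[k]) - mu[k] / (lam[i] - mu[k]) for k in ks)
        g += beta * proj[i]
    return g

print(gamma_hat(theta0), gamma_hat(0.5))  # small at the true angle, near 1 away from it
```

In a full estimation procedure one would scan $\hat{\gamma}(\theta)$ over a grid and keep the $K$ deepest minima as angle estimates.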
Conclusion {#sec:conclusion}
==========
We have proved that a large family of robust estimates of population covariance matrices is asymptotically equivalent to the sample covariance matrix in the regime of both large population size $N$ and large sample size $n$, irrespective of the underlying sample distribution. This result opens up a new area of research for robust estimators in the random matrix regime. It can be applied to improve a variety of signal processing techniques that rely on random matrix methods but do not yet account for noise impulsiveness. The exact performance gain of such improved methods however often relies on second order statistics, which will be investigated in future work.
Proof of Theorem \[th:1\] and Corollary \[co:spacing\] {#app:th1}
======================================================
In order to prove the existence and uniqueness of a solution to \[eq:hatCN\] for all large $n$, we use the framework of standard interference functions from [@YAT95].
\[def:standardfunctions\] A function $h=(h_1,\ldots,h_n):{{\mathbb{R}}}_+^n\to {{\mathbb{R}}}_+^n$ is said to be a standard interference function if it fulfills the following conditions:
1. [*Positivity:*]{} if $q_1,\ldots,q_n\geq 0$, then $h_j(q_1,\ldots,q_n)>0$, for all $j$.
2. [*Monotonicity:*]{} if $q_1\geq q_1',\ldots,q_n\geq q_n'$, then for all $j$, $h_j(q_1,\ldots,q_n)\geq h_j(q_1',\ldots,q_n')$.
3. [*Scalability:*]{} for all $\alpha>1$ and for all $j$, $\alpha h_j(q_1,\ldots,q_n)\geq h_j(\alpha q_1,\ldots,\alpha q_n)$.
\[th:standardfunctions\] If an $n$-variate function $h(q_1,\ldots,q_n)$ is a standard interference function and there exists $(q_1,\ldots,q_n)$ such that for all $j$, $q_j\geq h_j(q_1,\ldots,q_n)$, then the system of equations $$\label{eq:hj=qj}
q_j = h_j(q_1,\ldots,q_n)$$ for $j=1,\ldots,n$, has at least one solution, given by $\lim_{t\to\infty} (q_1^{(t)},\ldots,q_n^{(t)})$, where $$q_j^{(t+1)} = h_j(q_1^{(t)},\ldots,q_n^{(t)})$$ for $t\geq 1$ and any initial values $q_1^{(0)},\ldots,q_n^{(0)}\geq 0$.
The proof is provided in Appendix \[app:standardfunctions\].
Note that our definition of a standard interference function differs from that of [@YAT95], in which the scalability requirement reads: for all $j$, $\alpha h_j(q_1,\ldots,q_n)>h_j(\alpha q_1,\ldots,\alpha q_n)$. Relaxing the strict inequality to a non-strict one weakens the conclusions of the theorem above, where only existence is now ensured. However, for our present purposes, with $\phi(s)$ possibly possessing a flat region, requesting a strict inequality would be too demanding.
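As a toy illustration of Theorem \[th:standardfunctions\] (our own example, not from [@YAT95]), consider the affine map $h(q)=c+Aq$ with $c$ entrywise positive and $A$ entrywise nonnegative. It satisfies positivity, monotonicity and scalability (in fact strictly, since $\alpha c>c$ for $\alpha>1$), and when the spectral radius of $A$ is below one, $q^\star=(I-A)^{-1}c$ is a feasible point $q^\star\geq h(q^\star)$, so the iteration converges to the fixed point:

```python
import numpy as np

# Affine standard interference function h(q) = c + A q (toy example):
# positivity (c > 0), monotonicity (A >= 0 entrywise), scalability
# alpha*h(q) >= h(alpha*q) for alpha > 1 since alpha*c >= c.
A = np.array([[0.0, 0.2, 0.1],
              [0.3, 0.0, 0.2],
              [0.1, 0.1, 0.0]])
c = np.array([1.0, 2.0, 0.5])
assert float(np.max(np.abs(np.linalg.eigvals(A)))) < 1  # feasible point exists

q = np.zeros(3)                             # any nonnegative starting point
for _ in range(200):                        # q^{(t+1)} = h(q^{(t)})
    q = c + A @ q

q_star = np.linalg.solve(np.eye(3) - A, c)  # the unique fixed point here
print(q, q_star)
```

The geometric contraction here mirrors the role of the fixed-point iteration defining $\hat{C}_N$, with the quadratic forms $q_j=\frac1Nx_j^*Z^{-1}x_j$ in place of these toy powers.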
Since $\{x_1,\ldots,x_n\}$ spans ${{\mathbb{C}}}^N$ for all large $n$ a.s. (as a consequence of Proposition \[prop:no\_eigenvalue\] in Appendix \[app:lemmas\]), we can define for these $n$ the functions $h_j$, $j=1,\ldots,n$, $$\begin{aligned}
\label{eq:hj}
h_j(q_1,\ldots,q_n) \triangleq \frac1Nx_j^*\left(\frac1n\sum_{i=1}^n u(q_i) x_ix_i^*\right)^{-1}x_j.\end{aligned}$$
We first show that $h=(h_1,\ldots,h_n)$ meets the conditions of Theorem \[th:standardfunctions\] for all large $n$ a.s. Due to [**A1**]{}, from standard arguments using the Markov inequality and the Borel-Cantelli lemma, we have that $\min_{i\leq n}\Vert x_i\Vert \neq 0$ for all large $n$ a.s. (this is also a corollary of Lemma \[le:convquadraticform\] below). Therefore, we clearly have $h_j>0$ for all $j$, for all large $n$ a.s. Also, since $u$ is nonincreasing, taking $q_1,\ldots,q_n$ and $q_1',\ldots,q_n'$ such that $q_i'\geq q_i\geq 0$ for all $i$, we have $u(q_i')\leq u(q_i)$ and then $$\begin{aligned}
\frac1n \sum_{i=1}^n u(q_i)x_ix_i^* \succeq \frac1n\sum_{i=1}^n u(q_i') x_ix_i^* \end{aligned}$$ From [@HOR85 Corollary 7.7.4], this implies $$\begin{aligned}
\left(\frac1n \sum_{i=1}^n u(q_i')x_ix_i^*\right)^{-1} \succeq \left(\frac1n\sum_{i=1}^n u(q_i) x_ix_i^*\right)^{-1}\end{aligned}$$ from which $h_j(q_1',\ldots,q_n')\geq h_j(q_1,\ldots,q_n)$, proving the monotonicity of $h$.
For $\alpha>1$, $\phi(\alpha q_i)\geq \phi(q_i)$, so that $u(\alpha q_i) \geq \frac{u(q_i)}{\alpha}$. Therefore $$\begin{aligned}
\frac1n \sum_{i=1}^n u(\alpha q_i)x_ix_i^* \succeq \frac1{\alpha}\frac1n\sum_{i=1}^n u(q_i) x_ix_i^* \end{aligned}$$ From [@HOR85 Corollary 7.7.4] again, we then have $$\begin{aligned}
\alpha \left(\frac1n\sum_{i=1}^n u(q_i) x_ix_i^*\right)^{-1} \succeq \left(\frac1n \sum_{i=1}^n u(\alpha q_i)x_ix_i^*\right)^{-1}\end{aligned}$$ so that $\alpha h_j(q_1,\ldots,q_n)\geq h_j(\alpha q_1,\ldots,\alpha q_n)$. Therefore $h$ is a standard interference function. In order to prove that admits a solution, from Theorem \[th:standardfunctions\], we now need to prove that there exists $(q_1,\ldots,q_n)$ such that for all $j$, $q_j\geq h_j(q_1,\ldots,q_n)$. Note that this may not hold for all fixed $N,n$ as discussed in [@MAR76 pp. 54]. We will prove instead that a solution exists for all large $n$ a.s.
To pursue, we need random matrix results and additional notations. Take $c_-,c_+$ such that $0<c_-<\lim\inf_N c_N$ and $\lim\sup_N c_N<c_+<1$, and denote $X_{(i)}=[x_1,\ldots,x_{i-1},x_{i+1},\ldots,x_n]\in{{\mathbb{C}}}^{N\times (n-1)}$. We start with the following fundamental lemmas, which allow for a control of the joint convergence of the quadratic forms $\frac1Nx_i^*\hat{S}_N^{-1}x_i - 1$.
\[le:lambdamin\] Assume [**A1**]{}–[**A3**]{}. There exists $\varepsilon>0$ such that $$\begin{aligned}
\min_{i\leq n} \left\{\lambda_1\left( \frac1nX_{(i)}X_{(i)}^*\right)\right\} &> \varepsilon
\end{aligned}$$ for all large $n$ a.s.
The proof is provided in Appendix \[app:lambdamin\].
\[le:convquadraticform\] Assume [**A1**]{}–[**A3**]{}. Then, a.s., $$\begin{aligned}
\max_{i\leq n} \left\{\left| \frac1Nx_i^*\hat{S}_N^{-1}x_i - 1 \right|\right\} \to 0.
\end{aligned}$$
The proof is provided in Appendix \[app:convquadraticform\].
Let $q_1=\ldots=q_n\triangleq q>0$. Then, $$\begin{aligned}
h_i(q_1,\ldots,q_n) =\frac1{u(q)} \frac1N x_i^* \hat{S}_N^{-1} x_i = \frac{q}{\phi(q)} \frac1N x_i^* \hat{S}_N^{-1} x_i.\end{aligned}$$ Take $\varepsilon>0$ such that $(1+\varepsilon)/(\phi_\infty-\varepsilon)<1$. This is always possible since $\phi_\infty>1$. Choose now $q$ such that $\phi(q) = \phi_\infty -\varepsilon$, which also exists since $\phi$ is increasing on $[0,\phi^{-1}(\phi_\infty-))$ with image $[0,\phi_\infty)$. From Lemma \[le:convquadraticform\], for all large $n$ a.s., $$\begin{aligned}
\sup_i \left| \frac1q h_i(q_1,\ldots,q_n)(\phi_\infty-\varepsilon) - 1 \right| < \varepsilon.\end{aligned}$$ Therefore, $$\begin{aligned}
\frac1q h_i(q_1,\ldots,q_n) < \frac{1+\varepsilon}{\phi_\infty-\varepsilon} < 1\end{aligned}$$ from which $h_i(q,\ldots,q) < q$ for all $i$. From Theorem \[th:standardfunctions\], we therefore prove the existence of a solution to \[eq:hj=qj\] with $h_j$ given in \[eq:hj\]. Since these quadratic forms define the solutions of the fixed-point equation \[eq:hatCN\], this proves the existence of a solution $\hat{C}_N$ for all large $n$ a.s. Note that Lemma \[le:convquadraticform\] is crucial here and that, for $\phi_\infty$ close to one, there is little hope to prove existence for all fixed $N,n$, consistently with the results of [@MAR76; @KEN91].
We now prove uniqueness. Take a solution $\hat{C}_N$ and denote $d_i = \frac1Nx_i^*\hat{C}_N^{-1}x_i$, which we order as $d_1\leq\ldots\leq d_n$ without loss of generality. Denote also $D=\operatorname{diag}(\{u(d_i)\}_{i=1}^n)$. By definition $$\begin{aligned}
d_i = \frac1Nx_i^* \left(\frac1n XDX^*\right)^{-1}x_i.\end{aligned}$$ From the nonincreasing property of $u$, we have the inequality $$\begin{aligned}
XDX^* \succeq u(d_n)XX^*\end{aligned}$$ which implies after inversion $$\begin{aligned}
\frac1{u(d_n)}\left(XX^*\right)^{-1} \succeq \left(XDX^*\right)^{-1}\end{aligned}$$ and therefore, recalling that $n^{-1}XX^*=\hat{S}_N$, $$\begin{aligned}
d_n \leq \frac1{u(d_n)} \frac1Nx_n^*\hat{S}_N^{-1}x_n\end{aligned}$$ or equivalently, since $u(d_n)>0$, $$\begin{aligned}
\phi(d_n) \leq \frac1Nx_n^*\hat{S}_N^{-1}x_n.\end{aligned}$$
Similarly, $$\begin{aligned}
d_1 \geq \frac1{u(d_1)} \frac1Nx_1^*\hat{S}_N^{-1}x_1\end{aligned}$$ from which we also have $$\begin{aligned}
\phi(d_1) \geq \frac1Nx_1^*\hat{S}_N^{-1}x_1.\end{aligned}$$
Since $\phi$ is non-decreasing, we also have $\phi(d_1)\leq \phi(d_i) \leq \phi(d_n)$ for $i\leq n$, and we therefore obtain $$\begin{aligned}
\frac1Nx_1^*\hat{S}_N^{-1}x_1 \leq \phi(d_i) \leq \frac1Nx_n^*\hat{S}_N^{-1}x_n.\end{aligned}$$
Take $0<\varepsilon<\min\{1,(\phi_\infty-1)\}$. From Lemma \[le:convquadraticform\], for all large $n$ a.s., $$\begin{aligned}
0<1-\varepsilon < \phi(d_i) < 1+\varepsilon<\phi_\infty.\end{aligned}$$
Since $\phi$ is continuous and increasing on $(0,\phi^{-1}(\phi_\infty-))$ with image contained in $(0,\phi_\infty)$, $\phi$ is invertible there and we obtain that for all large $n$ a.s., $$\begin{aligned}
\label{eq:bound}
\phi^{-1}\left(1-\varepsilon\right) < d_i < \phi^{-1}\left(1+\varepsilon \right).\end{aligned}$$
We can now prove the almost sure uniqueness of $\hat{C}_N$ for all large $n$. Take $\varepsilon$ in \[eq:bound\] to satisfy the previous conditions and to be such that $(\phi^{-1}(1+\varepsilon))^2/\phi^{-1}(1-\varepsilon)<\phi^{-1}(\phi_\infty-)$, which is always possible as the left-hand side expression is continuous in $\varepsilon$ with limit $\phi^{-1}(1)<\phi^{-1}(\phi_\infty-)$ as $\varepsilon\to 0$.
We now follow the arguments of [@YAT95 Theorem 1]. Assume $(d^{(1)}_1,\ldots,d^{(1)}_n)$ and $(d^{(2)}_1,\ldots,d^{(2)}_n)$ are two distinct solutions of the fixed-point equation $d_j=h_j(d_1,\ldots,d_n)$ for $j=1,\ldots,n$, where $h_j$ is defined by \[eq:hj\]. Then (up to a change in the indices $1$ and $2$), there exists $k$ such that, for some $\alpha>1$, $\alpha d^{(1)}_k=d^{(2)}_k$ and $\alpha d^{(1)}_i\geq d^{(2)}_i$ for $i\neq k$. From \[eq:bound\], for sufficiently large $n$ a.s. the ratio $\alpha=d^{(2)}_k/d^{(1)}_k$ is also constrained to satisfy $\alpha<\phi^{-1}(1+\varepsilon)/\phi^{-1}(1-\varepsilon)$. Using this inequality and the upper bound in \[eq:bound\], we have for all $j$ $$\begin{aligned}
0< \alpha d^{(1)}_j < \frac{(\phi^{-1}(1+\varepsilon))^2}{\phi^{-1}(1-\varepsilon)}<\phi^{-1}(\phi_\infty-).\end{aligned}$$ Since $\phi$ is increasing on $(0,\phi^{-1}(\phi_\infty-))$, we have in particular $\phi(\alpha d_j^{(1)})>\phi(d_j^{(1)})$ from which $\alpha u(\alpha d_j^{(1)})>u(d_j^{(1)})$, for all $j$ and then, with similar arguments as previously, $\alpha h_j(d_1^{(1)},\ldots,d_n^{(1)})>h_j(\alpha d_1^{(1)},\ldots,\alpha d_n^{(1)})$ for all $j$. Using the monotonicity of $h$, we conclude in particular $$\begin{aligned}
d^{(2)}_k = h_k(d^{(2)}_1,\ldots,d^{(2)}_n)&\leq h_k(\alpha d^{(1)}_1,\ldots,\alpha d^{(1)}_n) \\
&<\alpha h_k(d^{(1)}_1,\ldots,d^{(1)}_n)=\alpha d^{(1)}_k\end{aligned}$$ which contradicts $\alpha d^{(1)}_k=d^{(2)}_k$ and proves the uniqueness of $\hat{C}_N$ and Part (I) of Theorem \[th:1\].
We now prove Part (II) of the theorem. In order to proceed, we start again from \[eq:bound\]. Since $\varepsilon$ is arbitrary, we conclude that $$\begin{aligned}
\max_{i\leq n} \left| d_i - \phi^{-1}(1) \right| {\overset{\rm a.s.}{\longrightarrow}}0.\end{aligned}$$ Applying the continuous mapping theorem, we then have $$\begin{aligned}
\max_{i\leq n} \left| u(d_i) -u(\phi^{-1}(1)) \right| {\overset{\rm a.s.}{\longrightarrow}}0.\end{aligned}$$ Noticing that $\phi^{-1}(1) u(\phi^{-1}(1)) = \phi(\phi^{-1}(1))=1$, and therefore that $u(\phi^{-1}(1))=1/\phi^{-1}(1)$, this can be rewritten $$\begin{aligned}
\label{eq:uditophi}
\max_{i\leq n} \left| u(d_i) - \frac1{\phi^{-1}(1)} \right| {\overset{\rm a.s.}{\longrightarrow}}0.\end{aligned}$$ Now, we also have the matrix inequalities $$\begin{aligned}
&\min_{i\leq n} \left\{u(d_i) - \frac1{\phi^{-1}(1)} \right\} \frac1nXX^* \\
&\preceq \frac1n\sum_{i=1}^n \left(u(d_i) - \frac1{\phi^{-1}(1)}\right) x_ix_i^* \\
&\preceq \max_{i\leq n} \left\{u(d_i) - \frac1{\phi^{-1}(1)} \right\} \frac1nXX^*.\end{aligned}$$ From Proposition \[prop:no\_eigenvalue\] in Appendix \[app:lemmas\], $\Vert \frac1nXX^*\Vert<K$ for some $K>0$ and for all $n$ a.s. From \[eq:uditophi\], we then conclude that $$\begin{aligned}
\left\Vert \frac1n\sum_{i=1}^n \left(u(d_i) - \frac1{\phi^{-1}(1)}\right) x_ix_i^* \right\Vert = \left\Vert \hat{C}_N - \frac{\hat{S}_N}{\phi^{-1}(1)} \right\Vert {\overset{\rm a.s.}{\longrightarrow}}0\end{aligned}$$ which completes the proof of Theorem \[th:1\].
Corollary \[co:spacing\] follows from [@HOR85 Theorem 4.3.7], according to which, for $1\leq i\leq N$, $$\begin{aligned}
\lambda_i\left(\hat{S}_N\right) &\leq \lambda_i\left(\phi^{-1}(1)\hat{C}_N\right) + \lambda_N\left(\hat{S}_N-\phi^{-1}(1)\hat{C}_N\right) \\
\lambda_i\left(\hat{S}_N\right) &\geq \lambda_i\left(\phi^{-1}(1)\hat{C}_N\right) - \lambda_N\left(\hat{S}_N-\phi^{-1}(1)\hat{C}_N\right).
\end{aligned}$$ The result follows by noticing that the second term in both right-hand sides tends to zero a.s. according to Theorem \[th:1\].
Proof of Lemma \[le:lambdamin\] {#app:lambdamin}
===============================
If the set of the eigenvalues of $\frac1nX_{(i)}X_{(i)}^*$ is contained within the set of the eigenvalues of $\frac1nXX^*$, then the result is immediate from Proposition \[prop:no\_eigenvalue\] in Appendix \[app:lemmas\]. We can therefore assume the existence of eigenvalues of $\frac1nX_{(i)}X_{(i)}^*$ which are not eigenvalues of $\frac1nXX^*$. By definition, the eigenvalues of $\frac1nX_{(i)}X_{(i)}^*$ solve the equation in $\lambda$ $$\begin{aligned}
\det\left( \frac1nX_{(i)}X_{(i)}^* - \lambda I_N \right) = 0.\end{aligned}$$
Take $\lambda$ not to be also an eigenvalue of $\frac1nXX^*$. Then, developing the above expression, we get $$\begin{aligned}
&\det\left( \frac1nX_{(i)}X_{(i)}^* - \lambda I_N \right) \\
&= \det \left(\frac1n XX^* - \frac1nx_ix_i^* -\lambda I_N \right) \\
&= \det Q(\lambda) \det \left( I_N - Q(\lambda)^{-{{\frac{1}{2}}}} \frac1nx_ix_i^* Q(\lambda)^{-{{\frac{1}{2}}}}\right) \\
&= \det Q(\lambda) \left( 1 - \frac1n x_i^* Q(\lambda)^{-1} x_i \right)\end{aligned}$$ with the notation $Q(\lambda)\triangleq \frac1nXX^*-\lambda I_N$, where we used the identity $\det(I_N+AB)=\det(I_p+BA)$ in the last line, for $A\in{{\mathbb{C}}}^{N\times p}$ and $B\in{{\mathbb{C}}}^{p\times N}$, with $p=1$ here.
Therefore, since $\lambda$ cannot cancel the first determinant, $$\begin{aligned}
\frac1n x_i^* Q(\lambda)^{-1} x_i = \frac1n x_i^* \left(\frac1n XX^* -\lambda I_N \right)^{-1} x_i = 1.\end{aligned}$$
Let us study the function $$\begin{aligned}
x\mapsto f_{n,i}(x) \triangleq \frac1n x_i^* \left(\frac1n XX^* - x I_N \right)^{-1} x_i.\end{aligned}$$
First note, from a basic study of the asymptotes and limits of $f_{n,i}(x)$, that the eigenvalues of $\frac1nX_{(i)}X_{(i)}^*$ are interleaved with those of $\frac1n XX^*$ (a property known as Weyl’s interlacing lemma) and in particular that $$\label{eq:interleaving}
\lambda_1\left(\frac1nX_{(i)}X_{(i)}^*\right) \leq \lambda_1\left(\frac1nXX^*\right) \leq \lambda_2\left(\frac1nX_{(i)}X_{(i)}^*\right).$$ Since $\lambda_1(\frac1nXX^*)$ is a.s. away from zero for all large $N$ (Proposition \[prop:no\_eigenvalue\]), only $\lambda_1(\frac1nX_{(i)}X_{(i)}^*)$ may remain in the neighborhood of zero for at least one $i\leq n$, for all large $n$.
We will show that this is impossible. Precisely, we will show that, for all large $n$ a.s., $f_{n,i}(x)<1$ for any $i\leq n$ and for all $x$ in some interval $[0,\xi)$, $\xi>0$, confirming that no eigenvalue of $\frac1n X_{(i)}X_{(i)}^*$ can be found there. For this, we first use the fact that the $f_{n,i}(x)$ can be uniformly well estimated for all $x<0$ through Proposition \[prop:BaiSil95\] in Appendix \[app:lemmas\] by a quantity strictly less than one. We then show that the growth of the $f_{n,i}(x)$ for $x$ in a neighborhood of zero can be controlled, so as to ensure that none of them reaches $1$ for all $x<\xi$. This will conclude the proof.
We start with the study of $f_{n,i}(x)$ on ${{\mathbb{R}}}^-$. From Lemma \[le:MIL\], $$\begin{aligned}
f_{n,i}(x) = \frac{\frac1n x_i^*\left(\frac1nX_{(i)}X_{(i)}^*-xI_N\right)^{-1}x_i }{1+\frac1n x_i^*\left(\frac1nX_{(i)}X_{(i)}^*-xI_N\right)^{-1}x_i}.\end{aligned}$$ Define $$\begin{aligned}
\bar{f}_n(x) \triangleq \frac{c_Ne_N(x)}{1+c_Ne_N(x)}\end{aligned}$$ with $e_N(x)$ the unique positive solution of (see Proposition \[prop:BaiSil95\]) $$e_N(z) = \int \frac{t}{(1+c_Ne_N(z))^{-1}t-z}dF^{C_N}(t).$$ Then, with $Q(x)\triangleq\frac1nXX^*-xI_N$, $Q_i(x)\triangleq\frac1nX_{(i)}X_{(i)}^*-xI_N$, $$\begin{aligned}
\left| f_{n,i}(x) - \bar{f}_n(x) \right| \nonumber
&= \left| \frac{\frac1n x_i^*Q_i(x)^{-1}x_i }{1+\frac1n x_i^*Q_i(x)^{-1}x_i} - \frac{c_Ne_N(x)}{1+c_Ne_N(x)} \right| \\
&\leq \left| \frac1n x_i^*Q_i(x)^{-1}x_i - c_Ne_N(x) \right| \nonumber \\
&\label{eq:3terms}\leq \left| \frac1n x_i^*Q_i(x)^{-1}x_i - \frac1n \operatorname{tr}C_NQ_i(x)^{-1} \right| \nonumber \\
&+ \left| \frac1n \operatorname{tr}C_NQ_i(x)^{-1} - \frac1n \operatorname{tr}C_NQ(x)^{-1} \right| \nonumber \\
&+\left| \frac1n \operatorname{tr}C_NQ(x)^{-1} - c_Ne_N(x) \right|\end{aligned}$$
Using $(a+b+c)^{p}\leq 3^{p}(a^{p}+b^{p}+c^{p})$ for $a,b,c>0$, and $p\geq 1$ (Hölder’s inequality), and applying Lemma \[le:trace\_lemma\], Lemma \[le:rank1perturbation\], and Proposition \[prop:BaiSil95\] to the right-hand side terms of , respectively, with $p=4+\eta/2$, we obtain $$\begin{aligned}
{{\rm E}}\left[ \left|f_{n,i}(x) - \bar{f}_{n}(x) \right|^{4+\frac{\eta}2}\right] \leq \frac{K}{n^{2+\frac{\eta}4}}\end{aligned}$$ for some constant $K$ independent of $i$, where we implicitly used [**A1**]{}. Therefore, using Boole’s inequality on the above events for $i\leq n$, and the Markov inequality, for all $\zeta>0$, $$\begin{aligned}
&P\left(\max_{i\leq n} \left|f_{n,i}(x) - \bar{f}_{n}(x) \right| > \zeta \right) \\
&\leq \sum_{i=1}^n P\left(\left|f_{n,i}(x) - \bar{f}_{n}(x) \right| > \zeta \right) < \frac{K}{\zeta^{4+\frac{\eta}2}n^{1+\frac{\eta}4}}.\end{aligned}$$ The Borel-Cantelli lemma therefore ensures, for all $x<0$, $$\begin{aligned}
\label{eq:fni}
\max_{i\leq n} \left| f_{n,i}(x) - \bar{f}_{n}(x) \right| {\overset{\rm a.s.}{\longrightarrow}}0.\end{aligned}$$
We now extend the study of $f_{n,i}(x)$ to $x$ in a neighborhood of zero. From Proposition \[prop:no\_eigenvalue\], $\lambda_1(\frac1nXX^*) > C_-(1-\sqrt{c_+})^2$ for all large $n$ a.s. (recall that $\lim\sup_N c_N<c_+<1$) so that $f_{n,i}(x)$ is well-defined and continuously differentiable on $U=(-\varepsilon,\varepsilon)$ for $0<\varepsilon<C_-(1-\sqrt{c_+})^2$, for all large $n$ a.s. Take $x\in U$. Since the smallest eigenvalue of $\frac1n XX^* - x I_N$ is lower bounded by $C_-(1-\sqrt{c_+})^2-\varepsilon$ for all large $n$, and since $$\begin{aligned}
\max_{i\leq n} \left| \frac1n\Vert x_i\Vert^2 - \frac{1}n\operatorname{tr}C_N \right| {\overset{\rm a.s.}{\longrightarrow}}0\end{aligned}$$ (using similar arguments based on the Boole and Markov inequality reasoning as above), we also have that for all large $n$ a.s. $$\begin{aligned}
0<f_{n,i}'(x)< \frac{c_+C_+}{( C_-(1-\sqrt{c_+})^2-\varepsilon )^2} \triangleq K'\end{aligned}$$ where we used $\lim\sup_N\frac{1}n\operatorname{tr}C_N < c_+C_+$.
From this result, along with the continuity of $f_{n,i}$, for $x\in U$ and for all large $n$ a.s., $$\begin{aligned}
f_{n,i}(x) < f_{n,i}(-x)+2x K'.\end{aligned}$$ In particular, for $\xi=\min\{\varepsilon/2,(1-c_+)/(2K')\}$, $$\begin{aligned}
\label{eq:fnix}
f_{n,i}(\xi) < f_{n,i}(-\xi)+(1-c_+).\end{aligned}$$
Since $e_N(0)=1+c_Ne_N(0)$ by definition (\[eq:eN\]), $$\begin{aligned}
\bar{f}_n(0) = c_N < c_+\end{aligned}$$ and $\bar{f}_n(x)$ is continuous and increasing on $U$, so that $$\begin{aligned}
\bar{f}_n(-\xi) < c_+.\end{aligned}$$
Recalling (\[eq:fni\]), we then conclude that, for all large $n$ a.s. $$\begin{aligned}
\max_{i\leq n}f_{n,i}(-\xi)<c_+\end{aligned}$$ which, along with (\[eq:fnix\]), gives, for all large $n$ a.s. $$\begin{aligned}
\max_{i\leq n} f_{n,i}(\xi) < 1.\end{aligned}$$
Since $f_{n,i}(x)$ is continuous and increasing on $[0,\xi)$, the equation $f_{n,i}(x)=1$ has no solution on this interval for any $i\leq n$, for all large $n$ a.s., which concludes the proof.
Proof of Lemma \[le:convquadraticform\] {#app:convquadraticform}
=======================================
Define $\hat{S}_{N,(i)}=\hat{S}_N-\frac1nx_ix_i^*$ and let $\hat{S}_{N,(i)}^{-1}$ denote its inverse when it exists, and the identity matrix otherwise. Take $2\leq p\leq 4+\eta/2$ (see [**A1**]{}) and $\varepsilon>0$ as in Lemma \[le:lambdamin\]. Denoting by ${{\rm E}}_{x_i}$ the expectation with respect to $x_i$ and $\phi_i = 1_{ \{\lambda_1(\hat{S}_{N,(i)})>\varepsilon\}}$, $$\begin{aligned}
&{{\rm E}}_{x_i}\left[\phi_i \left|\frac{\frac1nx_i^*\hat{S}_{N,(i)}^{-1}x_i}{1+\frac1nx_i^*\hat{S}_{N,(i)}^{-1}x_i} - \frac{\frac1n\operatorname{tr}C_N\hat{S}_{N,(i)}^{-1}}{1+\frac1n\operatorname{tr}C_N\hat{S}_{N,(i)}^{-1}}\right|^p\right] \\
&= {{\rm E}}_{x_i}\left[\phi_i \left|\frac{\frac1nx_i^*\hat{S}_{N,(i)}^{-1}x_i - \frac1n \operatorname{tr}C_N\hat{S}_{N,(i)}^{-1}}{\left(1+ \frac1nx_i^*\hat{S}_{N,(i)}^{-1}x_i \right)\left(1+ \frac1n\operatorname{tr}C_N\hat{S}_{N,(i)}^{-1} \right)} \right|^p \right] \\
&\leq {{\rm E}}_{x_i}\left[\phi_i\left|\frac1nx_i^*\hat{S}_{N,(i)}^{-1}x_i - \frac1n \operatorname{tr}C_N\hat{S}_{N,(i)}^{-1} \right|^p \right].\end{aligned}$$ Recalling that $x_i=A_Ny_i$ with $y_i$ having independent zero mean and unit variance entries, from Lemma \[le:trace\_lemma\], we have $$\begin{aligned}
&{{\rm E}}_{x_i}\left[\phi_i\left|\frac{\frac1nx_i^*\hat{S}_{N,(i)}^{-1}x_i}{1+\frac1nx_i^*\hat{S}_{N,(i)}^{-1}x_i} - \frac{\frac1n\operatorname{tr}C_N\hat{S}_{N,(i)}^{-1}}{1+\frac1n\operatorname{tr}C_N\hat{S}_{N,(i)}^{-1}}\right|^p\right] \\
&\leq \frac{\phi_iK_p}{n^\frac{p}2} \left[ \left(\frac{\nu_4}n\operatorname{tr}(C_N\hat{S}_{N,(i)}^{-1})^2 \right)^{\frac{p}2}+\frac{\nu_{2p}}{n^{\frac{p}2}}\operatorname{tr}\left( (C_N\hat{S}_{N,(i)}^{-1})^2 \right)^{\frac{p}2}\right] \end{aligned}$$ for some constant $K_p$ depending only on $p$, with $\nu_{\ell}$ any value such that ${{\rm E}}[|y_{ij}|^\ell]\leq\nu_\ell$ (well defined from [**A1**]{}). Using $\frac1{n^k}\operatorname{tr}A^k\leq (\frac1n\operatorname{tr}A)^k$ for $A\in{{\mathbb{C}}}^{N\times N}$ nonnegative definite and $k\geq 1$, with here $A=(C_N\hat{S}_{N,(i)}^{-1})^2$, $k=p/2$, this gives $$\begin{aligned}
&{{\rm E}}_{x_i}\left[\phi_i\left|\frac{\frac1nx_i^*\hat{S}_{N,(i)}^{-1}x_i}{1+\frac1nx_i^*\hat{S}_{N,(i)}^{-1}x_i} - \frac{\frac1n\operatorname{tr}C_N\hat{S}_{N,(i)}^{-1}}{1+\frac1n\operatorname{tr}C_N\hat{S}_{N,(i)}^{-1}}\right|^p\right] \nonumber \\
&\leq \frac{\phi_i K_p}{n^\frac{p}2} \left(\nu_4^{\frac{p}2} + \nu_{2p} \right) \left(\frac1n\operatorname{tr}(C_N\hat{S}_{N,(i)}^{-1})^2 \right)^{\frac{p}2} \nonumber\\
&\leq \frac{K_p}{n^{\frac{p}2}} \left(\nu_4^{\frac{p}2}+\nu_{2p}\right) (c_+ C_+^2\varepsilon^{-2})^{\frac{p}2} \triangleq \frac{K'_p}{n^{\frac{p}2}} \label{eq:ineq0} \end{aligned}$$ where, in (\[eq:ineq0\]), we used $\operatorname{tr}AB \leq \Vert A\Vert \operatorname{tr}B$ for $A,B\succeq 0$, $\phi_i\leq 1$, $\Vert \hat{S}_{N,(i)}^{-1}\Vert \leq \varepsilon^{-1}$ when $\phi_i=1$, and $\frac1n \operatorname{tr}C_N^2 \leq c_+C_+^2$.
This being valid irrespective of $X_{(i)}$, we can take the expectation of the above expression over $X_{(i)}$ to obtain $$\begin{aligned}
{{\rm E}}\left[\phi_i\left|\frac{\frac1nx_i^*\hat{S}_{N,(i)}^{-1}x_i}{1+\frac1nx_i^*\hat{S}_{N,(i)}^{-1}x_i} - \frac{\frac1n\operatorname{tr}C_N\hat{S}_{N,(i)}^{-1}}{1+\frac1n\operatorname{tr}C_N\hat{S}_{N,(i)}^{-1}}\right|^p\right] \leq \frac{K'_p}{n^\frac{p}2}.\end{aligned}$$
Therefore, from Lemma \[le:MIL\], $$\begin{aligned}
{{\rm E}}\left[\phi_i\left|\frac1nx_i^*\hat{S}_{N}^{-1}x_i - \frac{\frac1n\operatorname{tr}C_N\hat{S}_{N,(i)}^{-1}}{1+\frac1n\operatorname{tr}C_N\hat{S}_{N,(i)}^{-1}}\right|^p\right] \leq \frac{K'_p}{n^\frac{p}2}.\end{aligned}$$
Using Boole’s inequality on the $n$ events above with $i=1,\ldots,n$, and the Markov inequality, for $\zeta>0$, $$\begin{aligned}
&P\left( \max_{i\leq n} \left\{ \phi_i \left| \frac1nx_i^*\hat{S}_{N}^{-1}x_i - \frac{\frac1n\operatorname{tr}C_N\hat{S}_{N,(i)}^{-1}}{1+\frac1n\operatorname{tr}C_N\hat{S}_{N,(i)}^{-1}} \right| \right\} > \zeta \right) \\
&\leq \frac{K'_p\zeta^{-p}}{n^{\frac{p}2-1}}.\end{aligned}$$ Choosing $4<p\leq 4+\eta/2$, the right-hand side is summable. The Borel-Cantelli lemma then ensures that $$\begin{aligned}
\max_{i\leq n} \left\{ \phi_i \left| \frac1nx_i^*\hat{S}_{N}^{-1}x_i - \frac{\frac1n\operatorname{tr}C_N\hat{S}_{N,(i)}^{-1}}{1+\frac1n\operatorname{tr}C_N\hat{S}_{N,(i)}^{-1}} \right| \right\} {\overset{\rm a.s.}{\longrightarrow}}0.\end{aligned}$$ But, from Lemma \[le:lambdamin\], $\min_i \{\phi_i\}=1$ for all large $n$ a.s. Therefore, we conclude $$\begin{aligned}
\label{eq:maxin}
\max_{i\leq n} \left\{ \left| \frac1nx_i^*\hat{S}_{N}^{-1}x_i - \frac{\frac1n\operatorname{tr}C_N\hat{S}_{N,(i)}^{-1}}{1+\frac1n\operatorname{tr}C_N\hat{S}_{N,(i)}^{-1}} \right| \right\} {\overset{\rm a.s.}{\longrightarrow}}0.\end{aligned}$$
Since $\hat{S}_{N,(i)}-\varepsilon I_N\succ 0$ for these large $n$, we also have $$\begin{aligned}
&\max_{i\leq n}\left| \frac{\frac1n\operatorname{tr}C_N\hat{S}_{N,(i)}^{-1}}{1+\frac1n\operatorname{tr}C_N\hat{S}_{N,(i)}^{-1}} - \frac{\frac1n\operatorname{tr}C_N\hat{S}_{N}^{-1}}{1+\frac1n\operatorname{tr}C_N\hat{S}_{N}^{-1}} \right| \\
&=\max_{i\leq n} \left| \frac{\frac1n\operatorname{tr}C_N\hat{S}_{N}^{-1} - \frac1n\operatorname{tr}C_N\hat{S}_{N,(i)}^{-1}}{\left(1+\frac1n\operatorname{tr}C_N\hat{S}_{N,(i)}^{-1}\right)\left(1+\frac1n\operatorname{tr}C_N\hat{S}_{N}^{-1}\right)} \right| \leq \frac1n \frac{C_+}{\varepsilon}\end{aligned}$$ where, in the last inequality, we used Lemma \[le:rank1perturbation\] with $B=C_N$, $A=\hat{S}_{N,(i)}-\varepsilon I_N$ and $x=\varepsilon$, along with the fact that $(1+x)^{-1}\leq 1$ for $x\geq 0$.
From Proposition \[prop:BaiSil95\], since $\lambda_1(\hat{S}_N)\geq \lambda_1(\hat{S}_{N,(i)})>\varepsilon$ for these large $n$ (by Lemma \[le:lambdamin\]), we also have $$\begin{aligned}
\left| \frac1n\operatorname{tr}C_N\hat{S}_{N}^{-1} - \frac{c_N}{1-c_N} \right| {\overset{\rm a.s.}{\longrightarrow}}0 \end{aligned}$$ and thus, from $c_N(1-c_N)^{-1}/(1+c_N(1-c_N)^{-1})=c_N$, $$\begin{aligned}
\left| \frac{\frac1n\operatorname{tr}C_N\hat{S}_{N}^{-1}}{1+\frac1n\operatorname{tr}C_N\hat{S}_{N}^{-1}} - c_N \right| {\overset{\rm a.s.}{\longrightarrow}}0.\end{aligned}$$
Putting things together, this finally gives $$\begin{aligned}
\max_{i\leq n} \left\{ \left| \frac1nx_i^*\hat{S}_{N}^{-1}x_i - c_N \right| \right\} {\overset{\rm a.s.}{\longrightarrow}}0\end{aligned}$$ an expression which, since $c_N>c_->0$ for all large $N$, can be divided by $c_N$, concluding the proof.
Proof of Theorem \[th:standardfunctions\] {#app:standardfunctions}
=========================================
The proof immediately follows from the arguments of [@YAT95]. When the scalability assumption is satisfied with strict inequality, the result is exactly [@YAT95 Theorem 2]. When the scalability assumption is relaxed to a non-strict inequality, [@YAT95 Theorem 1] no longer holds, and therefore uniqueness cannot be guaranteed. Nonetheless, the existence of a solution follows from the proof of [@YAT95 Lemma 1], which does not call for the scalability assumption. Indeed, since there exists $(q_1,\ldots,q_n)$ such that $q_i\geq h_i(q_1,\ldots,q_n)$ for all $i$, the algorithm $$\begin{aligned}
q_j^{(t+1)}=h_j(q_1^{(t)},\ldots,q_n^{(t)})
\end{aligned}$$ with $q_j^{(0)}=q_j$, satisfies $q_j^{(1)}\leq q_j^{(0)}$ for all $j$. Assuming $q_j^{(t+1)}\leq q_j^{(t)}$ for all $j$, the monotonicity assumption ensures that $q_j^{(t+2)}\leq q_j^{(t+1)}$, which, by induction, means that $q_j^{(t)}$ is a non-increasing sequence. Now, since $q_j^{(t)}$ is in the image of $h_j$, $q_j^{(t)}>0$ by positivity, and therefore $q_j^{(t)}$ converges to a fixed point (not necessarily unique). Such a fixed point therefore exists. Note that [@YAT95 Lemma 2] provides an algorithm for reaching this fixed point, starting with $q_j^{(0)}=0$ for all $j$.
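The monotone iteration used in this argument is easy to illustrate. The sketch below uses a toy affine interference function $h_j(q)=\alpha_j(1+\beta\sum_{k\neq j}q_k)$ of my own choosing (not from [@YAT95]; it is positive, monotone, and scalable for these parameter values), iterates from $q^{(0)}=0$ as in [@YAT95 Lemma 2], and checks monotone convergence to a fixed point:

```python
import numpy as np

# Hypothetical standard interference function (toy example, not from the cited paper):
# h_j(q) = alpha_j * (1 + beta * sum_{k != j} q_k), positive, monotone, scalable.
alpha = np.array([1.0, 2.0, 0.5])
beta = 0.1

def h(q):
    total = q.sum()
    return alpha * (1.0 + beta * (total - q))

q = np.zeros(3)                        # q^{(0)} = 0, as in [YAT95, Lemma 2]
for _ in range(200):
    q_new = h(q)
    assert np.all(q_new >= q - 1e-12)  # iterates are non-decreasing from 0
    q = q_new

assert np.allclose(q, h(q))            # the limit is a fixed point of h
assert np.all(q > 0)                   # positivity of the fixed point
```

The same code run from a feasible starting point $q^{(0)}$ with $q^{(0)}\geq h(q^{(0)})$ would instead produce a non-increasing sequence, as in the existence argument above.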
Proof of Corollary \[co:RG-MUSIC\] {#app:RG-MUSIC}
==================================
If $\hat{C}_N$ is replaced by $\hat{S}_N$ in the statement of the result, then Corollary \[co:RG-MUSIC\] is exactly [@MES08c Theorem 2], which is a direct consequence of [@MES08b Theorem 3] with some updated remarks on the $\hat\mu_i$ found in the discussion around [@COUbook Theorem 17.1]. In order to prove Corollary \[co:RG-MUSIC\], we need to justify the substitution of $\hat{S}_N$ by $\hat{C}_N$. First observe that the result is independent of a scaling of $\hat{S}_N$, and therefore we can freely substitute $\hat{S}_N$ by $\phi^{-1}(1)\hat{C}_N$ instead of $\hat{C}_N$. Using the notations of Mestre in [@MES08b], we first need to extend [@MES08b Proposition 4]. Call $\hat{g}^C_M(z)$ the equivalent of $\hat{g}_M(z)$ designed from the eigenvectors of $\phi^{-1}(1)\hat{C}_N$ instead of those of $\hat{S}_N$ (referred to as $\hat{R}_M$ in [@MES08b] with $M$ in place of $N$, and $N$ in place of $n$). Then, on the chosen rectangular contour $\partial {{\mathbb{R}}}^-_y(m)$, both $\hat{g}^C_M(z)$ and $\hat{g}_M(z)$ are a.s. bounded holomorphic functions for all large $N$; this is due to the exact separation [@COU10b Theorem 3] of the eigenvalues of $\hat{S}_N$ and the fact that Corollary \[co:spacing\] ensures the convergence between the eigenvalues of $\phi^{-1}(1)\hat{C}_N$ and of $\hat{S}_N$.
From [@MES08b Equation (29)], $\hat{g}_M(z)$ consists of the functions $\hat{b}_M(z)$ and $\hat{m}_M(z)$, whose equivalents for $\phi^{-1}(1)\hat{C}_N$ we call $\hat{b}^C_M(z)$ and $\hat{m}^C_M(z)$. We need to show that the respective differences of these functions go to zero. From the definition [@MES08b Equation (4)] of $\hat{b}_M(z)$, Theorem \[th:1\] and the fact that $\left|\frac1N\operatorname{tr}(A^{-1}-B^{-1})\right|\leq \Vert A^{-1}\Vert \Vert B^{-1}\Vert \Vert A-B\Vert $ for invertible $A,B\in{{\mathbb{C}}}^{N\times N}$, we have immediately that $$\begin{aligned}
\sup_{z\in \partial {{\mathbb{R}}}^-_y(m)} \left| \hat{b}_M(z)-\hat{b}^C_M(z)\right| {\overset{\rm a.s.}{\longrightarrow}}0.
\end{aligned}$$ Similarly, using [@MES08b Equation (6)], and $\left|a^*(A^{-1}-B^{-1})b\right|\leq \Vert a\Vert \Vert b\Vert \Vert A^{-1}\Vert \Vert B^{-1}\Vert \Vert A-B\Vert$ for $a,b\in{{\mathbb{C}}}^N$, we find $$\begin{aligned}
\sup_{z\in \partial {{\mathbb{R}}}^-_y(m)} \left| \hat{m}_M(z)-\hat{m}^C_M(z)\right| {\overset{\rm a.s.}{\longrightarrow}}0.
\end{aligned}$$ By the dominated convergence theorem, this gives $$\begin{aligned}
\oint_{\partial {{\mathbb{R}}}^-_y(m)} \left(\hat{g}^C_M(z)-\hat{g}_M(z)\right) dz {\overset{\rm a.s.}{\longrightarrow}}0
\end{aligned}$$ which then immediately extends [@MES08b Proposition 4] to the present scenario. The second step to be proved is that the residue calculus performed in [@MES08b Equations (32)–(33)] carries over to the present scenario. The poles within the contour $\partial {{\mathbb{R}}}^-_y(m)$ are the $\hat\lambda_k$ and the $\hat\mu_k$ found in the contour. The indices $k$ such that the $\hat\lambda_k$ and $\hat\mu_k$ are within $\partial {{\mathbb{R}}}^-_y(m)$ are the same for $\hat{S}_N$ and $\phi^{-1}(1)\hat{C}_N$ for all large $N$, due to the exact separation property and Corollary \[co:spacing\]. This completes the proof.
Useful lemmas and results {#app:lemmas}
=========================
\[le:MIL\] Let $x\in{{\mathbb{C}}}^N$, $A\in{{\mathbb{C}}}^{N\times N}$, and $t\in{{\mathbb{R}}}$. Then, whenever the inverses exist $$\begin{aligned}
x^*\left(A + txx^*\right)^{-1}x = x^*A^{-1}x (1+t x^*A^{-1}x)^{-1}.
\end{aligned}$$
\[le:rank1perturbation\] Let $v\in{{\mathbb{C}}}^N$, $A,B\in{{\mathbb{C}}}^{N\times N}$ nonnegative definite, and $x>0$. Then $$\begin{aligned}
\operatorname{tr}B\left(A+vv^*+xI_N\right)^{-1} - \operatorname{tr}B \left(A+xI_N\right)^{-1} \leq x^{-1}\Vert B\Vert.
\end{aligned}$$
[@SIL06 Lemma B.26] \[le:trace\_lemma\] Let $A\in{{\mathbb{C}}}^{N\times N}$ be non-random and $y=[y_1,\ldots,y_N]^{{\sf T}}\in{{\mathbb{C}}}^N$ be a vector of independent entries with ${{\rm E}}[y_i]=0$, ${{\rm E}}[|y_i|^2]=1$, and ${{\rm E}}[|y_i|^\ell]\leq \nu_\ell$ for all $\ell\leq 2p$, with $p\geq 2$. Then, $$\begin{aligned}
{{\rm E}}\left[\left| y^* Ay - \operatorname{tr}A \right|^p\right]\leq C_p \left( (\nu_4 \operatorname{tr}AA^*)^{\frac{p}2} + \nu_{2p} \operatorname{tr}(AA^*)^{\frac{p}2} \right)
\end{aligned}$$ for $C_p$ a constant depending on $p$ only.
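The content of the lemma, namely that the quadratic form $y^*Ay$ concentrates around $\operatorname{tr}A$, can be illustrated numerically: with $\Vert A\Vert$ bounded, the typical deviation grows only like $\sqrt{\operatorname{tr}AA^*}=O(\sqrt{N})$, so the deviation relative to $N$ vanishes. A sketch with $A=I_N$ (an arbitrary choice for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)

def mean_relative_deviation(N, reps=300):
    A = np.eye(N)                   # ||A|| = 1, tr A = N, tr(AA^*) = N
    devs = np.empty(reps)
    for r in range(reps):
        y = rng.standard_normal(N)
        devs[r] = abs(y @ A @ y - np.trace(A))
    return devs.mean() / N          # deviation relative to tr A = N

d_small = mean_relative_deviation(50)
d_large = mean_relative_deviation(800)
assert d_large < d_small            # the quadratic form concentrates as N grows
```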
\[prop:BaiSil95\] Let $X=[x_1,\ldots,x_n]\in{{\mathbb{C}}}^{N\times n}$ with $x_i=A_Ny_i$, $A_N\in{{\mathbb{C}}}^{N\times M}$, $M\geq N$, where $y_i=[y_{i1},\ldots,y_{iM}]\in{{\mathbb{C}}}^M$ has independent entries satisfying ${{\rm E}}[y_{ij}]=0$, ${{\rm E}}[|y_{ij}|^2]=1$, ${{\rm E}}[|y_{ij}|^{\ell}]<\nu_\ell$ for all $\ell \leq 2p$ and $C_N\triangleq A_NA_N^*$ is nonnegative definite with $\Vert C_N\Vert<C_+<\infty$. Assume $c_N=N/n$ and $\bar{c}_N=M/N\geq 1$ satisfy $\lim\sup_N c_N<\infty$ and $\lim\sup_N \bar{c}_N<\infty$, as $N,n,M\to\infty$. Then, for $z<0$, and $p>2$, $$\begin{aligned}
\label{eq:eN_moment}
{{\rm E}}\left[\left|\frac1N\operatorname{tr}C_N\left(\frac1nXX^* -zI_N\right)^{-1} - e_N(z)\right|^p\right] \leq \frac{K_p}{N^{\frac{p}2}}\end{aligned}$$ for $K_p$ a constant depending only on $p$, $\nu_{\ell}$ for $\ell\leq 2p$, and $z$, while $e_N(z)$ is the unique positive solution of $$\label{eq:eN}
e_N(z) = \int \frac{t}{(1+c_Ne_N(z))^{-1}t-z}dF^{C_N}(t)$$ where $F^{C_N}$ is the eigenvalue distribution of $C_N$. The function ${{\mathbb{R}}}^-\to {{\mathbb{R}}}^+,~z\mapsto e_N(z)$ is increasing.
Moreover, for any $N_0$, as $N,n\to\infty$ with $\lim\sup_N c_N<\infty$, for $z\in{{\mathbb{R}}}\setminus \mathcal S_{N_0}$, where $\mathcal S_{N_0}$ is the union of the supports of the eigenvalue distributions of $\frac1nXX^*$ for all $N\geq N_0$, $$\begin{aligned}
\label{eq:conv_eN}
\frac1N\operatorname{tr}C_N\left(\frac1nXX^* -zI_N\right)^{-1} - e_N(z) {\overset{\rm a.s.}{\longrightarrow}}0.\end{aligned}$$
To prove the first part of Proposition \[prop:BaiSil95\], we follow the steps of the proof of [@HAC07]. Note first that we can extend $A_N$ to an $M\times M$ matrix by adding rows of zeros, without altering the left-hand side of (\[eq:eN\_moment\]). Using the notations of [@HAC07], we consider the simple case where $A_n=0$ and $\sigma_{ij}^n=C^n_{i}$, where $C_i^n$ denotes the $i$-th eigenvalue of $C_N$. Although this updated proof of [@HAC07] would require $C_N$ to be diagonal, it is rather easy to generalize to non-diagonal $C_N$ (see e.g. [@COU09; @WAG10]). The proof then extends to the non i.i.d. case when using Lemma \[le:trace\_lemma\] instead of [@HAC07 (B.1)]. The second part follows from the first part immediately for $z<0$. In order to extend the result to $z\in{{\mathbb{R}}}\setminus \mathcal S_{N_0}$, note that both left-hand side terms in (\[eq:conv\_eN\]) are uniformly bounded in any compact $\mathcal D$ away from $\mathcal S_{N_0}$ and including part of ${{\mathbb{R}}}^-$, and are holomorphic on $\mathcal D$. From Vitali’s convergence theorem [@TIT39], their difference therefore tends to zero on $\mathcal D$, which is what we need.
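For $z<0$, the fixed-point equation (\[eq:eN\]) is amenable to direct iteration, and the convergence (\[eq:conv\_eN\]) can be checked empirically. The sketch below uses real Gaussian entries and a two-atom spectrum for $C_N$ (both arbitrary choices for illustration), and compares the fixed-point solution to the empirical trace:

```python
import numpy as np

rng = np.random.default_rng(4)
N, n = 300, 900
c_N = N / n
z = -1.0

# Spectrum of C_N: half the eigenvalues equal to 1, half equal to 3 (arbitrary)
tau = np.concatenate([np.ones(N // 2), 3.0 * np.ones(N - N // 2)])

# Solve e_N(z) = int t / ((1 + c_N e)^{-1} t - z) dF^{C_N}(t) by direct iteration
e = 1.0
for _ in range(500):
    e = np.mean(tau / (tau / (1.0 + c_N * e) - z))

# Empirical counterpart: (1/N) tr C_N ((1/n) X X^* - z I_N)^{-1}, with X = C_N^{1/2} Y
Y = rng.standard_normal((N, n))
X = np.sqrt(tau)[:, None] * Y
Q_inv = np.linalg.inv(X @ X.T / n - z * np.eye(N))
emp = (tau * np.diagonal(Q_inv)).sum() / N

assert abs(emp - e) < 0.05 * abs(e)   # the two quantities agree for large N, n
```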
\[prop:no\_eigenvalue\] Let $X=[x_1,\ldots,x_n]\in{{\mathbb{C}}}^{N\times n}$ with $x_i=A_Ny_i$, $A_N\in{{\mathbb{C}}}^{N\times M}$, where $y_i=[y_{i1},\ldots,y_{iM}]\in{{\mathbb{C}}}^M$ has independent entries satisfying ${{\rm E}}[y_{ij}]=0$, ${{\rm E}}[|y_{ij}|^2]=1$ and ${{\rm E}}[|y_{ij}|^{4+\eta}]<\alpha$ for some $\eta,\alpha>0$, $C_N\triangleq A_NA_N^*$ has bounded spectral norm, and $N,n,M\to\infty$ with $\lim\sup_N N/n <1$, and $1\leq \lim\sup_N M/N <\infty$. Let $N_0$ be an integer and $[a,b]\subset{{\mathbb{R}}}\cup \{\pm \infty\}$, $b>a$, a segment outside the closure of the union of the supports $F^{N/n,C_N}$, $N\geq N_0$, with $F^{t,A}$ the limiting support of the eigenvalues of $\frac1n XX^*$ when $C_N$ has the same spectrum as $A$ for all $N$ and $N/n\to t$. Then, for all large $n$ a.s., no eigenvalue of $\frac1nXX^*$ is found in $[a,b]$.
Appending $A_N$ into an $M\times M$ matrix filled with zeros, this unfolds from [@COU10b Theorem 3] (for which conditions 1)-3) are met), with the supports $F^{N/n,C_N}$ appended with the singleton $\{0\}$. Now, for $A_N\in{{\mathbb{C}}}^{N\times M}$, such that $A_NA_N^*$ is positive definite, zero is not an eigenvalue of $\frac1nXX^*$ for all $N$, a.s., which gives the result. Condition 1) of [@COU10b Theorem 3] holds here by definition. Condition 3) is obtained by taking $\psi(x)=x^{2+\eta}$. Condition 2) is obtained by taking $z$ a random variable with Pareto distribution $P(z\leq x)=(1-a^{p-1}x^{1-p})1_{x\geq a}$ for $p=5+\eta$ and $a=\alpha^{\frac1{4+\eta}}$; by Markov inequality, $$\begin{aligned}
\frac1{n_1n_2}\sum_{i\leq n_1,j\leq n_2}P(y_{ij}>x) &\leq \alpha x^{-4-\eta} = P(z>x).
\end{aligned}$$ This $z$ has finite $4+\eta$ order moment, which therefore enforces Condition 2).
[^1]: Silverstein’s work is supported by the U.S. Army Research Office, Grant W911NF-09-1-0266. Couillet’s work is supported by the ERC MORE EC–120133.
[^2]: Our expression differs from the standard convention where $x_i^*\hat{C}_N^{-1}x_i$ is traditionally not scaled by $1/N$. The current form is however more convenient for analysis in the large $N,n$ regime.
[^3]: Note that this function intervenes in the maximum-likelihood estimator of the scatter matrix of Student-t distributed random vectors [@OLI12]. Here we do not make any such maximum-likelihood consideration for the selection of $u$.
---
abstract: 'We present an analysis of the effects of dissipational baryonic physics on the local dark matter (DM) distribution at the location of the Sun, with an emphasis on the consequences for direct detection experiments. Our work is based on a comparative analysis of two cosmological simulations with identical initial conditions of a Milky Way halo, one of which (Eris) is a full hydrodynamic simulation and the other (ErisDark) is a DM-only one. We find that two distinct processes lead in Eris to a 30% enhancement of DM in the disk plane at the location of the Sun: the accretion and disruption of satellites resulting in a DM component with net angular momentum and the contraction of baryons pulling DM into the disk plane without forcing it to co-rotate. Owing to its particularly quiescent merger history for dark halos of Milky Way mass, the co-rotating dark disk in Eris is less massive than what has been suggested by previous work, contributing only 9% of the local DM density. Yet, since the simulation results in a realistic Milky Way analog galaxy, its DM halo provides a plausible alternative to the Maxwellian standard halo model (SHM) commonly used in direct detection analyses. The speed distribution in Eris is broadened and shifted to higher speeds compared to its DM-only twin simulation ErisDark. At high speeds $f(v)$ falls more steeply in Eris than in ErisDark or the SHM, easing the tension between recent results from the CDMS-II and XENON100 experiments. The non-Maxwellian aspects of $f(v)$ are still present, but much less pronounced in Eris than in DM-only runs. The weak dark disk increases the time-averaged scattering rate by only a few percent at low recoil energies. On the high velocity tail, however, the increase in typical speeds due to baryonic contraction results in strongly enhanced mean scattering rates compared to ErisDark, although they are still suppressed compared to the SHM. 
Similar trends are seen regarding the amplitude of the annual modulation, while the modulated fraction is increased compared to the SHM and decreased compared to ErisDark.'
author:
- 'Annalisa Pillepich, Michael Kuhlen, Javiera Guedes, and Piero Madau'
bibliography:
- 'ErisDarkDisk.bib'
nocite:
- '[@bryan_statistical_1998]'
- '[@kuhlen_dark_2010]'
title: 'The Distribution of Dark Matter in the Milky Way’s Disk'
---
Introduction
============
The direct detection of dark matter (DM) is one of the most exciting frontier pursuits of contemporary physics. Direct detection experiments attempt to measure the weak nuclear recoils produced in rare scatterings of DM particles off target nuclei in shielded underground terrestrial detectors [@goodman_detectability_1985; @gaitskell_direct_2004]. After many years of steady progress in enlarging target masses, improving detector sensitivities, and lowering energy thresholds, but a concomitant lack of detections and only ever more stringent exclusion limits, the field may now at last be on the cusp of success. In addition to the long standing detection claim by the DAMA collaboration [@bernabei_search_2000; @bernabei_new_2010], a number of additional experiments have in recent years reported signals that may be interpreted as DM scattering events.
Specifically, the CoGeNT collaboration has reported a statistically significant excess of events over their well characterized radioactive background [@aalseth_cogent:_2012], together with an annual modulation signal at somewhat lower significance [@aalseth_search_2011]. Similarly, the CRESST-II experiment has reported 67 events in their signal acceptance region [@angloher_results_2012], which cannot be accounted for by known backgrounds at a statistical significance of more than $4 \sigma$, yet match the expectation of DM scattering events. Finally, a recent analysis from the CDMS II collaboration of data obtained with their silicon detectors found three DM candidate events with a total expected background of 0.7 events [@cdms_collaboration_dark_2013]. Taking into account the energies of the three events, the CDMS II Si data prefer a DM scattering interpretation over a known-background-only scenario at 99.81% probability, i.e., slightly more than $3 \sigma$.
Despite these exciting developments, the case for a discovery of a DM particle is not yet closed, for two principal reasons. For one, the regions of parameter space (mass of DM particle $m_\chi$ and (spin-independent) scattering cross section $\sigma_{\rm SI}$) preferred by the tentative detections don’t all agree with each other. They do generally favor a light DM particle ($m_\chi \lesssim 10$ GeV) with $\sigma_{\rm SI}$ around $10^{-41}\text{--}10^{-40} \, {\rm cm}^2$, but the published $2 \sigma$ confidence intervals don’t all overlap [for a recent summary, see e.g. Fig.4 of @cdms_collaboration_dark_2013]. Secondly, the preferred parameters are nominally ruled out by the non-detections in XENON100 [@aprile_dark_2012] and the CDMS II Germanium detectors [@cdms_ii_collaboration_dark_2010; @ahmed_results_2011].[^1]
All direct detection analyses must make an assumption about the local phase-space distribution of the DM particles incident on Earth. The most commonly used model is the so-called Standard Halo Model (SHM), in which the local DM density is taken to be $\rho_0 = 0.3$ GeV cm$^{-3}$ [consistent with the most recent observational constraints, @garbari_limits_2011; @garbari_limits_2012; @zhang_segue_2013; @bovy_rix_2013] and the halo rest-frame speed distribution $f(v)$ is assumed to be a Maxwellian with a peak (most probable) speed of 220 km$\,$s$^{-1}$ and a cutoff at the Galactic escape speed of 550 km$\,$s$^{-1}$. A consistent interpretation of the various detection claims and exclusion limits is complicated by the fact that the experiments (with different target nuclei and energy thresholds) are sensitive to different speed ranges, and thus depend on the assumed $f(v)$ in different ways. Indeed, departures from the Maxwellian assumption may allow some of the conflicting detection claims to be reconciled [@frandsen_resolving_2012; @kelso_toward_2012; @mao_connecting_2013]. Although it is possible to compare results from multiple experiments in a way that is independent of astrophysical assumptions [see e.g. @fox_integrating_2011; @frandsen_resolving_2012], this technique only applies over the limited range of recoil energies for which the experiments probe the same region of $f(v)$.
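For concreteness, the SHM speed distribution just described is a Maxwellian $f(v)\propto v^2\exp(-v^2/v_0^2)$ truncated at the escape speed. The minimal sketch below (NumPy, using the parameter values quoted above) normalizes it on a grid and recovers the most probable speed:

```python
import numpy as np

v0, vesc = 220.0, 550.0              # km/s, the SHM values quoted above
v = np.linspace(0.0, vesc, 20001)
dv = v[1] - v[0]

f = v**2 * np.exp(-(v / v0) ** 2)    # Maxwellian speed distribution, truncated at vesc
f /= f.sum() * dv                    # normalize so that int_0^vesc f(v) dv = 1

v_peak = v[np.argmax(f)]
assert abs(v_peak - v0) < 1.0        # most probable speed is v0 = 220 km/s
```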
Numerical galaxy formation simulations can provide guidance for the expected local DM density and velocity distribution, and their spatial and halo-to-halo variance. Although ultra-high resolution DM-only cosmological simulations like Via Lactea II [@diemand_clumps_2008] and Aquarius [@springel_aquarius_2008] predict that the Milky Way’s halo should be filled with a large number of dense self-bound subhalos, the simulations tend to find that the central regions near the location of the Sun remain quite smooth [@zemp_graininess_2009], owing to the strong tidal forces that tend to disrupt subhalos. The relics of such disrupted subhalos are predicted in turn to traverse the Solar neighborhood in the form of thousands of DM streams [@vogelsberger_phase-space_2009; @fantin_finestructure_2011]; however, their superposition is also expected to be smooth. It thus appears unlikely that the Earth lies inside a significant over- or under-density with respect to the mean density at 8 kpc [@kamionkowski_galactic_2008; @kuhlen_dark_2010; @kamionkowski_galactic_2010], and the SHM is acceptable in this regard.
The situation is quite different for the velocity distribution. Here numerical simulations have pointed out a number of departures from the Maxwellian shape assumed in the SHM. The speed distribution averaged in a spherical shell at 8 kpc in DM-only simulations typically shows a pronounced deficit near the peak and an excess on the high speed tail, before again falling below the Maxwellian at the highest speeds [@hansen_universal_2006; @vogelsberger_phase-space_2009; @kuhlen_dark_2010]. The high speed excess arises in part from a “debris flow” [@kuhlen_direct_2012], and it reflects the incompletely phase-mixed nature of the DM halo. The simulated $f(v)$ is much better described by a Tsallis distribution [@vergados_impact_2008], a modified Gaussian distribution [@fairbairn_spin-independent_2009], or the empirical fitting function proposed by @mao_halo--halo_2013. Furthermore, the shape of $f(v)$ depends on the location within the halo relative to its scale radius and exhibits considerable scatter between halos [@mao_halo--halo_2013]. In addition to these global non-Maxwellian features, velocity space substructure can give rise to spatial variations in $f(v)$, with individual subhalos or tidal streams producing spikes at discrete speeds [@kuhlen_dark_2010]. DM associated with the Sagittarius tidal stream is an example of a known velocity substructure in our Galaxy that is likely to have a non-negligible influence on DM detection experiments [@purcell_dark_2012].
Most of the above results are based on DM-only simulations, which neglect the effects of baryonic physics in order to achieve extremely high spatial and mass resolution. In recent years, however, increasing computational resources and advances in the treatment of baryonic physics have made it possible to follow the formation of disk galaxies like our Milky Way in cosmological simulations that include dissipational gas physics [e.g. @governato_bulgeless_2010; @agertz_formation_2011; @guedes_forming_2011]. Because baryons are able to radiate energy and condense in the centers of halos, they have the potential to modify the structure of their hosting dark matter halos, and may thereby alter the expectations for direct detection experiments. In particular, adiabatic contraction [@blumenthal_contraction_1986; @gnedin_response_2004] may drag DM toward the halo center and thus increase the local DM density. On the other hand, violent energetic feedback processes might result in the removal of DM from halo centers and the formation of a DM core [@read_mass_2005; @mashchenko_stellar_2008; @governato_bulgeless_2010; @pontzen_how_2012]. Baryonic physics may even result in an offset between the point of maximum DM density and the dynamical center of the Galaxy [@kuhlen_off-center_2013].
Regarding baryonic modifications of the local DM velocity structure, the effect that has received the most attention is the possibility of the creation of a so-called “dark disk” [@lake_darkdisk_1989; @read_thin_2008; @read_dark_2009; @purcell_dark_2009; @ling_dark_2010] – a flattened component of the DM halo, nearly co-rotating with the stellar disk, that is thought to be formed by the tidal disruption of accreted satellites dragged into the disk plane by dynamical friction. The co-rotation results in a reduction of the typical speeds of DM particles incident on Earth, and this can have profound effects on the expectations for direct detection experiments [@bruch_detecting_2009] [however, see @billard_is_2013] and the DM capture rates in the Earth and Sun [@sivertsson_accurate_2010].
| Run | $M_{\rm vir}$ $[{\,\rm M_\odot}]$ | $R_{\rm vir}$ [kpc] | $V_{c,\rm max}$ [km s$^{-1}$] | $N_{\rm part}$ | $M_{\rm DM}$ $[{\,\rm M_\odot}]$ | $M_{\rm gas}$ $[{\,\rm M_\odot}]$ | $M_\star$ $[{\,\rm M_\odot}]$ |
|---|---|---|---|---|---|---|---|
| ErisDark | $9.1 \times 10^{11}$ | 247 | 166 | $7.55 \times 10^6$ | $9.1 \times 10^{11}$ | 0 | 0 |
| Eris | $7.8 \times 10^{11}$ | 235 | 239 | $1.85 \times 10^{7}$ | $6.9 \times 10^{11}$ | $5.6 \times 10^{10}$ | $3.9 \times 10^{10}$ |
In the present paper, we analyze the density and velocity structure of the local DM distribution in one of the highest resolution and most realistic hydrodynamic cosmological calculations of the formation of a disk-dominated galaxy, the Eris simulation. Our work represents an improvement over past analyses based on hydro simulations in at least two respects. First, Eris represents a simulated disk galaxy that matches, for the first time, many of the observed properties of our Milky Way. Second, we compare Eris to its DM-only twin simulation ErisDark, a collisionless run starting from identical initial conditions, which allows us to isolate the effects of the dissipational baryonic physics. The reader should be cautioned, however, that the total mass of the resulting simulated galaxy ($8 \times 10^{11} {\,\rm M_\odot}$) falls at the lower end of the wide range of estimates for the virial mass of the Galaxy ($5 \times 10^{11} {\,\rm M_\odot}< {M_{\rm halo}}< 3 \times 10^{12} {\,\rm M_\odot}$). Moreover, partly as a consequence of this low mass, its merger history appears relatively quiescent: within the $\Lambda$CDM cosmology, the fraction of halos of Eris' present-day mass that have not experienced a merger with mass ratio 1:10 or larger since redshift 3 is about 15% [@koda_2009]. While a comparison of the kinematic properties of halo stars in Eris with the latest sample of halo stars from SDSS seems to favor a light, centrally concentrated Milky Way halo [@rashkov_light_2013], no strong, definitive arguments exist as of yet to constrain the timing and mass-ratio distribution of our Galaxy's assembly history. Undoubtedly, the presented results will depend quantitatively on the specific halo-assembly realization.
Our paper is organized as follows. In Section \[sec:baryonic\_physics\], we review the properties of the simulations Eris and ErisDark, and present the local density and velocity distributions. In Section \[sec:darkdisk\] we focus on the dark disk component by analyzing material that was stripped from accreted satellites. Finally, in Section \[sec:implications\] we discuss implications for direct detection experiments, including effects on the mean scattering rate and the annual modulation, before summarizing our main results in Section \[sec:summary\].\
Baryonic Physics and the Local Dark Matter Distribution {#sec:baryonic_physics}
=======================================================
In this section we describe the properties of the local (i.e. near 8 kpc in the disk) distribution of dark matter as predicted in Eris, a high resolution cosmological hydrodynamics galaxy formation simulation resulting in a realistic Milky Way analog. In order to elucidate the effects that the dissipational baryonic physics has had, we compare to results from ErisDark, the DM-only counterpart to Eris. We first briefly review the properties of the two simulations, then describe the local DM density and velocity structure.
The Eris and ErisDark Simulations
---------------------------------
Both Eris and ErisDark are cosmological zoom-in simulations of a Milky-Way-like galaxy drawn from an N-body realization of 40 million dark-matter particles in a 90-Mpc-side periodic box. The initial conditions have been generated with the code [grafic1]{} [@bertschinger_multiscale_2001], assuming the first-order Zel'dovich approximation for the displacements and velocities of the particles at $z_i = 90$, and a $\Lambda$CDM cosmological model ($H_0 = 73$ km s$^{-1}$ Mpc$^{-1}$, $\Omega_m = 0.268$, $\Omega_b = 0.042$, $n_s = 0.96$, $\sigma_8 = 0.76$). Three levels of refinement have been implemented [e.g. @katz_hierarchical_1993] to obtain a high-resolution particle mass of $m^{\rm ErisDark}_{\rm DM} = 1.2\times 10^5 {\,\rm M_\odot}$ in a subregion of about 1 Mpc on a side. We have modeled the process of structure formation by following two distinct twin runs, Eris and ErisDark, simulated with the N-body+SPH code [gasoline]{} [@wadsley_gasoline:_2004], with and without baryonic dynamics and physics, respectively. In the Eris SPH simulation, presented in detail in @guedes_forming_2011, the high-resolution particles are further split into 13 million dark-matter particles and an equal number of gas particles. The final dark-matter and gas particle masses are $m_{\rm DM} = 9.8 \times 10^4 {\,\rm M_\odot}$ and $m_{\rm SPH} = 2 \times 10^4 {\,\rm M_\odot}$, while each star particle is stochastically created with an initial mass of $m_\star = 6.1 \times 10^3 {\,\rm M_\odot}$. The gravitational softening length is fixed to 124 physical pc at all redshifts $z < 9$, both in ErisDark and Eris.
Compton, atomic, and metallicity-dependent radiative cooling at low temperatures, heating from a cosmic UV field and supernova explosions, a star formation recipe based on a high atomic gas density threshold (n$_{\rm SF} = 5$ atoms cm$^{-3}$ with 10 percent star-formation efficiency), and a blastwave scheme for supernova feedback give rise at the present epoch to a massive, barred, late-type spiral galaxy, a close Milky Way analog [@rashkov_light_2013].
The basic properties of the two simulated Milky-Way galaxies at $z=0$ are summarized in Table \[tab:MW\_prop\], where the host halo has been identified with the spherical-overdensity Amiga Halo Finder [@gill_evolution_2004; @knollmann_ahf:_2009] (the virial radius being defined to enclose a mean total density of 98 times the critical density of the Universe at a given time, i.e. 364 $\bar\rho$; Bryan et al. 1998). In Eris, the simulated galaxy has an extended, rotationally supported stellar disk with a radial scale length of $R_d = 2.5$ kpc and a scale height of $h_{z} = 490$ pc at a galactocentric distance of 8 kpc; a gently falling rotation curve, with a circular velocity at the solar circle of $V_{c,\odot}=205$ km$\,$s$^{-1}$, in good agreement with the recent determination of the local circular velocity, $V_{c,\odot}=218 \pm 6$ km$\,$s$^{-1}$, by @bovy_milky_2012; an i-band bulge-to-disk ratio of B/D = 0.35; and a baryonic mass fraction within the virial radius that is 30 percent below the cosmic value. The stellar mass contained in the thin disk is $\sim 2 \times 10^{10} {\,\rm M_\odot}$, 50 percent of the overall stellar content at $z=0$. An in-depth description and discussion of the structural properties, brightness profiles, and stellar and gas content of Eris can be found in @guedes_forming_2011.
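The quoted circular velocity follows from the enclosed mass via $V_c(r)=\sqrt{G\,M(<r)/r}$. The following is a minimal sketch; the enclosed-mass value below is back-solved from $V_{c,\odot}=205$ km$\,$s$^{-1}$ for illustration and is not a number quoted in the text:

```python
import math

G_KPC = 4.30091e-6  # Newton's constant in kpc (km/s)^2 / Msun

def v_circ(r_kpc, m_enclosed_msun):
    """Circular velocity V_c = sqrt(G M(<r) / r) in km/s."""
    return math.sqrt(G_KPC * m_enclosed_msun / r_kpc)

# Back out the total mass enclosed within 8 kpc implied by V_c = 205 km/s
# (an illustrative inference, ~7.8e10 Msun):
m_8 = 205.0**2 * 8.0 / G_KPC
print(v_circ(8.0, m_8))   # recovers 205 km/s by construction
```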
Density profiles {#sec:density}
----------------
In Figure \[fig:density\_profile\] we show the density profiles of all matter components (dark matter, stars, and gas) as a function of galactocentric radius. We define a disk region-of-interest (ROI) as a cylindrical volume aligned with the stellar disk and extending 0.1 kpc above and below the disk’s midplane ($|z|<0.1$ kpc), and calculate density profiles by binning up particles in evenly spaced logarithmic cylindrical annuli. We show profiles for all DM particles contained in this ROI (thick black line), as well as the baryonic components (cyan lines). For comparison, we also plot a spherical density profile obtained by binning in spherical shells all DM particles in Eris (thin black line) and ErisDark (dashed magenta line).
The Eris galaxy is baryon dominated inward of 12.5 kpc. DM makes up only slightly more than half (55.5%) of the enclosed mass within a spherical radius of 8 kpc, implying that the circular velocity at 8 kpc is sourced in about equal parts by DM and baryons. The local DM density at 8 kpc in the disk plane is 0.42 GeV cm$^{-3}$ (ranging from 0.82 to 0.27 GeV cm$^{-3}$ over the 6-10 kpc interval), and it contributes only 27.5% of the total matter density at this radius. The most recent observational constraints on the local DM density span from $1.25^{+0.30}_{-0.34}$ GeV cm$^{-3}$ [@garbari_limits_2011] to $0.3 \pm 0.1$ GeV cm$^{-3}$ [@bovy_local_2012], with large uncertainties due to modeling assumptions; in this respect, Eris' local DM density is in good agreement with the observationally inferred estimates. The total baryonic content in Eris' ROI (ranging from 2.7 to 0.6 GeV cm$^{-3}$ over the 6-10 kpc interval) is lower than the results from the *Hipparcos* satellite reported by [@holmberg_localdensity_2000], who derive an estimate of the local dynamical mass density of $0.1 {\,\rm M_\odot}$ pc$^{-3} = 3.75$ GeV cm$^{-3}$, to be compared with their measurement of $0.095 {\,\rm M_\odot}$ pc$^{-3}$ = 3.56 GeV cm$^{-3}$ in visible disk matter only [^2]. While this tension depends on the effective thickness of Eris' baryonic disk (still inevitably puffed up compared to the Milky Way's because of resolution), it should be noted that the total surface density for $|z| <$ 1.1 kpc at 8 kpc (48 ${\,\rm M_\odot}$ pc$^{-2}$) is remarkably consistent with the range of local surface densities recently derived by [@bovy_rix_2013].
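The unit conversion used above (e.g. $0.1 {\,\rm M_\odot}\,{\rm pc^{-3}} \approx 3.75$ GeV cm$^{-3}$) can be checked directly. A short sketch; with the rounded constants below the result differs from the quoted value at the percent level:

```python
# Convert a mass density from Msun/pc^3 to GeV/cm^3, the unit conventional
# in the direct-detection literature.
MSUN_KG = 1.98847e30      # solar mass [kg]
GEV_KG  = 1.78266192e-27  # 1 GeV/c^2 [kg]
PC_CM   = 3.0856776e18    # parsec [cm]

def msun_pc3_to_gev_cm3(rho):
    """Convert a density from [Msun pc^-3] to [GeV cm^-3]."""
    return rho * (MSUN_KG / GEV_KG) / PC_CM**3

# Hipparcos-based local dynamical mass density quoted in the text:
print(msun_pc3_to_gev_cm3(0.1))   # ~3.8 GeV cm^-3 (text quotes 3.75)
```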
Interestingly, the local DM density in the disk is about 31% *higher* than the spherically averaged DM density at 8 kpc in the ErisDark simulation (0.32 GeV cm$^{-3}$), even though in ErisDark all of the matter is treated as DM, while in Eris 17% is baryonic. This increase in the local DM density is the result of a contraction due to the dissipational processes occurring during the formation of the Galactic disk. The local disk DM density in Eris is also higher (by 34%) than its spherical average (0.31 GeV cm$^{-3}$), indicating that at 8 kpc this contraction occurred primarily in the plane of the disk rather than globally.
![Density profiles as a function of galactocentric radius in the midplane ($|z| <$ 0.1 kpc) of the stellar disk. The thick black line is for all DM particles in the disk region, and the cyan lines are the baryonic components. The thin black line is the spherically-averaged dark matter density profile, for which $R$ refers to the 3D radius, and the magenta dashed line is the same for the ErisDark simulation. The shaded band indicates the region we considered for the velocity distribution analysis.[]{data-label="fig:density_profile"}](density_profile_noDD_revised.pdf){width="\columnwidth"}
This conclusion is further strengthened by a comparison of the ellipsoidal shapes of the dark matter density distributions in Eris and ErisDark. We followed the iterative method described in @kuhlen_shapes_2007 and applied it to particles between 6 and 10 kpc from the host halo’s center. As is typical for halos in dissipationless DM-only simulations [e.g. @allgood_shape_2006], the ErisDark halo is quite prolate, with intermediate-to-minor axis ratio $q=0.53$ and minor-to-major axis ratio $s=0.45$. As expected [@katz_dissipational_1991; @dubinski_effect_1994; @kazantzidis_effect_2004; @abadi_galaxy-induced_2010], the inclusion of dissipational baryonic physics results in a more axisymmetric and rounder DM halo in Eris. It is oblate with $q=0.99$ and $s=0.69$, and its minor axis is aligned to within $1.5^\circ$ with the angular momentum vector of the stellar disk (and to within $7^\circ$ of ErisDark’s minor axis).
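The iterative shape measurement can be sketched as follows: select particles in an ellipsoidal shell, diagonalize their shape tensor, update the shell's axis ratios, and repeat to convergence. This is a simplified stand-in for the method of @kuhlen_shapes_2007, not its exact implementation, and is verified here on a mock oblate particle cloud rather than simulation data:

```python
import numpy as np

def axis_ratios(pos, rmin=6.0, rmax=10.0, n_iter=30, tol=1e-4):
    """Iteratively estimate shape axis ratios (q = b/a, s = c/a) from
    particle positions [kpc]: select particles in an ellipsoidal shell,
    diagonalize their shape tensor, update the shell, repeat."""
    q = s = 1.0
    axes = np.eye(3)                 # columns: major, intermediate, minor
    for _ in range(n_iter):
        p = pos @ axes               # coordinates in the current eigenframe
        r_ell = np.sqrt(p[:, 0]**2 + (p[:, 1] / q)**2 + (p[:, 2] / s)**2)
        sel = (r_ell > rmin) & (r_ell < rmax)
        S = (pos[sel].T @ pos[sel]) / sel.sum()   # second-moment (shape) tensor
        w, v = np.linalg.eigh(S)                  # eigenvalues ascending
        a, b, c = np.sqrt(w[::-1])
        q_new, s_new = b / a, c / a
        axes = v[:, ::-1]
        converged = abs(q_new - q) < tol and abs(s_new - s) < tol
        q, s = q_new, s_new
        if converged:
            break
    return q, s

# sanity check on a mock oblate Gaussian cloud with true q = 1, s = 0.5:
rng = np.random.default_rng(0)
x = rng.normal(size=(100000, 3)) * np.array([5.0, 5.0, 2.5])
print(axis_ratios(x))   # approximately (1.0, 0.5)
```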
Velocity Distributions
----------------------
{width="17cm"}
Scattering rates at DM direct detection experiments depend on the shape of the DM velocity distribution $f(\vec{v})$ at Earth. The relevant length scale ($R_\oplus$) is far below the resolution limit even of state-of-the-art numerical galaxy formation simulations (a few hundred parsecs), so we are forced to take a coarse-grained spatial average. In ultra-high resolution, purely collisionless (DM-only) simulations, spatial variations in $f(\vec{v})$ at 8 kpc have been investigated on $\sim$ kpc scales [@vogelsberger_phase-space_2009; @kuhlen_dark_2010]. These studies found some spatially localized sharp velocity features due to the presence of subhalos or tidal streams, but only with a low probability of $\sim 10^{-2}$. For the present work we thus neglect any small-scale variations and consider the velocity distribution determined from all particles in a cylindrical disk annulus to be representative of $f(\vec{v})$ at Earth.
The annulus we consider is aligned with the stellar disk and has $|R - R_\odot| <$ 2 kpc and $|z| <$ 2 kpc.[^3] In Eris this region contains 81,213 DM particles and 830,068 star particles. From these we calculate distributions of the radial ($v_R$), azimuthal ($v_\theta$), and vertical ($v_z$) velocity components, as well as of the velocity modulus ($|\vec{v}|$). These distributions are shown in Figure \[fig:fv\_4panel\]. All distributions are separately normalized to unity ($\int \! f(v_i) \, dv_i = 1$), and have been smoothed with a boxcar window of width $50$ km$\,$s$^{-1}$ in order to suppress numerical noise stemming from low particle counts. The distribution of the stars' $v_\theta$ (cyan line in the upper left panel) has been scaled down by a factor of 0.4 in order to show its shape on the same plot.
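The binning and smoothing procedure can be sketched as follows. Mock velocities stand in for the simulation particles; the 50 km$\,$s$^{-1}$ boxcar width is the one used in the text:

```python
import numpy as np

def smoothed_velocity_pdf(v, bins, box_width=50.0):
    """Normalized histogram f(v_i) (int f dv = 1), smoothed with a
    boxcar window of width box_width [km/s] to suppress particle noise."""
    dv = bins[1] - bins[0]
    f, edges = np.histogram(v, bins=bins, density=True)
    k = max(1, int(round(box_width / dv)))
    f_smooth = np.convolve(f, np.ones(k) / k, mode="same")
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers, f_smooth

# mock azimuthal velocities: a broad halo plus a small co-rotating component
rng = np.random.default_rng(1)
v_theta = np.concatenate([rng.normal(0.0, 140.0, 80000),
                          rng.normal(205.0, 50.0, 8000)])
centers, f = smoothed_velocity_pdf(v_theta, np.arange(-700.0, 701.0, 10.0))
print(f.sum() * 10.0)   # ~1: boxcar smoothing preserves the normalization
```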
We compare the Eris disk ROI velocity distributions to the ErisDark spherical shell sample of width 4 kpc, which contains 229,931 DM particles. This kind of spherical shell sample is commonly used in the analysis of DM-only simulations of Milky-Way-like halos, for which there is no preferred plane to associate with the Galactic disk. We additionally plot a Maxwell-Boltzmann (MB) distribution with the same peak speeds as the simulations’ distributions: $\sigma_{\rm 1D} = v_{\rm peak}/\sqrt{2} = 137.9 \, (109.6) {\ifmmode \,{\rm km\,s^{-1}}\else km$\,$s$^{-1}$\fi}$ in Eris (ErisDark).
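The peak-matched Maxwell-Boltzmann comparison uses $f_{\rm MB}(v) \propto v^2 \exp(-v^2/2\sigma_{\rm 1D}^2)$, which peaks at $v_{\rm peak} = \sqrt{2}\,\sigma_{\rm 1D}$. A sketch using the dispersions quoted above:

```python
import numpy as np

def maxwell_speed_pdf(v, sigma_1d):
    """Maxwell-Boltzmann speed distribution for 1D dispersion sigma_1d
    [km/s]; it peaks at v_peak = sqrt(2) * sigma_1d."""
    return (np.sqrt(2.0 / np.pi) * v**2 / sigma_1d**3
            * np.exp(-v**2 / (2.0 * sigma_1d**2)))

v = np.linspace(0.0, 1500.0, 30001)
dv = v[1] - v[0]
for name, sigma in [("Eris", 137.9), ("ErisDark", 109.6)]:
    f = maxwell_speed_pdf(v, sigma)
    print(name, v[np.argmax(f)])   # peak lies at ~sqrt(2)*sigma
```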
Compared to ErisDark, the dissipational baryonic physics in Eris has broadened the radial and azimuthal distributions, while the vertical component has become slightly narrower. Note that the azimuthal component in Eris is skewed towards positive ${\ifmmode v_\theta \else $v_\theta$\fi}$, indicating the presence of an enhanced population of particles approximately co-rotating with the stars, i.e. a so-called “dark disk”. This asymmetry is the topic of Section \[sec:darkdisk\].
In the speed distribution (lower right), the DM-only simulation exhibits the familiar departures from a Maxwellian shape [@hansen_universal_2006; @vogelsberger_phase-space_2009; @kuhlen_dark_2010], with a deficit near the peak and excess particles at high speeds. In Eris the distribution is shifted to larger speeds, with the mean speed increasing from $\langle v \rangle = 187.6 {\ifmmode \,{\rm km\,s^{-1}}\else km$\,$s$^{-1}$\fi}$ to $220.8 {\ifmmode \,{\rm km\,s^{-1}}\else km$\,$s$^{-1}$\fi}$. Furthermore, it no longer shows as marked a departure from the matched Maxwellian as in the DM-only case, only exceeding it slightly from 230 to 380 km$\,$s$^{-1}$ and falling more rapidly at even higher speeds. We also compared to the so-called Standard Halo Model (SHM) distribution, consisting of a Maxwellian with ${\ifmmode v_{\rm peak} \else $v_{\rm peak}$\fi}= 220 {\ifmmode \,{\rm km\,s^{-1}}\else km$\,$s$^{-1}$\fi}$ (dashed line). Eris actually exceeds the SHM at all speeds less than $\sim 350 {\ifmmode \,{\rm km\,s^{-1}}\else km$\,$s$^{-1}$\fi}$, and then again falls more sharply at higher speeds.
![The speed distribution in Eris (black) and ErisDark (grey) on a logarithmic scale, compared to the fitting function from @mao_halo--halo_2013 (dashed), with $(v_0,v_{\rm esc},p) = (330 {\ifmmode \,{\rm km\,s^{-1}}\else km$\,$s$^{-1}$\fi}, 480 {\ifmmode \,{\rm km\,s^{-1}}\else km$\,$s$^{-1}$\fi}, 2.7)$ for Eris and $(100 {\ifmmode \,{\rm km\,s^{-1}}\else km$\,$s$^{-1}$\fi}, 440 {\ifmmode \,{\rm km\,s^{-1}}\else km$\,$s$^{-1}$\fi}, 1.5)$ for ErisDark, and to the peak-matched Maxwellian curves (dotted).[]{data-label="fig:Mao_comparison"}](Mao_comparison.pdf){width="\columnwidth"}
Recently @mao_halo--halo_2013 proposed an empirical fitting function for the speed distribution[^4], $$f(v) =
\begin{cases}
A \, v^2 \, \exp\!{(-v/v_0)} \, \left( {\ifmmode v_{\rm esc} \else $v_{\rm esc}$\fi}^2 - v^2 \right)^p & \text{if $v \leq {\ifmmode v_{\rm esc} \else $v_{\rm esc}$\fi}$,} \\
0 & \text{otherwise,}
\end{cases}
$$ which they showed to be flexible enough to match the variations in the shape of $f(v)$ over a wide range of halo masses and locations within the halos. As shown in Figure \[fig:Mao\_comparison\], the $f(v)$ of both ErisDark and Eris are indeed well fit by this functional form, with fit parameters $(v_0, p) = (330 {\ifmmode \,{\rm km\,s^{-1}}\else km$\,$s$^{-1}$\fi}, 2.7)$ in Eris and $(100 {\ifmmode \,{\rm km\,s^{-1}}\else km$\,$s$^{-1}$\fi}, 1.5)$ in ErisDark. The escape velocity $v_{\rm esc}$ is not a free parameter and was determined directly in the simulations from the maximum particle speeds in the ROI to be ${\ifmmode v_{\rm esc} \else $v_{\rm esc}$\fi}= 480 {\ifmmode \,{\rm km\,s^{-1}}\else km$\,$s$^{-1}$\fi}$ in Eris and $440 {\ifmmode \,{\rm km\,s^{-1}}\else km$\,$s$^{-1}$\fi}$ in ErisDark.
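A sketch of this fitting function, with the normalization constant $A$ fixed numerically and the parameters quoted above for the two runs:

```python
import numpy as np

def f_mao(v, v0, vesc, p):
    """Mao et al. (2013) speed-distribution fit:
    f(v) = A v^2 exp(-v/v0) (vesc^2 - v^2)^p for v <= vesc, 0 otherwise,
    with A fixed so that int_0^vesc f dv = 1."""
    def shape(u):
        return u**2 * np.exp(-u / v0) * np.clip(vesc**2 - u**2, 0.0, None)**p
    # midpoint-rule normalization (avoids any SciPy dependency)
    n = 20000
    u = (np.arange(n) + 0.5) * vesc / n
    norm = shape(u).sum() * vesc / n
    return shape(np.asarray(v, dtype=float)) / norm

# parameters quoted in the text
pars = {"Eris": (330.0, 480.0, 2.7), "ErisDark": (100.0, 440.0, 1.5)}
v = np.linspace(0.0, 600.0, 6001)
for name, (v0, vesc, p) in pars.items():
    f = f_mao(v, v0, vesc, p)
    print(name, f[v > vesc].max())   # vanishes beyond the escape speed
```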
The increase in the parameter $p$ from ErisDark to Eris is an important result of this study, since it is precisely such high values, i.e. a more steeply falling $f(v)$ at high speeds, that ease the tension between the tentative detection of a scattering signal reported by CDMS-Si [@cdms_collaboration_dark_2013] and the nominal exclusion of such a signal from the Xenon-100 experiment [@aprile_dark_2012], as shown by @mao_connecting_2013.\
{width="\textwidth"}
A rotating dark disk from satellite accretion {#sec:darkdisk}
=============================================
We now turn to a discussion of the origin of the asymmetry in the distribution of the azimuthal velocity component in Eris. This feature is indicative of a “dark disk”, consisting of an oblate dark matter distribution aligned and nearly co-rotating with the stellar disk, as previously reported in cosmological hydrodynamic galaxy formation simulations by, for example, @read_dark_2009 [see also @ling_dark_2010], who find that the dark disk may contribute between 20 and 60 percent of the local DM density.
Dark disk material is typically thought to be deposited by massive satellites that are preferentially dragged into, and disrupted in, the plane of the disk [@read_thin_2008; @read_dark_2009]. If such satellites more commonly have prograde orbits with respect to the rotation of the stars in the disk, then the material they deposit upon being tidally disrupted will predominantly be co-rotating with the stars, leading to a positively skewed $f(v_\theta)$. In fact, dark disk material is expected to be prograde rather than retrograde, since dynamical friction is more efficient at dragging toward the disk plane those incoming satellites that are not only massive but also on prograde orbits with respect to the baryonic disk.
Indeed, the fraction of DM with $v_\theta$ within 50 km$\,$s$^{-1}$ of the peak of the stellar $f(v_\theta)$ (at 205 km$\,$s$^{-1}$) is higher in Eris (0.13) than in ErisDark (0.055). However, a similar increase is also observed at negative $v_\theta$ (within 50 km$\,$s$^{-1}$ of $v_\theta = -205$ km$\,$s$^{-1}$), where the fraction of mass increases from 0.054 (ErisDark) to 0.095 (Eris). This indicates that the bulk of the increase in co-rotating material stems from a broadening of the $f(v_\theta)$ distribution, rather than from disrupted satellites preferentially depositing material into a co-rotating configuration. Note that some symmetric broadening could still arise from satellites being dragged into the galactic plane, if they were about equally likely to be on prograde and retrograde orbits. Most of the increase in dispersion, however, is likely due to the additional baryonic material that has been able to settle to the center of the halo and has deepened the potential: at 8 kpc, the circular velocity has increased from 140 km$\,$s$^{-1}$ in ErisDark to 205 km$\,$s$^{-1}$ in Eris.
Motivated by these considerations, we have followed in Eris the accretion and disruption of all satellites consisting of more than 1000 DM particles at infall. There are 160 such systems, of which 74 deposit material in our disk ROI. We have identified all DM particles that were at one point bound to these satellites, and determined their contribution to the DM distribution in the disk ROI. In total there are 30,780 such particles, together contributing 38 percent of the local DM density. Figure \[fig:fv\_2panel\_darkdisk\] shows distributions of the azimuthal velocity and of the total speed for each of these satellites. The left panel demonstrates that not all of the material accreted from satellites is co-rotating with the stars. In fact, most satellites deposit material into a roughly symmetric $v_\theta$ distribution centered on $v_\theta = 0$ km$\,$s$^{-1}$, while a few deposit a predominantly retrograde ($v_\theta < 0$) particle distribution. The positive skewness in the total $f(v_\theta)$ appears to be contributed mostly by one massive system with $M_{\rm infall} = 1.8 \times 10^{10} {\,\rm M_\odot}$ and $z_{\rm infall} = 2.7$ (a mass ratio of 1:14).
The total speed distributions in the right panel reveal two populations of accreted satellites: one set deposits material with typical speeds comparable to the peak of the overall speed distribution ($\sim 200$ km$\,$s$^{-1}$), and a second set deposits material with considerably higher speeds ($\gtrsim 300$ km$\,$s$^{-1}$). The latter material makes up a so-called “debris flow”, whose implications for direct detection experiments have been discussed in @kuhlen_direct_2012.
Returning to the azimuthal distributions, we define for each satellite an asymmetry parameter, $${\rm Asym} = \frac{F(v_\theta > 0) - F(v_\theta < 0)}{F(v_\theta > 0) + F(v_\theta < 0)},
\label{eq:Asym}$$ where $F(v_\theta > 0) = \int_0^\infty \! f(v_\theta) \, dv_\theta$ is the fraction of material with positive $v_\theta$, and $F(v_\theta < 0)$ the fraction with negative $v_\theta$. This parameter quantifies the degree to which a satellite contributes material predominantly rotating in the same sense as the stars. Figure \[fig:Asym\_histogram\] shows a histogram of the parameter Asym, weighted by the mass contributed to the disk ROI by each satellite. The majority of the mass accreted from satellites has positive Asym, rotating prograde with respect to the stars. Satellites with ${\rm Asym} > 0$ contribute 31 percent of the DM mass in the disk ROI, i.e. 81% of all accreted material.
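For an $N$-particle sample, Equation \[eq:Asym\] reduces to counting prograde versus retrograde particles. A minimal sketch with mock satellite debris:

```python
import numpy as np

def asym(v_theta):
    """Asym = (F+ - F-) / (F+ + F-) estimated from particle azimuthal
    velocities; +1 is purely prograde, -1 purely retrograde."""
    pro = np.count_nonzero(v_theta > 0.0)
    retro = np.count_nonzero(v_theta < 0.0)
    return (pro - retro) / (pro + retro)

rng = np.random.default_rng(2)
sym_debris = rng.normal(0.0, 100.0, 50000)    # symmetric debris: Asym ~ 0
pro_debris = rng.normal(120.0, 100.0, 50000)  # mostly prograde: Asym > 0
print(asym(sym_debris), asym(pro_debris))
```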
Following previous work [@purcell_dark_2009; @read_dark_2009], we first consider as dark disk particles those with $v_\theta$ within 50 km$\,$s$^{-1}$ of the peak of the stellar $f(v_\theta)$ at 205 km$\,$s$^{-1}$. However, even satellites with a fairly symmetric $f(v_\theta)$ can contribute material that satisfies this criterion; indeed, an almost equal amount of mass is found to be rotating in the opposite sense at the same speed. For this reason, we additionally impose the constraint ${\rm Asym} > 2/3$: only material from satellites with a strongly positively asymmetric $f(v_\theta)$ is considered part of the dark disk. With these criteria, the dark disk in Eris makes up only 3.2 percent ($2.6 \times 10^8 {\,\rm M_\odot}$) of all DM in the disk ROI. Note that this component almost exactly accounts for the difference in mass between material with $v_\theta$ within 50 km$\,$s$^{-1}$ of 205 km$\,$s$^{-1}$ and of $-205$ km$\,$s$^{-1}$ ($2.75 \times 10^8 {\,\rm M_\odot}$).
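The two-part selection can be written down directly. A sketch with hypothetical per-particle arrays: `v_theta` and the parent satellite's `sat_asym` are assumed inputs for illustration, not quantities tabulated in the text:

```python
import numpy as np

def dark_disk_mask(v_theta, sat_asym, v_star_peak=205.0,
                   window=50.0, asym_cut=2.0 / 3.0):
    """Dark disk selection used in the text: v_theta within `window`
    km/s of the stellar peak AND parent-satellite Asym > 2/3."""
    corotating = np.abs(v_theta - v_star_peak) < window
    return corotating & (sat_asym > asym_cut)

# four illustrative particles
v_theta  = np.array([200.0, 210.0, -205.0, 200.0])
sat_asym = np.array([0.9,   0.5,    0.9,   0.7])
print(dark_disk_mask(v_theta, sat_asym))   # [True, False, False, True]
```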
With the above definition, the dark disk in Eris is much less massive than what has been suggested by previous work [@read_thin_2008; @read_dark_2009; @purcell_dark_2009; @ling_dark_2010]. A physical reason for this difference is the fairly quiet satellite accretion history of Eris, whereas previous works have focused on Milky-Way-like galaxies characterized by more massive and more numerous merger events. It has previously been pointed out that a quiet recent merger history is likely a pre-requisite for obtaining a realistic Milky Way analog with a thin and cold stellar disk, thus making a heavy dark disk unlikely [@purcell_dark_2009]. However, other studies have shown that the over-heating of the thin stellar disk is much less disruptive once more realistic distributions for the inclinations and eccentricities of satellite orbits [@read_thin_2008] and gas [@moster_gasdisk_2010] are properly included in the simulations. Moreover, [@niederste-ostholt_sagittarius_2010] suggest that Sagittarius, which is currently being disrupted in the Galaxy's tidal field, might have been as massive as $10^{10} {\,\rm M_\odot}$ prior to merging, making the necessity of a quiescent merger history even more uncertain. Our ${\rm Asym} > 2/3$ cut further reduces the dark disk contribution.
![Histogram of the asymmetry parameter Asym (see text for definition) of accreted satellites, weighted by the mass they contribute locally. We define as a “dark disk” material contributed by satellites with ${\rm Asym} > 2/3$.[]{data-label="fig:Asym_histogram"}](Asym_histogram.pdf){width="\columnwidth"}
Although the dark disk, under the above criteria, contributes only 3.2 percent of the total DM mass in the disk ROI, we note that the overall asymmetry in $f(v_\theta)$ is considerably larger: ${\rm Asym} = 0.12$, implying about 30% more prograde than retrograde material. This motivates a less restrictive definition of the dark disk, namely all material contributed by satellites with high asymmetry, regardless of its lag speed with respect to the stars. When only the asymmetry criterion ${\rm Asym} > 2/3$ is applied, the dark disk makes up 9.1 percent ($7.25 \times 10^8 {\,\rm M_\odot}$) of all DM in the disk ROI, and it is this definition of the dark disk that we use in the remainder of the paper. About 60% of the dark disk is contributed by the same single satellite that dominates the overall skewness of $f(v_\theta)$.
Figure \[fig:darkdisk\_projections\] depicts face-on and edge-on projections of the dark disk material. It exhibits very little azimuthal structure and, as expected, is oblate in shape, with a minor-to-major axis ratio of $s = 0.45$, i.e. even more flattened than the overall DM distribution. As shown in Figure \[fig:exponential\_disk\], the vertical density structure of the dark disk is well described by an exponential profile with a scale height of 5.0 kpc. While the radial structure is not exponential over the full radial range, it is approximately so near the solar radius (6–12 kpc), with a scale radius of 5.4 kpc.
The dark disk contributes 0.034 GeV cm$^{-3}$ to the DM density in the disk ROI. This is only about one third of the excess DM density in the Eris disk ROI over the ErisDark spherical average (0.12 GeV cm$^{-3}$), and this suggests that there are at least two distinct processes leading to an enhancement of the DM in the disk plane: one process that results in a DM component with significant net angular momentum and that is nearly co-rotating with the stellar disk, and another process that pulls DM into the disk plane without forcing it to co-rotate [see also @zemp_impact_2012].
In the following section we look in more detail at how the baryonic physics effects we have discussed above affect the expected direct detection signals.\
![Face-on and edge-on projections of the Eris dark disk (Asym $> 2/3$) particles. Contours are at $(6.7 \times 10^6, \, 1.0 \times 10^7, \, 2.0 \times 10^7, \, 6.7 \times 10^7) \, {\,\rm M_\odot}\, {\rm kpc}^{-2}$ in the top panel and at $(1.3 \times 10^7, \, 2.6 \times 10^7, \, 6.7 \times 10^7, \, 1.3 \times 10^8) \, {\,\rm M_\odot}\, {\rm kpc}^{-2}$ in the bottom.[]{data-label="fig:darkdisk_projections"}](darkdisk_projection.pdf){width="0.95\columnwidth"}
![Radial (blue) and vertical (red) density profiles of the dark disk component in Eris. The dashed lines are exponential profiles with scale radius $R_{\rm dd} = 5.4$ kpc and scale height $h_{\rm z,dd} = 5.0$ kpc. []{data-label="fig:exponential_disk"}](darkdisk_exponential_fit.pdf){width="\columnwidth"}
Implication for Experiments {#sec:implications}
===========================
Earth Frame $f(v_\theta)$ and $f(v)$
----------------------------------------------------------------------------
{width="\textwidth"}
| Experiment | Target (Z, A) | $m_\chi$ [GeV] | $E_r$ window [keV] | $v_{\rm min}$ range [km s$^{-1}$] | Ref. |
|---|---|---|---|---|---|
| CDMS II (Ge) | Ge (32, 73) | 10 | $[10.0, 100]$ | $[651, 2060]$ | (1) |
| | | 70 | | $[160, 507]$ | |
| | | 500 | | $[89.9, 284]$ | |
| CDMS II (Si) | Si (14, 28) | 5 | $[7.0, 100]$ | $[700, 2640]$ | (2) |
| | | 10 | | $[403, 1520]$ | |
| | | 20 | | $[254, 961]$ | |
| XENON100 | Xe (54, 131) | 10 | $[6.6, 43.3]$ | $[671, 1720]$ | (3) |
| | | 50 | | $[172, 442]$ | |
| | | 500 | | $[60.0, 154]$ | |
| DAMA/LIBRA | Na (11, 23) | 10 | $[6.7, 20.0]$ | $[378, 652]$ | (4) |
| | I (53, 127) | 100 | $[25.0, 75.0]$ | $[214, 370]$ | |
| CoGeNT | Ge (32, 73) | 5 | $[2.27, 11.2]$ | $[583, 1300]$ | (5) |
| | | 10 | | $[310, 689]$ | |
| CRESST-II | O (8, 16) | 10 | $[12.0, 40.0]$ | $[477, 872]$ | (6) |
| | Ca (20, 40) | 10 | | $[581, 1060]$ | |
| | W (74, 184) | 50 | | $[253, 463]$ | |
So far we have focused on velocity distributions in the halo rest frame, but direct detection scattering rates of course depend on the velocity distribution in the Earth's rest frame. Here the low velocity of the rotating dark disk component relative to the stars can lead to pronounced changes compared to the non-rotating DM. We transform the halo-centric velocities into the Earth's rest frame by applying a Galilean boost by $\vec{v}_\oplus(t)$. The Earth's velocity with respect to the Galactic center is the sum of the local standard of rest (LSR) circular velocity around the Galactic center, the Sun's peculiar motion with respect to the LSR, and the Earth's orbital velocity with respect to the Sun, $$\vec{v}_\oplus(t) = \vec{v}_{\rm LSR} + \vec{v}_{\rm pec} + \vec{v}_{\rm orbit}(t).$$ We set $\vec{v}_{\rm LSR} = (0, 205, 0)$ km$\,$s$^{-1}$, $\vec{v}_{\rm pec} = (10.00, 5.23, 7.17)$ km$\,$s$^{-1}$ [@dehnen_local_1998], and $\vec{v}_{\rm orbit}(t)$ as specified in @lewin_review_1996. The velocities are given in the conventional $(U,V,W)$ coordinate system, where $U$ refers to motion radially inwards towards the Galactic center, $V$ to motion in the direction of Galactic rotation, and $W$ to motion vertically upwards out of the plane of the disk. We associate these three velocity coordinates with the $(v_R, v_\theta, v_z)$ coordinates of the simulation particles. Note that, for consistency with the simulation, we set the azimuthal component of $\vec{v}_{\rm LSR}$ equal to the rotational velocity of the star particles in Eris ($205$ km$\,$s$^{-1}$), rather than to the IAU standard value of $220$ km$\,$s$^{-1}$.
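A sketch of the boost into the Earth's rest frame under the conventions above, with $(U,V,W)$ identified with $(v_R, v_\theta, v_z)$. For simplicity $\vec{v}_{\rm orbit}(t)$ is taken as an optional constant vector rather than the full time-dependent expression of @lewin_review_1996:

```python
import numpy as np

V_LSR = np.array([0.0, 205.0, 0.0])     # azimuthal value matched to Eris stars
V_PEC = np.array([10.00, 5.23, 7.17])   # solar peculiar motion (U, V, W)

def earth_frame_speeds(v_gal, v_orbit=np.zeros(3)):
    """Galilean boost: subtract v_oplus = v_LSR + v_pec + v_orbit from
    Galactocentric (v_R, v_theta, v_z) velocities of shape (N, 3) and
    return the Earth-frame speeds |v - v_oplus|."""
    v_oplus = V_LSR + V_PEC + np.asarray(v_orbit)
    return np.linalg.norm(v_gal - v_oplus, axis=-1)

# a particle at rest in the Galactic frame is seen moving at |v_oplus|,
# ~210 km/s with the values adopted here
print(earth_frame_speeds(np.zeros((1, 3)))[0])
```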
The resulting Earth rest frame speed distributions are shown in Figure \[fig:fv\_earth\_2panel\_darkdisk\] for $t=150.2$ days since J2000.0 (beginning of June), when the Earth’s relative motion with respect to the Galactic DM halo is maximized. The dark disk’s azimuthal velocity distribution (left panel) peaks at $-85$ km$\,$s$^{-1}$ ($60$ km$\,$s$^{-1}$ below the peak of the stars). The additional low azimuthal speed material from the dark disk results in a marked excess in the low speed tail ($<200$ km$\,$s$^{-1}$) of the full speed distribution (right panel), and a corresponding deficit at intermediate speeds (200–400 km$\,$s$^{-1}$), in the Eris disk annulus compared to the ErisDark spherical shell. At even higher speeds ($>400$ km$\,$s$^{-1}$) we again see an excess in Eris compared to ErisDark. This high speed excess is the result of the overall broadening of the velocity distributions caused by the adiabatic contraction of the DM halo (the second process mentioned at the end of Sec. \[sec:darkdisk\]).
Scattering Signal
-----------------
The differential DM-nucleus scattering rate per unit detector mass is given by [@lewin_review_1996] $$\frac{dR}{dE_r} = \frac{\rho_0}{m_N \, m_\chi} \, \sigma(E_r) \, \int_{v_{\rm min}}^\infty \!\! \frac{f(v)}{v} \, dv,$$ where $\rho_0$ is the local (at Earth) DM density, $m_N$ is the mass of the target nucleus, $m_\chi$ the mass of the DM particle, $E_r$ is the recoil energy, $\sigma(E_r)$ is the energy-dependent scattering cross section, and $v = |\vec{v}|$ is the Earth-frame speed of the DM particles incident on the detector. When a DM particle with incident speed $v$ elastically scatters off a nucleus, it imparts some fraction of its kinetic energy as a nuclear recoil. A given $E_r$ can be produced by any particle with $v > v_{\rm min} = \sqrt{E_r \, m_N/(2\mu^2)}$ (where $\mu = m_N m_\chi / (m_N + m_\chi)$ is the reduced mass), and hence the differential scattering rate is directly proportional to $$g(v_{\rm min}) \equiv \int_{v_{\rm min}}^\infty \!\! \frac{f(v)}{v} \, dv.
\label{eq:gvmin}$$
Not all DM direct detection experiments are sensitive to the same range of $v_{\rm min}$. For a given $m_N$ and an assumed $m_\chi$, the recoil energy sensitivity band $(E_{r,\rm min},E_{r,\rm max})$ of an experiment can be mapped onto a corresponding band in $v_{\rm min}$ [@fox_integrating_2011; @frandsen_resolving_2012]. Experiments with heavier nuclei and lower energy thresholds have smaller $v_{\rm min}$, and hence probe more of $f(v)$. In Table \[tab:benchmarks\] we list $v_{\rm min}$-bands for a number of prominent direct detection experiments and several representative values of $m_\chi$. Note that these combinations of experiments and DM particle masses cover almost the entire range of possible $v_{\rm min}$ values, from $v_{\rm min} \lesssim 100$ km$\,$s$^{-1}$ for Xenon100 and CDMS-II(Ge) with a very massive DM particle ($m_\chi \sim 500$ GeV) all the way to beyond the escape speed for light DM particles ($m_\chi \lesssim 10$ GeV).
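The mapping itself is just the kinematic relation $v_{\rm min} = \sqrt{E_r \, m_N/(2\mu^2)}$. A minimal sketch (the target, recoil energy, and DM masses in the example are illustrative, not taken from Table \[tab:benchmarks\]):

```python
import math

C_KMS = 299792.458   # speed of light in km/s
AMU_GEV = 0.931494   # atomic mass unit in GeV

def v_min_kms(E_r_keV, A, m_chi_GeV):
    """Minimum DM speed (km/s) able to produce a recoil of energy E_r
    on a nucleus of mass number A: v_min = sqrt(E_r * m_N / (2 mu^2))."""
    m_N = A * AMU_GEV
    mu = m_N * m_chi_GeV / (m_N + m_chi_GeV)   # reduced mass
    E_r = E_r_keV * 1e-6                       # keV -> GeV
    return C_KMS * math.sqrt(m_N * E_r / 2.0) / mu

# e.g. a 10 keV recoil on a xenon-like target (A = 131) from a 50 GeV WIMP
v = v_min_kms(10.0, 131, 50.0)
```

As the text notes, for fixed $E_r$ and $m_N$ a heavier DM particle (larger $\mu$) gives a smaller $v_{\rm min}$, so heavy-$m_\chi$ benchmarks probe the low-speed part of $f(v)$.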
In Figure \[fig:signal\] we show comparisons of the average scattering rate $\langle {\ifmmode g(v_{\rm min}) \else $g(v_{\rm min})$\fi}\rangle$ and the modulation amplitude, fraction, and peak day between Eris, the dark disk component of Eris, the ErisDark spherical shell, and the SHM model (Maxwellian with ${\ifmmode v_{\rm peak} \else $v_{\rm peak}$\fi}= 220 {\ifmmode \,{\rm km\,s^{-1}}\else km$\,$s$^{-1}$\fi}$ and $v_{\rm esc} = 550 {\ifmmode \,{\rm km\,s^{-1}}\else km$\,$s$^{-1}$\fi}$). Note that we compare to the SHM, rather than to the peak-matched Maxwellian with ${\ifmmode v_{\rm peak} \else $v_{\rm peak}$\fi}= 195 {\ifmmode \,{\rm km\,s^{-1}}\else km$\,$s$^{-1}$\fi}$, since the SHM is commonly used in the direct detection literature. We have also chosen not to scale up the Eris velocities to match ${\ifmmode v_{\rm peak} \else $v_{\rm peak}$\fi}= 220 {\ifmmode \,{\rm km\,s^{-1}}\else km$\,$s$^{-1}$\fi}$ (as was done in Kuhlen et al. 2010, for example), since Eris after all is a realistic Milky Way analog galaxy, and so its lower ${\ifmmode v_{\rm peak} \else $v_{\rm peak}$\fi}$ should be considered a realistic possibility.
The top left panel of Figure \[fig:signal\] shows the time-averaged mean scattering rate, $\langle g(v_{\rm min}) \rangle$. The differences between Eris, ErisDark, and the SHM remain fairly modest at low and intermediate $v_{\rm min}$. Below 100 km$\,$s$^{-1}$ the presence of the dark disk raises $g(v_{\rm min})$ by a few percent compared to ErisDark. Between 100 and 300 km$\,$s$^{-1}$, the dark disk results in a comparable reduction of the average scattering rate, but at even larger $v_{\rm min}$ the non-rotating density enhancement again reverses the trend. The difference between Eris and ErisDark continues to grow monotonically, reaching a factor 2 (3, 4) enhancement near $v_{\rm min}=500$ (550, 600) km$\,$s$^{-1}$. Compared to the SHM, the average scattering rate in Eris is enhanced by up to 15 percent at $v_{\rm min} < 180$ km$\,$s$^{-1}$, but is reduced at all higher $v_{\rm min}$. To summarize, the rotating dark disk component affects the average scattering rate only modestly, but the additional contraction (non-rotating density enhancement) strongly affects the scattering rates at large $v_{\rm min}$, leading to an increase compared to the DM-only simulation and a suppression compared to the SHM.
Modulation
----------
Owing to the orbital motion of the Earth around the Sun, the speed with which particles impinge on the Earth is modulated in the Earth’s rest frame on an annual time scale, and this modulation propagates into $g(v_{\rm min})$ [@drukier_detecting_1986]. We have fit the fully modulated $g(v_{\rm min})$ to a sinusoidal variation with a constant offset, $g(v_{\rm min})(t) = A + B \cos(2 \pi (t - t_p)/365 \, {\rm d})$. The constant term ($A$) was discussed in the previous section; we now discuss the modulation amplitude ($B$, top right of Figure \[fig:signal\]), the modulation fraction ($B/A$, bottom left of Figure \[fig:signal\]), and the peak day ($t_p$, bottom right of Figure \[fig:signal\]).
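A fit of this form reduces to linear least squares on a $\{1, \cos\omega t, \sin\omega t\}$ basis, from which $B$ and $t_p$ follow. A sketch of one way to do it (the synthetic data at the end are only a self-check, not simulation output):

```python
import numpy as np

def fit_modulation(t_days, g_vals):
    """Fit g(t) = A + B*cos(2*pi*(t - t_p)/365) by linear least squares
    on a cos/sin basis, then recover amplitude B and peak day t_p."""
    omega = 2.0 * np.pi / 365.0
    M = np.column_stack([np.ones_like(t_days),
                         np.cos(omega * t_days),
                         np.sin(omega * t_days)])
    A, c, s = np.linalg.lstsq(M, g_vals, rcond=None)[0]
    B = np.hypot(c, s)                        # c = B cos(w t_p), s = B sin(w t_p)
    t_p = (np.arctan2(s, c) / omega) % 365.0  # peak day in [0, 365)
    return A, B, t_p

# self-check on synthetic data with known parameters
t = np.linspace(0.0, 365.0, 200)
g = 3.0 + 0.5 * np.cos(2.0 * np.pi * (t - 152.0) / 365.0)
A, B, t_p = fit_modulation(t, g)
```

The linearized fit avoids the degeneracies of a direct nonlinear fit in $(A, B, t_p)$, and $B \geq 0$ by construction, with phase flips appearing as jumps of $t_p$ by half a year.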
The amplitude of the modulated component ($B$) decreases with increasing ${\ifmmode v_{\rm min} \else $v_{\rm min}$\fi}$, which is simply a reflection of the decreasing average scattering rate. As discussed in more detail below, the phase of the modulation flips by 180 degrees at ${\ifmmode v_{\rm min} \else $v_{\rm min}$\fi}\approx 175 {\ifmmode \,{\rm km\,s^{-1}}\else km$\,$s$^{-1}$\fi}$, and this results in a null in the modulation amplitude. Note that even the dark disk component by itself (red line in Figure \[fig:signal\]) exhibits a small amount of modulation, including its own null. This is a result of the small lag between the dark disk and the stellar disk and the Sun’s peculiar motion. If the dark disk were truly perfectly co-moving with the Sun, then Earth’s orbital motion around the Sun would not result in any annual modulation. Of course in an actual experiment there is no way to distinguish whether a given scattering event is from a dark disk particle or from the background halo.
Compared to ErisDark, the presence of the dark disk suppresses the modulation by several tens of percent at $v_{\rm min} < 400$ km$\,$s$^{-1}$, except around 175 km$\,$s$^{-1}$, where the slight shift in the location of the modulation null leads to a small positive peak in the fractional difference. At $v_{\rm min} > 400$ km$\,$s$^{-1}$ the modulation amplitude in Eris begins to exceed that of ErisDark, and again the difference quickly increases to factors of a few. Compared to the SHM, the situation is reversed: below $400$ km$\,$s$^{-1}$ the modulation amplitude in Eris is mostly greater than in the SHM, but at higher speeds it drops below. Similar trends can be seen in the modulation fraction ($B/A$), except that here the Eris curve remains predominantly below the ErisDark curve and above the SHM curve over the entire range of $v_{\rm min}$. The inclusion of dissipational baryonic physics tends to decrease the modulation fraction at all speeds, but compared to the SHM the Eris (and ErisDark) simulation actually exhibits an enhanced modulation fraction.
The peak day of the modulation flips from occurring in the Northern summer (near June 1) at $v_{\rm min} \gtrsim 175$ km$\,$s$^{-1}$ to the winter (near December 1) at lower $v_{\rm min}$. This well-understood effect [@lewis_phase_2004; @purcell_dark_2012] is a consequence of the shift of $f(v)$ to lower speeds from summer, when the relative motion between the Earth and the DM halo is maximized, to winter, when it is minimized. Below some speed $f(v;{\rm winter})$ exceeds $f(v;{\rm summer})$, and vice versa at higher speeds. Since $g(v_{\rm min})$ is defined as an integral of $f(v)/v$ from $v_{\rm min}$ to infinity (see Eq. \[eq:gvmin\]), this implies that there exists some speed, slightly below the peak of $f(v)$ ($0.89 \, v_{\rm peak}$ for a Maxwellian), for which $g(v_{\rm min};{\rm winter}) = g(v_{\rm min};{\rm summer})$. Below this speed the modulation peaks in the winter, above it in the summer, i.e. the phase of the annual modulation flips. The inset in the bottom right panel of Figure \[fig:signal\] shows that this transition is shifted by a few km$\,$s$^{-1}$ to higher $v_{\rm min}$ in Eris compared to ErisDark, but occurs at about 20 km$\,$s$^{-1}$ less than in the SHM. Note also that the transition in the simulations is less abrupt than in the Maxwellian model.
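A toy version of this argument locates the phase-flip speed numerically. Here, purely for illustration, we crudely model the summer and winter distributions as Maxwellians whose peaks differ by $\sim 30$ km$\,$s$^{-1}$ (our assumption; the proper treatment boosts the full $f(v)$, which is why the toy crossing speed need not match the $0.89\,v_{\rm peak}$ result quoted above):

```python
import numpy as np

V_GRID = np.linspace(1.0, 900.0, 6000)   # km/s

def _trapz(y, x):
    # simple trapezoid rule (avoids NumPy version differences)
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def g_maxwellian(v_min, v_peak):
    """g(v_min) for a Maxwellian f(v) (including the 4*pi*v^2 factor)."""
    f = V_GRID**2 * np.exp(-(V_GRID / v_peak)**2)
    f /= _trapz(f, V_GRID)
    m = V_GRID >= v_min
    return _trapz(f[m] / V_GRID[m], V_GRID[m])

# toy seasonal shift: peak-shifted Maxwellians standing in for the
# boosted summer/winter speed distributions
vp_summer, vp_winter = 235.0, 205.0
vmins = np.linspace(10.0, 400.0, 400)
diff = np.array([g_maxwellian(vm, vp_summer) - g_maxwellian(vm, vp_winter)
                 for vm in vmins])
flip = vmins[np.argmin(np.abs(diff))]   # v_min where the modulation phase flips
```

Below the crossing, $g$ is larger in winter (the modulation peaks near December 1); above it, summer dominates, exactly the sign flip described in the text.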
The exact day on which the peak of the annual modulation occurs depends on the detailed shape of $f(v)$, and it has been shown [@kuhlen_dark_2010; @purcell_dark_2012] that DM velocity substructure can occasionally lead to marked changes (tens of days) in $t_p$, especially at large $v_{\rm min}$. The addition of baryonic physics, however, seems to have a more moderate effect on $t_p$. Except at the modulation null, $t_p$ does not change by more than three days between Eris, ErisDark, and the SHM model.\
Summary {#sec:summary}
=======
We have analyzed the local (6–10 kpc) DM distribution in Eris, one of the highest resolution N-body+SPH hydrodynamics simulations to date of the formation of a Milky-Way-like galaxy in a cosmological context. The simulated disk galaxy matches many observational constraints on the structure of the Milky Way, such as having an extended rotationally supported stellar disk, a gently falling rotation curve at 8 kpc, falling on the Tully-Fisher relation, having a stellar-to-total mass ratio of 0.04, a star formation rate of 1.1 M$_\odot$ yr$^{-1}$, a low bulge-to-disk ratio of 0.35, and even a hot halo with a pulsar dispersion measure in excellent agreement with measurements towards the Magellanic Cloud [for more details, see @guedes_forming_2011]. Eris is the most realistic such simulation available today.
The focus of our study has been to assess the influence of dissipational baryonic physics on the DM density and velocity distribution at the location of the Sun, and its implications for Earth-bound DM direct detection experiments. To this end we have also analyzed the ErisDark simulation, a DM-only counterpart to Eris, using the same initial conditions except that all matter is treated as collisionless DM. Direct comparisons between Eris and ErisDark allow us to isolate the effects of the baryonic physics. We have also compared Eris to the Standard Halo Model (Maxwellian with ${\ifmmode v_{\rm peak} \else $v_{\rm peak}$\fi}= 220 {\ifmmode \,{\rm km\,s^{-1}}\else km$\,$s$^{-1}$\fi}$, $v_{\rm esc} = 550 {\ifmmode \,{\rm km\,s^{-1}}\else km$\,$s$^{-1}$\fi}$), in order to highlight changes relative to this simplified model, which is still commonly used in the direct detection literature.
The main results of our study are summarized as follows:
- The local DM density at 8 kpc in the disk plane in Eris is 0.42 GeV cm$^{-3}$, about 34% higher than the Eris spherical average and 31% higher than the ErisDark spherical average. This indicates that the dissipational baryonic physics in Eris has led to a contraction of the dark matter halo, and that this contraction is most pronounced in the disk plane.
- In our disk region-of-interest (ROI, an annulus centered at 8 kpc with width and height equal to 4 kpc) the distributions of radial and azimuthal velocity components are broadened in Eris compared to ErisDark, and only slightly narrower in the vertical component. As a result, the speed (velocity modulus) distribution is also broadened and shifted to higher speeds. This reflects the deeper potential well created by the dissipation of the baryons, which have sunk to the center of the halo. Nevertheless, the peak of the speed distribution in Eris occurs at only ${\ifmmode v_{\rm peak} \else $v_{\rm peak}$\fi}= 195 {\ifmmode \,{\rm km\,s^{-1}}\else km$\,$s$^{-1}$\fi}$, considerably below that of the SHM (${\ifmmode v_{\rm peak} \else $v_{\rm peak}$\fi}= 220 {\ifmmode \,{\rm km\,s^{-1}}\else km$\,$s$^{-1}$\fi}$).
- As observed in DM-only simulations, the speed distribution in Eris is not perfectly described by a Maxwellian shape, exhibiting a deficit at speeds below its peak and an excess at higher speeds. However, the differences to the peak-matched Maxwellian are much smaller in Eris than in ErisDark.
- Both the ErisDark and Eris $f(v)$ are well described by the empirical fitting function recently proposed by @mao_halo--halo_2013. The best-fit value of $p$ (a parameter governing how steeply the high speed tail falls) is higher in Eris (2.7) than in ErisDark (1.5). A more steeply falling $f(v)$ eases the tension between non-detections in direct detection experiments with heavy nuclei (e.g. Xenon-100) and tentative signals from experiments with lighter nuclei (e.g. CDMS-Si, CoGeNT).
- The azimuthal velocity component in Eris (but not in ErisDark) is noticeably skewed towards positive $v_\theta$, with 30% more prograde than retrograde (with respect to the stars) material. This indicates the possible presence of a “dark disk”.
- We have quantified the Eris dark disk component by following the accretion history of the 160 most massive satellites. 81% of all accreted material in the disk ROI comes from satellites with positive asymmetry parameter Asym, i.e. depositing more prograde than retrograde rotating material. We define as a “dark disk” all material deposited by satellites with high asymmetry, ${\rm Asym} > 2/3$. With this definition, the dark disk in Eris contributes 9.1% (0.034 GeV cm$^{-3}$) of the DM density in the disk ROI. Additionally applying the commonly used criterion that dark disk material lie within 50 km$\,$s$^{-1}$ of the stellar rotation speed, the dark disk contribution drops to 3.2% (0.012 GeV cm$^{-3}$).
- The dark disk contributes only about one third of the excess DM density in the Eris disk ROI over the ErisDark spherical average (0.12 GeV cm$^{-3}$), and this suggests that there are at least two distinct processes leading to an enhancement of the DM in the disk plane: one process that results in a DM component with significant net angular momentum and that is nearly co-rotating with the stellar disk, and another process that pulls DM into the disk plane without forcing it to co-rotate [see also @zemp_impact_2012].
- The time-averaged scattering rate, proportional to $g(v_{\rm min})$, exhibits only mild changes from ErisDark to Eris for most values of $v_{\rm min}$. At very low $v_{\rm min}$, the co-rotating dark disk component leads to a few percent increase in $g(v_{\rm min})$, since there are slightly more particles with low relative speeds. Bigger changes are seen at high $v_{\rm min}$, where the broadening of $f(v)$ due to the overall halo contraction leads to scattering rates that are several times higher than in ErisDark. On the other hand, compared to the SHM, the mean scattering rate is strongly reduced at high $v_{\rm min}$.
- Similar trends hold for the amplitude of the annual modulation in Eris. Compared to ErisDark, it is slightly suppressed at low $v_{\rm min}$ and strongly enhanced at high $v_{\rm min}$. Compared to the SHM, however, the modulation amplitude is suppressed, just like the non-modulating part. The sign of the effect is reversed for the modulation fraction: it is suppressed by $\sim 50\%$ with respect to ErisDark, but similarly enhanced compared to the SHM, across the whole range of $v_{\rm min}$. Lastly, the peak day of the modulation is not strongly affected by the dissipational physics, with changes typically not exceeding $\pm 3$ days at most $v_{\rm min}$. Compared to the SHM, however, the $v_{\rm min}$ corresponding to the sign flip in the modulation phase shifts by about 15 km$\,$s$^{-1}$.\
In conclusion, we have investigated in this work the effects that dissipational baryonic physics has on the local distribution of DM near the Sun. We are able to isolate these effects through a comparative analysis of two twin cosmological galaxy formation simulations with identical initial conditions, one of which (Eris) is a full hydrodynamic simulation and the other (ErisDark) a DM-only one. Since the Eris simulation results in a realistic Milky Way analog galaxy, its DM halo can be viewed as a more realistic alternative to the Maxwellian standard halo model commonly used in the analysis of direct detection experiments.
As DM direct detection experiments continue to develop and become ever more sensitive, it will be of paramount importance to properly understand and quantify the expectations provided by realistic simulations of galaxy formation. We look forward to the day when large numbers of detected DM scattering events will allow direct tests of these predictions.
Acknowledgments {#acknowledgments .unnumbered}
===============
Support for this work was provided by the NSF through grant OIA-1124453, and by NASA through grant NNX12AF87G (P.M.). J.G. was funded by the ETH Zurich Postdoctoral Fellowship and the Marie Curie Actions for People COFUND Program. The Eris simulation was carried out on NASA’s Pleiades supercomputer and the UCSC Pleiades cluster; ErisDark was performed on the UCSC Pleiades cluster.
[^1]: Note, however, that alternative interpretations of the XENON100 and CDMS II (Ge) data exist: a re-analysis of the low energy Ge events in CDMS II by @collar_maximum_2012 finds strong evidence ($5.6 \sigma$) for a population of nuclear recoil events, and @hooper_revisiting_2013 makes the point that the two nuclear recoil candidate events in the XENON100 data are more easily explained as DM scattering events rather than background leakage.
[^2]: The total baryonic density in Eris’ disk declines by about 40% when the ROI height is varied from 1 to 4 times the force resolution, where 0.490 kpc is our estimate for the scale height of Eris’ disk. On the other hand, the local DM density is insensitive to the choice of the ROI height up to $|z| < 2$ kpc: this already suggests that if a dark disk can effectively be identified, its vertical extent will be much larger than the baryonic disk’s.
[^3]: Note that we have considerably extended the vertical extent of the annulus ROI compared to the disk ROI used for the density profiles (Sec. \[sec:density\]). This is necessary in order to obtain particle numbers sufficient to determine the velocity distribution.
[^4]: Note that we include the factor of $4\pi v^2$ in our definition of $f(v)$ (such that $\int \! f(v) dv = 1$), and hence our expression has an additional factor of $v^2$ compared to @mao_halo--halo_2013.
---
abstract: 'We study mass ejection from accretion disks formed in the merger of a white dwarf with a neutron star or black hole. These disks are mostly radiatively-inefficient and support nuclear fusion reactions, with ensuing outflows and electromagnetic transients. Here we perform time-dependent, axisymmetric hydrodynamic simulations of these disks including a physical equation of state, viscous angular momentum transport, a coupled $19$-isotope nuclear network, and self-gravity. We find no detonations in any of the configurations studied. Our global models extend from the central object to radii much larger than the disk. We evolve these global models for several orbits, as well as alternate versions with an excised inner boundary to much longer times. We obtain robust outflows, with a broad velocity distribution in the range $10^2$–$10^4$ km s$^{-1}$. The outflow composition is mostly that of the initial white dwarf, with burning products mixed in at the $\lesssim 10-30\%$ level by mass, including up to $\sim 10^{-2}M_\odot$ of ${}^{56}$Ni. These heavier elements (plus ${}^{4}$He) are ejected within $\lesssim 40^\circ$ of the rotation axis, and should have higher average velocities than the lighter elements that make up the white dwarf. These results are in broad agreement with previous one- and two-dimensional studies, and point to these systems as progenitors of rapidly-rising ($\sim$ few-day) transients. If accretion onto the central BH/NS powers a relativistic jet, these events could be accompanied by high energy transients with peak luminosities $\sim 10^{47}$–$10^{50}$ erg s$^{-1}$ and peak durations of up to several minutes, possibly accounting for events like CDF-S XT2.'
author:
- |
Rodrigo Fernández$^{1}\thanks{E-mail: rafernan@ualberta.ca}$, Ben Margalit$^{2}$[^1], and Brian D. Metzger$^{3}$\
$^1$ Department of Physics, University of Alberta, Edmonton, AB T6G 2E1, Canada\
$^2$ Department of Astronomy and Theoretical Astrophysics Center, University of California, Berkeley, CA 94720, USA\
$^3$ Department of Physics and Columbia Astrophysics Laboratory, Columbia University, New York, NY 10027, USA\
bibliography:
- 'nudaf\_mnras.bib'
- 'apj-jour.bib'
- 'rodrigo.bib'
date: Submitted to MNRAS
title: 'Nuclear Dominated Accretion Flows in Two Dimensions. II. Ejecta dynamics and nucleosynthesis for CO and ONe white dwarfs'
---
\[firstpage\]
accretion, accretion disks — hydrodynamics — nuclear reactions, nucleosynthesis, abundances — stars: winds, outflows — supernovae: general — white dwarfs
Introduction
============
Over the past decade, optical transient surveys have uncovered new types of relatively rare events with properties intermediate between supernovae and classical novae (e.g., @kulkarni_2012), and which have yet to be conclusively associated with a progenitor system. Examples include Ca-rich transients [@Perets+10; @kasliwal_2012], type Iax supernovae (e.g., @foley_2013), and rapidly-evolving blue transients (e.g., @drout_2014).
Mergers of white dwarfs (WD) with neutron stars (NS) or stellar-mass black holes (BH) are expected to generate transients, but theoretical predictions about their observational signatures are not well developed yet. The key difficulty is the wide range of scales and physical processes involved in the problem, in contrast to mergers of similarly-sized objects (e.g., WD-WD, or NS-NS/BH) which are more tractable and for which observational predictions are more mature (e.g., @dan_2014 [@FM16]).
Observationally, there are at least 20 confirmed Galactic WD-NS binaries plus a few dozen more candidates [@vankerkwijk_2005]. However, only a few of these systems will merge within a Hubble time (e.g., @lorimer_2008). Merger rates are predicted to lie in the range $10^{-6}$-$10^{-4}$ per year in the Milky Way (e.g., ), with the most frequent systems expected to contain CO and ONe WDs [@toonen_2018]. At present, only one candidate WD-BH binary is known in the Galaxy [@bahramian_2017].
On the theoretical side, @fryer1999 considered WD-BH mergers as progenitors of long gamma-ray bursts. They explored the dynamics of disk formation during unstable Roche lobe overflow in circular orbits around stellar-mass BHs using Smoothed Particle Hydrodynamics (SPH) simulations with nuclear burning, and predicted the accretion power expected from the resulting disk using analytical arguments (see also @King+07). The disruption and disk formation process has also been explored by @Paschalidis+11 and @bobrick_2017 with time-dependent simulations. More extensive theoretical work exists in the context of tidal disruption of WDs on parabolic orbits around BHs [@luminet_1989; @rosswog_2009; @macleod_2016; @kawana_2018]. Thermonuclear burning due to tidal pinching of the WD and/or tidal tail intersection is commonly found, although most existing work considers massive ($\geq 100M_\odot$) BHs, with the exception of @kawana_2018, who obtain explosions with BHs of mass $10M_\odot$.
@M12 explored the evolution of the torus formed during a quasi-circular WD-NS/BH merger using a steady-state, height-integrated model. Results showed that nuclear reactions are important compared to viscous heating, and that the radiatively-inefficient nature of these disks should result in significant outflows. The importance of nuclear burning led @M12 to coin the term *Nuclear-Dominated Accretion Flows (NuDAF)* for this regime. The burning of increasingly heavier elements as they accrete deeper into the gravitational potential generates an onion-shell-like stratification of composition in radius, with non-trivial amounts of ${}^{56}$Ni production as a possible outcome. A similar analysis has recently been applied to accretion disks in X-ray binaries and supermassive black holes [@ranjan_2019].
The time-dependent evolution of height-integrated disks was carried out by @margalit_2016 [hereafter MM16] using a prescribed outflow model. A systematic parameter exploration showed that disks from CO WDs evolve in a self-similar, quasi steady-state fashion that is relatively robust to parameter variations. Outflow velocities were found to be $\sim 10^4$kms$^{-1}$, with $\sim 10^{-3}M_\odot$ of radioactive ${}^{56}$Ni produced. At very late times, these disks could in principle be a formation site of planets around the NS [@margalit_2017].
@FM12 [hereafter Paper I] performed global two-dimensional hydrodynamic simulations of the accretion disk using an ideal gas equation of state, parameterized nuclear burning, and viscous angular-momentum transport. Results showed that turbulence-aided detonations of the disk are possible during the first few orbits if the nuclear energy release is significant compared to the local gravitational potential. Non-exploding cases yielded robust quasi-steady outflows, as expected from the radiatively inefficient character of the disk. A key uncertainty in these results was the robustness of detonations when a more realistic equation of state that includes radiation pressure is taken into account. Recently, @zenati_2019 has reported two-dimensional time-dependent simulations of the disk including a physical equation of state, a coupled 19-isotope nuclear reaction network, viscous angular momentum transport, and self-gravity. Simulations are followed for a few orbits at the initial disk radius ($\sim 100$s) and robust outflows are also found, with properties consistent with previous 1D studies. Detonations are also reported.
Here we study the evolution of the disk with global two-dimensional axisymmetric simulations, focusing on the properties of the disk outflow in the case of the most common CO and ONe WDs. We improve upon Paper I by including a physical equation of state, a fully-coupled nuclear reaction network, and self-gravity. We also improve on previous work by resolving all spatial scales down to the surface of the compact object over short times (a few $100$s), and by following the outer disk for much longer times ($\sim$hr) with an excised inner boundary. Study of He-WD/NS binaries is left for future work.
The paper is structured as follows. Section 2 describes the numerical method employed and the parameter space surveyed. Section 3 presents our results. Section 4 summarizes our conclusions and discusses observational implications. The Appendices contain a description of the self-gravity implementation, the initial conditions for the disk, and a method to obtain the accretion rate at the central object.
Methods
=======
Physical Model
--------------
We consider accretion disks formed during the tidal disruption of a white dwarf by a neutron star or a black hole via unstable mass transfer. Following @M12, Paper I, and MM16, we assume that nuclear burning is not dynamical during the merger itself, and place the disk at the circularization radius [@eggleton1983] $$\label{eq:torus_radius}
R_{\rm t} = \frac{R_{\rm WD}}{(1+q)}\frac{0.6q^{2/3} + \ln (1 + q^{1/3})}{0.49 q^{2/3}}$$ where $R_{\rm WD}$ is the radius of the white dwarf before disruption, which depends on the white dwarf mass $M_{\rm WD}$ and composition (e.g., @nauenberg1972) and $q = M_{\rm WD}/M_{\rm c}$ is the mass ratio of the binary, with $M_{\rm c}$ the mass of the other compact object (neutron star or black hole). Mass transfer should be unstable for most CO and ONe WDs (e.g., @bobrick_2017).
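Equation (\[eq:torus\_radius\]) is straightforward to evaluate. A sketch (the $0.6\,M_\odot$ WD with $R_{\rm WD} \approx 8.6\times 10^8$ cm and the $1.4\,M_\odot$ NS companion in the example are illustrative values, not a model from this paper):

```python
import math

def circularization_radius(R_wd, M_wd, M_c):
    """Initial torus radius R_t from the Eggleton (1983) Roche-lobe fit,
    evaluated for mass ratio q = M_wd / M_c (same units as R_wd)."""
    q = M_wd / M_c
    num = 0.6 * q**(2.0 / 3.0) + math.log(1.0 + q**(1.0 / 3.0))
    return R_wd / (1.0 + q) * num / (0.49 * q**(2.0 / 3.0))

# e.g. a 0.6 Msun CO WD (R_wd ~ 8.6e8 cm) disrupted by a 1.4 Msun NS
R_t = circularization_radius(8.6e8, 0.6, 1.4)   # cm
```

For WD-NS mass ratios of interest, $R_{\rm t}$ is a modest multiple of $R_{\rm WD}$, and it decreases (in units of $R_{\rm WD}$) as $q$ grows.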
Outside the immediate vicinity of the central compact object, where neutrino or photodissociation losses from the disk[^2] can be important, the disk is radiatively inefficient. Evolution occurs on a viscous timescale at the initial torus radius $R_{\rm t}$ by the action of angular momentum transport processes, which include magnetic turbulence and perhaps also gravitational instabilities (MM16). While our models include self-gravity, they are axisymmetric and therefore gravitational torques are not accounted for. Also, we do not include magnetic fields, and instead parameterize angular momentum transport via a viscous shear stress. See Paper I for a more extended discussion of the validity of these approximations.
Equations and Numerical Method
------------------------------
We solve the equations of mass, momentum, energy, and chemical species conservation in axisymmetric spherical polar coordinates $(r,\theta)$, with source terms due to gravity, shear viscosity, nuclear reactions, and charged-current neutrino emission: $$\begin{aligned}
\label{eq:mass_conservation}
\frac{\partial \rho}{\partial t} + \nabla \cdot (\rho\mathbf{v}_p) & = & 0\\
\label{eq:momentum_conservation}
\frac{d \mathbf{v}_p}{d t} + \frac{1}{\rho}\nabla p & = &
-\nabla\Phi \\
\label{eq:angular_conservation}
\rho\frac{d j}{d t} & = & r\sin\theta\,(\nabla\cdot\mathbf{T})_\phi\\
\label{eq:energy_conservation}
\rho\frac{d e_{\rm int}}{d t} + p\nabla\cdot\mathbf{v}_p
& = & \frac{1}{\rho\nu}\mathbf{T}:\mathbf{T} + \rho\left(\dot{Q}_{\rm nuc} - \dot{Q}_{\rm cool}\right)\\
\label{eq:poisson}
\nabla^2\Phi & = & 4\pi G \rho + \nabla^2\Phi_{\rm c}\\
\label{eq:fuel_evolution}
\frac{\partial \mathbf X}{\partial t} & = & \mathbf{\Theta}(\rho,e_{\rm int},\mathbf{X}) + \mathbf{\Gamma}_{\rm cc}\end{aligned}$$ where $d/dt\equiv \partial/\partial t + \mathbf{v_{\rm p}}\cdot\nabla$, and $\rho$, $\mathbf{v}_{\rm p}$, $j$, $p$, $e_{\rm int}$, $\Phi$, and $\mathbf{X}$ are respectively the fluid density, poloidal velocity, specific angular momentum in the $z$-direction, total pressure, specific internal energy, gravitational potential, and mass fractions of the isotopes considered ($\sum_i X_i =1$). Explicit source terms include the viscous stress tensor for azimuthal shear, with non-vanishing components $$\begin{aligned}
\label{eq:trphi_def}
T_{r\phi} & = & \rho \nu\,\frac{r}{\sin\theta}\frac{\partial}{\partial r}\left(\frac{j}{r^2} \right)\\
T_{\theta\phi} & = & \rho \nu\,\frac{\sin\theta}{r^2}\frac{\partial}{\partial\theta}\left(\frac{j}{\sin^2\theta} \right),\end{aligned}$$ the nuclear heating rate per unit volume $\dot{Q}_{\rm nuc}$, and the neutrino cooling rate per unit volume $\dot{Q}_{\rm cool}$. In equation (\[eq:poisson\]), we separate the gravitational potential of the central object $\Phi_{\rm c}$ from that generated by the disk density field. There is also an implicit source term in the expansion of $(\mathbf{v_{\rm p}}\cdot \nabla)\mathbf{v}_{\rm p}$ in spherical coordinates (left hand side of equation \[eq:momentum\_conservation\]), which contains a centrifugal acceleration[^3] that depends on $j$: $$\mathbf{f}_c = \frac{j^2}{(r\,\sin\theta)^3}\left[\sin\theta\hat r +\cos\theta\hat\theta\right].$$
The system of equations (\[eq:mass\_conservation\])-(\[eq:fuel\_evolution\]) is closed with the Helmholtz equation of state [@timmes2000], the 19-isotope nuclear reaction network $\mathbf{\Theta}$ of @weaver1978, which provides a cost-effective description of energy generation by fusion and losses from photodissociation and thermal neutrino emission ($\dot{Q}_{\rm nuc}$), and an alpha-viscosity prescription [@shakura1973] $$\label{eq:viscosity_alpha}
\nu = \alpha \frac{p}{\rho\,\Omega_{\rm K}},$$ where $\alpha$ is a free parameter and $\Omega_{\rm K}$ is the Keplerian frequency. In addition to the neutrino losses included in the nuclear network [@itoh1996], neutrino emission via charged-current weak interactions is included (as described in @fernandez2019), adding a cooling term ($\dot{Q}_{\rm cool}$) and an extra source term for the mass fraction of neutrons and protons ($\mathbf{\Gamma}_{\rm cc}$).
We use [FLASH3]{} [@fryxell00; @dubey2009] to evolve the system of equations (\[eq:mass\_conservation\])-(\[eq:fuel\_evolution\]) with the dimensionally-split version of the Piecewise Parabolic Method (PPM, @colella84). The modifications to the code required to evolve accretion disks with a viscous shear stress are described in @FM13 and Paper I. Source terms are applied in between updates by the hydrodynamic solver (operator-split). The 19-isotope nuclear reaction network is that included in [FLASH3]{}; we use it with the MA28 sparse matrix solver and Bader-Deuflhard variable time stepping method (e.g., @timmes1999). A time step limiter $$\Delta t_{\rm burn} < 0.1\frac{e_{\rm int}}{|\dot{Q}_{\rm nuc}|}$$ is imposed for nuclear burning, in addition to the standard Courant, heating, and viscous time step restrictions.
Self-gravity is implemented using the algorithm of @MuellerSteinmetz1995, with a customized version for non-uniform spherical grids. The implementation and testing of this component are described in Appendix \[s:self\_gravity\_appendix\]. The gravitational potential generated by the central object $\Phi_{\rm c}$ is modeled as that of a pseudo-Newtonian point mass $M_{\rm c}$ with a spin-dependent event horizon [@artemova1996; @FKMQ14].
The computational domain extends from an inner radius $R_{\rm in}$ to an outer radius $R_{\rm max}$, with the grid logarithmically spaced with $64$ cells per decade in radius. The full range of polar angles $[0,\pi]$ is covered with $56$ points equispaced in $\cos\theta$. The effective resolution at the equatorial plane is $\Delta r /r \simeq 0.037\simeq \Delta\theta \simeq 2^\circ$. One model is evolved at twice the resolution in radius and angle to test convergence (§\[s:models\_evolved\]).
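The grid just described can be generated in a few lines (a sketch under our own naming conventions, not FLASH3's internal setup):

```python
import numpy as np

def make_grid(r_in, r_max, cells_per_decade=64, n_theta=56):
    """Cell faces: logarithmically spaced in radius, equispaced in cos(theta)."""
    n_r = int(round(cells_per_decade * np.log10(r_max / r_in)))
    r_faces = np.logspace(np.log10(r_in), np.log10(r_max), n_r + 1)
    mu_faces = np.linspace(1.0, -1.0, n_theta + 1)  # cos(theta), pole to pole
    theta_faces = np.arccos(mu_faces)
    return r_faces, theta_faces
```

With this spacing every radial cell has $\Delta r/r = 10^{1/64}-1 \simeq 0.037$, and the equatorial cells have $\Delta\theta \simeq 2/56$rad $\simeq 2^\circ$, reproducing the effective resolution quoted above.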
The boundary conditions are reflecting in polar angle and outflow at $r=R_{\rm max}$. At $r=R_{\rm in}$, we use a reflecting boundary condition when resolving the neutron star at the center, otherwise we set this boundary to outflow. The angular momentum is set to have a stress-free boundary condition at $r=R_{\rm in}$, except in the case when the NS is assumed to be spinning, in which case a finite stress is imposed. The boundary condition for the gravitational potential is such that it vanishes at $r\to \infty$.
Initial Conditions {#s:initial_conditions}
------------------
The initial condition is an equilibrium torus with constant entropy, angular momentum, and composition. In a realistic system, this initial state would be determined by the dynamics of Roche lobe overflow, which requires a fully three-dimensional simulation of the merger dynamics until the disk settles into a nearly axisymmetric state. In practice, the thermal time due to viscous heating in our disks is a few orbits, or $\sim 1/10$ of the viscous time, which means that on the timescales of interest, the thermodynamic state is quickly set by viscous heating and nuclear burning. For systems in which a thermonuclear runaway is expected early on (i.e., ONe WD + NS mergers), the initial conditions are more important than in the systems we consider here.
------------------ -------------- ------------- ------------- --------------------------- ----- ---------- -------------- ------------------------------- ----- ------------------- --------------- --
[Model]{} $M_{\rm wd}$ $M_{\rm c}$ $R_{\rm t}$ Mass Fractions d $\alpha$ $R_{\rm in}$ $(n_{\rm r}, n_{\rm \theta})$ BC $p_{\rm sp}/\chi$ $t_{\rm max}$
$(M_\odot)$ $(M_\odot)$ ($10^9$cm) C/O/He/Ne ($10^7$cm) (s)
[CO+NS(l)]{} 0.6 1.4 2 $0.50$/$0.50$/$0.0$/$0.0$ 1.5 0.03 2 (64,56) out 0 4,123
[CO+NS(l-hr)]{} (128,112)
[CO/He+NS(l)]{} $0.45$/$0.45$/$0.1$/$0.0$ (64,56)
[ONe+BH(l)]{} 1.2 5.0 1 $0.00$/$0.60$/$0.0$/$0.4$ 1 887
[CO+NS(s)]{} 0.6 1.4 2 $0.50$/$0.50$/$0.0$/$0.0$ 1.5 0.03 0.1 (64,56) ref 0 245
[CO/He+NS(s)]{} $0.45$/$0.45$/$0.1$/$0.0$ 215
[ONe+BH(s)]{} 1.2 5.0 1 $0.00$/$0.60$/$0.0$/$0.4$ 0.3 out 50
[CO+NS(s-vs)]{} 0.6 1.4 2 $0.50$/$0.50$/$0.0$/$0.0$ 1.5 0.10 0.1 (64,56) ref 0 110
[CO+NS(s-sp)]{} 0.03 2ms 170
[ONe+BH(s-sp)]{} 1.2 5.0 1 $0.00$/$0.60$/$0.0$/$0.4$ 0.17 out 0.8 31
\[t:models\]
------------------ -------------- ------------- ------------- --------------------------- ----- ---------- -------------- ------------------------------- ----- ------------------- --------------- --
The torus is initially constructed using only the gravity of the central object ($\Phi_{\rm c}$), for which a semi-analytic formulation is straightforward (e.g., @PP84 [@FM13]). This torus is then relaxed with self-gravity, by evolving it without any other source terms for 20 orbits at $r=R_{\rm t}$. A detailed description of this procedure is provided in Appendix \[s:initial\_condition\_appendix\].
Initial tori are described by their mass ($M_{\rm WD}$), radius of maximum density ($R_{\rm t}$, equation \[eq:torus\_radius\]), entropy (or $H/R = c_{\rm i}/\Omega_{\rm K}$ at density maximum, with $c_{\rm i}=\sqrt{p/\rho}$), angular momentum profile (constant), and composition. The orbital time at $r=R_{\rm t}$ is given by $$\label{eq:torb_def}
t_{\rm orb} \simeq 40\,\textrm{s}\,\left(\frac{R_{\rm t}}{10^{9.3}\textrm{cm}}\right)^{3/2}
\left(\frac{1.4M_\odot}{M_{\rm c}}\right)^{1/2},$$ and the viscous time at the same location is $$\label{eq:tvis_def}
t_{\rm vis} \simeq 900\,\textrm{s}\,\left(\frac{0.03}{\alpha}\right)\left(\frac{0.5}{\rm H/R} \right)^2
\left(\frac{R_{\rm t}}{10^{9.3}\textrm{cm}}\right)^{3/2} \left(\frac{1.4M_\odot}{M_{\rm c}}\right)^{1/2}.$$
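Equations (\[eq:torb\_def\]) and (\[eq:tvis\_def\]) are easy to evaluate for the models in Table \[t:models\]; a minimal sketch (function names are ours):

```python
def t_orb(rt, mc):
    """Orbital time in s at r = R_t (equation torb_def); rt in cm, mc in Msun."""
    return 40.0 * (rt / 10**9.3)**1.5 * (1.4 / mc)**0.5

def t_vis(rt, mc, alpha=0.03, h_over_r=0.5):
    """Viscous time in s at r = R_t (equation tvis_def)."""
    return (900.0 * (0.03 / alpha) * (0.5 / h_over_r)**2
            * (rt / 10**9.3)**1.5 * (1.4 / mc)**0.5)
```

For the fiducial parameters ($R_{\rm t}=2\times 10^9$cm, $M_{\rm c}=1.4M_\odot$, $\alpha=0.03$) this gives $t_{\rm orb}\simeq 40$s and $t_{\rm vis}\simeq 900$s, so the $4{,}123$s run time in Table \[t:models\] corresponds to $\sim 100$ orbits and $\sim 4.5$ viscous times.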
The torus is initially surrounded by a low-density adiabatic atmosphere with hydrogen composition. This ambient density is set to $10^{-3}$gcm$^{-3}$ inside $r=4R_{\rm t}$ for most models, and decays as $r^{-2}$ outside this radius. In some cases we add an $r^{-0.5}$ dependence of this ambient medium inside $r=4R_{\rm t}$ when numerical problems at the inner radial boundary are encountered; the ambient density remains small enough that it does not affect the dynamics of the outflow. A density floor is set at 90% of the initial ambient value. A constant pressure floor ($10^8$ergcm$^{-3}$), about an order of magnitude lower than the lowest ambient value (near the outer boundary), is used to prevent numerical problems around the torus edges at early times and near the rotation axis at the inner boundary once an evacuated funnel forms. A constant temperature floor ($3\times 10^4$K) is also used to prevent the code from reaching the lowest tabulated temperature in the Helmholtz EOS. The floors are low enough that the results do not depend on them. When computing mass ejection and accretion, the ambient matter is excluded; material coming from the disk has densities much higher than this initial ambient gas.
Models Evolved {#s:models_evolved}
--------------
All of our models are described in Table \[t:models\]. Given the large dynamic range in radius ($\gtrsim 10^3$) between the initial disk radius $R_t$ and the characteristic size of the central compact object, and the fact that our time step is limited by the Courant condition at the smallest radius in the simulation, we evolve two types of models.
The first group sets the inner boundary radius at $R_{\rm in} = 10^{-2}R_{\rm t}\sim 10^7$cm, allowing evolution to long timescales ($\sim$ few $t_{\rm vis}$) but not resolving the regions from where the fastest outflows are launched and where $^{56}$Ni is produced for a central NS. These models thus probe nucleosynthesis of intermediate-mass elements and the time-dependence of mass ejection on long timescales (as in Paper I). These models have “[(l)]{}" appended to their names, for “large inner boundary".
This group consists of our fiducial WD+NS model, [CO+NS(l)]{}, a $0.6M_\odot$ CO WD around a $1.4M_\odot$ NS with a viscosity of $\alpha=0.03$ and an initial entropy of $3k_{\rm B}$ per nucleon (cf. Appendix \[s:initial\_condition\_appendix\]). To test the effect of a small admixture of helium, we include model [CO/He+NS(l)]{}, which is identical to the fiducial case except that the initial abundances of helium, carbon, and oxygen are $10\%$, $45\%$, and $45\%$, respectively. Model [ONe+BH(l)]{} probes a more massive ONe WD ($1.2M_\odot$) around a $5M_\odot$ BH, with otherwise identical parameters. Finally, model [CO+NS(l-hr)]{} is identical to the fiducial case but with twice the resolution in radius and angle, to probe the degree of convergence of our results. All these models are evolved to $t=100t_{\rm orb}\simeq 4 t_{\rm vis}$.
The second group of models (“small inner boundary", or “[(s)]{}" for short) resolve the central compact object but are evolved for a shorter amount of time ($\sim$ several $t_{\rm orb}$). Model [CO+NS(s)]{} corresponds to the fiducial case but now with an inner reflecting radial boundary at $10$km. Likewise, model [CO/He+NS(s)]{} probes the hybrid WD while resolving the neutron star. In both cases, the neutron star is assumed to be non-rotating. Model [ONe+BH(s)]{} extends the domain of the corresponding large inner boundary BH model to a radius midway between the innermost stable circular orbit (ISCO) and horizon. The BH is assumed to be non-spinning.
We include additional models that probe parameter variations among the small inner boundary set. Model [CO+NS(s-vs)]{} increases the alpha viscosity parameter from $\alpha=0.03$ to $0.1$. Model [CO+NS(s-sp)]{} adds a spin period of $2$ms to the NS and imposes a finite viscous stress at this boundary, to probe energy release at the boundary layer. Finally, model [ONe+BH(s-sp)]{} adds a dimensionless spin of $\chi = 0.8$ to the BH, extending accretion to smaller radii (the inner boundary is again placed midway between the new ISCO and horizon radii).
For small boundary models, we first evolve the disk with a large inner boundary to save computational time in this early phase, until the disk material reaches this larger inner boundary. The result is then remapped into a grid that extends the inner boundary further inward, with the process being repeated for each additional order of magnitude that $R_{\rm in}$ decreases.
Results {#s:results}
=======
![Mass accretion rate at the central object inferred from the three large boundary models by following the procedure described in Appendix \[s:accretion\_central\]. The dashed lines show power-law fits in time.[]{data-label="f:mdot_LB"}](f1.pdf){width="\columnwidth"}
{width="\textwidth"}
Mass ejection on long timescales
--------------------------------
Over the first few orbits at $r=R_{\rm t}$, the equilibrium initial tori begin accreting to small radius while simultaneously transporting angular momentum outward. The absence of cooling during this initial stage leads to vigorous convection. This early phase of evolution is nearly identical to the quiescent models of Paper I, with quantitative details that depend on the parameters of the system. Figure \[f:mdot\_LB\] shows the accretion rate at the central object for the three large inner boundary models that remove the regions with $r < 0.01R_{\rm t}$ to allow for a longer evolution \[[CO+NS(l), CO/He+NS(l)]{}, and [ONe+BH(l)]{}\]. The accretion rate reaches a peak around 5-6 orbits at $r=R_{\rm t}$ ($\simeq 200$s for the NS models, and $\simeq 35$s for the BH model).
None of our large boundary models detonate, either during the initial viscous spreading of the equilibrium torus or at later stages. This stands in contrast to some of the results of Paper I, which used parametric nuclear reactions, an ideal equation of state, and point mass gravity. As the gas heats up, the increasing contribution of radiation pressure moderates the temperature increase at small disk radii, preventing nuclear burning from ever triggering a thermonuclear runaway (see §\[s:comparison\_1d\] for a more detailed discussion of this effect). Inclusion of self-gravity only increases the density by a factor of $\sim 2$ and moves the radius of the torus density peak inward by a few percent relative to using only point mass gravity (Appendix \[s:initial\_condition\_appendix\]). The quantitative difference in the evolution once source terms are included is minor. As a more extreme example, we evolved a test fiducial model in which the initial condition obtained with point mass gravity is not relaxed for self-gravity. While stronger nuclear burning is obtained in some regions of the disk due to radial oscillations, a detonation is not obtained within $2$ orbits.
The onset of convection is also accompanied by outflows from the disk, which continue until the end of all simulations. We consider matter to be unbound from the disk when its Bernoulli parameter $$b = \frac{1}{2}\left[v_r^2+v_\theta^2+\frac{j^2}{(r\sin\theta)^2}\right]
+ e_{\rm int} + \frac{p}{\rho} + \Phi$$ is positive. This criterion considers the conversion of thermal energy into kinetic energy by adiabatic expansion, and is useful when measuring the outflow at radii not much larger than the disk. The unbound mass outflow rate at a radius[^4] $r_{\rm out}=30R_{\rm t}$ is shown in Figure \[f:evolution\_LB\] for the three large inner boundary models. Peak outflow is reached around 15 orbits at $r=R_{\rm t}$, with a subsequent decay with time (after $t\simeq t_{\rm vis}$) that follows an approximate power-law.
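In code form, the unbound criterion $b>0$ amounts to the following (a NumPy sketch with our own variable names; $j/(r\sin\theta)$ is the azimuthal velocity):

```python
import numpy as np

def bernoulli(v_r, v_th, j, r, theta, e_int, p, rho, phi):
    """Bernoulli parameter b per unit mass; matter with b > 0 is unbound."""
    v_phi_sq = (j / (r * np.sin(theta)))**2
    return 0.5 * (v_r**2 + v_th**2 + v_phi_sq) + e_int + p / rho + phi
```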
By the time we stop our large inner boundary models ($100$ orbits at $r=R_{\rm t}$, see Table \[t:models\] for values in s), mass ejection is not yet complete. Nevertheless, given the power-law dependence with time of the outflow rate, we can estimate the final ejecta mass by extrapolating forward in time assuming that the same power law continues without changes (see also MM16). If this assumption holds, the extrapolation is a *lower limit* on the total ejecta mass, because it does not include the contribution from $r < 0.01R_{\rm t}$, which is quite significant when a NS sits at the center (§\[s:comparison\_sb\_lb\]). If the mass outflow rate at some radius is $\dot{M}_{\rm out}\propto t^{-\delta}$ ($\delta>0$), then we can write for $t > t_0$ $$\label{eq:mass_ejection_power-law}
M_{\rm ej}(t) = M_{\rm ej,0} + \frac{1}{\delta-1}\dot{M}_{\rm out,0}t_0
\left[1 - \left(\frac{t}{t_0} \right)^{1-\delta}\right]$$ with $M_{\rm ej,0}$ and $\dot{M}_{\rm out,0}$ the ejected mass and outflow rate at $t=t_0\sim t_{\rm vis}$, after which the power-law dependence holds. For a finite value at $t\to \infty$, we need $\delta > 1$. Figure \[f:evolution\_LB\] shows $M_{\rm ej}$ as a function of time. The resulting exponents are $\delta \simeq \{1.3,1.2,1.8\}$ for models [CO+NS(l)]{}, [CO/He+NS(l)]{}, and [ONe+BH(l)]{}, respectively, leading to a finite asymptotic value in all three cases. All mass ejection results are shown in Table \[t:results\]. The asymptotic ejecta masses for the large boundary models are $60-70\%$ of the initial WD mass before including the contribution from $r < 0.01R_{\rm t}$. Over the short timescales that small boundary models run, they eject about twice as much mass as the large boundary models when measured at the same radius and time (§\[s:small\_bnd\_evolution\]). A simple scaling of the asymptotic ejecta by this factor would exceed the initial WD mass, which indicates that (1) most of the WD mass is indeed likely to be ejected, but that (2) the time exponents of the outflow are also likely to change with time and/or differ from those derived from the large boundary models. Upper limits to the ejected mass can be obtained by subtracting the asymptotic accreted mass at the compact object (Figure \[f:mdot\_LB\]) from the WD mass. These accreted masses are $\simeq 3\times 10^{-3}M_\odot$ for the WD+NS models, and $0.12M_\odot$ for the ONe+BH model.
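The extrapolation in equation (\[eq:mass\_ejection\_power-law\]) is straightforward to implement; a minimal version (our own function names):

```python
def m_ej(t, t0, m_ej0, mdot0, delta):
    """Cumulative ejecta mass for Mdot_out ∝ t**(-delta) at t > t0
    (equation mass_ejection_power-law)."""
    return m_ej0 + mdot0 * t0 / (delta - 1.0) * (1.0 - (t / t0)**(1.0 - delta))

def m_ej_asymptotic(t0, m_ej0, mdot0, delta):
    """Finite t -> infinity limit; only defined for delta > 1."""
    if delta <= 1.0:
        raise ValueError("ejecta mass diverges for delta <= 1")
    return m_ej0 + mdot0 * t0 / (delta - 1.0)
```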
Figure \[f:evolution\_LB\] also shows how the cumulative ejecta is distributed in specific energy at the radius $r_{\rm out} = 30R_{\rm t}$ where we sample the outflow. In all three large boundary models, the highest kinetic energies achieved correspond approximately to the gravitational potential energy at the initial torus radius $R_{\rm t}$. The resulting maximum velocities are $\sim 6,000$kms$^{-1}$ for models [CO+NS(l)]{} and [CO/He+NS(l)]{}, and $\sim 15,000$kms$^{-1}$ for [ONe+BH(l)]{}.
The bulk of the ejecta has not yet reached homology at this radius, as indicated by the significant internal energy component. Nonetheless, most of the ejecta with $b>0$ has more than sufficient energy to escape the gravitational field of the system. For the NS models, only a fraction $3-5$% by mass has negative specific energy but positive Bernoulli parameter at $r_{\rm out}=30R_{\rm t}$, while for the BH model this fraction is $10\%$ (shown as a shaded area in Figure \[f:evolution\_LB\], representing marginally bound ejecta). The ratio of total internal energy to kinetic energy in Figure \[f:evolution\_LB\] is $E_{\rm i}/E_{\rm k}\simeq 0.15$ for the NS models and $0.2$ for the BH model, while the total internal energy is very close to the gravitational energy, $E_{\rm i}\simeq E_{\rm g}$, in all cases. Assuming that all of the internal energy is converted to kinetic energy upon adiabatic expansion, the root-mean-square velocity would increase by a factor $\sqrt{1+E_{\rm i}/E_{\rm k}}\lesssim 1.1$. In practice, this is an upper limit, since some of the internal energy will be used to escape the gravitational potential. Therefore the kinetic energy distributions of Figure \[f:evolution\_LB\] are close to their values in homology.
The velocity distribution of the ejecta is broad, as shown in Figure \[f:evolution\_LB\], spanning about two orders of magnitude in radial velocity. Note that this distribution is incomplete, however, as including the region close to the compact object will add even faster outflows (§\[s:small\_bnd\_evolution\]). The angular distributions at the end of the simulations are strongly peaked toward the poles, with an excess of about two orders of magnitude relative to the equatorial direction.
------------------ --------------------------- --------------------- -------------------------- --------------------------- ------------ ------------ ------------- ------------- ------------- ------------ ------------- ------------- ------
[Model]{} $M_{\rm ej}(t_{\rm max})$ $M^\infty_{\rm ej}$ $t_{\rm cmp}$ $M_{\rm ej}(t_{\rm cmp})$
(s) @$R_{\rm t}$ $(M_\odot)$ ${}^{12}$C ${}^{16}$O ${}^{4}$He ${}^{20}$Ne ${}^{24}$Mg ${}^{28}$Si ${}^{32}$S ${}^{40}$Ca ${}^{56}$Ni
[CO+NS(l)]{} 0.18 0.44 170 3.4E-3 0.50 0.50 5E-8 4E-3 1E-3 3E-4 6E-6 2E-9 ...
[CO+NS(l-hr)]{} 0.21 0.42 3.5E-3 0.49 0.50 6E-8 6E-3 2E-3 4E-4 6E-6 ... ...
[CO/He+NS(l)]{} 0.20 0.49 7.2E-3 0.45 0.21 2E-2 0.23 8E-2 7E-3 3E-6 ... ...
[ONe+BH(l)]{} 0.59 0.72 30 1.2E-3 7E-5 0.60 3E-7 0.16 5E-2 0.11 5E-2 9E-3 3E-3
[CO+NS(s)]{} 8.7E-3 ... 170 8.4E-3 0.34 0.45 4E-2 2E-2 3E-2 6E-2 2E-2 9E-3 2E-2
[CO/He+NS(s)]{} 4.4E-3 ... 170 1.0E-2 0.38 0.20 4E-2 0.21 9E-2 4E-2 8E-3 5E-3 1E-2
[ONe+BH(s)]{} 7.6E-4 ... 30 2.2E-3 8E-5 0.56 8E-4 0.10 5E-2 0.16 7E-2 2E-2 3E-2
[CO+NS(s-vs)]{} 1.5E-2 ... 110 7.3E-2 0.40 0.46 3E-2 1E-2 2E-2 3E-2 1E-2 9E-3 2E-2
[CO+NS(s-sp)]{} 2.3E-3 ... 170 8.3E-3 0.33 0.44 7E-2 3E-2 3E-2 5E-2 1E-2 9E-3 2E-2
[ONe+BH(s-sp)]{} 9.7E-6 ... 30 2.4E-3 9E-5 0.46 3E-2 0.11 3E-2 0.15 8E-2 3E-2 9E-2
\[t:results\]
------------------ --------------------------- --------------------- -------------------------- --------------------------- ------------ ------------ ------------- ------------- ------------- ------------ ------------- ------------- ------
Table \[t:results\] shows that doubling the resolution in radius and angle enhances mass ejection by $\sim 10\%$, which is consistent with other long-term hydrodynamic disk studies carried out at similar resolution (e.g., @FM13).
Time-average behavior {#s:time-average}
---------------------
We average our results in time to remove the stochastic component of the flow, facilitating structural analysis and comparison with previous one-dimensional work. We denote by angle brackets the time and angle average of a quantity *per unit volume* $A(r,\theta,t)$, $$\langle A\rangle (r) = \frac{1}{(t_{\rm f}-t_{\rm i}) (\cos\theta_{\rm f}-\cos\theta_{\rm i})}
\int_{t_{\rm i}}^{t_{\rm f}}\int_{\cos\theta_{\rm i}}^{\cos\theta_{\rm f}} A\,dt\,d\cos\theta,$$ where $[t_{\rm i},t_{\rm f}]$ and $[\theta_{\rm i},\theta_{\rm f}]$ are the time and polar-angle interval considered in the average. For quantities per unit mass $\tilde A = A/\rho$, we compute the average as $\langle \rho \tilde A\rangle / \langle \rho\rangle$. For example, the average of the Bernoulli parameter is computed as $$\begin{aligned}
\label{eq:bernoulli_average}
\langle b\rangle & = & \frac{1}{\langle\rho\rangle^2}\frac{1}{2}\left[\langle \rho v_r\rangle^2
+ \langle \rho v_\theta\rangle^2 + \frac{\langle \rho j\rangle^2}{(r\sin\theta)^2}\right]\nonumber\\
&& + \frac{1}{\langle\rho\rangle}\left[\langle \rho e_{\rm int}\rangle +\langle p\rangle + \langle\rho\Phi\rangle\right],\end{aligned}$$ which we normalize with a local “Keplerian" speed $$\label{eq:vk_definition}
\langle v_K^2 \rangle = -\langle \rho\Phi\rangle / \langle\rho\rangle,$$ which is simply the last term in equation (\[eq:bernoulli\_average\]). Likewise, the root-mean-square fluctuation of a quantity per unit mass is computed as $$\textrm{r.m.s.}\, \left(\tilde A\right) \equiv \left[\frac{\langle \rho \tilde A^2\rangle}{\langle\rho\rangle} -
\frac{\langle \rho \tilde A\rangle^2}{\langle\rho\rangle^2}\right]^{1/2}.$$
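These averages translate directly into code; the sketch below assumes 2D arrays indexed by (time snapshot, polar cell) with integration weights `dt` and `dmu` $=\Delta\cos\theta$, and all names are ours:

```python
import numpy as np

def avg(a, dt, dmu):
    """Time- and angle-average of a per-unit-volume quantity a[t, mu]."""
    w = dt[:, None] * dmu[None, :]           # integration weights
    return np.sum(a * w) / np.sum(w)

def avg_per_mass(rho, a_tilde, dt, dmu):
    """Density-weighted average <rho a~>/<rho> for a per-unit-mass quantity."""
    return avg(rho * a_tilde, dt, dmu) / avg(rho, dt, dmu)

def rms_fluctuation(rho, a_tilde, dt, dmu):
    """Square root of the density-weighted variance of a~."""
    mean = avg_per_mass(rho, a_tilde, dt, dmu)
    mean_sq = avg_per_mass(rho, a_tilde**2, dt, dmu)
    return np.sqrt(max(mean_sq - mean**2, 0.0))
```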
![Time- and angle-averaged profiles of structural quantities for models [CO+NS(l)]{} and [CO/He+NS(l)]{} as a function of radius. The average is taken within $\pm 30^\circ$ of the equatorial plane, and within $\pm 60$s (1.5 orbits at $r=R_{\rm t}$) of the time at which peak accretion is reached ($t\simeq 200$s $\simeq$ 5 orbits). Quantities are defined as $\rho_5 = \rho/(10^5\,\textrm{g\,cm}^{-3})$, $T_9 = T/(10^9\,\textrm{K})$, $\dot{Q}_{\rm nuc,19} = \dot{Q}_{\rm nuc}/(10^{19}\,\textrm{erg\,[g\,s]}^{-1})$, $\dot{Q}_{\rm vis,19} = \dot{Q}_{\rm vis}/(10^{19}\,\textrm{erg\,[g\,s]}^{-1})$, and $\dot{M}_{-3} = \dot{M}/(10^{-3}\,M_\odot\textrm{s}^{-1})$. The average Keplerian speed $v_{\rm K}$ is given by equation (\[eq:vk\_definition\]), and the shaded areas in the middle panel bracket root-mean-square fluctuations (values for $\dot{Q}_{\rm vis}$ in model [CO+NS(l)]{} are not shown, for clarity).[]{data-label="f:timeave_profiles_LB"}](f3.pdf){width="\columnwidth"}
![Same as Figure \[f:timeave\_profiles\_LB\], but now comparing model [CO+NS(l)]{} with its high-resolution counterpart, [CO+NS(l-hr).]{} The structural profiles of the disk are essentially converged with resolution. The time-averaged profiles in the high-resolution model show more fluctuation because the data outputs were made at larger intervals in time, hence fewer snapshots are involved in the average for the same time period. []{data-label="f:timeave_profiles_resolution"}](f4.pdf){width="\columnwidth"}
Figure \[f:timeave\_profiles\_LB\] shows the average radial profiles of various quantities for the large inner boundary models [CO+NS(l)]{} and [CO/He+NS(l)]{}, with the average taken within $30$deg of the equatorial plane and within $3$ orbits[^5] at $r=R_{\rm t}$ from the time of peak accretion at the inner boundary $r=0.01R_{\rm t}$ ($206\pm 62$s). The inner and outer portions of the disk which are respectively accreting and expanding are separated by the radius at which the accretion rate $\dot{M}=0$, and this radius moves outward in time. The temperature and density profiles vary slowly with radius in both models, with a slight decrease in the density profile for the hybrid WD model due to enhanced nuclear heating from He-burning reactions.
{width="\textwidth"}
In both models, the mean viscous heating dominates at all radii, except in the region where most of the He is burned in model [CO/He+NS]{}, where the mean nuclear heating rate is at most comparable to the average viscous heating. This additional nuclear heating is associated with an enhancement of $\sim 10\%$ in the ejected mass in this hybrid model (Table \[t:results\]). While the fluctuations in the viscous heating term remain small over the entirety of the disk, nuclear burning becomes highly stochastic inside radii where heavier elements start to be produced. In the case of the hybrid model, these fluctuations can exceed the average viscous heating over a narrow range of radii, while for the fiducial model nuclear burning never dominates (the steep decrease of the viscous heating at small radii is an artifact of the boundary condition in the models shown in Figure \[f:timeave\_profiles\_LB\]). The relative weakness of nuclear burning helps explain why a thermonuclear runaway never takes place in our models.
Figure \[f:timeave\_profiles\_LB\] also shows the profile of averaged Bernoulli parameter on the disk equatorial plane. This quantity adjusts to negative values close to zero at small radii. While the average Bernoulli parameter can be slightly positive near the inner boundary, this is a consequence of vertical alternations in sign in regions from which the outflow is launched. These average profiles of Bernoulli parameter are in broad agreement with the assumptions of @M12 and MM16.
Figure \[f:timeave\_profiles\_resolution\] compares average radial profiles in the fiducial WD+NS model and a version at twice the resolution in radius and angle. The profiles of all quantities are in excellent agreement except for nuclear burning around $r=R_{\rm t}$, which is slightly higher in the high-resolution model (but still sub-dominant relative to viscous heating). Table \[t:results\] shows that the overall mass ejection is higher by about $10\%$ in the high-resolution model. While higher spatial resolution allows a better characterization of convective turbulence in the disk, the modest increase in mass ejection indicates that this convective activity is a sub-dominant factor in determining mass ejection compared to other processes such as viscous heating and angular momentum transport.
Evolution near the central object {#s:small_bnd_evolution}
---------------------------------
![Time- and angle-averaged mass fractions of various species, as labeled, for small inner boundary models [CO+NS(s)]{} (top), [CO/He+NS(s)]{} (middle), and [ONe+BH(s)]{} (bottom), at $t=113\pm 3$s ($2.8\pm 0.2$ orbits at $r=R_{\rm t}$) for the NS models and $t=39\pm 1.8$s ($4.4\pm 0.2$ orbits at $r=R_{\rm t}$) for the BH model. The gray shaded area marks the inner boundary of the BH model, midway between the horizon and the ISCO.[]{data-label="f:abund_profiles_SB"}](f6.pdf){width="\columnwidth"}
A key property of disks formed in WD-NS/BH mergers is that nuclear fusion reactions of increasingly heavier elements take place as material accretes to smaller radii with higher temperatures and densities [@M12]. Our small inner boundary models can resolve this phenomenon in its entirety, at the expense of evolving for a short amount of time relative to $t_{\rm vis}$ given the more restrictive Courant condition at smaller radii.
None of our small-boundary models undergoes a detonation. Because the gravitational potential is deeper than in the large boundary models, nuclear energy release at these radii is less dynamically important, so this outcome is to be expected given that detonations did not already occur at larger radii.
Figure \[f:timeave\_abund\_snapshots\] shows the spatial distribution of various species in our fiducial WD+NS model that resolves the compact object \[[CO+NS(s)]{}\]. Turbulence is associated with convection driven mostly by viscous heating but also by the nuclear energy released in fusion reactions. Species are launched from the same radii of the disk in which they are produced, with only moderate radial mixing. This stratification of mass ejection into different species becomes evident when taking a time-average of the flow (also shown in Figure \[f:timeave\_abund\_snapshots\]), yielding a characteristic onion-shell-like structure as envisioned by @M12.
Time-averaged radial profiles of different abundances in the disk are shown in Figure \[f:abund\_profiles\_SB\] for the three baseline small boundary models. During accretion, elements that initially made up the WD are fused into heavier ones from the outside-in. The hybrid CO-He WD model shows a larger fraction of intermediate mass elements at larger radius than the fiducial CO WD, while the BH model completes all nucleosynthesis at larger radii due to the higher disk temperatures.
Given that we resolve the compact object, we are also able to resolve the radius inside which heavy elements undergo photodissociation into ${}^{4}$He nuclei and nucleons. In the vicinity of the NS surface, the composition is almost entirely neutrons and protons at the times shown. The BH model shows an increase in the heavy element abundance as the inner boundary is approached. This phenomenon is associated with a decreasing entropy given the net energy losses from nuclear reactions.
While the outflow composition is well stratified on spatial scales comparable to the disk thickness, as shown by Figure \[f:timeave\_abund\_snapshots\], significant mixing of the ejecta occurs as it expands outward, to the point where individual species are not distinguishable on scales comparable to the initial circularization radius $R_{\rm t}\sim 10^9$cm. Note also that ejection of fusion products is confined to a narrow cone in angle $\lesssim 40$deg from the rotation axis (Figure \[f:hist\_ang\_def\_LB-SB\]), which persists out to very large radii (Figure \[f:timeave\_abund\_snapshots\]). Figure \[f:hist\_ang\_def\_LB-SB\] suggests that nucleosynthesis products produced at deeper radii have narrower angular distributions around the rotation axis.
![Mass histogram of unbound ejecta beyond $r=R_{\rm t}$ by $t=200$s as a function of polar angle, for the default WD+NS model with small inner boundary \[[CO+NS(s)]{}, solid lines\] and large inner boundary \[[CO+NS(l)]{}, shaded areas\]. The black lines and gray shaded areas show total mass, while red and blue correspond to ${}^{24}$Mg and ${}^{56}$Ni only (the large boundary model does not make any ${}^{56}$Ni).[]{data-label="f:hist_ang_def_LB-SB"}](f7.pdf){width="\columnwidth"}
To characterize the composition of outflows, we compute an average ejecta mass fraction for species $i$ as $$\label{eq:average_abundance}
\bar X_i(r_{\rm out},t_{\rm cmp}) = \frac{\int dt\int d\Omega\, \rho v_r X_i}{\int dt \int d\Omega\, \rho v_r}$$ where the time integrals are carried out from the beginning of the simulation out to some fiducial comparison time $t_{\rm cmp}$, and the angular integral covers all polar angles. Table \[t:results\] shows the mass ejected and abundances for all models, measured at $r_{\rm out} = R_{\rm t}$ and by a time that allows comparison of models with different durations ($170$s for most NS models, and $30$s for the BH models). The outflow from the fiducial small-boundary CO WD is dominated by ${}^{12}$C and ${}^{16}$O at a combined $\sim 80\%$ by mass, with each nucleosynthesis product contributing at the few-percent level by mass. For the hybrid CO-He WD, the original WD elements are preserved at a combined $\sim 60\%$ by mass, with ${}^{20}$Ne and ${}^{24}$Mg being significant secondary contributions at $21\%$ and $9\%$, respectively.
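The flux-weighted average in equation (\[eq:average\_abundance\]) is straightforward to discretize. The following minimal sketch is illustrative only; the array shapes, names, and weights are assumptions, not the post-processing code actually used for Table \[t:results\].

```python
import numpy as np

def average_abundance(rho, v_r, X_i, dOmega, dt):
    """Discrete analog of the mass-flux-weighted mean mass fraction.

    rho, v_r, X_i: arrays of shape (n_time, n_angle) sampled at r_out;
    dt, dOmega: time-step and solid-angle weights. All names illustrative.
    """
    w = rho * v_r * dt[:, None] * dOmega[None, :]   # mass-flux weights
    return np.sum(w * X_i) / np.sum(w)

# sanity check: a uniform composition is recovered regardless of the flow
rng = np.random.default_rng(0)
rho = 1.0 + rng.random((4, 8))
v_r = rng.random((4, 8))
X = np.full((4, 8), 0.3)
dOmega = np.full(8, 4 * np.pi / 8)
dt = np.full(4, 1.0)
print(average_abundance(rho, v_r, X, dOmega, dt))   # ≈ 0.3
```

Because the weighting is by mass flux $\rho v_r$, fast low-density material and slow dense material contribute in proportion to the mass they actually carry through $r_{\rm out}$.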
In the same way as with the large boundary models, the admixture of He in the fiducial CO WD results in more energetic nuclear burning and enhanced mass ejection. Table \[t:results\] shows that when integrated out to the same time, the total unbound mass ejection within $r=R_{\rm t}$ is higher by $\sim 10\%$ in model [CO/He+NS(s)]{} than in [CO+NS(s)]{}.
The outflow from the ONe WD + BH model is qualitatively different from the fiducial CO + NS case. The ejected mass is higher given the larger disk mass and a similar overall fraction ejected. Regarding composition, the initial WD material is preserved at a combined mass fraction of $66\%$, with ${}^{28}$Si being the dominant nucleosynthetic product at $16\%$ by mass. While other products have abundances at a few $\%$ level by mass, a key property of this combination is the small amounts of ${}^{12}$C and ${}^{4}$He in the ejecta, at less than $0.1\%$ by mass.
In the fiducial and hybrid small boundary WD+NS models, the mass fraction of ${}^{56}$Ni in the ejecta is $\sim 2\%$ at $t=170$s. If we assume that this fraction remains constant in all the ejecta and that the total ejected mass is at least that estimated for the large boundary models ($0.4M_\odot$, which is a lower limit), we obtain a characteristic ${}^{56}$Ni yield in the range $10^{-3}-10^{-2}M_\odot$. The non-spinning BH model with small boundary makes a larger fraction of ${}^{56}$Ni which suggests a yield $\gtrsim 10^{-2}M_\odot$, given the larger WD mass and asymptotic ejected fraction. These estimates are optimistic, given the fact that burning fronts recede with time as the disk density decreases (MM16), implying that ${}^{56}$Ni production will eventually stop. Most of the mass is ejected during peak accretion (Figure \[f:mout\_def\_LB-SB\]), however, and the ${}^{56}$Ni fraction should remain approximately constant during this period. We thus expect that the late-time recession of the burning fronts will introduce corrections of order unity to the final ${}^{56}$Ni yield. Our range of ejected ${}^{56}$Ni is in agreement with previous estimates (@M12; MM16; @zenati_2019).
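The yield estimate above is simple arithmetic; a back-of-the-envelope sketch (not part of the simulation analysis) makes the quoted range concrete:

```python
M_SUN = 1.989e33          # g
X_NI = 0.02               # ~2% 56Ni mass fraction in the ejecta
M_EJ = 0.4 * M_SUN        # lower-limit total ejected mass, g
M_NI = X_NI * M_EJ        # implied 56Ni yield
print(M_NI / M_SUN)       # ≈ 0.008, inside the quoted 1e-3 to 1e-2 Msun range
```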
No significant $r$-process production is expected in our models. Figure \[f:abund\_profiles\_SB\] shows that after photodissociation, the mass fractions of neutrons and protons remain equal all the way to the surface of the NS, thus preserving the initial $Y_e = 0.5$ of the WD. While our models include charged-current weak interactions that modify $Y_e$, no appreciable neutronization occurs. At the surface of the NS we have $T\simeq 10^{10}$K and $\rho\sim 10^6$gcm${}^{-3}$ (§\[s:parameter\_dependencies\]), for which electrons are trans-relativistic. The non-relativistic and relativistic Fermi energies are comparable and smaller than the thermal energy $$\begin{aligned}
\frac{p_{\rm F}^2}{2m_e kT} & \simeq 0.2\rho_6^{2/3} T_{10}^{-1}&\\
\frac{p_{\rm F}c}{kT} & \simeq 0.5\rho_6^{1/3}T_{10}^{-1}&\end{aligned}$$ where $T_{10}=T/(10^{10}\,\textrm{K})$ and $\rho_6 = \rho/(10^6\,\textrm{g}\,\textrm{cm}^{-3})$. Thus electrons are essentially non-degenerate, and the equilibrium electron fraction is close to $Y_e = 0.5$. Even though neutrino cooling from electron-positron capture on nucleons is sub-dominant relative to other heating and cooling processes (§\[s:parameter\_dependencies\]), the timescale for $Y_e$ to change via charged-current weak interactions in non-degenerate material (e.g., @fernandez2019) $$\left(\frac{d\ln Y_e}{dt}\right)^{-1}\simeq 5T_{10}^{-5}\,\textrm{s}$$ is shorter than the characteristic evolutionary times. The lack of neutronization is thus a consequence of the non-degeneracy of the material. Whether these systems generate any $r$-process elements might depend on the details of angular momentum transport, which could result in higher accretion rates and densities at small radii, increasing the degeneracy of the material. This is not found for our choice of parameters.
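The degeneracy estimate above can be reproduced numerically. The sketch below assumes an ideal, fully ionized electron gas with $Y_e=0.5$ and approximate CGS constants; it is a cross-check of the scalings, not the paper's microphysics module.

```python
import numpy as np

# approximate CGS constants: hbar, electron mass, atomic mass unit, k_B, c
HBAR, M_E, M_U, K_B, C = 1.055e-27, 9.109e-28, 1.661e-24, 1.381e-16, 2.998e10

def fermi_ratios(rho, T, Y_e=0.5):
    """Return (p_F^2 / (2 m_e k T), p_F c / (k T)) for an ideal electron gas."""
    n_e = rho * Y_e / M_U                             # electron number density
    p_F = HBAR * (3.0 * np.pi**2 * n_e) ** (1.0 / 3.0)  # Fermi momentum
    return p_F**2 / (2.0 * M_E * K_B * T), p_F * C / (K_B * T)

# conditions near the NS surface: rho ~ 1e6 g/cm^3, T ~ 1e10 K
nr, rel = fermi_ratios(1e6, 1e10)
print(nr, rel)   # ≈ 0.19, 0.47: both below unity, so essentially non-degenerate
```

Both ratios being below unity confirms that thermal energy dominates the Fermi energy, which is why $Y_e$ relaxes to $\simeq 0.5$ rather than neutronizing.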
### Comparison of small- and large inner boundary models {#s:comparison_sb_lb}
![Unbound mass ejected at $r=R_{\rm t}$ as a function of time in the fiducial WD+NS models with large inner boundary (black and gray) and small inner boundary (red).[]{data-label="f:mout_def_LB-SB"}](f8.pdf){width="\columnwidth"}
Given our approach to disk evolution that separates large- from small-radius dynamics, it is important to make a connection between the two run types and to quantify the ejecta missing from the large boundary runs. Since the small boundary models cannot be evolved for nearly as long as large boundary runs, most of the ejecta from the former does not make it to a large enough radius to probe near-homologous expansion. Instead, we need to make the comparison at a smaller radius, which we choose to be $r=R_{\rm t}$. By restricting the analysis to material with positive Bernoulli parameter, we separate bound disk material from unbound ejecta.
Figure \[f:mout\_def\_LB-SB\] compares the mass outflow rate at $r=R_{\rm t}$ from the default WD+NS with small- and large inner boundary. As expected, the model that resolves the compact object ejects more mass (factor of $\sim 2$) than the large boundary model at all times up to the end of the simulation at $t=245$s (Table \[t:results\]). This time is in the range during which the mass accretion rate onto the compact object reaches its maximum value, evolving slowly with time before entering the power-law decay regime at around $t\simeq t_{\rm vis}$. The radial profiles of the ${}^{20}$Ne and ${}^{56}$Ni mass fractions as a function of time are shown in Figure \[f:profiles\_spec\_time\], showing the location of the burning fronts. On a linear scale in time, these burning fronts are essentially at constant radii after $t\simeq 100$s for this value of the viscosity parameter.
![Radial profiles of the ${}^{20}$Ne and ${}^{56}$Ni mass fractions as a function of time for the fiducial WD+NS model with small inner boundary. See also Figure \[f:abund\_profiles\_SB\].[]{data-label="f:profiles_spec_time"}](f9.pdf){width="\columnwidth"}
The angular distribution of material is very similar up to $t=200$s, with both models ejecting the majority of the material within a funnel of $\lesssim 40$deg from the rotation axis. The large boundary simulation does not show a significant difference between the angular distribution of the total ejecta (mostly C and O) and that of ${}^{24}$Mg (the burning product with the largest mass fraction). In contrast, the small boundary model shows a trend in which burning products that are generated at smaller radii are ejected at narrower angles (on average) from the rotation axis. This is consistent with the snapshots in Figure \[f:timeave\_abund\_snapshots\], and suggests that despite the mixing, this angular segregation can persist to large radii.
The importance of resolving small radii is illustrated in Figure \[f:hist\_vel\_def\_LB-SB\], which shows the velocity distribution of ejecta for both small- and large boundary fiducial WD+NS. The velocity distribution of the large boundary model cuts off at $v_{\rm max}\simeq \sqrt{GM_c/R_{\rm t}}\sim$ a few $1000$kms$^{-1}$, which persists up to the end of the simulation (cf. Figure \[f:evolution\_LB\]). In contrast, the outflow from the small inner boundary model can reach maximum velocities that are about $10$ times higher. These velocities correspond to the gravitational binding energy at radii as small as $0.01R_{\rm t}$, where nuclear energy release is still significant (Figures \[f:abund\_profiles\_SB\] and \[f:profiles\_spec\_time\]). Note also that the mean velocities of elements produced at smaller radii are higher: ${}^{12}$C is on average slower than ${}^{24}$Mg (because it has more slow material), which in turn is slower than ${}^{56}$Ni and ${}^{4}$He (the latter two have comparable distributions). This trend is consistent with the trend in the angular distribution of burning products.
![Mass histograms as a function of radial velocity, for unbound ejecta at $r=R_{\rm t}$ in the fiducial WD+NS models (top) and WD+BH models (bottom) at the times labeled, with solid lines denoting small-boundary versions and shaded areas their large-boundary counterparts. Black/grey histograms correspond to total ejecta, while green, brown, red, blue, and cyan histograms correspond respectively to ${}^{12}$C, ${}^{16}$O, ${}^{24}$Mg, ${}^{56}$Ni, and ${}^{4}$He. The vertical dashed lines correspond from left to right to the (point-mass) Keplerian velocity at radii $\{1,0.1,0.01\}R_{\rm t}$, respectively. Model [CO+NS(s)]{} uses a reflecting boundary condition at the surface of the NS, while all other models employ an outflow boundary condition at the smallest radius.[]{data-label="f:hist_vel_def_LB-SB"}](f10.pdf){width="\columnwidth"}
The marked difference between the small- and large-boundary velocity distribution for the fiducial WD+NS model is in part a consequence of the reflecting boundary condition at the NS surface for model [CO+NS(s)]{}. The large boundary model has an outflow boundary condition, through which not only mass but also energy are lost. In contrast, the small boundary model is such that energy from accretion has nowhere to go except into a wind, given the weakness of neutrino and photodissociation cooling (§\[s:parameter\_dependencies\]). This difference stands in contrast to the large- and small-boundary BH models (also shown in Figure \[f:hist\_vel\_def\_LB-SB\]), both of which use an outflow inner radial boundary condition and have a velocity distribution that differs only by a factor of $\sim 2$ in their maximum velocity.
Finally, the composition of the outflow between large- and small boundary models is significantly different, as expected given the radii at which nucleosynthesis occurs. The fiducial large boundary model preserves the original WD composition at more than $99\%$ by mass, while this fraction drops to a combined $79\%$ (with different relative fractions) in the small boundary model. The large boundary hybrid CO-He WD model consumes a substantial fraction of the original ${}^{16}$O and ${}^{4}$He, yet it does not manage to make any significant ${}^{56}$Ni. The large boundary BH model does make some ${}^{56}$Ni, but the overall fractions of the heaviest elements are much smaller.
![Time- and angle-averaged profiles of structural quantities for small inner boundary models with a NS, comparing the baseline model [CO+NS(s)]{}, the model with a rotating NS [CO+NS(s-sp)]{} and a model with a higher viscosity parameter [CO+NS(s-vs)]{}. Quantities are the same as in Figure \[f:timeave\_profiles\_LB\], and the time range for the average is $1.7\pm0.2$ orbits at $r=R_{\rm t}$ ($70\pm 8$s) for models [CO+NS(s)]{} and [CO+NS(s-sp)]{}, and $0.5\pm 0.02$ orbits ($20.2\pm 0.8$s) for model [CO+NS(s-vs)]{}. Cyan curves show the net energy loss from the nuclear reaction network due to photodissociation and thermal neutrino losses (not shown for model [CO+NS(s-sp)]{} for clarity, as it resembles that of [CO+NS(s-vs)]{}), and the orange curve shows charged-current neutrino losses, displayed only for model [CO+NS(s-vs)]{}, for clarity.[]{data-label="f:timeave_profiles_rns"}](f11.pdf){width="\columnwidth"}
Parameter dependencies {#s:parameter_dependencies}
----------------------
We now turn to addressing some of the parameter sensitivities of our results. Figure \[f:timeave\_profiles\_rns\] shows time- and angle-averaged profiles of various quantities for the baseline WD+NS model and variations of it with a different viscosity parameter and NS spin. At a comparable evolutionary time, the model with higher viscosity differs in that (1) viscous heating is higher throughout the disk, (2) the disk evolution is faster, as indicated by the larger mass outflow rate at the disk outer edge, (3) nuclear energy release is a factor of a few larger inside $0.01R_{\rm t}$, and (4) the transition from positive to negative net energy generation by the reaction network moves inward in radius. The model with a spinning NS, evaluated at the same time as the baseline model, differs only inside $\sim 30$km (3 NS radii), where a boundary layer develops. The additional viscous heating in this layer results in a somewhat higher temperature and an inward shift of the transition where neutrino cooling dominates over nuclear energy release, like in the high-viscosity model. In all three models, neutrino cooling from electron/positron capture onto nucleons is sub-dominant.
Figure \[f:abund\_profiles\_rns\] shows inner disk nucleosynthesis profiles for the three small boundary NS models. Abundances of all elements are very similar among models except within a few NS radii of the stellar surface, where the high-viscosity and spinning NS profiles both deviate from the baseline model in that photodissociation of ${}^{4}$He moves further out in radius given the higher temperatures. Table \[t:results\] shows that the mass fractions in the outflow are very similar among all three models, with the possible exception of ${}^{12}$C and ${}^{4}$He, pointing to a robustness in the composition of the wind to the details of how angular momentum transport operates.
![Time- and angle-averaged abundance profiles within $30$deg of the equatorial plane over the same time period as in Figure \[f:timeave\_profiles\_rns\], for the three small boundary models that resolve the NS surface \[[CO+NS(s)]{}, [CO+NS(s-sp)]{}, and [CO+NS(s-vs)]{}\]. Abundances have the same color coding as Figure \[f:abund\_profiles\_SB\].[]{data-label="f:abund_profiles_rns"}](f12.pdf){width="\columnwidth"}
The difference in profiles between the non-spinning and spinning BH models is shown in Figure \[f:timeave\_profiles\_bh\]. While the outer disk evolution is nearly identical, differences arise near the inner boundary, where the spinning BH model has slightly higher densities and temperatures. This bifurcation does not significantly affect the radius inside which photodissociation and thermal neutrino cooling dominate over nuclear heating, although net energy loss is stronger in the spinning BH model, even exceeding viscous heating near the inner boundary. As in the NS models, neutrino cooling due to charged-current weak interactions is negligible.
![Same as Figure \[f:timeave\_profiles\_rns\], but for the two small boundary models that resolve the BH: [ONe+BH(s)]{} (non-spinning) and [ONe+BH(s-sp)]{} (spin $\chi = 0.8$). The time interval is $3\pm 0.3$ orbits at $r=R_{\rm t}$ ($26.6\pm 2.7$s).[]{data-label="f:timeave_profiles_bh"}](f13.pdf){width="\columnwidth"}
The nucleosynthesis profiles of the two BH models are shown in Figure \[f:abund\_profiles\_bh\]. Differences become prominent inside the radius at which iron group nuclei start undergoing photodissociation into ${}^{4}$He and nucleons. At smaller radii, the spinning BH model has a lower abundance of heavy elements compared with the non-spinning case. Table \[t:results\] shows however that the mass fractions in the wind are very similar in those models, with the exception of ${}^{16}$O and ${}^{4}$He, indicating that radii close to the BH do not significantly contribute to the outflow.
![Same as Figure \[f:abund\_profiles\_rns\], but for the two small boundary models that resolve the BH. The time intervals are the same as for Figure \[f:timeave\_profiles\_bh\].[]{data-label="f:abund_profiles_bh"}](f14.pdf){width="\columnwidth"}
Comparison with previous work {#s:comparison_1d}
-----------------------------
Our results are a significant improvement relative to Paper I. First, our new large boundary models, which cover a similar range as the simulations in Paper I, are evolved for a much longer time. Second, we also include more realistic microphysics, in particular an equation of state that accounts for radiation pressure, and a realistic nuclear reaction network. Finally, we can resolve the dynamics at the surface of the central object. The key qualitative difference with the results of Paper I is the absence of any detonation in our current models. Accretion proceeds in a quasi-steady way, with secular mass ejection on the viscous time of the disk.
Figure \[f:plot\_old\_nudaf\] compares instantaneous equatorial profiles of key quantities in the fiducial high-resolution model of Paper I ([COq050\_HR]{}, used in Figures 1-3 of that paper) and in our high-resolution large-boundary WD+NS model [CO+NS(l-hr)]{}, at a time shortly before a detonation occurs in the former. Both models employ the same equatorial resolution, domain size, torus parameters, central object mass, and boundary conditions. The model from Paper I assumes an ideal gas equation of state and point mass gravity, uses a different prescription for the viscosity (proportional to density, as in @stone1999), and uses a single power-law nuclear reaction calibrated to match $^{12}$C($^{12}$C,$\gamma$)$^{24}$Mg. While our new model evolves somewhat faster due to the different viscosity, the profiles of viscous heating differ by less than a factor of $2$. The key difference is the temperature profile, which differs by a factor $4$ at the radius where most of the nuclear burning occurs in the model of Paper I. The temperature profile in model [CO+NS(l-hr)]{} is shallower at small radius, which is a consequence of radiation pressure being dominant at this location, as shown in Figure \[f:plot\_old\_nudaf\]. The burning rates correspondingly differ by several orders of magnitude.
![Instantaneous equatorial profiles of structural quantities at $t=1.6t_{\rm orb}$ in the fiducial high-resolution model of Paper I ([COq050\_HR]{}, solid lines) and our fiducial high-resolution large boundary model ([CO+NS(l-hr)]{}, dashed lines). The symbols have the same meaning as in Figure \[f:timeave\_profiles\_LB\]. The gray dashed curve shows the ratio of radiation to total pressure in model [CO+NS(l-hr)]{}. The time is close to the onset of a detonation in model [COq050\_HR]{}.[]{data-label="f:plot_old_nudaf"}](f15.pdf){width="\columnwidth"}
A separate question is whether detonations that should occur are simply not resolved in our current models. The mean accretion flow is such that burning fronts are spread out over distances comparable to the local radius (e.g., Figure \[f:timeave\_abund\_snapshots\]), so no sudden releases of energy occur given that the radial accretion speed is subsonic. The turbulent r.m.s. Mach number around $r\sim 10^8$cm is $\mathcal{M}_{\rm turb}\lesssim 0.3$ in model [CO+NS(l-hr)]{}, which implies fractional temperature fluctuations of $\mathcal{M}_{\rm turb}/3\lesssim 10\%$ if radiation pressure dominates. Figure \[f:timeave\_profiles\_LB\] shows that stochastic fluctuations in the burning rate are at most comparable to the viscous heating rate during peak accretion, when the density is the highest. The viscous heating timescale is itself a factor $\sim 10$ longer than the sound crossing time, thus nuclear burning is far from being able to increase the internal energy faster than the pressure can readjust the material. Settling the question of whether detonations occur during the initial establishment of the accretion flow to the central object will require simulations that employ magnetic fields to transport angular momentum and that fully resolve turbulence.
@zenati_2019 carried out 2D hydrodynamic simulations starting from an equilibrium torus, and employing a nuclear reaction network, a realistic EOS, and self-gravity. They report weak detonations in all of their models excluding He WDs, followed by an outflow dominated by the initial WD composition, with an admixture of heavier elements. While we find the same type of outflow velocities and composition, our results differ in that we do not find any detonation in our models, weak or strong, even in the case of a hybrid CO-He WD with an admixture of He. This difference might be in part due to resolution, as their finest grid size (in cylindrical geometry) is 1km. This is comparable to the resolution of our models at $r=10^7$cm, but coarser in the vicinity of the NS (our grid is logarithmic in radius, and on the midplane we have $\Delta r /r \simeq 0.037\simeq \Delta\theta \simeq 2^\circ$). Our results are consistent with those of @fryer1999, who found that nuclear burning was energetically unimportant during disk formation.
The time-averaged profiles in the disk equatorial plane show very close similarity to the 1D results of @M12 and MM16. Figure \[f:accretion\_exponents\_rns\] shows the profile of the absolute value of the radial mass flow rate for the fiducial and hybrid small boundary models. A power-law fit to the radial dependence of the accretion rate $\dot{M}\propto r^p$ yields $p\simeq 0.7$ except in the vicinity of the NS and where the disk has not yet reached steady accretion. In the mass-loss model of MM16, this corresponds to disk outflow velocities comparable to the local Keplerian speed $v_{\rm K}$. This is consistent with the velocity distribution of the outflow (Figure \[f:hist\_vel\_def\_LB-SB\]), which shows an upper limit comparable to the Keplerian speed at the innermost radius where ${}^{56}$Ni is produced (Figure \[f:profiles\_spec\_time\]). We also find a characteristic power-law decline of the outflow rate with time after peak accretion has been achieved (Figure \[f:evolution\_LB\]). The radial dependence of the mass fractions is remarkably similar to that of MM16 (cf. their Figure 5), although the radial position of our burning fronts evolves more slowly (compare our Figure \[f:profiles\_spec\_time\] with their Figure 6). This is a consequence of our fiducial model using a lower viscosity parameter ($0.03$) than their fiducial case ($0.1$).
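A power-law exponent like $p$ in $\dot{M}\propto r^p$ is conveniently obtained as a linear fit in log-log space. The following sketch uses synthetic data with a known exponent; it illustrates the fitting procedure only and makes no claim about the actual fitting code used for Figure \[f:accretion\_exponents\_rns\].

```python
import numpy as np

def accretion_exponent(r, mdot):
    """Least-squares fit of Mdot ∝ r^p in log-log space; returns p."""
    p, _ = np.polyfit(np.log(r), np.log(mdot), 1)
    return p

# synthetic profile with a known exponent (illustrative, not simulation data)
r = np.logspace(7, 9, 50)            # radius, cm
mdot = 1e-4 * (r / 1e9) ** 0.7       # arbitrary normalization
print(accretion_exponent(r, mdot))   # → 0.7 (recovered to machine precision)
```

In practice the fit would be restricted to the radial range where the disk has reached steady accretion, away from the NS surface and the outer edge.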
![Time- and angle-averaged mass flow rate (absolute value) for the fiducial small boundary WD+NS model and its hybrid counterpart. The angle-average is taken within $30$deg of the equatorial plane. The dashed lines show power-law fits to the radial dependence of the accretion rate.[]{data-label="f:accretion_exponents_rns"}](f16.pdf){width="\columnwidth"}
Observational Implications {#s:observations}
==========================
The outflow from the accretion disk should generate an electromagnetic transient that rises over a timescale of a few days and reaches a peak luminosity $\sim 10^{40}$ergs$^{-1}$ if powered only by radioactive decay. We can estimate this rise time and peak luminosity from the velocity distribution of the ejected ${}^{56}$Ni and our estimate of the total ejecta from the disk (equation \[eq:mass\_ejection\_power-law\], Table \[t:results\]).
Figure \[f:hist\_ni56\_vel\] shows the velocity distribution of ejected ${}^{56}$Ni at various times in the fiducial small-boundary WD+NS model, measured at the initial torus radius, which is $\sim 100$ times larger than the radius at which ${}^{56}$Ni is produced (Figure \[f:profiles\_spec\_time\]). The average velocity decreases as a function of time, which means that on average, faster material is ejected before slower material and therefore resides at larger radii, even if mixing takes place[^6]. Ignoring corrections due to the geometric collimation of the outflow, this stratification in radius and velocity means that radiation escapes from faster layers first. In our estimates, we therefore consider the cumulative mass starting from the highest velocity, $$\label{eq:mass_velocity_cum}
M_i(>v) = \int_{v}^{v_{\rm max}}\frac{dM_i}{dv}\,dv,$$ where the subscript $i$ stands for either the total mass or the ${}^{56}$Ni mass. Note that this is a lower limit on the velocity, since thermal energy can be converted into kinetic energy via adiabatic expansion.
![Mass histograms of unbound ${}^{56}$Ni as a function of radial velocity for model [CO+NS(s)]{}, measured at $r_{\rm out}=R_t$ and at the labeled times.[]{data-label="f:hist_ni56_vel"}](f17.pdf){width="\columnwidth"}
The time for radiation to escape from a layer with total mass $M_{\rm tot}(>v)$ is given by [@arnett_1979] $$\label{eq:tpeak_def}
t_{\rm pk}(>v) = \left[\frac{3\kappa M_{\rm tot}(>v)}{4\pi c v} \right]^{1/2}.$$ In evaluating equation (\[eq:tpeak\_def\]), we adopt $\kappa = 0.05$cm$^{2}$g$^{-1}$ for a Fe-poor mixture (MM16), and obtain $M_{\rm tot}(>v)$ by renormalizing the ${}^{56}$Ni mass distribution to a conservative total ejecta mass of $0.4M_\odot$ (Table \[t:results\]). To estimate uncertainties, we also compute this mass by re-normalizing the total (not just ${}^{56}$Ni) velocity distribution to the same total ejected mass.
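Equation (\[eq:tpeak\_def\]) is simple enough to evaluate directly. The sketch below plugs in round illustrative numbers (a single mass and velocity, not the measured cumulative distributions $M_{\rm tot}(>v)$), so the printed value is only indicative of the scale:

```python
import numpy as np

M_SUN, C_LIGHT = 1.989e33, 2.998e10   # g, cm/s
KAPPA = 0.05                          # cm^2/g, Fe-poor opacity (MM16)

def t_peak(M, v, kappa=KAPPA):
    """Photon escape time of a layer with mass M (g) moving at speed v (cm/s)."""
    return np.sqrt(3.0 * kappa * M / (4.0 * np.pi * C_LIGHT * v))

# illustrative: 0.4 Msun of material above v = 10^9 cm/s
t = t_peak(0.4 * M_SUN, 1e9)
print(t / 86400.0)                    # ≈ 6.5 days for these round numbers
```

Because $M_{\rm tot}(>v)$ falls steeply with $v$, the layers that set the observed rise time carry less mass than assumed here, shortening $t_{\rm pk}$ toward the few-day values quoted below.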
The luminosity of a layer with ${}^{56}$Ni mass $M_{\rm Ni}(>v)$ at time $t=t_{\rm pk}(>v)$ is $$\begin{aligned}
\label{eq:luminosity_pk}
L_{\rm pk}(>v) & = &M_{\rm Ni}(>v)\left[\dot{Q}_{\rm Ni}(t_{\rm pk})+\dot{Q}_{\rm Co}(t_{\rm pk})\right]\end{aligned}$$ where the specific nuclear heating rates from ${}^{56}$Ni and ${}^{56}$Co decay are $$\begin{aligned}
\dot{Q}_{\rm Ni}(t) & = & \frac{\Delta E_{\rm Ni}}{m_{\rm Ni}\tau_{\rm Ni}}\,e^{-t/\tau_{\rm Ni}}\nonumber\\
& \simeq & 4.8\times 10^{10}\textrm{\,[erg\,g}^{-1}\textrm{\,s}^{-1}]\,e^{-t/\tau_{\rm Ni}}\\
\dot{Q}_{\rm Co}(t) & = & \frac{\Delta E_{\rm Co}}{m_{\rm Co}}
\frac{(\tau_{\rm Ni}\tau_{\rm Co})^{-1}}{(1/\tau_{\rm Ni}-1/\tau_{\rm Co})}\left(e^{-t/\tau_{\rm Co}} - e^{-t/\tau_{\rm Ni}}\right)\nonumber\\
& \simeq & 8.9\times 10^{9}\textrm{\,[erg\,g}^{-1}\textrm{\,s}^{-1}]\,\left(e^{-t/\tau_{\rm Co}} - e^{-t/\tau_{\rm Ni}}\right),\nonumber\\\end{aligned}$$ with $\left\{\tau_{\rm Ni},\tau_{\rm Co}\right\}\simeq \left\{8.8,111\right\}$d the mean lifetimes and $\left\{\Delta E_{\rm Ni},\Delta E_{\rm Co}\right\}\simeq \left\{2.1,4.6\right\}$MeV the decay energies of ${}^{56}$Ni and ${}^{56}$Co, respectively. Equation (\[eq:luminosity\_pk\]) assumes that the gamma-rays from radioactive decay are thermalized with $100\%$ efficiency. The total ${}^{56}$Ni mass is obtained by scaling the ejected distribution to a (conservative) estimate of $10^{-3}M_\odot$, which is somewhat smaller than the ejected fraction times the total ejected mass for this model (Table \[t:results\]).
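The decay heating rates and the resulting peak luminosity can be cross-checked numerically. The sketch below uses the standard values $\Delta E_{\rm Co}\simeq 4.6$MeV and the mean lifetimes quoted above, with an assumed illustrative peak time of 3 days; it is a consistency check, not the paper's calculation:

```python
import numpy as np

DAY, MEV, M_SUN = 86400.0, 1.602e-6, 1.989e33
TAU_NI, TAU_CO = 8.8 * DAY, 111.0 * DAY      # mean lifetimes, s
E_NI, E_CO = 2.1 * MEV, 4.6 * MEV            # decay energies, erg
M_NUC = 56.0 * 1.661e-24                     # mass per A=56 nucleus, g

def q_ni(t):
    """Specific 56Ni decay heating, erg/g/s."""
    return E_NI / (M_NUC * TAU_NI) * np.exp(-t / TAU_NI)

def q_co(t):
    """Specific 56Co decay heating (56Ni daughter), erg/g/s."""
    pref = (E_CO / M_NUC) * (1.0 / (TAU_NI * TAU_CO)) / (1.0 / TAU_NI - 1.0 / TAU_CO)
    return pref * (np.exp(-t / TAU_CO) - np.exp(-t / TAU_NI))

t = 3.0 * DAY                                # illustrative peak time
L = 1e-3 * M_SUN * (q_ni(t) + q_co(t))       # 10^-3 Msun of 56Ni, full thermalization
print(f"{L:.1e} erg/s")                      # ≈ 7e+40 erg/s
```

The prefactors reproduce the quoted $4.8\times 10^{10}$ and $8.9\times 10^{9}$erg g$^{-1}$s$^{-1}$, and the luminosity lands at a few times $10^{40}$ergs$^{-1}$, consistent with the estimate below.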
Figure \[f:rise\_time\_ni56\] shows $t_{\rm pk}(>v)$ and $L_{\rm pk}(>v)$ as a function of outflow velocity for the fiducial small-boundary WD+NS model. The rise time from half maximum to peak is in the range $2-3$d, depending on whether the ${}^{56}$Ni or the total velocity distribution is used, and the rise time to the mass-averaged velocity is the same. The peak luminosity is a few times $10^{40}$ergs$^{-1}$. This value can increase by a factor $10$ if the ${}^{56}$Ni yield is on the higher end of our estimates, $10^{-2}M_\odot$, coming closer to normal supernova luminosities. A rough approximation to the light curve can be obtained by plotting $L_{\rm pk}(>v)$ versus $t_{\rm pk}(>v)$, which is shown in Figure \[f:lpk\_tpk\_ni56\].
![Time to reach peak emission (equation \[eq:tpeak\_def\], red) and peak luminosity (equation \[eq:luminosity\_pk\], blue) for material with velocity larger than a given value (equation \[eq:mass\_velocity\_cum\]) in model [CO+NS(s)]{}. Solid red shows $t_{\rm pk}$ computed using the ${}^{56}$Ni velocity distribution at $t=200$s (Figure \[f:hist\_ni56\_vel\]) renormalized to $0.4M_\odot$, while the dashed red line shows the same calculation but with the (renormalized) total velocity distribution. The vertical dotted lines show the corresponding mass-weighted average velocities. The dashed blue line shows $L_{\rm pk}$ without the contribution from ${}^{56}$Co heating in equation (\[eq:luminosity\_pk\]). For the peak luminosity, we assume an initial ${}^{56}$Ni mass of $10^{-3}M_\odot$.[]{data-label="f:rise_time_ni56"}](f18.pdf){width="\columnwidth"}
![Peak luminosity (equation \[eq:luminosity\_pk\]) as a function of peak time (equation \[eq:tpeak\_def\]) for material with velocity larger than a given value (equation \[eq:mass\_velocity\_cum\]) for model [CO+NS(s)]{}. This is a rough approximation to the light curve of the radioactively-powered transient expected from the disk outflow. The reverse-cumulative mass distribution for the peak time is obtained by integrating the ${}^{56}$Ni mass distribution (Figure \[f:hist\_ni56\_vel\]) and re-normalizing by the asymptotic total mass ejected ($\sim 0.4M_\odot$), while the luminosity is obtained by renormalizing the distribution to $10^{-3}M_\odot$ of ${}^{56}$Ni. The gray line shows the peak luminosity without the contribution of ${}^{56}$Co to estimate the uncertainty range from our assumption of complete gamma-ray thermalization.[]{data-label="f:lpk_tpk_ni56"}](f19.pdf){width="\columnwidth"}
The short rise time suggests a connection to previously identified rapidly evolving blue transients (e.g., @drout_2014 [@rest_2018; @chen_2019]), but with much lower luminosities. It is possible that the ejecta from the disk collides with material previously ejected in a stellar wind by one or both of the progenitors of the WD and/or NS/BH, resulting in enhanced emission relative to our simple estimates based on radioactive heating. Another way to enhance the luminosity above that from radioactive decay is through accretion power (e.g., @dexter_2013, MM16). Extrapolating the accretion rate in Figure \[f:mdot\_LB\] for model [CO+NS(s)]{} to $t\simeq t_{\rm peak}\simeq 3$d yields $\sim 10^{43}$ergs$^{-1}$ for a $10\%$ thermalization efficiency.
In addition to powering a supernova-like transient from the unbound ejecta, we speculate that the inner parts of the accretion flow (near the central NS or BH) could generate a relativistic jet similar to those in gamma-ray bursts (e.g., @fryer1999 [@King+07]). We obtain peak accretion rates onto the central compact object of $\sim 10^{-6}-10^{-3}M_\odot$s$^{-1}$, with a peak timescale of tens to hundreds of seconds (Figure \[f:mdot\_LB\]). Assuming a jet launching efficiency of $\epsilon_{\rm j} \lesssim 0.1$, the peak jet power could therefore be $\epsilon_j\dot{M}c^2 \lesssim 10^{47} - 10^{50}$ergs$^{-1}$. While these characteristic luminosities (timescales) are somewhat too low (long) compared to the majority of long-duration gamma-ray bursts, they may be compatible with other high energy transients. For instance, @xue_2019 recently discovered an X-ray transient, CDF-S XT2, with [*Chandra*]{} with a peak isotropic luminosity $L_{\rm X} \sim 3\times 10^{45}$ergs$^{-1}$ and peak duration $\sim 10^3$s. The late-time decay of the X-ray luminosity, with a time exponent $2.16^{+0.26}_{-0.29}$, is in broad agreement with the decay of the accretion rate in our models (Figure \[f:mdot\_LB\]). While peak accretion in our models occurs somewhat earlier, this peak time is tied to how angular momentum transport is modeled. The host galaxy and spatial offset of CDF-S XT2 from its host, while consistent with those of NS-NS mergers, would plausibly also be consistent with the older stellar populations that can host WD-NS/BH mergers.
WD-NS mergers have also been discussed as a possible formation channel of pulsar planets. Using a semi-analytic model extending the torus evolution to $\sim$kyr post merger, @margalit_2017 found that conditions conducive to formation of planetary bodies consistent with the B1257+12 pulsar planets can be achieved for sufficiently low values of the alpha viscosity parameter $\alpha$ and accretion exponent $p$. The index of $p \sim 0.7$ we find in our current work (Figure \[f:accretion\_exponents\_rns\]) is somewhat higher than that required to obtain significant mass at the location of the planets and to spin up the NS to millisecond periods; however, this is subject to several uncertainties. Simulations of radiatively-inefficient accretion disks typically find $p \sim 0.4-0.8$ depending on the physics (hydrodynamic vs MHD simulations), the value of the alpha viscosity parameter, and the initial magnetic field, while observations of Sgr A\* suggest even lower values, $p \sim 0.3$ (although the physical accretion regime of Sgr A\* is very different from the WD-NS merger accretion disks considered here). Whether or not some of the matter expelled from the disk midplane remains bound and eventually circulates back is also not entirely resolved; such fallback can increase the remaining disk mass at late times, improving the viability of the WD-NS merger pulsar-planet formation scenario.
Summary and Discussion {#s:discussion}
======================
We have carried out two-dimensional axisymmetric, time-dependent simulations of accretion disks formed during the (quasi-circular) merger of a CO or ONe WD with a NS or BH. Our models include a physical equation of state, viscous angular momentum transport, self-gravity, and a coupled $19$-isotope nuclear reaction network. We studied both the long-term mass ejection from the disk, by excluding the innermost regions, and fully global models that resolve the compact object but which can only be evolved for less than the viscous timescale of the disk. Our main results are the following:
1\. In all of the models we study, accretion and mass ejection proceed in a quasi-steady manner on the viscous time, with no detonations. Nuclear energy generation is at most comparable to viscous heating (Figures \[f:timeave\_profiles\_LB\], \[f:timeave\_profiles\_resolution\], \[f:timeave\_profiles\_rns\], and \[f:timeave\_profiles\_bh\]).
2\. The radiatively-inefficient character of the disk results in vigorous outflows. At least $50\%$ of the initial torus should be ejected in the wind (Figure \[f:evolution\_LB\] and Table \[t:results\]). The velocity distribution of this outflow is broad, covering the range $10^2-10^4$kms$^{-1}$ (Figures \[f:evolution\_LB\] and \[f:hist\_vel\_def\_LB-SB\]). The outflow is concentrated within a cone of $\sim 40$deg from the rotation axis (Figures \[f:evolution\_LB\] and \[f:hist\_ang\_def\_LB-SB\]). Energy losses due to photodissociation and thermal neutrino emission become important only near the central compact object, with neutrino emission from electron/positron capture onto nucleons being sub-dominant (Figures \[f:timeave\_profiles\_rns\] and \[f:timeave\_profiles\_bh\]).
3\. Our models can capture the burning of increasingly heavier elements, as accretion proceeds to smaller radii, all the way to the iron group elements and their subsequent photodissociation into ${}^{4}$He and nucleons (Figures \[f:timeave\_abund\_snapshots\], \[f:abund\_profiles\_SB\], \[f:abund\_profiles\_rns\] and \[f:abund\_profiles\_bh\]). The outflow composition is dominated by that of the initial WD, with burning products accounting for $10-30\%$ by mass (Table \[t:results\]). Based on the mass fractions of elements in the wind and the ejecta masses from large boundary models, we estimate that $10^{-3}-10^{-2}M_\odot$ of ${}^{56}$Ni should be produced generically by these disk outflows. The wind composition is relatively robust to variations in the disk viscosity, rotation rate of the NS or BH, and spatial resolution. No significant neutronization (and thus $r$-process production) is expected from our models.
4\. Two predictions from our results are that (1) the average velocities of burning products generated at smaller radii are higher (i.e., helium and iron should be faster on average than Mg or Si; Figure \[f:hist\_vel\_def\_LB-SB\]), and that (2) these burning products should be (on average) concentrated closer to the rotation axis than lighter elements (Figure \[f:hist\_ang\_def\_LB-SB\]).
5\. Based on the ejecta mass and velocity, we estimate that the resulting transients should rise to their peak brightness within a few days (Figure \[f:rise\_time\_ni56\]). When including only heating due to radioactive decay of ${}^{56}$Ni (and ${}^{56}$Co) generated in the outflow, we obtain peak bolometric luminosities in the range $\sim 10^{40}-10^{41}$ergs$^{-1}$. This luminosity can be enhanced by circumstellar interaction or late-time accretion onto the central object (Figure \[f:mdot\_LB\]), potentially accounting for the properties of rapidly-evolving blue transients (§\[s:observations\]). The generation of a relativistic jet by accretion onto the central object could also account for X-ray transients such as CDF-S XT2.
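The rise time and peak luminosity in point 5 can be reproduced to order of magnitude with a standard Arnett-type scaling. The opacity, ejecta mass, velocity, and ${}^{56}$Ni masses below are illustrative choices within the ranges discussed above, not the values used in the actual estimates:

```python
import math

# Arnett-type scaling for the transient rise time and peak luminosity.
# kappa, M_ej, v, and M_Ni below are illustrative assumptions.
M_SUN = 1.989e33; C = 2.998e10; DAY = 86400.0
kappa = 0.05          # cm^2/g, C/O-dominated ejecta (assumed)
M_ej = 0.3 * M_SUN    # ejecta mass (assumed)
v = 5e8               # mean ejecta velocity, 5000 km/s (assumed)
tau_Ni = 8.8 * DAY    # 56Ni decay timescale
q_Ni = 3.9e10         # erg/g/s, specific 56Ni heating rate at t = 0

# photon diffusion through the expanding ejecta sets the rise to peak
t_peak = math.sqrt(kappa * M_ej / (4 * math.pi * C * v))
print(f"t_peak ~ {t_peak / DAY:.1f} d")          # -> ~4.6 d

# Arnett's rule: peak luminosity ~ instantaneous radioactive heating at t_peak
for M_Ni in (1e-3, 1e-2):
    L_peak = M_Ni * M_SUN * q_Ni * math.exp(-t_peak / tau_Ni)
    print(f"M_Ni = {M_Ni:.0e} Msun -> L_peak ~ {L_peak:.1e} erg/s")
```

With these choices the rise time lands at a few days and the peak luminosity within roughly an order of magnitude of the quoted $10^{40}-10^{41}$ergs$^{-1}$ range.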
The main improvement to be made in our models is the replacement of the viscous stress tensor with full magnetohydrodynamic (MHD) modeling. Comparison between hydrodynamic and MHD models (with initial poloidal geometry) of accretion disks from NS-NS/BH mergers shows close similarity in the ejection properties of the thermal outflows in the radiatively-inefficient phase; this thermal component is the entirety of the wind in hydrodynamics, but only a subset of the ejecta when MHD is included [@fernandez2019]. While in principle such magnetized disks can generate jets, the disruption of the WD will leave a significant amount of material along the rotation axis, which can pose difficulties for launching relativistic outflows.
The second possible improvement is the use of realistic initial conditions obtained from a self-consistent simulation of unstable Roche lobe overflow. Since the thermodynamics of the disk quickly become dominated by heating from angular momentum transport and nuclear reactions, the details of the initial disk thermodynamics are not expected to have much influence on the subsequent dynamics, except if (1) nuclear burning becomes important during the disruption process itself, as expected for an ONe WD merging with a NS (e.g., @M12), or (2) the magnetic field configuration post merger (which should be mostly toroidal, in analogy with NS-NS mergers) generates significant deviations from the evolution obtained with viscous hydrodynamics.
The evolution of disks from He WDs around NS or BHs is expected to be more sensitive to the choice of parameters such as the disk entropy, mass, and viscosity parameter (MM16). We therefore leave simulations of such systems for future work.
Acknowledgments {#acknowledgments .unnumbered}
===============
We thank Craig Heinke for helpful discussions, and the anonymous referee for constructive comments. RF acknowledges support from the National Science and Engineering Research Council (NSERC) of Canada and from the Faculty of Science at the University of Alberta. BM is supported by the U.S. National Aeronautics and Space Administration (NASA) through the NASA Hubble Fellowship grant $\#$HST-HF2-51412.001-A awarded by the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., for NASA, under contract NAS5-26555. BDM is supported in part by NASA through the Astrophysics Theory Program (grant number $\#$NNX17AK43G). The software used in this work was in part developed by the DOE NNSA-ASC OASCR Flash Center at the University of Chicago. This research was enabled in part by support provided by WestGrid (www.westgrid.ca), the Shared Hierarchical Academic Research Computing Network (SHARCNET, www.sharcnet.ca), and Compute Canada (www.computecanada.ca). Computations were performed on *Graham* and *Cedar*. This research also used compute and storage resources of the U.S. National Energy Research Scientific Computing Center (NERSC), which is supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231. Computations were performed on *Edison* (repository m2058). Graphics were developed with [matplotlib]{} [@hunter2007].
Implementation of Self-Gravity {#s:self_gravity_appendix}
==============================
We implement self-gravity in spherical coordinates using the multipole algorithm of @MuellerSteinmetz1995. While [FLASH3]{} includes a version of this algorithm, it is not optimized for non-uniform axisymmetric spherical grids. Here we provide a brief description of our customized implementation and tests of it.
The truncated multipole expansion of the gravitational potential $\Phi$ in axisymmetry is $$\label{eq:multipole_expansion}
\Phi(r,\theta) = -2\pi G\sum_{\ell=0}^{\ell_{\rm max}}\,P_\ell(\cos\theta)
\left[\frac{1}{r^{\ell+1}}C_\ell(r) + r^\ell D_\ell(r) \right],$$ where $P_\ell$ is the Legendre polynomial of index $\ell$, and the radial density moments are given by $$\begin{aligned}
\label{eq:C_sg_moment_def}
C_\ell(r) & = & \int_0^\pi \sin\theta d\theta P_\ell(\cos\theta)\int_0^r dR\,R^{2+\ell}\rho(R,\theta)\\
\label{eq:D_sg_moment_def}
D_\ell(r) & = & \int_0^\pi \sin\theta d\theta P_\ell(\cos\theta)\int_r^\infty dR\,R^{1-\ell}\rho(R,\theta).\end{aligned}$$ Equation (\[eq:multipole\_expansion\]) is an exact solution of Poisson’s equation (equation \[eq:poisson\]) when $\ell_{\rm max}\to\infty$. In practice, the sum needs to be truncated at some finite $\ell_{\rm max}$, the optimal value of which is problem-dependent [@MuellerSteinmetz1995]. The main computational work involves calculation of the moments $C_\ell(r)$ and $D_\ell(r)$.
In the @MuellerSteinmetz1995 algorithm, the integrals in equations (\[eq:C\_sg\_moment\_def\])-(\[eq:D\_sg\_moment\_def\]) are first replaced by sums of integrals inside each computational cell. One then assumes that the density varies smoothly within a cell, thus decoupling the angular integral of Legendre polynomials, the radial integral of the weight, and the density. The angular integral can be calculated exactly from recursion relations of these polynomials, while the radial integral can be solved analytically. For radial index $i$, the discretized moments are thus $$\begin{aligned}
\label{eq:C_sg_discrete}
C^{i}_\ell & = & \sum_{q=1}^{i}\sum_{j=1}^{j_{\rm max}}
\left[\int_{\theta_{j-1/2}}^{\theta_{j+1/2}} \sin\theta d\theta P_\ell(\cos\theta)\right]\nonumber\\
& & \qquad\times\,\frac{\rho_{qj}}{3+\ell}\left(r_{q+1/2}^{3+\ell} - r_{q-1/2}^{3+\ell} \right)\\
\label{eq:D_sg_discrete}
D^{i}_\ell & = & \sum_{q=i}^{i_{\rm max}}\sum_{j=1}^{j_{\rm max}}
\left[\int_{\theta_{j-1/2}}^{\theta_{j+1/2}} \sin\theta d\theta P_\ell(\cos\theta)\right]\nonumber\\
& & \times\rho_{qj}\left\{\begin{array}{l}
\left(r_{q+1/2}^{2-\ell} - r_{q-1/2}^{2-\ell} \right)/(2-\ell)\quad (\ell\ne 2)\\
\noalign{\smallskip}
\ln{\left(r_{q+1/2}/r_{q-1/2}\right)}\qquad\qquad\phantom{a}(\ell=2)
\end{array}\right.\end{aligned}$$ where half-integer indices denote cell edges, and $\rho_{qj}$ is the volume-averaged density of cell $(q,j)$.
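A minimal sketch of these moment sums follows, assuming $C_\ell$ accumulates the cells interior to each radius and $D_\ell$ the exterior ones, and evaluating the potential at radial cell edges so that no sub-cell corrections are needed. The grid and the uniform-sphere test density are illustrative:

```python
import numpy as np
from numpy.polynomial.legendre import Legendre

G = 6.674e-8  # cgs

def multipole_potential(rho, r_e, mu_e, lmax=12):
    """Potential at the radial cell edges r_e[1:], from the cell-decoupled
    moment sums: C_l accumulates cells interior to each edge, D_l the
    exterior ones. rho[i, j] is the cell-averaged density."""
    mu_c = 0.5 * (mu_e[:-1] + mu_e[1:])
    r_out = r_e[1:]
    Phi = np.zeros((len(r_out), len(mu_c)))
    for l in range(lmax + 1):
        Pint = Legendre.basis(l).integ()           # antiderivative of P_l
        ang = Pint(mu_e[:-1]) - Pint(mu_e[1:])     # exact angular integrals
        wC = (r_e[1:]**(3 + l) - r_e[:-1]**(3 + l)) / (3 + l)
        wD = (np.log(r_e[1:] / r_e[:-1]) if l == 2
              else (r_e[1:]**(2 - l) - r_e[:-1]**(2 - l)) / (2 - l))
        q = rho @ ang                              # per-shell angular sums
        C = np.cumsum(q * wC)                      # cells 1..i (interior)
        D = np.sum(q * wD) - np.cumsum(q * wD)     # cells i+1..N (exterior)
        Phi += -2 * np.pi * G * np.outer(
            C / r_out**(l + 1) + r_out**l * D, Legendre.basis(l)(mu_c))
    return Phi

# check: a uniform sphere (minus the excised core) must give -G M / r outside
nr, nth = 128, 56
r_e = np.geomspace(2e8, 2e10, nr + 1)              # log-spaced radial edges
mu_e = np.linspace(1.0, -1.0, nth + 1)             # uniform cos(theta) edges
rho = np.zeros((nr, nth))
rho[np.sqrt(r_e[:-1] * r_e[1:]) < 2e9, :] = 1e6    # rho0 inside r < 2e9 cm
Phi = multipole_potential(rho, r_e, mu_e)
dV = (2 * np.pi / 3) * np.outer(r_e[1:]**3 - r_e[:-1]**3, mu_e[:-1] - mu_e[1:])
M = np.sum(rho * dV)
print(Phi[-1, 0] / (-G * M / r_e[-1]))             # -> ~1.0
```

For a piecewise-constant density this edge-centered evaluation is exact, which is why the uniform-sphere check recovers $-GM/r$ to machine precision.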
The angular and radial integrals are computed once at the beginning of the simulation. To improve accuracy, @MuellerSteinmetz1995 recommend computing the sum in equation (\[eq:C\_sg\_discrete\]) from small radii to large radii, and vice-versa for equation (\[eq:D\_sg\_discrete\]). In practice, in our non-uniform grid implementation, the domain is spatially decomposed by compute core. The sums are first computed locally within each core, and then each core broadcasts its local total to the others. Finally, global cumulative sums are constructed locally with the information from all other cores. The final gravitational potential is computed by adding the contribution of the point mass from the neutron star (equation \[eq:poisson\]). Overall, the gravity solver adds a cost of approximately $50\%$ of that of the hydrodynamic solver. The latter is comparable to or smaller than the cost of the nuclear reaction network, therefore the inclusion of self-gravity is only a moderate addition to the computational budget.
![Numerical test of the multipole solver. Shown as a function of radius is the fractional difference between the numerical solution and the analytic potential in equation (\[eq:solution\_sg-test\]), when initializing the domain with the density profile of equation (\[eq:density\_profile\_sg-test\]). The top panel restricts the density profile to the spherical component only ($P_2 = 0$), while the bottom panel uses both spherical and dipolar components (the multipole solver allows $\ell \leq \ell_{\rm max}=12$). The resolutions shown correspond to grids logarithmically spaced in radius and equispaced in $\cos\theta$, extending over the full range of polar angles, with an inner boundary at $r_{\rm in}=2\times 10^8$cm and an outer boundary $100$ times larger. The parameters of the analytic solution are $\rho_0 = 10^7$gcm$^{-3}$ and $R = 2\times 10^8$cm. Curves are computed for a single angular direction (cell center adjacent to equatorial plane from above).[]{data-label="f:sg_moments_test"}](fA1.pdf){width="\columnwidth"}
![Same as Figure \[f:sg\_moments\_test\], but showing results as a function of polar angle when including both spherical and dipolar components. Top and bottom panels use radii in the interior ($r \leq R$) and exterior ($r>R$) parts of the solution, respectively.[]{data-label="f:sg_moments_angular"}](fA2.pdf){width="\columnwidth"}
We test our implementation by comparing it against an analytic solution. Using the density profile $$\label{eq:density_profile_sg-test}
\rho(r,\theta) =
\bigg\{
\begin{array}{lr}
\rho_0\left[ 1 + P_2(\cos\theta)\right] & r\leq R\\
\noalign{\smallskip}
0 & r > R
\end{array}$$ with $\rho_0$ and $R$ constant, yields the following gravitational potential: $$\label{eq:solution_sg-test}
\Phi = -2\pi G\rho_0\left[I_0(r) + I_2(r)P_2(\cos\theta)\right]$$ with $$\begin{aligned}
I_0(r) & = &
\bigg\{\begin{array}{lr}
(2/3)(r^3 - r_{\rm in}^3)/r + (R^2 - r^2) & r \leq R\\
\noalign{\smallskip}
(2/3)(R^3 - r_{\rm in}^3)/r & r > R
\end{array}\\
\noalign{\smallskip}
\noalign{\smallskip}
I_2(r) & = &\frac{2}{5}
\bigg\{\begin{array}{lr}
(r^5 - r_{\rm in}^5)/(5r^3) + r^2\ln (R/r) & r\leq R\\
\noalign{\smallskip}
(R^5 - r_{\rm in}^5)/(5r^3) & r > R
\end{array}\end{aligned}$$ where $r_{\rm in}$ corresponds to the inner radial boundary.
For our tests, we use a computational domain extending from $r_{\rm in}=2\times 10^8$cm to a radius $100$ times larger, covering all polar angles ($\theta \in [0,\pi]$). The density normalization and transition radius are $\rho_0 = 10^7$gcm$^{-3}$ and $R = 2\times 10^8$cm, respectively. The multipole solver is run with $\ell_{\rm max}=12$. The grid sizes used are $64\times 28$, $128\times 28$, $128\times 56$, $256\times 56$, and $256\times 112$ in radius and polar angle (logarithmically spaced in radius and equispaced in $\cos\theta$, respectively). Figure \[f:sg\_moments\_test\] shows the fractional difference between the potential obtained from the multipole solver and that in the analytic solution. In all cases, increasing the spatial resolution brings the numerical value closer to the analytic solution. Agreement is better when restricting the density profile to be spherical only ($P_2 = 0$ in equations \[eq:density\_profile\_sg-test\] and \[eq:solution\_sg-test\]) than when using both spherical and dipole components. Note that agreement requires all other moments (up to $\ell_{\rm max}=12$) to have vanishing amplitudes.
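The analytic solution can also be checked independently of the multipole solver, by verifying with finite differences that each multipole component obeys its radial Poisson equation, $(1/r^2)\,{\rm d}/{\rm d}r(r^2\,{\rm d}\Phi_\ell/{\rm d}r)-\ell(\ell+1)\Phi_\ell/r^2=4\pi G\rho_\ell$, in the interior. In this sketch we take $R=2\times 10^9$cm $>r_{\rm in}$ (an illustrative choice) so that the interior region $r_{\rm in}<r<R$ is non-empty:

```python
import numpy as np

# Finite-difference check that the analytic I_0, I_2 of the test solution
# satisfy the multipole-decomposed Poisson equation with source 4 pi G rho0.
G = 6.674e-8
rho0, R, r_in = 1e7, 2e9, 2e8   # R chosen > r_in here, for illustration

r = np.linspace(1.05 * r_in, 0.95 * R, 20001)   # interior radii only
I0 = (2.0 / 3.0) * (r**3 - r_in**3) / r + (R**2 - r**2)
I2 = (2.0 / 5.0) * ((r**5 - r_in**5) / (5 * r**3) + r**2 * np.log(R / r))

errs = {}
for l, I in ((0, I0), (2, I2)):
    Phi_l = -2 * np.pi * G * rho0 * I
    # radial Laplacian (1/r^2)(r^2 Phi_l')' - l(l+1) Phi_l / r^2
    lap = (np.gradient(r**2 * np.gradient(Phi_l, r), r) / r**2
           - l * (l + 1) * Phi_l / r**2)
    errs[l] = np.max(np.abs(lap[5:-5] / (4 * np.pi * G * rho0) - 1))
    print(f"l = {l}: max relative Poisson residual = {errs[l]:.2e}")
```

Both components reduce the residual to the finite-difference truncation level, confirming the interior branches of $I_0$ and $I_2$.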
At our standard resolution ($128\times 56$), agreement is of the order of $10^{-3}$, with a very weak radial dependence. The small bump at $r=R$ coincides with the transition from interior to exterior solution in equation \[eq:solution\_sg-test\]. Figure \[f:sg\_moments\_angular\] also shows that the fractional deviation is mostly uniform with polar angle, both in the interior and exterior regions. The accuracy of the $\ell=0$ moment is determined by the radial resolution only, while the angular resolution becomes more important when adding the dipole component, with smaller changes introduced by the radial resolution.
![Properties of equilibrium tori constructed with the Helmholtz equation of state and point mass gravity, as a function of disk (WD) mass. Shown are (a) entropy, (b) ratio of radiation to total pressure at density maximum, and (c) degeneracy parameter at density maximum. Each curve is labeled by the value of the torus distortion parameter (eq. \[eq:pp\_general\]). For reference, $d=\{1.2,1.5,3\}$ correspond to $e_{\rm int}/(GM_{\rm c}/R_0)\simeq\{5\%,10\%,20\%\}$ at pressure maximum, respectively, and to $H/R_0\sim \{0.4,0.6,1\}$, respectively. Solid and dashed lines correspond to $M_{\rm c}=1.4M_\odot$ and $M_{\rm c}=5M_\odot$, respectively. Colors label the composition: $\{X_{\rm C}=X_{\rm O}=0.5\}$ (black), $X_{\rm He}=1$ (red), and $\{X_{\rm O}=0.6,X_{\rm Ne}=0.4\}$ (blue). The horizontal dotted line in panel (c) marks the onset of degeneracy, $\mu_{\rm e} \ge \pi k_{\rm B} T$. The black and blue dots correspond to our fiducial CO and ONe WDs (c.f. Table \[t:models\]).[]{data-label="f:pptorus_helmholtz"}](fA3.pdf){width="\columnwidth"}
![Snapshots of the relaxation of the initial torus with self-gravity, for model [CO+NS(l)]{}.[]{data-label="f:relax_snapshots"}](fA4.pdf){width="\textwidth"}
![Relaxation of the initial torus with self-gravity, starting from an initial condition calculated with the gravity of the central object only. Shown are the torus mass relative to its initial value, restricted to densities higher than $10^{-3}$ times the maximum (top), and the total torus energy (bottom). Parameters correspond to model [CO+NS(l)]{}.[]{data-label="f:sg_e-mass"}](fA5.pdf){width="\columnwidth"}
![Density weighted radius (top) and opening angle from the equator (bottom) as a function of time, for different values of the maximum Legendre index $\ell_{\rm max}$ in the multipole expansion (equation \[eq:multipole\_expansion\]). Torus parameters correspond to our fiducial model, relaxed in self-gravity without other source terms. See the text for the definitions of $\tilde r$ and $\Delta \tilde \theta$.[]{data-label="f:lmax_convergence"}](fA6.pdf){width="\columnwidth"}
Construction of initial torus {#s:initial_condition_appendix}
=============================
Equilibrium torus without self-gravity {#s:initial_torus_ptmass}
--------------------------------------
As a starting point for the initial condition, we construct an equilibrium torus with constant entropy $s$, angular momentum, and composition $\mathbf{X}$. By solving the Bernoulli equation (e.g., @PP84), we obtain an expression for the specific enthalpy of the torus as a function of position, given a central mass $M_{\rm c}$, radius of density maximum in the torus $R_{\rm t}$ (set to the circularization radius of the tidally-disrupted white dwarf; Paper I), and a dimensionless ‘distortion parameter’ $d$ (which is a function of the torus entropy or $H/R$, see e.g., @stone1999) $$\label{eq:pp_general}
w(r,\theta) = \frac{GM_{\rm c}}{R_{\rm t}}\left[
\frac{R_{\rm t}}{r}-\frac{1}{2}\frac{R_{\rm t}^2}{(r\sin\theta)^2} - \frac{1}{2d}\right],$$ where $w = e_{\rm int} + p/\rho$ is the specific enthalpy of the fluid.
For fixed entropy and composition, there is also a one-to-one thermodynamic mapping between the enthalpy and density $w(\rho)|_{s,\mathbf{X}}$ from the equation of state. Inverting this function in combination with equation (\[eq:pp\_general\]) yields the mass of the torus after spatial integration. The limits of integration are obtained by setting the left-hand side to zero in equation (\[eq:pp\_general\]). An iteration is required to find the distortion parameter $d$ that yields the desired torus mass $M_{\rm t}$ \[which amounts to solving for the function $d(s)$\]. Note that the circularization radius (and thus $R_{\rm t}$) is a function of the torus mass and central object mass, hence $M_{\rm t}$ and $R_{\rm t}$ are not independent in this problem.
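The inversion loop described above can be sketched with a simple polytropic stand-in for the Helmholtz equation of state. The polytropic constant, grid, and bisection on $d$ at fixed $R_{\rm t}$ are illustrative; the actual construction ties $R_{\rm t}$ to the circularization radius, so $M_{\rm t}$ and $R_{\rm t}$ covary:

```python
import numpy as np

G, M_SUN = 6.674e-8, 1.989e33
M_c, R_t = 1.4 * M_SUN, 2e9     # central mass, radius of density maximum
gamma, K = 5.0 / 3.0, 6e11      # polytropic index and constant (illustrative)

def torus_mass(d, n=400):
    """Mass of the constant-angular-momentum torus of eq. (pp_general),
    inverting w = gamma/(gamma-1) K rho^(gamma-1) for the density."""
    r = np.geomspace(0.1 * R_t, 10 * R_t, n)
    th = np.linspace(1e-3, np.pi - 1e-3, n)
    dr, dth = np.gradient(r), th[1] - th[0]
    rr, tt = np.meshgrid(r, th, indexing="ij")
    w = (G * M_c / R_t) * (R_t / rr
                           - 0.5 * (R_t / (rr * np.sin(tt)))**2 - 0.5 / d)
    rho = ((gamma - 1) * np.maximum(w, 0.0) / (gamma * K))**(1 / (gamma - 1))
    # axisymmetric volume integral: dV = 2 pi r^2 sin(theta) dr dtheta
    return np.sum(rho * 2 * np.pi * rr**2 * np.sin(tt) * dr[:, None] * dth)

# bisect the distortion parameter d to reach a target torus mass
target, lo, hi = 0.6 * M_SUN, 1.01, 5.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if torus_mass(mid) < target else (lo, mid)
d = 0.5 * (lo + hi)
print(f"d ~ {d:.3f}, M_t/Msun = {torus_mass(d) / M_SUN:.4f}")
```

The bisection works because the mass is strictly monotonic in $d$: increasing $d$ raises $w$ everywhere, so the $w>0$ region and the density both grow.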
Figure \[f:pptorus\_helmholtz\] shows properties of these tori as a function of mass, for three different compositions. Our fiducial C/O WD of mass $M_{\rm wd}=0.6M_\odot$ with distortion parameter $d=1.5$ has an entropy of $3k_{\rm B}$ per baryon, a very small degree of electron degeneracy, and a small contribution of radiation to the total pressure. Helium WDs of the same mass and distortion parameter have higher entropy, lower contribution of radiation pressure, and higher degeneracy. Increasing the WD mass at constant distortion parameter decreases the entropy, decreases the contribution of radiation pressure, and increases electron degeneracy. Our fiducial ONe WD has very similar entropy and degeneracy level compared to the fiducial CO WD, but with a higher relative contribution from radiation to the total pressure.
Relaxation with self-gravity {#s:relax_sg}
----------------------------
We obtain a quiescent initial torus with self-gravity by evolving the initial torus solution obtained without self-gravity (§\[s:initial\_torus\_ptmass\]) for $20$ orbits without any other source terms. The torus undergoes radial and vertical oscillations as it adjusts to the new gravitational field, eventually reaching a new equilibrium configuration. Figure \[f:relax\_snapshots\] shows snapshots in the evolution of the fiducial $0.6M_\odot$ CO WD, illustrating the amplitude of these oscillations. The new radius of maximum density is $5\%$ smaller than the original, and the maximum density is a factor $1.6$ higher.
The relaxation process results in the ejection of some mass to large radii. Figure \[f:sg\_e-mass\] shows that about $2\%$ of the mass contained in material denser than $10^{-3}$ times the maximum density is redistributed to larger radii. The frequency of the oscillations is approximately the orbital frequency at the density maximum. Figure \[f:sg\_e-mass\] also shows the total energy of the torus, which undergoes oscillations of decreasing amplitude, eventually settling into a new equilibrium value. By the time we stop the relaxation, the amplitude of the oscillations has decreased to about $1\%$.
We also use this torus relaxation process to find the optimal Legendre index at which to truncate the multipole expansion (equation \[eq:multipole\_expansion\]). We perform the relaxation process over 20 orbits for our fiducial torus using different values of the maximum Legendre index $\ell_{\rm max}$, with a reference value of $12$ as recommended by @MuellerSteinmetz1995. Convergence is quantified by the radial position of the torus and its opening angle. We define these quantities as an average radius $\tilde r$, weighted by the angle-averaged density profile, and the opening polar angle $\Delta \tilde\theta$ from the equator (at a constant radius $r=2\times 10^9$cm) at which the density drops to $10^{-3}$ of its maximum value in the simulation. Figure \[f:lmax\_convergence\] shows the evolution of these two metrics as a function of time for different values of $\ell_{\rm max}$. The evolution is essentially converged after $\ell_{\rm max}=6$, with $\ell_{\rm max}=12$ and $24$ yielding results indistinguishable from each other. We therefore adopt $\ell_{\rm max}=12$ for all of our simulations.
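One possible implementation of these two convergence metrics is sketched below (the precise weighting conventions are a plausible reading of the definitions above, not necessarily the ones used to make the figure):

```python
import numpy as np

def torus_metrics(rho, r, theta, r_sample=2e9):
    """Density-weighted mean radius, and half-opening angle about the
    equator at which rho drops to 1e-3 of its maximum (at r = r_sample).
    rho[i, j] lives on (r, theta) cell centers."""
    w = np.sin(theta)                                # solid-angle weight
    rho_avg = (rho * w).sum(axis=1) / w.sum()        # angle-averaged profile
    r_tilde = np.sum(rho_avg * r) / np.sum(rho_avg)

    i_s = np.argmin(np.abs(r - r_sample))            # sampling radius
    j_eq = np.argmin(np.abs(theta - np.pi / 2))      # cell nearest equator
    thresh = 1e-3 * rho.max()
    dtheta = 0.0
    for j in range(j_eq, len(theta)):                # walk toward the pole
        if rho[i_s, j] < thresh:
            break
        dtheta = abs(theta[j] - np.pi / 2)
    return r_tilde, dtheta

# toy check with a Gaussian ring centred at r0 = 2e9 cm, angular width 0.3 rad
r = np.geomspace(2e8, 2e10, 128)
theta = np.linspace(0.05, np.pi - 0.05, 56)
rr, tt = np.meshgrid(r, theta, indexing="ij")
r0 = 2e9
rho = np.exp(-0.5 * ((rr - r0) / (0.2 * r0))**2
             - 0.5 * ((tt - np.pi / 2) / 0.3)**2)
r_tilde, dth = torus_metrics(rho, r, theta)
print(r_tilde / r0, dth)    # ~0.96 and ~1.1 for this toy profile
```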
![Absolute value of the mass accretion rate as a function of radius, averaged in time and within $30$deg of the equatorial plane, for selected models, as labeled. *Top:* ONe WD model with non-spinning BH at the center (large boundary: blue, small boundary: red), the dashed line indicates the position of the ISCO radius. *Bottom:* fiducial WD+NS model (large boundary: blue, small boundary: red). The dashed line is the same power-law fit ($\propto r^{0.71}$) to the accretion rate as in Figure \[f:accretion\_exponents\_rns\].[]{data-label="f:mdot_radius_appendix"}](fB1.pdf){width="\columnwidth"}
Long-term accretion rate at the central object {#s:accretion_central}
==============================================
Despite the fact that our fully global (“small boundary”) models cannot be evolved for long enough to obtain a reliable long-term measure of the accretion rate at the central object (to assess jet or fallback power, etc.), we can still estimate this quantity from our large boundary models by examining the radial behavior of the accretion rate, as shown in Figure \[f:mdot\_radius\_appendix\].
A general feature of large boundary models is that the placement of an outflow boundary condition at a radius where the flow is subsonic alters the behavior compared to what it would be had that boundary not been there. Figure \[f:mdot\_radius\_appendix\] shows that there is an increase in the accretion rate by a factor of a few as this boundary is approached, deviating from the power-law behavior at larger radii.
In the case of fully global BH models, the boundary is placed midway between the ISCO and the horizon, where the flow is supersonic and thus causally disconnected from that at larger radii. Figure \[f:mdot\_radius\_appendix\] shows that the ISCO results in an increase in the accretion rate similar to that obtained when placing the boundary further out, such that the value at the ISCO is essentially the same as that measured at the innermost active radius in the large boundary run. We therefore estimate the accretion rate onto the black hole, for Figure \[f:mdot\_LB\], as simply the value of the accretion rate at the smallest radius in the large boundary run.
When a NS sits at the center, the discrepancy in the accretion rate at the smallest radii between the large- and small-boundary runs is more significant. Nevertheless, we can estimate a reasonable value by measuring the accretion rate in the large boundary model at a radius where the power-law behavior still holds, and then extrapolating using the power-law exponent, as indicated in Figure \[f:mdot\_radius\_appendix\]. In Figure \[f:mdot\_LB\], the accretion rate for the two NS models is obtained by measuring it at $r=10^8$cm and applying a suppression factor $(10^{-2})^{0.7}$. This assumes that the radial exponent of the accretion rate remains constant in time, which is roughly satisfied.
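The extrapolation amounts to a one-line power-law scaling; the measured accretion rate below is a placeholder, not a simulation value:

```python
# Power-law extrapolation of the NS-surface accretion rate from a large-
# boundary run: measure Mdot at r = 1e8 cm and scale inward to r ~ 1e6 cm
# with the fitted exponent p ~ 0.7, i.e. a suppression factor (1e-2)^0.7.
r_meas, r_ns, p = 1e8, 1e6, 0.7
mdot_meas = 1e-3                          # Msun/s at r_meas (placeholder)
suppression = (r_ns / r_meas)**p
print(f"suppression factor = {suppression:.3f}")        # -> 0.040
print(f"Mdot(r_ns) ~ {mdot_meas * suppression:.1e} Msun/s")
```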
\[lastpage\]
[^1]: NASA Einstein Fellow
[^2]: The neutron star is assumed to be old enough that its internal neutrino flux has no effect on the disk evolution.
[^3]: This acceleration is a subset of standard geometric source terms for finite-volume hydrodynamic solvers in curvilinear coordinates (our reference frame is inertial).
[^4]: We choose this radius for sampling as a trade-off between being far enough away from the disk to avoid including convective eddies, while also sampling enough outflow given the finite simulation time.
[^5]: The eddy turnover time is of the order of the orbital time at each radius, as inferred from the r.m.s. fluctuation of the meridional velocity.
[^6]: At late times, the burning fronts are expected to recede (MM16) which should increase the average speed of ejecta again. However, this is not expected to be a dominant contribution to the total ${}^{56}$Ni mass ejected.
---
author:
- |
Nancy Lynch\
MIT\
lynch@csail.mit.edu
- |
Cameron Musco\
MIT\
cnmusco@mit.edu
- |
Merav Parter\
MIT\
parter@mit.edu
bibliography:
- 'wta.bib'
title: |
Computational Tradeoffs in Biological Neural Networks:\
Self-Stabilizing Winner-Take-All Networks
---
### Acknowledgments {#acknowledgments .unnumbered}
We are grateful to Mohsen Ghaffari for noting the general upper bound network construction and for many helpful discussions on the lower bound proof. We would also like to thank Nir Shavit, Rati Gelashvili, and Sergio Rajsbaum for insightful discussions.
Additional Discussion
=====================
Biological Motivation for Network Dynamics {#append:bio}
------------------------------------------
Missing Proofs and Auxiliary Claims {#sec:missing}
===================================
Throughout, we make use of the following Corollary of the Chernoff bound.
\[thm:simplecor\] Suppose $X_1$, $X_2$, …, $X_\ell \in [0,1]$ are independent random variables. Let $X=\sum_{i=1}^{\ell} X_i$ and $\mu = \mathbb{E}[X]$. If $\mu \geq 5 \log n$, then w.h.p. $X \in \mu \pm \sqrt{5\mu\log n}$, and if $\mu < 5 \log n$, then w.h.p. $X \leq \mu +5\log n$.
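A quick Monte Carlo sanity check of the first branch of this corollary for Bernoulli variables (the parameter choices $n$, $\ell$, $q$ are illustrative, with natural logarithms):

```python
import math, random

# With mu >= 5 log n, the corollary says |X - mu| <= sqrt(5 mu log n) w.h.p.
# The deviation bound here is ~6 standard deviations, so violations are rare.
random.seed(0)
n, ell, q, trials = 1000, 2000, 0.1, 2000
mu = ell * q                                  # 200 >= 5 ln(1000) ~ 34.5
dev = math.sqrt(5 * mu * math.log(n))         # ~83.1
violations = sum(
    abs(sum(random.random() < q for _ in range(ell)) - mu) > dev
    for _ in range(trials))
print(violations / trials)                    # typically 0.0
```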
WTA with Two Inhibitors {#append:twoi}
-----------------------
WTA with One Inhibitor {#appenx:oneinhib}
----------------------
WTA with $O(\log n)$ Inhibitors {#appenx:logn}
-------------------------------
WTA with $\alpha \ge 2$ Inhibitors {#appenx:alpha}
----------------------------------
Missing Proofs for Main Lower Bound (Theorem \[thm:lbzaconst\]) {#appenx:lb}
---------------------------------------------------------------
### Inhibitors are Nearly Deterministic for Most Density Classes {#append:det}
### Detailed Description of the Prediction Process {#Append:LBEDetailed}
Complete Description for High Probability Lower Bound (Lemma \[lem:highprobfromcprob\]) {#appenx:lbhp}
---------------------------------------------------------------------------------------
Extension to Excitatory Auxiliary Neurons {#sec:excitat}
=========================================
---
abstract: 'We present a general framework and procedure to derive uncertainty relations for observables of quantum systems in a covariant manner. All such relations are consequences of the positive semidefiniteness of the density matrix of a general quantum state. Particular emphasis is given to the action of unitary symmetry operations of the system on the chosen observables, and the covariance of the uncertainty relations under these operations. The general method is applied to the case of an $n$-mode system to recover the $Sp(2n,\,R)$-covariant multi mode generalization of the single mode Schrödinger-Robertson Uncertainty Principle; and to the set of all polynomials in canonical variables for a single mode system. In the latter situation, the case of the fourth order moments is analyzed in detail, exploiting covariance under the homogeneous Lorentz group $SO(2,\,1)$ of which the symplectic group $Sp(2,\,R)$ is the double cover.'
author:
- 'J. Solomon Ivan'
- Krishna Kumar Sabapathy
- 'N. Mukunda'
- 'R. Simon'
title: Invariant theoretic approach to uncertainty relations for quantum systems
---
Introduction
============
It is a well known historical fact that the 1925–1926 discoveries of two equivalent mathematical formulations of quantum mechanics—Heisenberg’s matrix form followed by Schrödinger’s wave mechanical form—preceded the development of a physical interpretation of these formalisms[@sch-hei]. The first important ingredient of the conventional interpretation was Born’s 1926 identification of the squared modulus of a complex Schrödinger wavefunction as a probability[@born26]. The second ingredient developed in 1927 was Heisenberg’s Uncertainty Principle (UP)[@heisenbergup]. To these may be added Bohr’s Complementarity Principle which has a more philosophical flavour[@bohrcp].
Heisenberg’s original derivation of his position-momentum UP combined the formula for the resolving power of an optical microscope extrapolated to a hypothetical gamma ray microscope, with the energy and momentum relations for a single photon, in analysing the inherent limitations in simultaneous determinations of the position and momentum of an electron. His result indicated the limits of applicability of classical notions, in particular the spatial orbit of a point particle, in quantum mechanics.
More formal mathematical derivations of the UP, using the Born probability interpretation, soon followed. Prominent among them are the treatments of Kennard, Schrödinger, and Robertson[@uncer0]. Such a derivation was also presented by Heisenberg in his 1930 Chicago lectures[@heisenberg-chicago].
The Heisenberg position-momentum UP is basically kinematical in nature. In contrast, the Bohr UP for time and energy involves quantum dynamics in an essential manner[@bohr-te-up]. Later work on the UP has introduced a wide variety of ideas[@ideas] and interpretations of the fluctuations or the uncertainties involved[@interpretUP], such as in entropic[@EUP] and other formulations[@otherUP].
Even for a one-dimensional quantum system, the Schrödinger-Robertson form of the UP displays more invariance than the Heisenberg form. Thus while the latter is invariant only under reciprocal scalings of position and momentum, and their interchange amounting to Fourier transformation, the former is invariant under the three-parameter Lie group $Sp(2,\,R)$ of linear canonical transformations. Fourier transformation, as well as reciprocal scalings, belong to $Sp(2,\,R)$[@simon88]. The generalisation of the Schrödinger-Robertson UP to any finite number, $n$, of degrees of freedom displays invariance under the group $Sp(2n,\,R)$[@dutta94].
The purpose of this paper is to outline an invariant theoretic approach to general uncertainty relations for quantum systems. It combines a recapitulation and reexpression of some past results[@moments2012] with some new ones geared to practical applications. The analysis throughout is in the spirit of the Schrödinger-Robertson treatment, and, in particular, our considerations do not cover the entropic type uncertainty relations. All our considerations will be kinematical in nature.
The material of this paper is presented as follows. Section II sets up a general framework and procedure for deriving consequences of the positive semidefiniteness of the density matrix of a general quantum state, for the expectation values and fluctuations of a chosen (linearly independent) set of observables for the system. This has the form of a general uncertainty relation. A natural way to separate the expressions entering it into a symmetric fluctuation part, and an antisymmetric part contributed by commutators among the observables, hence specifically quantum in origin, is described. With respect to any unitary symmetry operation associated with the system, under which the chosen observables transform in a suitable manner, the uncertainty relation is shown to transform covariantly and to be preserved in content. In Section III this general framework is applied to the case of a quantum system involving $n$ Cartesian canonical Heisenberg pairs, i.e., an $n$-mode system; and to the fluctuations in canonical ‘coordinates’ and ‘momenta’ in any state. The resulting $n$-mode generalization of the original Schrödinger-Robertson UP is seen to be explicitly covariant under the group $Sp(2n,\,R)$ of linear homogeneous canonical transformations. Section IV returns to the single mode system, but considers as the system of observables the infinite set of operator polynomials of all orders in the two canonical variables. The treatment is formal to the extent that unbounded operators are involved. An important role is played by the set of all finite-dimensional real nonunitary irreducible representations of the covariance group $Sp(2,\,R)$. We follow in spirit the structure of the basic theorems in the classical theory of moments. Thus the formal infinite-dimensional matrix uncertainty relation is reduced to a nested sequence of finite-dimensional requirements, of steadily increasing dimensions. 
While this case has been treated elsewhere[@moments2012], some of the subtler aspects are now carefully brought out. In this and the subsequent Sections the method of Wigner distributions is used as an extremely convenient technical tool. Section V treats in more detail the uncertainty relations of Section IV that go one step beyond the original Schrödinger-Robertson UP. Here all the fourth order moments of the canonical variables in a general state are involved. Their fully covariant treatment brings in the defining and some other low dimensional representations of the three-dimensional Lorentz group $SO(2,\,1)$. It is shown that the uncertainty relations (to the concerned order) are all expressible in terms of $SO(2,\,1)$ invariants. In Section VI we describe an interesting aspect of the Schrödinger-Robertson UP in the light of three-dimensional Lorentz geometry, which becomes particularly apparent through the use of Wigner distribution methods. We argue that this should generalise to the conditions on fourth (and higher) order moments as well. The paper ends with some concluding remarks in Section VII.
General Framework
=================
We consider a quantum system with associated Hilbert space ${\cal H}$, state vectors $|\psi \rangle$, $|\phi \rangle$, $\cdots$ and inner product $\langle \phi | \psi \rangle$ as usual. A general (mixed) state is determined by a density operator or density matrix $\hat{\rho}$ acting on ${\cal H}$ and obeying $$\begin{aligned}
\hat{\rho}^{\dagger} =\hat{\rho} \geq 0,\,\,\,\,\,{\rm Tr}\,\hat{\rho}=1.
\label{2.1}\end{aligned}$$ Then ${\rm Tr}\,\hat{\rho}^2=1$ or $< 1$ distinguishes between pure and mixed states. Any hermitian observable $\hat{A}$ of the system possesses the expectation value $$\begin{aligned}
\langle \hat{A} \rangle = {\rm Tr}\,(\hat{\rho}\,\hat{A})
\label{2.2}\end{aligned}$$ in the state $\hat{\rho}$, the dependence of the left hand side on $\hat{\rho}$ being generally left implicit.
We now set up a general method that allows us to draw out the consequences of the nonnegativity of $\hat{\rho}$ in a systematic manner. This, along with two elementary lemmas, will be the basis of our considerations.
Let $\hat{A}_{a}$, $a=1,\,2,\,\cdots,\,N$ be a set of $N$ [*linearly*]{} independent [*hermitian*]{} operators, each representing some observable of the system. We set up two formal $N$-component and $(N+1)$-component column vectors with hermitian operator entries as follows: $$\begin{aligned}
\hat{A}=\left(
\begin{array}{c}
\hat{A}_{1} \\
\vdots \\
\vdots \\
\hat{A}_{N}
\end{array}
\right),
\,\,\,\,\,
\hat{{\cal A}}=\left( \begin{array}{c} 1 \\ \hat{A} \end{array}\right)
= \left(
\begin{array}{c}
1\\
\hat{A}_{1} \\
\vdots \\
\vdots \\
\hat{A}_{N}
\end{array}
\right).
\label{2.3}\end{aligned}$$ From $\hat{{\cal A}}$ we construct a square $(N+1)$-dimensional ‘matrix’ with operator entries as $$\begin{aligned}
\hat{\Omega} = \hat{{\cal A}}\hat{{\cal A}}^{T} =
\left(\begin{array}{ccccc}
1 & \cdots & \cdots & \hat{A}_{b} & \cdots \\
\vdots &&&\vdots&\\
\hat{A}_{a}&\cdots & \cdots & \hat{A}_{a}\hat{A}_{b} & \cdots \\
\vdots &&&\vdots&
\end{array}
\right).
\label{2.4}\end{aligned}$$ Since $(\hat{A}_{a}\hat{A}_{b} )^{\dagger}= \hat{A}_{b}\hat{A}_{a}$, $\hat{\Omega}$ is ‘hermitian’ in the following sense: taking the operator hermitian conjugate of each element and then transposing the rows and columns leaves $\hat{\Omega}$ unchanged. In a state $\hat{\rho}$ we then have an $(N+1)$-dimensional numerical hermitian matrix $\Omega$ of the expectation values of the elements of $\hat{\Omega}$: $$\begin{aligned}
\Omega = \langle \hat{\Omega} \rangle &=& {\rm
Tr}(\hat{\rho}\,\hat{\Omega})=
\left(\begin{array}{ccccc}
1 & \cdots & \cdots & \langle \hat{A}_{b}\rangle & \cdots \\
\vdots &&&\vdots&\\
\langle \hat{A}_{a}\rangle &\cdots & \cdots &
\langle \hat{A}_{a}\hat{A}_{b}\rangle & \cdots \\
\vdots &&&\vdots&
\end{array}
\right), \nonumber \\
{\rm i.e.}, ~ \Omega_{ab} &=& {\rm Tr}(\hat{\rho}\,\hat{\Omega}_{ab})\,;\nonumber\\
\Omega^{\dagger}&=& \Omega.
\label{2.5}\end{aligned}$$ Now for any complex $(N+1)$-component column vector ${\bf C}=
(c_0,\,c_1,\, \cdots, \, c_N)^T$ we have $$\begin{aligned}
{\bf C}^{\dagger}\,\hat{\Omega}\,{\bf C}&=&
{\bf C}^{\dagger}\hat{{\cal
A}}\,({\bf C}^{\dagger}\hat{{\cal A}})^{\dagger}\geq 0, \nonumber \\
\langle {\bf C}^{\dagger}\,\hat{\Omega}\,{\bf C} \rangle
&=& {\bf C}^{\dagger}\,{\Omega}\,{\bf C} \geq 0,
\label{2.6}\end{aligned}$$ leading immediately to:
Positivity of $\hat{\rho}$ imputes positivity to the matrix $\Omega$, for every choice of $\hat{{\cal A}}$: [$$\hat{\rho}\geq 0 \Rightarrow \Omega = \langle \hat{\Omega} \rangle = {\rm
Tr}(\hat{\rho}\,\hat{\Omega})\geq 0\,, ~~ \forall \,\hat{\cal A}\,.
\label{2.7}$$]{}
This is thus an uncertainty relation valid in every physical state $\hat{\rho}$.
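As a concrete numerical illustration of Theorem 1 (our own sketch, not part of the original argument), one may take a qubit, with the three Pauli matrices as the hermitian set $\{\hat{A}_{a}\}$, $N=3$; the particular mixed state and tolerances below are arbitrary choices:

```python
import numpy as np

# Theorem 1 for a qubit (illustrative example): the hermitian set {A_a}
# is taken to be the three Pauli matrices, so N = 3 and Omega is 4 x 4.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
calA = [np.eye(2, dtype=complex), sx, sy, sz]   # the column (1, A_1, ..., A_N)

# A mixed state rho = (1/2)(I + r.sigma) with |r| < 1, hence rho >= 0.
r = (0.3, -0.2, 0.5)
rho = 0.5 * (np.eye(2) + r[0] * sx + r[1] * sy + r[2] * sz)

# Omega_{ab} = Tr(rho A_a A_b), the (N+1)-dimensional matrix of Eq. (2.5).
Omega = np.array([[np.trace(rho @ Oa @ Ob) for Ob in calA] for Oa in calA])

assert np.allclose(Omega, Omega.conj().T)        # Omega is hermitian
assert np.linalg.eigvalsh(Omega).min() > -1e-12  # and nonnegative
```

The top-left entry is ${\rm Tr}\,\hat{\rho}=1$, and the first row and column carry the means $\langle \hat{A}_{a} \rangle$, as in Eq.(\[2.5\]).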
[**Remark**]{}: It is for the sake of definiteness and keeping in view the ensuing applications that we have assumed the entries $\hat{A}_{a}$ of $\hat{\cal A}$ and $\hat{A}$ to be all hermitian. This can be relaxed and each $\hat{A}_{a}$ can be a general linear operator pertinent to the system. The only change would be the replacement of $\hat{\cal A}^{T}$ in Eq.(\[2.4\]) by $\hat{\cal A}^{\dagger}$, leading to a result similar to Theorem 1.
Depending on the basic kinematics of the system we can imagine various choices of the $\hat{A}_{a}$ geared to exhibiting corresponding symmetries or covariance properties of the uncertainty relation (\[2.7\]). Specifically suppose there is a unitary operator $\overline{U}$ on ${\cal H}$ such that under conjugation the $\hat{A}_{a}$ go into (necessarily real) linear combinations of themselves: $$\begin{aligned}
\overline{U}\,\overline{U}^{\,\dagger}&=&
\overline{U}^{\,\dagger}\,\overline{U}=1\!\!1,
\nonumber \\
\overline{U}^{\,-1}\,\hat{A}_{a}\,\overline{U}&=& R_{ab}\,\hat{A}_{b}, \nonumber
\\
\overline{U}^{\,-1}\,\hat{\cal A}\,\overline{U}&=& {\cal R} \,\hat{\cal A},
\nonumber \\
{\cal R}&=& \left(
\begin{array}{cc}
1 & 0 \\
0 & R
\end{array}
\right), \,\,\,\, R=(R_{ab}).
\label{2.8}\end{aligned}$$ The matrix $R$ here is real $N$-dimensional nonsingular. Then combined with Eq.(\[2.5\]) we have: $$\begin{aligned}
\hat{\rho}^{\, \prime} = \overline{U}\,\hat{\rho}\,\overline{U}^{\,-1} \Rightarrow \Omega^{\, \prime}
&=&{\rm Tr}(\hat{\rho}^{\, \prime}\,\hat{\Omega})={\rm
Tr}(\hat{\rho}\,\overline{U}^{\,-1}\,\hat{\cal A}\,\hat{\cal A}^{T}\,\overline{U})
\nonumber \\
&=& {\cal R} \,\Omega \,{\cal R}^{T}, \nonumber \\
\Omega \geq 0 &\Leftrightarrow& \Omega^{\, \prime} \geq 0.
\label{2.9}\end{aligned}$$ This is because the passage $\Omega \rightarrow \Omega^{\, \prime}$ is a congruence transformation. Thus the uncertainty relation (\[2.7\]) is covariant or explicitly preserved under the conjugation of the state $\hat{\rho}$ by the unitary transformation $\overline{U}$.
We now introduce two lemmas concerning (finite-dimensional) nonnegative matrices, whose proofs are elementary:
For a hermitian positive definite matrix in block form, $$\begin{aligned}
Q=Q^{\dagger}= \left( \begin{array}{cc}
A & C^{\dagger} \\
C & B
\end{array} \right),\,\,\,A^{\dagger}=A\,,\;\,\,B^{\dagger}= B,
\label{2.10}\end{aligned}$$ we have $$\begin{aligned}
Q > 0 \,\, \Leftrightarrow \,\, A > 0\,\,\,{\rm
and}\,\,\,B-C\,A^{-1}C^{\dagger} >0.
\label{2.11}\end{aligned}$$
The proof consists in noting that by a congruence we can pass from $Q$ to a block diagonal form[@hjbook]: $$\begin{aligned}
Q = \left( \begin{array}{cc}
1\!\!1 & 0 \\
CA^{-1} & 1\!\!1
\end{array} \right)
\left( \begin{array}{cc}
A & 0 \\
0 & B- C A^{-1}C^{\dagger}
\end{array} \right)
\left( \begin{array}{cc}
1\!\!1 & 0 \\
CA^{-1} & 1\!\!1
\end{array} \right)^{\dagger}.
\label{2.12}\end{aligned}$$
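Lemma 1 is easily probed numerically; the following sketch (a randomly generated positive definite $Q$ and block sizes of our own choosing) confirms that both conditions of Eq.(\[2.11\]) hold:

```python
import numpy as np

# Numerical check of Lemma 1: for a hermitian positive definite Q in block
# form, both A > 0 and the Schur complement B - C A^{-1} C^dagger > 0.
# The matrix and the block sizes below are arbitrary illustrative choices.
rng = np.random.default_rng(0)
M = rng.normal(size=(5, 5)) + 1j * rng.normal(size=(5, 5))
Q = M @ M.conj().T + 0.1 * np.eye(5)      # hermitian and positive definite
assert np.linalg.eigvalsh(Q).min() > 0

k = 2                                      # A is k x k, B is (5-k) x (5-k)
A, Cdag = Q[:k, :k], Q[:k, k:]
C, B = Q[k:, :k], Q[k:, k:]

assert np.linalg.eigvalsh(A).min() > 0                 # A > 0
S = B - C @ np.linalg.inv(A) @ Cdag                    # Schur complement
assert np.linalg.eigvalsh(S).min() > 0                 # S > 0
```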
If we separate a hermitian matrix $Q$ into real symmetric and pure imaginary antisymmetric parts $R,\,S$ then $$\begin{aligned}
Q=Q^{\dagger} = R + iS\geq 0, \,\,{\rm det}\,S \not= 0 \,\Rightarrow R
> 0.
\label{2.13}\end{aligned}$$
The nonsingularity of $S$ means that $Q$ must be even dimensional. (The proof, which is elementary, is omitted).
Now we apply Lemma 1 to the $(N+1)$-dimensional matrix $\Omega$ in Eq.(\[2.5\]), choosing a partitioning where $B$ is $N \times N$, $C$ is $N \times 1$ and $C^{\dagger}$ is $1 \times N$: $$\begin{aligned}
\Omega = \left( \begin{array}{cc}
A & C^{\dagger} \\
C & B
\end{array} \right):\,\, A=1,\,\,B=(\langle
\hat{A}_{a}\hat{A}_{b}\rangle),\,\,\,C=(\langle \hat{A}_{a} \rangle).
\label{2.14}\end{aligned}$$ Then from Eq.(\[2.11\]) we conclude:
$$\begin{aligned}
\hat{\rho}\geq 0 &\Rightarrow & \Omega \geq 0 \Leftrightarrow
\nonumber \\
\tilde{\Omega}&=& (\langle(\hat{A}_{a} - \langle\hat{A}_{a}
\rangle)(\hat{A}_{b}- \langle \hat{A}_{b} \rangle) \rangle ) \geq 0.
\label{2.15}\end{aligned}$$
All expectation values involved in the elements of the $N\times N$ matrix $\tilde{\Omega}$ are with respect to the state $\hat{\rho}$.
The motivation for the definitions of $\hat{\cal A}$, $\hat{\Omega}$ as in Eq.(\[2.3\]) is now clear: after an application of Lemma 1 we immediately descend from the matrix $\Omega$ to the matrix $\tilde{\Omega}$ involving only expectation values of products of deviations from means. It is then natural to write the elements of $\tilde{\Omega}$ as follows: $$\begin{aligned}
\Delta \hat{A}_{a}&=& \hat{A}_{a}- \langle \hat{A}_{a} \rangle,
\nonumber \\
\tilde{\Omega}_{ab}&=& \langle\Delta \hat{A}_{a}\Delta \hat{A}_{b} \rangle.
\label{2.16}\end{aligned}$$ We revert to this form shortly.
The covariance of the statement (\[2.15\]), Theorem 2, under a unitary symmetry $\overline{U}$ acting as in Eq.(\[2.8\]) follows from a brief calculation: $$\begin{aligned}
\hat{\rho} \rightarrow \hat{\rho}^{\, \prime} = \overline{U} \hat{\rho}\,\overline{U}^{\,-1}
& \Rightarrow & \nonumber \\
\overline{U}^{\,-1} (\hat{A}_{a} - {\rm Tr}(\hat{\rho}^{\, \prime} \hat{A}_{a}))\overline{U}
&=&\overline{U}^{\,-1} \hat{A}_{a} \overline{U} -{\rm
Tr}(\hat{\rho}\,\overline{U}^{\,-1}\hat{A}_{a} \overline{U}) \nonumber \\
&=& R_{ab}(\hat{A}_{b}- {\rm Tr}(\hat{\rho}\hat{A}_{b})); \nonumber \\
\tilde{\Omega}\rightarrow \tilde{\Omega}^{\, \prime}&=& R \tilde{\Omega} R^{T};
\nonumber \\
\tilde{\Omega} \geq 0 & \Leftrightarrow & \tilde{\Omega}^{\, \prime} \geq 0.
\label{2.17}\end{aligned}$$
We now return to Eq.(\[2.16\]). The state $\hat{\rho}$ being kept fixed, we can split the hermitian $N \times N$ matrix $\tilde{\Omega}$ into real symmetric and pure imaginary antisymmetric parts as follows: $$\begin{aligned}
\tilde{\Omega}_{ab} &=& V_{ab}(\hat{\rho}\,;\,\hat{A})+
\frac{i}{2}\,\omega_{ab}(\hat{\rho}\,; \,\hat{A}), \nonumber \\
V_{ab}(\hat{\rho}\,;\,\hat{A})&=&V_{ba}(\hat{\rho}\,;\,\hat{A})=
\frac{1}{2}\langle\{\Delta \hat{A}_{a}, \, \Delta \hat{A}_{b} \}
\rangle \nonumber \\
&=& \frac{1}{2}\langle\{ \hat{A}_{a}, \, \hat{A}_{b} \}
\rangle - \langle \hat{A}_{a}\rangle \langle \hat{A}_{b} \rangle;
\nonumber \\
\omega_{ab}(\hat{\rho}\,;\,\hat{A})&=&- \omega_{ba}(\hat{\rho}\,;\,\hat{A})=
\,- \, i\, \langle [ \hat{A}_{a}, \, \hat{A}_{b}] \rangle .
\label{2.18}\end{aligned}$$ The brackets $[\,\cdot\,,\,\cdot\,]$ and $\{\,\cdot\,,\,\cdot\,\}$ denote, as usual, the commutator and anticommutator respectively. The natural physical identification of the $N \times N$ real symmetric matrix $V(\hat{\rho}\,;\,\hat{A})=(V_{ab}(\hat{\rho}\,;\,\hat{A}))$ is that it is the variance matrix (or matrix of covariances) associated with the set $\{\hat{A}_{a}
\}$ in the state $\hat{\rho}$. The uncertainty relation (\[2.15\]) now reads: $$\begin{aligned}
\hat{\rho}\geq 0 \,\,\Rightarrow \,\,V(\hat{\rho}\,;\,\hat{A})+
\frac{i}{2}\,\omega(\hat{\rho}\,;\,\hat{A}) \geq 0,
\label{2.19}\end{aligned}$$ and then by Lemma 2 we have the possible further consequence: $$\begin{aligned}
{\rm det}\,\omega(\hat{\rho}\,;\,\hat{A}) \not= 0 \Rightarrow
V(\hat{\rho}\,;\,\hat{A}) > 0\,.
\label{2.20}\end{aligned}$$
[**Remark**]{}: In case the operators $\hat{A}_{a}$ commute pairwise, in any state $\hat{\rho}$ there is a ‘classical’ joint probability distribution over the sets of simultaneous eigenvalues of all the $\hat{A}_{a}$. In such a case, the term $\omega$ in Eqs.(\[2.18\],\[2.19\]) vanishes identically, and the uncertainty relation (\[2.19\]) is a ‘classical’ statement[@fine-jmp82]. Therefore in the general case a good name for $\omega_{ab}(\hat{\rho}\,; \,\hat{A})$ is that it is the ‘commutator correction’ term.
It is instructive to appreciate that while the original definitions of $\Omega$ and $\tilde{\Omega}$, starting from the operator sets $\hat{\cal A}$ and $\hat{\Omega}$, make it essentially trivial to see that they must be nonnegative, the form (\[2.19\]) of the general uncertainty relation gives prominence to the variance matrix $V(\hat{\rho}\,;\,\hat{A})$. In addition, as seen earlier, the matrix $\Omega$ does not directly deal with fluctuations. It is after the use of Lemma 1 that we obtain the matrix $\tilde{\Omega}$ involving the fluctuations.
From Eqs.(\[2.8\], \[2.17\]), the effect of a unitary symmetry transformation on the real matrices $V(\hat{\rho}\,;\, \hat{A})$ and $\omega(\hat{\rho}\,;\, \hat{A})$ is seen to be: $$\begin{aligned}
\hat{\rho}^{\, \prime}=\overline{U}\,\hat{\rho}\,\overline{U}^{\,-1}\,: &&~
V(\hat{\rho}^{\, \prime}\,;\,\hat{A})=R\,V(\hat{\rho}\,;\, \hat{A})\, R^{T},
\nonumber \\
&&~\omega(\hat{\rho}^{\, \prime}\,;\,\hat{A})=R\,\omega(\hat{\rho}\,; \hat{A})\, R^{T},
\label{2.21}\end{aligned}$$ so that the form (\[2.19\]) of the uncertainty relation is manifestly preserved.
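The covariance law (\[2.21\]) can also be verified numerically for a qubit; in the sketch below (all choices ours), $\overline{U}$ is a rotation about the $z$-axis, the set $\{\hat{A}_{a}\}$ consists of the three Pauli matrices, and $R$ is the real $3\times 3$ matrix defined by $\overline{U}^{\,-1}\hat{A}_{a}\overline{U}=R_{ab}\hat{A}_{b}$:

```python
import numpy as np

# Probing Eq. (2.21) for a qubit: V and omega transform by congruence with
# the real matrix R induced by conjugation with Ubar = exp(-i phi sz / 2).
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
A = [sx, sy, sz]

phi = 0.7
U = np.diag([np.exp(-1j * phi / 2), np.exp(1j * phi / 2)])  # exp(-i phi sz/2)
c, s = np.cos(phi), np.sin(phi)
R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])  # U^-1 A_a U = R_ab A_b

rho = 0.5 * (np.eye(2) + 0.3 * sx + 0.5 * sz)               # a mixed state

def V_omega(rho):
    m = [np.trace(rho @ a).real for a in A]
    V = np.array([[0.5 * np.trace(rho @ (A[i] @ A[j] + A[j] @ A[i])).real
                   - m[i] * m[j] for j in range(3)] for i in range(3)])
    w = np.array([[(-1j * np.trace(rho @ (A[i] @ A[j] - A[j] @ A[i]))).real
                   for j in range(3)] for i in range(3)])
    return V, w

V, w = V_omega(rho)
Vp, wp = V_omega(U @ rho @ U.conj().T)                      # rho' = U rho U^-1
assert np.allclose(Vp, R @ V @ R.T)
assert np.allclose(wp, R @ w @ R.T)
```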
In what follows, when there is no danger of confusion, we sometimes omit the arguments $\hat{\rho}$ and $\hat{A}$ in $V$ and $\omega$.
The multi mode Schrödinger-Robertson Uncertainty Principle
==========================================================
As a first example of the general framework we consider briefly the Schrödinger-Robertson UP for an $n$-mode system, which has been extensively discussed elsewhere[@dutta94; @gaussians].
The basic operators, Cartesian coordinates and momenta, consist of $n$ pairs of canonical $\hat{q}$ and $\hat{p}$ variables obeying the Heisenberg canonical commutation relations. The operator properties and relations are: $$\begin{aligned}
a=1,\,2,\,\cdots,\,2n\,: \;\; \hat{\xi}_{a} &=& \left\{ \begin{array}{cc}
\hat{q}_{(a+1)/2}, \;& a ~ {\rm odd}\,, \\
\hat{p}_{a/2},\; & a ~ {\rm even}\,;
\end{array} \right. \nonumber \\
\hat{\xi}_{a}^{\dagger} &=& \hat{\xi}_{a} \,; \nonumber \\
\left[ \hat{\xi}_{a},\,\hat{\xi}_{b} \right] &=& i \hbar \beta_{ab}\,,\nonumber\\
\beta = \,{\rm block ~ diag}\,(\,i\sigma_2,\,i\sigma_2,\,\cdots,\,i\sigma_2\,) &=&
{1\!\!1}_{n \times n} \otimes i\sigma_2\,.
\label{3.1}\end{aligned}$$ These operators act irreducibly on the system Hilbert space ${\cal
H}=L^{2}(\mathbb{R}^n)$.
We take these $\hat{\xi}_{a}$ as the $\hat{A}_{a}$ of Eq.(\[2.3\]), so here $N=2n$: $$\begin{aligned}
\hat{{\cal A}} \rightarrow
\left( \begin{array}{c} 1 \\ \hat{\xi} \end{array} \right),\,\,\,
\hat{A}\rightarrow \hat{\xi}
=\left( \begin{array}{c}
\hat{\xi}_{1} \\ \vdots \\\hat{\xi}_{2n} \end{array}\right)
=\left( \begin{array}{c}
\hat{q}_{1} \\ \hat{p}_{1}\\ \vdots \\ \hat{q}_{n} \\ \hat{p}_{n} \end{array}\right).
\label{3.2}\end{aligned}$$ Then for any state $\hat{\rho}$, the variance matrix $V$ has elements $$\begin{aligned}
V_{ab}&=& \frac{1}{2}\,{\rm Tr} \left(\hat{\rho}\,\{\hat{\xi}_{a}-{\rm
Tr}(\hat{\rho}\,\hat{\xi}_{a}),\, \hat{\xi}_{b} - {\rm
Tr}(\hat{\rho}\,\hat{\xi}_{b}) \}\right) \nonumber \\
&=&\frac{1}{2}\langle\{ \hat{\xi}_{a},\, \hat{\xi}_{b}\} \rangle -
\langle \hat{\xi}_{a} \rangle\,\langle \hat{\xi}_{b} \rangle,
\label{3.3}\end{aligned}$$ while the antisymmetric matrix $\omega$ is just the [*state-independent*]{} numerical ‘symplectic metric matrix’ $\beta$: $$\begin{aligned}
\omega_{ab}= -i \, \langle\left[\hat{\xi}_{a},\,\hat{\xi}_{b}
\right] \rangle = \hbar \beta_{ab}.
\label{3.4}\end{aligned}$$ The uncertainty relation (\[2.19\]) then becomes the $n$-mode Schrödinger-Robertson UP: $$\begin{aligned}
\hat{\rho}\geq 0 \Rightarrow V + i \frac{\hbar}{2} \beta \geq
0\,\,(\Rightarrow V > 0),
\label{3.5}\end{aligned}$$ the second step following from Eq.(\[2.20\]) as $\beta$ is nonsingular.
For $n=1$, a single mode, the matrices $V$ and $\beta$ are two-dimensional: $$\begin{aligned}
&& V = \left( \begin{array}{cc}
(\Delta q)^2 & \Delta (q,p) \\
\Delta (q,p) & (\Delta p)^2
\end{array} \right), \nonumber \\
&& (\Delta q)^2 = \langle (\hat{q} -\langle \hat{q} \rangle)^2
\rangle,\,\,\,\,
(\Delta p)^2 = \langle (\hat{p} -\langle \hat{p} \rangle)^2\rangle, \nonumber
\\
&& \Delta (q,p) = \frac{1}{2}\langle \{\hat{q}-\langle \hat{q} \rangle,\,
\hat{p} - \langle \hat{p} \rangle \} \rangle\,; \nonumber \\
&& \beta = i \sigma_{2} = \left(\begin{array}{cc} 0 & 1 \\ -1 & 0
\end{array} \right).
\label{3.6}\end{aligned}$$ Then (\[3.5\]) simplifies to $$\begin{aligned}
\left( \begin{array}{cc}
(\Delta q)^2 & \Delta (q,p)+\frac{i}{2} \hbar \\
\Delta (q,p)-\frac{i}{2}\hbar & (\Delta p)^2
\end{array} \right) &\geq& 0,\nonumber\\
i.e., \; {\rm det}\left( V + \frac{i}{2}\hbar \beta \right)\equiv
(\Delta q)^2 (\Delta p)^2 -(\Delta(q,p))^2 -\frac{\hbar^2}{4} &\geq& 0,\nonumber\\
i.e., \; {\rm det}\,V &\geq& \frac{\hbar^2}{4}\,,
\label{3.7}\end{aligned}$$ the original Schrödinger-Robertson UP.
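The bound (\[3.7\]) is saturated by the oscillator ground state. A quick numerical sketch (our own, with $\hbar=1$ and a truncated Fock basis; the truncation introduces no error here since only low Fock states are reached) makes this explicit:

```python
import numpy as np

# Saturation of det V >= hbar^2/4 by the oscillator ground state |0>,
# computed in a truncated Fock basis (illustrative choices: hbar = 1, d = 40).
hbar, d = 1.0, 40
a = np.diag(np.sqrt(np.arange(1, d)), k=1)        # annihilation operator
q = np.sqrt(hbar / 2) * (a + a.conj().T)
p = 1j * np.sqrt(hbar / 2) * (a.conj().T - a)

psi = np.zeros(d); psi[0] = 1.0                   # ground state |0>
def ev(op): return np.real(psi @ op @ psi)        # expectation value in |0>

V = np.array([[ev(q @ q) - ev(q)**2,
               0.5 * ev(q @ p + p @ q) - ev(q) * ev(p)],
              [0.5 * ev(q @ p + p @ q) - ev(q) * ev(p),
               ev(p @ p) - ev(p)**2]])
assert abs(np.linalg.det(V) - hbar**2 / 4) < 1e-10   # bound saturated
```

Here $(\Delta q)^2 = (\Delta p)^2 = \hbar/2$ and $\Delta(q,p)=0$, so ${\rm det}\,V = \hbar^2/4$ exactly.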
Returning to $n$ modes, the $Sp(2n,\,R)$ covariance of the Schrödinger-Robertson UP (\[3.5\]) takes the following form: If $S \in Sp(2n,\, R)$, i.e., any real $2n \times 2n$ matrix obeying $S
\beta S^T = \beta$, then the new operators $$\begin{aligned}
\hat{\xi}_{a}^{'}=S_{ab} \, \hat{\xi}_{b}
\label{3.8}\end{aligned}$$ preserve the commutation relations in Eq.(\[3.1\]) and hence are unitarily related to the $\hat{\xi}_{a}$. These unitary transformations constitute the double valued metaplectic unitary representation of $Sp(2n,\,R)$[@pramana95]: $$\begin{aligned}
S \in Sp(2n,\,R) &\rightarrow& \overline{U}(S)={\rm unitary\,\,operator\,\,
on\,\,{\cal H}}, \nonumber \\
\overline{U}(S^{\, \prime})\overline{U}(S) &=& \pm \overline{U}(S^{\, \prime} S)\,; \nonumber \\
\overline{U}(S)^{-1}\,\hat{\xi}_{a}\, \overline{U}(S)&=& S_{ab}\,\hat{\xi}_{b}.
\label{3.9}\end{aligned}$$ Then, as an instance of Eqs.([\[2.21\]]{}) we have the results: $$\begin{aligned}
&&\hat{\rho}\rightarrow \hat{\rho}^{\, \prime} =\overline{U}(S)\,\hat{\rho}\,
\overline{U}(S)^{-1} \Rightarrow V \rightarrow V^{\, \prime} = S\,V \, S^{T},
\nonumber \\
&&V + \frac{i}{2}\,\hbar\, \beta \geq 0 \,\, \Leftrightarrow \,\, V^{\, \prime} +
\frac{i}{2}\, \hbar\, \beta \geq 0.
\label{3.10}\end{aligned}$$
[**Remark**]{}: The $n$-mode Schrödinger-Robertson UP (\[3.5\]), with its explicit $Sp(2n,\,R)$ covariance (\[3.10\]), constitutes the answer to an important question raised by Littlejohn[@littlejohn86]: under what conditions is a real normalized Gaussian function on a $2n$-dimensional phase space the Wigner distribution for some quantum state? The answer is stated in terms of the variance matrix which of course determines the Gaussian up to phase space displacements \[And these phase space displacements have no role to play on the ‘Wigner quality’ of a phase space distribution\]. This result has been used extensively in both classical and quantum optics[@gaussians], and more recently in quantum information theory of continuous variable canonical systems[@cv].
As a last comment we mention that as according to Eq.(\[3.5\]) the variance matrix $V$ is always positive definite, by Williamson’s celebrated theorem an $S \in Sp(2n,\,R)$ can be found such that $V^{\, \prime}$ in Eq.(\[3.10\]) becomes diagonal[@Williamson; @RSSCVS]. In general, though, the diagonal elements of $V^{\, \prime}$ will not be the eigenvalues of $V$.
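For a single mode this comment can be made concrete: after Williamson reduction both diagonal entries of $V^{\, \prime}$ equal the symplectic eigenvalue $\nu=\sqrt{{\rm det}\,V}$, recoverable as the common modulus of the eigenvalues of $i\beta V$, and $\nu$ is in general not a Euclidean eigenvalue of $V$. A small numerical sketch (variance matrix chosen arbitrarily, $\hbar=1$):

```python
import numpy as np

# One-mode Williamson illustration: the symplectic eigenvalue of a positive
# definite V is nu = sqrt(det V), the modulus of the eigenvalues of i*beta*V.
V = np.array([[2.0, 0.6],
              [0.6, 0.5]])                     # arbitrary V > 0
beta = np.array([[0.0, 1.0], [-1.0, 0.0]])     # symplectic metric, n = 1

nu = np.sqrt(np.linalg.det(V))                 # symplectic eigenvalue
mu = np.abs(np.linalg.eigvals(1j * beta @ V))  # moduli of eigenvalues of i*beta*V
assert np.allclose(mu, nu)

eucl = np.linalg.eigvalsh(V)                   # ordinary eigenvalues of V
assert not np.any(np.isclose(eucl, nu))        # generally distinct from nu
```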
Higher order moments for single mode system
===========================================
We now revert to the $n=1$ case of one canonical pair of hermitian operators $\hat{q}$ and $\hat{p}$, but consider expectation values of expressions in these operators of order greater than two. The relevant Hilbert space is of course ${\cal H}= L^{2}(\mathbb{R})$. As a useful computational tool we work with the Wigner distribution description of quantum states, and the associated Weyl rule of association of (hermitian) operators with (real) classical phase space functions.
Given a quantum mechanical state $\hat{\rho}$, the corresponding Wigner distribution is a function on the classical two-dimensional phase space: $$\begin{aligned}
W(q, p)=\frac{1}{2 \pi \hbar} \int_{-\infty}^{\infty} dq^{\, \prime} \left\langle
q -\frac{1}{2} q^{\, \prime} \right|\hat{\rho} \left| q + \frac{1}{2} q^{\, \prime} \right\rangle
e^{ipq^{\, \prime}/\hbar}.
\label{4.1}\end{aligned}$$ Thus it is a partial Fourier transform of the position space matrix elements of $\hat{\rho}$. This function is real and normalised to unity, but need not be pointwise nonnegative: $$\begin{aligned}
\hat{\rho}^{\dagger}=\hat{\rho} & \Rightarrow & W(q,p)^{*}=W(q,p)\,;
\nonumber \\
{\rm Tr}\,\hat{\rho}=1 &\Rightarrow&
\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}dqdp \,W(q,p)=1.
\label{4.2}\end{aligned}$$ The operator $\hat{\rho}$ and the function $W(q,p)$ determine each other uniquely. The [*key property*]{} is that the quantum expectation values of operator exponentials are equal to the classical phase space averages of classical exponentials with respect to $W(q,p)$ [@Cahill]: $$\begin{aligned}
{\rm Tr}(\hat{\rho}\,e^{i(\theta\, \hat{q}- \tau\, \hat{p})}) =
\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} dq dp \,W(q,p)
e^{i(\theta\, {q}- \tau\, {p})},\,\,\,-\infty <\, \theta,\, \tau\,<
\infty.
\label{4.3}\end{aligned}$$ By expanding the exponentials and comparing powers of $\theta$ and $\tau$ we get: $$\begin{aligned}
{\rm Tr}(\hat{\rho}\, \widehat{(q^n\,p^{n'})}) &=&
\int_{-\infty}^{\infty} \int_{-\infty}^{\infty} dq dp \,W(q,p)
q^{n}p^{n'}, \nonumber \\
\widehat{(q^n\,p^{n'})}&=&{\rm coefficient\,\,of\,\,}
\frac{(i\theta)^n}{n!}\,\frac{(-i\tau)^{n'}}{{n'}!}\,\,{\rm in}\,\,\
e^{i(\theta\, \hat{q}- \tau\, \hat{p})} \nonumber \\
&=&\frac{n!\,{n'}!}{(n+n')!} \times \,\,{\rm coefficient \,\,of \,\,}
\theta^n(-\tau)^{n'} \,\,{\rm in}\,\,(\theta\,\hat{q}-
\tau\,\hat{p})^{n + n'}, \nonumber \\
&& n,\,n'=0,\,1,\,2,\, \cdots.
\label{4.4}\end{aligned}$$ Thus $\widehat{(q^n\,p^{n'})}$ is an hermitian operator polynomial in $\hat{q}$ and $\hat{p}$ associated to the classical real monomial $q^np^{n'}$. This is the Weyl rule of association indicated by $$\begin{aligned}
\widehat{(q^n\,p^{n'})}={(q^n\,p^{n'})}_{W},
\label{4.5}\end{aligned}$$ so Eq.(\[4.4\]) appears as $$\begin{aligned}
{\rm Tr}(\hat{\rho}\,{(q^n\,p^{n'})}_{W} )=
\int \int dq dp\, W(q,p) \,q^n p^{n'}.
\label{4.6}\end{aligned}$$ We regard the polynomials ${(q^n\,p^{n'})}_{W}$ as the basic ‘quantum monomials’. By linearity the association (\[4.5\]) can be extended to general functions on the classical phase space, leading to the scheme: $$\begin{aligned}
f(q,p)={\rm real\,\,classical\,\,function\,\,} &\rightarrow & \hat{F}=
(f(q,p))_{W}={\rm hermitian\,\, operator\,\, on \,\,{\cal H}},
\nonumber \\
{\rm Tr}(\hat{\rho}\,\hat{F})&=& \int \int dqdp\, W(q,p)\,f(q,p).
\label{4.7}\end{aligned}$$
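As a numerical check on Eq.(\[4.6\]) (our own sketch, $\hbar=1$, truncated Fock basis): the oscillator ground state has the Gaussian Wigner function $W(q,p)= e^{-(q^2+p^2)/\hbar}/\pi \hbar$, whose classical moment is $\overline{q^2 p^2}=(\hbar/2)^2$, and the fully symmetrized (Weyl-ordered) operator $(q^2 p^2)_{W}$, the average of the six orderings of $\hat{q},\hat{q},\hat{p},\hat{p}$, reproduces exactly this value:

```python
import numpy as np

# Check of Eq. (4.6): <0|(q^2 p^2)_W|0> equals the Gaussian Wigner moment
# (hbar/2)^2. Illustrative choices: hbar = 1, Fock truncation d = 40 (the
# operators only reach Fock level 4 here, so the truncation is harmless).
hbar, d = 1.0, 40
a = np.diag(np.sqrt(np.arange(1, d)), k=1)
q = np.sqrt(hbar / 2) * (a + a.conj().T)
p = 1j * np.sqrt(hbar / 2) * (a.conj().T - a)

# Weyl symmetrization of q^2 p^2: average over the 6 orderings of q,q,p,p.
T20 = (q@q@p@p + q@p@q@p + q@p@p@q + p@q@q@p + p@q@p@q + p@p@q@q) / 6.0

psi = np.zeros(d); psi[0] = 1.0                # ground state |0>
lhs = np.real(psi @ T20 @ psi)                 # operator expectation value
rhs = (hbar / 2) ** 2                          # classical Gaussian moment
assert abs(lhs - rhs) < 1e-10
```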
[**Remarks**]{}: Two useful comments may be made at this point. For any pair of states $\hat{\rho},\;\hat{\rho}^{\prime}$ we have $$\begin{aligned}
{\rm Tr}(\hat{\rho}\,\hat{\rho}^{\prime})&=& \int \int dq dp\,
W(q,p)\,W^{\prime}(q,p)\geq 0.
\label{4.7a}\end{aligned}$$ Based on this, one can see the following: a given real normalised phase space function $W(q,p)$ is a Wigner distribution (corresponding to some physical state $\hat{\rho}$) if and only if the overlap integral on the right hand side of Eq.(\[4.7a\]) is nonnegative for all Wigner distributions $W^{\, \prime}(q,p)$. Secondly, we refer to the remarks made following Eq.(\[2.20\]) concerning the commutative case $\left[\hat{A}_{a},\,\hat{A}_{b} \right]=0$. This happens for instance when $\hat{A}_{a}=f_{a}(\hat{q})$ for all $a$. In that case, only the integral of $W(q,p)$ over $p$ is relevant, and this is known to be the coordinate space probability density in the state $\hat{\rho}$[@hilleryPR]. In the multi mode case this generalizes to the following statement: the result of integrating $W(q_1,p_1,\,q_2,p_2,\,\cdots\,,\,q_n,p_n)$ over any ($n$-dimensional) linear Lagrangian subspace in phase space is always a genuine probability distribution (the marginal) over the ‘remaining’ $n$ phase space variables. [*This marginal is basically the squared modulus, or probability density in the Born sense, of a wavefunction on the corresponding ‘configuration space’, generalised to the case of a mixed state*]{}[@hilleryPR].
The covariance group of the canonical commutation relation obeyed by $\hat{q}$ and $\hat{p}$ is (apart from phase space translations) the group $Sp(2,\,R)$: $$\begin{aligned}
Sp(2,\,R)&=& \left\{ S=\left(\begin{array}{cc} a & b \\ c &
d \end{array} \right) ={\rm real \,\,} 2\times 2\,\,{\rm matrix}
\,\,|\,\,S\,\sigma_2\,S^{T} =\sigma_2,\,\;{\rm i.e.,}\; {\rm det}\,S =1
\right\}\,.~~
\label{4.8} \end{aligned}$$ The actions on $\hat{q}$ and $\hat{p}$ by matrices and by the unitary metaplectic representation of $Sp(2,\,R)$ are connected in this manner: $$\begin{aligned}
S \in Sp(2,\,R) &\rightarrow &\overline{U}(S)={\rm
unitary\,\,operator\,\,on\,\,{\cal H}}\,; \nonumber \\
\xi = \left(\begin{array}{c} q \\ p \end{array} \right)
& \rightarrow & \hat{\xi} =(\xi)_{W}= \left(\begin{array}{c}
\hat{q} \\ \hat{p} \end{array} \right)\,: \nonumber \\
\overline{U}(S)^{-1}\,\hat{\xi}\,\overline{U}(S)&=& S\,\hat{\xi}.
\label{4.9}\end{aligned}$$ The effect on $W(q,p)\equiv W(\xi)$ is then given as[@dutta94; @gaussians]: $$\begin{aligned}
\hat{\rho}^{\, \prime}=\overline{U}(S)\,\hat{\rho}\,\overline{U}(S)^{-1} \leftrightarrow
W^{\, \prime}(\xi)=W(S^{-1}\,\xi).
\label{4.10}\end{aligned}$$
We now introduce a more suggestive notation for the classical monomials $q^n p^{n'}$ and their operator counterparts $(q^n
p^{n'})_{W}$. This is taken from the quantum theory of angular momentum (QTAM) and uses the fact that finite-dimensional nonunitary irreducible real representations of $Sp(2,\,R)$ are related to the unitary irreducible representations of $SU(2)$ by analytic continuation. (Indeed the two sets of generators are related by the unitary Weyl trick). We use ‘quantum numbers’ $j=0,\,
\frac{1}{2},\,1,\, \cdots$, $m = j,\,j-1,\, \cdots,\, -j$ as in QTAM and define the hermitian monomial basis for operators on ${\cal H}$ in this way: $$\begin{aligned}
\hat{T}_{jm}=(q^{j+m}p^{j-m})_{W} &=& {\rm coefficient \,\,of\,\,}
\frac{(2j)!}{(j+m)!(j-m)!} \,\theta^{j+m} (-\tau)^{j-m}\,\,{\rm in}
\,\, (\theta\,\hat{q}-\tau\,\hat{p})^{2j},\nonumber \\
&&j=0,\,\frac{1}{2},\,1,\,\cdots,\,;\,\,\,m=j,\,j-1,\,\cdots,\, -j.
\label{4.11}\end{aligned}$$ For the first few values of $j$ we have $$\begin{aligned}
&&(\hat{T}_{\frac{1}{2} m})= \left(\begin{array}{c} \hat{q}\\ \hat{p}
\end{array} \right);\,\,\,
(\hat{T}_{1 m})= \left(\begin{array}{c} \hat{q}^2\\
\frac{1}{2}\{\hat{q},\, \hat{p}\} \\ \hat{p}^2
\end{array} \right);\,\,\,
(\hat{T}_{\frac{3}{2} m})= \left(\begin{array}{c} \hat{q}^3\\
\frac{1}{3}(\hat{q}^2\hat{p} + \hat{q}\hat{p}\hat{q}+ \hat{p}
\hat{q}^2)\\
\frac{1}{3}(\hat{q}\hat{p}^2 + \hat{p}\hat{q}\hat{p}+ \hat{p}^2
\hat{q})\\
\hat{p}^3
\end{array} \right); \nonumber \\
&&(\hat{T}_{2 m})= \left(\begin{array}{c} \hat{q}^4\\
\frac{1}{4}(\hat{q}^3\hat{p} + \hat{q}^2\hat{p}\hat{q}+ \hat{q}\hat{p}\hat{q}^2
+ \hat{p} \hat{q}^3)\\
\frac{1}{6}(\hat{q}^2 \hat{p}^2 +\hat{q}\hat{p}\hat{q}\hat{p}
+\hat{q}\hat{p}^2 \hat{q} + \hat{p}\hat{q}^2 \hat{p} +
\hat{p}\hat{q}\hat{p}\hat{q} + \hat{p}^2 \hat{q}^2) \\
\frac{1}{4}(\hat{q}\hat{p}^3 + \hat{p}\hat{q}\hat{p}^2+ \hat{p}^2\hat{q}\hat{p}
+ \hat{p}^3\hat{q})\\
\hat{p}^4
\end{array} \right).
\label{4.12}\end{aligned}$$ Then we have the consequences: $$\begin{aligned}
&& {\rm Tr}(\hat{\rho}\,\hat{T}_{jm}) = \int \int dqdp\, W(q,p)
\,q^{j+m}\,p^{j-m} \equiv \overline{q^{j+m}\,p^{j-m}}\,; \nonumber \\
&& S\in Sp(2,\,R):\;\, \overline{U}(S)^{-1}\, \hat{T}_{jm}\, \overline{U}(S) =
\sum_{m'= -j}^{j} K^{(j)}_{m m'} (S)\,\hat{T}_{jm'}.
\label{4.13}\end{aligned}$$ The quantum expectation values of the $\hat{T}_{jm}$ are phase space moments of $W(q,p)$, denoted for convenience with an overhead bar. The matrices $K^{(j)}(S)$ constitute the $(2j+1)$-dimensional real nonunitary irreducible representation of $Sp(2,\,R)$ obtained from the familiar ‘spin $j$’ unitary irreducible representation of $SU(2)$ by analytic continuation. For $j = \frac{1}{2}$, we have $K^{(1/2)}(S)=S$. The representation $K^{(1)}(S)$ corresponding to $j=1$ will be seen to engage our sole attention in Section V.
The noncommutative (but associative) product law for the hermitian monomial operators $\hat{T}_{jm}$ has an interesting form, being essentially determined by the $SU(2)$ Clebsch-Gordan coefficients. This is not surprising, in view of the connection between $SU(2)$ and $Sp(2,\,R)$ representations (in finite dimensions) mentioned above. In fact for these representations and in chosen bases, $SU(2)$ and $Sp(2,\, R)$ share the same Clebsch-Gordan coefficients[@moments2012]. The product formula has a particularly simple structure if we (momentarily) use suitable numerical multiples of $\hat{T}_{jm}$: $$\begin{aligned}
\hat{\tau}_{jm}= \hat{T}_{jm}/\sqrt{(j+m)!(j-m)!}.
\label{4.14}\end{aligned}$$ Then we find[@moments2012] $$\begin{aligned}
\hat{\tau}_{jm}\,\hat{\tau}_{j' m'} &=& \sum _{j'' = |j-j'|}^{j+j'}
\left(\frac{i \hbar}{2} \right)^{j+j'-j''}
\sqrt{\frac{(j+j' + j'' +1)!}{(2j'' +1)(j+j'-j'')! (j'+j'' -j)! (j'' +j
-j')!}} \nonumber \\
&& \times C^{j\,j'\,j''}_{m\,m'\, m+m'}\, \hat{\tau}_{j'',\,m +m'}.
\label{4.15}\end{aligned}$$ The $C^{j\,j'\,j''}_{m\,m'\,m''}$ are the $SU(2)$ Clebsch-Gordan coefficients familiar from QTAM[@edmonds]. We will use this product rule in the sequel.
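As a concrete sanity check, the lowest instance of the product rule (\[4.15\]), with $j=j'=\frac{1}{2}$ and $m=-m'=\frac{1}{2}$, must reproduce $\hat{q}\,\hat{p} = \frac{1}{2}\{\hat{q},\,\hat{p}\} + i\hbar/2$. The following minimal symbolic sketch (the helper `coeff` is our own shorthand, not the paper's notation) evaluates the right-hand side coefficients with SymPy's Clebsch-Gordan routines:

```python
# Verify the j = j' = 1/2 instance of the tau-product rule: the coefficient
# of tau_{1,0} = {q,p}/2 must be 1, and that of tau_{0,0} = 1 must be i*hbar/2.
from sympy import I, Rational, factorial, simplify, sqrt, symbols
from sympy.physics.quantum.cg import CG

hbar = symbols('hbar', positive=True)
half = Rational(1, 2)

def coeff(j, jp, jpp, m, mp):
    """Coefficient of tau_{j'', m+m'} in the product tau_{jm} tau_{j'm'}."""
    pref = (I * hbar / 2) ** (j + jp - jpp)
    root = sqrt(factorial(j + jp + jpp + 1)
                / ((2 * jpp + 1) * factorial(j + jp - jpp)
                   * factorial(jp + jpp - j) * factorial(jpp + j - jp)))
    return pref * root * CG(j, m, jp, mp, jpp, m + mp).doit()

c1 = simplify(coeff(half, half, 1, half, -half))  # multiplies tau_{1,0} = {q,p}/2
c0 = simplify(coeff(half, half, 0, half, -half))  # multiplies tau_{0,0} = 1
print(c1, c0)   # expect 1 and I*hbar/2, i.e. q p = {q,p}/2 + i hbar/2
```

The two coefficients combine the square-root prefactor of Eq.(\[4.15\]) with the $SU(2)$ Clebsch-Gordan values $\langle \frac{1}{2}\frac{1}{2};\frac{1}{2}\,{-\frac{1}{2}}|1\,0\rangle = \langle \frac{1}{2}\frac{1}{2};\frac{1}{2}\,{-\frac{1}{2}}|0\,0\rangle = 1/\sqrt{2}$.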
Now we apply the general framework of Section II to the present situation. We will use a notation similar to that in the main theorems of the classical theory of moments. We take $\hat{A}$ and $\hat{{\cal A}}$ to be, formally, infinite-component column vectors with hermitian entries: $$\begin{aligned}
\hat{A} &=& \left(\begin{array}{c} \vdots \\
\hat{T}_{jm} \\ \vdots \end{array} \right) =
(\hat{T}_{\frac{1}{2}\, \frac{1}{2}}, \,\hat{T}_{\frac{1}{2}\,
\frac{-1}{2}},\, \hat{T}_{1\,1},\, \hat{T}_{1\,0},\,\hat{T}_{1\,
-1},\,\cdots, \,\hat{T}_{jj},\,\cdots,\,
\hat{T}_{j,-j},\,\cdots)^{T}, \nonumber \\
\hat{\cal A}&=& \left(\begin{array}{c}1 \\ \hat{A} \end{array} \right).
\label{4.16}\end{aligned}$$ Thus the subscript $a$ of Eq.(\[2.4\]) is now the pair $jm$ taking values in the sequence given above. To simplify notation, as $\hat{A}$ is kept fixed, we will not indicate it as an argument in various quantities. The general entries in the infinite-dimensional matrices $\hat{\Omega}$, $\Omega$, $\tilde{\Omega}$ in Eqs.(\[2.4\],\[2.5\]) are then: $$\begin{aligned}
\hat{\Omega}_{jm, j'm'} &=& \hat{T}_{jm}\,\hat{T}_{j'm'}\,; \nonumber \\
\Omega_{jm,j'm'}(\hat{\rho}) &=&{\rm
Tr}(\hat{\rho}\,\hat{T}_{jm}\,\hat{T}_{j'm'})=
\langle \hat{T}_{jm}\,\hat{T}_{j'm'} \rangle\,; \nonumber \\
\tilde{\Omega}_{jm,j'm'}(\hat{\rho}) &=&
\langle (\hat{T}_{jm}- \langle\hat{T}_{jm}
\rangle)\,(\hat{T}_{j'm'}-\langle \hat{T}_{j'm'} \rangle) \rangle.
\label{4.17}\end{aligned}$$ (In $\hat{\Omega}$ and $\Omega$, for $j=m=0$, we have $\hat{T}_{00}
=1$). By using the product rule (\[4.15\]) the (generally nonhermitian) operator $\hat{T}_{jm}\,\hat{T}_{j'm'}$ can be written as a complex linear combination of $\hat{T}_{j'',m+m'}$ with $j'' = j+j',\,
j+j'-1,\,\cdots,\, |j-j'|$. The variance matrix $V(\hat{\rho})$ in Eq.(\[2.18\]) has the elements $$\begin{aligned}
V_{jm,j'm'}(\hat{\rho}) =\frac{1}{2} \langle \{\hat{T}_{jm},
\,\hat{T}_{j'm'} \} \rangle - \langle \hat{T}_{jm} \rangle\,\langle
\hat{T}_{j'm'} \rangle.
\label{4.18}\end{aligned}$$ From the known symmetry relation [@edmonds] $$\begin{aligned}
C^{j'\,j\,j''}_{m'\,m\, m+m'} = (-1)^{j+j'-j''}\,C^{j\,j'\,j''}_{m\,m'\,m+m'}
\label{4.19}\end{aligned}$$ we see that in the anticommutator term in Eq.(\[4.18\]) only $\hat{T}_{j'',\,m+m'}$ for $j''= j+j',\,j+j'-2,\, j+j'-4, \, \cdots$ will appear with real coefficients. On the other hand, for the antisymmetric part $\omega_{ab}$ of Eq.(\[2.18\]) we have $$\begin{aligned}
\omega_{jm,j'm'}(\hat{\rho}) = -i \left\langle \left[\hat{T}_{jm}
,\,\hat{T}_{j'm'}\right] \right\rangle ,
\label{4.20}\end{aligned}$$ so now by Eq.(\[4.19\]) the commutator here is a linear combination of terms $\hat{T}_{j'',\,m+m'}$ for $j''= j+j'-1,\,j+j'-3,\, \cdots$ with pure imaginary coefficients. There is, therefore, a clean separation of the product $\hat{T}_{jm}\, \hat{T}_{j'm'}$ into a hermitian part in $V$ and an antihermitian part in $\omega$. With these facts in mind, the uncertainty relation (\[2.19\]) is in hand: $$\begin{aligned}
V_{jm,j'm'}(\hat{\rho})&=&\sum_{j+j'-j''\,\,{\rm even}}
\cdots\,\,\langle \hat{T}_{j'',m+m'} \rangle - \langle \hat{T}_{jm}
\rangle\, \langle \hat{T}_{j'm'} \rangle, \nonumber \\
\omega_{jm,j'm'}(\hat{\rho})&=& \sum_{j+j'-j''\,\,{\rm odd}}
\cdots\,\,\langle \hat{T}_{j'',m+m'} \rangle\,; \nonumber \\
(\tilde{\Omega}_{jm,j'm'}(\hat{\rho})) &=&(V_{jm,j'm'}(\hat{\rho}))
+\frac{i}{2}\,(\omega_{jm,j'm'}(\hat{\rho})) \geq 0.
\label{4.21}\end{aligned}$$ Each matrix element of $V(\hat{\rho})$ (apart from the subtracted term) and of $\omega(\hat{\rho})$ appears as some real linear combination of expectation values of hermitian monomial operators, i.e., of moments of $W(q,p)$; however, in this way of writing, the essentially trivial nature of the statement $\tilde{\Omega}(\hat{\rho}) \geq 0$ is not manifest.
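To make the construction in Eqs.(\[4.17\],\[4.21\]) concrete at the lowest level, here is a small numerical sketch ($\hbar$ set to $1$; the Fock-space truncation size is an arbitrary choice of ours). For the oscillator ground state, the $j=j'=\frac{1}{2}$ block of $\tilde{\Omega}$ comes out positive semidefinite with eigenvalues $0$ and $\hbar$, i.e., the vacuum saturates the bound:

```python
# Build the j = j' = 1/2 block of tilde-Omega for the vacuum state |0>
# from truncated Fock-space matrices (hbar = 1); cf. Eqs.(4.17)-(4.21).
import numpy as np

hbar, N = 1.0, 30
a = np.diag(np.sqrt(np.arange(1, N)), 1)        # annihilation operator
q = np.sqrt(hbar / 2) * (a + a.T)
p = -1j * np.sqrt(hbar / 2) * (a - a.T)
psi = np.zeros(N); psi[0] = 1.0                 # ground state |0>

def ev(op):                                     # expectation value in |0>
    return psi.conj() @ op @ psi

ops = [q, p]                                    # (xi_m) = (q, p)
V = np.array([[ev((X @ Y + Y @ X) / 2) - ev(X) * ev(Y) for Y in ops] for X in ops])
w = np.array([[-1j * ev(X @ Y - Y @ X) for Y in ops] for X in ops])
Omega = V + 0.5j * w                            # tilde-Omega at j = j' = 1/2

eigs = np.linalg.eigvalsh(Omega)
assert np.allclose(eigs, [0.0, hbar])           # vacuum saturates the UP
print("eigenvalues of the lowest block: 0 and hbar")
```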
The covariance group in this problem is of course $Sp(2,\,R)$. From Eq.(\[4.13\]) we see that under conjugation by the metaplectic group unitary operator $\overline{U}(S)$, the column vector $\hat{A}$ of Eq.(\[4.16\]) transforms as a direct sum of the sequence of finite-dimensional real irreducible nonunitary representation matrices $K^{(1/2)}(S)=S$, $K^{(1)}(S)$, $K^{(3/2)}(S)\,\cdots$; so Eq.(\[2.8\]) in the present context is[@moments2012]: $$\begin{aligned}
&&S\in Sp(2,\,R)\,:\,\,\,\, \overline{U}(S)^{-1}\, \hat{A}\, \overline{U}(S)
= K(S)\,\hat{A}, \nonumber \\
&&K(S)= K^{(1/2)}(S)\oplus K^{(1)}(S)\oplus K^{(3/2)}(S)\,\oplus \,\cdots
\label{4.22}\end{aligned}$$ From Eq.(\[2.21\]), when $\hat{\rho}\rightarrow \hat{\rho}'=
\overline{U}(S) \,\hat{\rho}\, \overline{U}(S)^{-1}$ both $V(\hat{\rho})$ and $\omega(\hat{\rho})$ experience congruence transformations by $K(S)$, and the formal uncertainty relation (\[4.21\]) is preserved.
Up to this point the use of the infinite-component $\hat{A}$ and the infinite-dimensional $\Omega$, $\tilde{\Omega}$, $V$ and $\omega$ has been formal. We may now interpret the uncertainty relation (\[4.21\]) in practical terms to mean that for each finite $N=1,\,2,\,\cdots\,$, the principal submatrix of $\tilde{\Omega}(\hat{\rho})$ formed by its first $N$ rows and columns should be nonnegative. However, in order to maintain $Sp(2,\,R)$ covariance, a slight modification of this procedure is desirable. If for each $J=\frac{1}{2},\,1,\,\frac{3}{2},\,\cdots$ we include all values of $j\,m$ for $j\leq J$, the number of rows (and columns) involved is $N_{J}=J(2J+3)$, giving the sequence of integers $2,\,5,\,9,\,14,\,\cdots$. Let us then define hierarchies of $N_{J}$-dimensional matrices as: $$\begin{aligned}
J=\frac{1}{2},\,1,\,\frac{3}{2},\,\cdots\,:~~&& \nonumber\\
\tilde{\Omega}^{(J)}(\hat{\rho})&=&
(\tilde{\Omega}_{jm,j'm'}(\hat{\rho})), \nonumber \\
V^{(J)}(\hat{\rho}) &=&
(V_{jm,j'm'}(\hat{\rho})), \nonumber \\
{\omega}^{(J)}(\hat{\rho}) &=&
({\omega}_{jm,j'm'}(\hat{\rho})),
\,\,\,j,\,j'=\frac{1}{2},\,1,\,\cdots,\,J\,;
\nonumber \\
\tilde{\Omega}^{(J)}(\hat{\rho}) &=&
V^{(J)}(\hat{\rho})+\frac{i}{2}
{\omega}^{(J)}(\hat{\rho}).
\label{4.23}\end{aligned}$$ However, in each of these matrices [*there is no $J$ dependence in their matrix elements*]{}. Each also naturally breaks up into blocks of dimension $(2j+1)\times(2j^{\, \prime}+1)$ for each pair $(j,j^{\, \prime})$ present, and these can be denoted by $\tilde{\Omega}^{(j,j')}(\hat{\rho})$, $V^{(j,j')}(\hat{\rho})$, ${\omega}^{(j,j')}(\hat{\rho})$. Symbolically, $$\begin{aligned}
\tilde{\Omega}^{(J)}(\hat{\rho})= \left(
\begin{array}{ccc}
& \vdots & \\
\cdots & \tilde{\Omega}^{(j,j')}(\hat{\rho}) & \cdots \\
& \vdots &
\end{array}
\right)
\label{4.24}\end{aligned}$$ and similarly for $V^{(J)}(\hat{\rho})$ and $\omega^{(J)}(\hat{\rho})$. As examples we have: $$\begin{aligned}
\tilde{\Omega}^{(1/2)}(\hat{\rho})&=&(\tilde{\Omega}^{\left(
\frac{1}{2}, \frac{1}{2} \right)}(\hat{\rho}))\,; \nonumber \\
\tilde{\Omega}^{(1)}(\hat{\rho})&=&
\left(\begin{array}{cc}
\tilde{\Omega}^{\left(
\frac{1}{2}, \frac{1}{2} \right)}(\hat{\rho}) &
\tilde{\Omega}^{\left(
\frac{1}{2}, 1 \right)}(\hat{\rho})\\
\tilde{\Omega}^{\left(
1 , \frac{1}{2} \right)}(\hat{\rho}) &
\tilde{\Omega}^{\left(
1,1 \right)}(\hat{\rho})
\end{array} \right),
\label{4.25}\end{aligned}$$ and correspondingly for $V^{(J)}$, $\omega^{(J)}$. Moreover, in going from $J$ to $J +\frac{1}{2}$, we have an augmentation of each matrix with $2(J+1)$ new rows and columns, $$\begin{aligned}
\tilde{\Omega}^{(J + 1/2)}(\hat{\rho})=
\left( \begin{array}{ccc}
\tilde{\Omega}^{(J)}(\hat{\rho}) &
\begin{array}{c}
\vdots \\
\vdots \\
\vdots
\end{array}
& \begin{array}{c}
\tilde{\Omega}^{\left(
\frac{1}{2}, J + \frac{1}{2} \right)}(\hat{\rho}) \\
\vdots \\
\tilde{\Omega}^{\left(
J, J+ \frac{1}{2} \right)}(\hat{\rho})
\end{array} \\
\begin{array}{ccccc}
\cdots\,\,\,\, &\,\,\,\, \cdots\,\,\,\, &\,\,\,\, \cdots\,\,\,\, &
\,\,\,\,\cdots\,\,\,\,& \,\,\,\,\cdots
\end{array} && \cdots\,\,\,\,\cdots \\
\begin{array}{ccc}
\tilde{\Omega}^{\left(
J+ \frac{1}{2}, \frac{1}{2} \right)}(\hat{\rho}) & \cdots & \tilde{\Omega}^{\left(
J + \frac{1}{2}, J \right)}(\hat{\rho})
\end{array} &\vdots & \tilde{\Omega}^{\left(
J+ \frac{1}{2}, J+ \frac{1}{2} \right)}(\hat{\rho})
\end{array}\right).
\label{4.26}\end{aligned}$$ The formal uncertainty relation (\[4.21\]) now translates into a hierarchy of finite-dimensional matrix conditions $$\begin{aligned}
\tilde{\Omega}^{(J )}(\hat{\rho}) =
V^{(J )}(\hat{\rho})+ \frac{i}{2}
{\omega}^{(J)}(\hat{\rho}) \geq 0,\,\,J=\frac{1}{2},\,1,\,
\frac{3}{2},\, \cdots.
\label{4.27}\end{aligned}$$ (Of course, for a given state $\hat{\rho}$, moments may exist and be finite only up to some value $J_{\rm max}$ of $J$, so the hierarchy (\[4.27\]) also terminates at this point). The lowest condition in this hierarchy, $J=\frac{1}{2}$, takes us back to Eqs.(\[3.6\],\[3.7\]): $$\begin{aligned}
\tilde{\Omega}^{\left(\frac{1}{2} \right)}(\hat{\rho}) &=&
\tilde{\Omega}^{\left(
\frac{1}{2}, \frac{1}{2} \right)}(\hat{\rho})=
V^{\left(\frac{1}{2} \right)}(\hat{\rho})+ \frac{i}{2}
{\omega}^{\left( \frac{1}{2} \right)}(\hat{\rho})\,;
\nonumber \\
V^{\left(\frac{1}{2} \right)}(\hat{\rho}) &=&
V^{\left(\frac{1}{2}, \frac{1}{2} \right)}(\hat{\rho}) =
\left( \left\langle \frac{1}{2} \{\hat{T}_{\frac{1}{2}, m},
\,\hat{T}_{\frac{1}{2}, m'} \}\right\rangle\right)-
\left( \begin{array}{c}
\langle \hat{q} \rangle \\
\langle \hat{p} \rangle
\end{array}
\right)\,
\begin{array}{c}
\left( \begin{array}{cc}
\langle\hat{q}\rangle &\langle\hat{p} \rangle
\end{array} \right)
\\
\\
\end{array} \nonumber \\
&=& \left( \begin{array}{cc}
(\Delta q)^2 & \Delta(q,p) \\
\Delta (q,p) & (\Delta p)^2
\end{array} \right)\,; \nonumber \\
{\omega}^{\left(\frac{1}{2} \right)}(\hat{\rho}) &=&
{\omega}^{\left(
\frac{1}{2}, \frac{1}{2} \right)}(\hat{\rho})=
-i \left( \begin{array}{cc}
0 & \left[\hat{q},\,\hat{p} \right] \\
\left[\hat{p},\,\hat{q} \right] & 0
\end{array} \right)= i\,\hbar\, \sigma_2\,; \nonumber \\
\tilde{\Omega}^{\left(\frac{1}{2} \right)}(\hat{\rho}) &\geq&
0\,\,\,\,\,\Leftrightarrow \,\,\,\,\,
\left( \begin{array}{cc}
(\Delta q)^2 & \Delta(q,p)+ i \,\frac{\hbar}{2} \\
\Delta (q,p)-i\, \frac{\hbar}{2} & (\Delta p)^2
\end{array} \right)\,\ge 0\,\,\Leftrightarrow \nonumber \\
&& (\Delta q)^2\,(\Delta p)^2 - (\Delta (q,p))^2 \geq \frac{\hbar^2}{4},
\label{4.28}\end{aligned}$$ the original Schrödinger-Robertson UP.
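A quick numerical illustration of the equivalence stated in Eq.(\[4.28\]) may be useful (with $\hbar=1$; the two sets of second moments below are illustrative values of our own choosing): the $2\times 2$ matrix is positive semidefinite precisely when the determinant inequality holds.

```python
# Eq.(4.28): positivity of the 2x2 matrix versus the determinant inequality
# (hbar = 1; the sample second moments below are illustrative only).
import numpy as np

hbar = 1.0

def omega_half(vq, vp, cqp):
    """The matrix of Eq.(4.28): vq = (Dq)^2, vp = (Dp)^2, cqp = D(q,p)."""
    return np.array([[vq, cqp + 0.5j * hbar],
                     [cqp - 0.5j * hbar, vp]])

checks = []
for vq, vp, cqp in [(1.0, 1.0, 0.3),   # satisfies the UP
                    (0.3, 0.3, 0.2)]:  # violates it
    psd = bool(np.linalg.eigvalsh(omega_half(vq, vp, cqp)).min() >= -1e-12)
    ineq = bool(vq * vp - cqp**2 >= hbar**2 / 4)
    checks.append((psd, ineq))
print(checks)   # [(True, True), (False, False)]
```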
It is natural to ask for the new conditions that appear at each step in the hierarchy (\[4.27\]), in passing from $J$ to $J +\frac{1}{2}$. In the generic case, when we have a strict inequality we can find the answer using Lemma 1 of Section II. Comparing $\tilde{\Omega}^{\left(J+ \frac{1}{2} \right)}(\hat{\rho})$ and $\tilde{\Omega}^{\left(J\right)}(\hat{\rho})$, in the notation of Eq.(\[2.10\]) and using Eq.(\[4.26\]) we have: $$\begin{aligned}
&&\tilde{\Omega}^{\left(J+ \frac{1}{2} \right)}(\hat{\rho})=
\left( \begin{array}{cc}
A & C^{\dagger} \\
C & B
\end{array} \right)\,;\nonumber \\
&& A = \tilde{\Omega}^{\left(J \right)}(\hat{\rho})\,,\,\,\,
\,\,B = \tilde{\Omega}^{\left(J+ \frac{1}{2}, J +\frac{1}{2}
\right)}(\hat{\rho}), \nonumber \\
&& C= \left( \begin{array}{ccc}
\tilde{\Omega}^{\left(J+ \frac{1}{2}, \frac{1}{2}
\right)}(\hat{\rho})& \cdots &
\tilde{\Omega}^{\left(J+ \frac{1}{2}, J
\right)}(\hat{\rho})
\end{array} \right).
\label{4.29}\end{aligned}$$ The ‘dimensions’ are $N_{J} \times N_{J}$, $2(J+1)\times 2(J+1)$, $2(J+1)\times N_{J}$ respectively. Then $$\begin{aligned}
\tilde{\Omega}^{\left(J +\frac{1}{2} \right)}(\hat{\rho}) > 0 \,\,
\Leftrightarrow \,\, \tilde{\Omega}^{\left(J \right)}(\hat{\rho}) > 0,
\,\,\,\,B -C \,A^{-1}\,C^{\dagger} > 0,
\label{4.30}\end{aligned}$$ where $A$, $B$, $C$ are taken from Eq.(\[4.29\]). One can see that some complication arises from the need to compute $A^{-1}$ in the new condition.
In the next Section, we analyse the case $J=\frac{1}{2}
\rightarrow J +\frac{1}{2}=1$ in some detail, as the first nontrivial step going beyond the Schrödinger-Robertson UP (\[3.7\],\[4.28\]). Before we turn to this task, however, a note on the non-generic case of singular $A$ seems to be in order.
[**Remark**]{}: Lemma 1 expresses the positive definiteness of a hermitian matrix $Q$ in the block form (\[2.10\]) in terms of conditions on the lower dimensional blocks. The block form itself is a description of $Q$ with respect to a given breakup of the underlying vector space on which $Q$ acts, into two mutually orthogonal subspaces. Both $A$ and $B$ are hermitian. For the case of positive semidefinite $Q$, there are two possibilities at the level of $A$, $B$, $C$. If $A^{-1}$ exists, then $Q\geq 0$ translates into $A > 0$, $B-C\,A^{-1}\,C^{\dagger}\geq 0$. In case $A$ is singular, while of course $Q \geq 0$ implies $A \geq 0$, the question is what other condition on $B$, $C$ is implied. To answer this, we further separate the subspace on which $A$ acts into two mutually orthogonal subspaces—one corresponding to the null subspace of $A$, and the other on which $A$ acts invertibly, say as $A_{1}$. Then in such a description, the block form of $Q$ is initially refined to the form $$\begin{aligned}
Q \simeq \left( \begin{array}{ccc}
0 & 0 & C_{2}^{\dagger} \\
0 & A_{1} & C_{1}^{\dagger} \\
C_{2} & C_{1} & B
\end{array}
\right),\end{aligned}$$ with the original $A$ and $C$ being respectively $\left(\begin{array}{cc} 0 & 0 \\ 0 & A_1\end{array} \right)$ and $(C_2,\,C_1)$. But now one sees easily that $Q \geq 0$ implies $C_2=0$, so as $A_1^{\,-1}$ exists, we have in this situation $$\begin{aligned}
Q \geq 0 \,\,\,\Leftrightarrow \,\,\, A_{1} > 0,\,\,\, B-
C_{1}\,A_{1}^{-1}\, C_{1}^{\dagger} \geq 0.\end{aligned}$$ This is the description of the nongeneric situation mentioned above.
$SO(2,1)$ analysis of fourth order moments
==========================================
The first nontrivial step in the hierarchy of uncertainty relations (\[4.27\]), after the Schrödinger-Robertson UP (\[3.7\],\[4.28\]), occurs in going from $J =\frac{1}{2}$ to $J + \frac{1}{2}=1$. We study this in some detail, especially as it brings into evidence the equivalence of the irreducible representation $K^{(1)}(S)$ of $Sp(2,\,R)$ and the defining representation of the three-dimensional proper homogeneous Lorentz group $SO(2,1)$[@simon84]. Indeed $K^{(2)}(S)$, $K^{(3)}(S)$, $\cdots$ are all true representations of $SO(2,1)$[@LCB].
It is useful to introduce specific symbols for the operators $\hat{T}_{\frac{1}{2}m}$, $\hat{T}_{1m}$ in the present context. We write $$\begin{aligned}
(\hat{T}_{\frac{1}{2}m})&=&(\hat{\xi}_m) =\left( \begin{array}{c}
\hat{q} \\ \hat{p}
\end{array} \right),\,\,\,\,m=\frac{1}{2},\,-\frac{1}{2}\,; \nonumber \\
(\hat{T}_{1m})&=&(\hat{X}_{m})= \left( \begin{array}{c}
\hat{q}^2 \\ \frac{1}{2}\,\{\hat{q},\,\hat{p} \} \\ \hat{p}^2
\end{array} \right),\,\,\,\,m=1,\,0,\,-1\,;
\label{5.1}\end{aligned}$$ so that one immediately recognises that $\hat{\xi}$ is a two-component spinor, and $\hat{X}$ a three-component vector, with respect to $SO(2,1)$ (see below). Their products can be computed using Eq.(\[4.15\]) or more directly by simple algebra: $$\begin{array}{rclc}
\hat{\xi}_{m}\,\hat{\xi}_{m'} &=& \hat{X}_{m+m'} + i\,\frac{\hbar}{2}
(-1)^{m-\frac{1}{2}} \,\delta_{m+m',\,0}\,; & (a) \\
\hat{\xi}_{m}\,\hat{X}_{m'}&=& \hat{T}_{\frac{3}{2},\,m+m'} + i
\,\frac{\hbar}{2} (-1)^{m-\frac{1}{2}}\,\hat{\xi}_{m+m'}\,;
& (b) \\
\hat{X}_{m}\,\hat{X}_{m'} &=& \hat{T}_{2,\,m+m'} +
\frac{\hbar^2}{4}(-1)^m\, (1+m^2)\,\delta_{m+m',0} +
i\,\hbar\,(m-m')\,\hat{X}_{m+m'}. &(c)
\end{array}
\label{5.2}$$ In ($5.2a$) the leading $J=1$ term is symmetric in $m$, $m^{\, \prime}$, while the pure imaginary $J=0$ second term is antisymmetric. In ($5.2b$) it is understood that $\hat{\xi}_{\pm \frac{3}{2}}=0$. In ($5.2c$) the first two $J=2$ and $J=0$ terms are symmetric in $m$, $m^{\, \prime}$, while the third $J=1$ term is antisymmetric. These features agree with the pattern in Eq.(\[4.21\]).
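The product laws (\[5.2\]) can be checked directly on truncated Fock-space matrices (a sketch with $\hbar=1$; the truncation sizes are our choice). Since truncation is exact away from the edge, only an interior block is compared:

```python
# Check (5.2a) and (5.2c) on truncated Fock-space matrices (hbar = 1).
# Matrix products are exact away from the truncation edge, so only the
# upper-left Kb x Kb block is compared.
import numpy as np

hbar, N, Kb = 1.0, 40, 30
a = np.diag(np.sqrt(np.arange(1, N)), 1)          # annihilation operator
q = np.sqrt(hbar / 2) * (a + a.T)
p = -1j * np.sqrt(hbar / 2) * (a - a.T)
X1, X0, Xm1 = q @ q, (q @ p + p @ q) / 2, p @ p   # (X_m), m = 1, 0, -1

# (5.2a), m = 1/2, m' = -1/2:  q p = X_0 + i hbar / 2
assert np.allclose((q @ p)[:Kb, :Kb],
                   (X0 + 0.5j * hbar * np.eye(N))[:Kb, :Kb])

# (5.2c), m = 1, m' = -1:  X_1 X_{-1} = T_{2,0} - hbar^2/2 + 2 i hbar X_0,
# with T_{2,0} the fully symmetrized monomial of Eq.(4.12)
T20 = (q@q@p@p + q@p@q@p + q@p@p@q + p@q@q@p + p@q@p@q + p@p@q@q) / 6
lhs = (X1 @ Xm1)[:Kb, :Kb]
rhs = (T20 - 0.5 * hbar**2 * np.eye(N) + 2j * hbar * X0)[:Kb, :Kb]
assert np.allclose(lhs, rhs)
print("(5.2a) and (5.2c) hold on the interior block")
```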
For $J=\frac{1}{2}$ in Eq.(\[4.29\]) we have $$\begin{aligned}
A=\tilde{\Omega}^{\left(\frac{1}{2},\,\frac{1}{2}
\right)}(\hat{\rho}),\,\,\,\,
B=\tilde{\Omega}^{\left(1,\,1
\right)}(\hat{\rho}),\,\,\,\,
C=\tilde{\Omega}^{\left( 1,\,\frac{1}{2}
\right)}(\hat{\rho}),\,\,\,\,
\label{5.3}\end{aligned}$$ with ‘dimensions’ $2 \times 2$, $3 \times 3$, $3 \times 2$ respectively. (Throughout this Section, $A$, $B$, $C$ will have these meanings). Their behaviours under $Sp(2,\,R)$ are $$\begin{aligned}
\hat{\rho}\rightarrow \hat{\rho}^{\, \prime}
=\overline{U}(S)\,\hat{\rho}\,\overline{U}(S)^{-1} &\Rightarrow &
A \rightarrow S\,A\,S^{T},\,\,\,\,B \rightarrow K^{(1)}(S)\, B \,
K^{(1)}(S)^{T}, \nonumber \\
&& C \rightarrow K^{(1)}(S) \,C \,S^{T}.
\label{5.4}\end{aligned}$$ Assuming $A^{-1}$ exists, we have $$\begin{aligned}
A^{-1} \rightarrow (S^{-1})^{T}\, A^{-1}\, S^{-1},
\label{5.5}\end{aligned}$$ and consequently, $$\begin{aligned}
B- C \,A^{-1}\,C^{\dagger} &\rightarrow& K^{(1)}(S)\,(B - C\,
A^{-1}\,C^{\dagger})\,K^{(1)}(S)^{T},
\label{5.6}\end{aligned}$$ which as expected is a congruence.
The matrix $K^{(1)}(S)$ is easily found. At the level of classical variables: $$\begin{aligned}
&&S=\left( \begin{array}{cc}
a & b\\ c&d
\end{array} \right) \in Sp(2,\,R)\,:\,\,\,\,
\left(\begin{array}{c} q \\ p \end{array} \right)
\rightarrow S\,\left(\begin{array}{c} q \\ p \end{array} \right)
\Rightarrow \nonumber \\
&&(X_{m}(q,\,p))= \left( \begin{array}{c}
q^2 \\ qp \\ p^2
\end{array} \right) \rightarrow K^{(1)}(S)\,(X_{m}(q,p)), \nonumber \\
&&K^{(1)}(S)=\left( \begin{array}{ccc}
a^2 & 2ab & b^2 \\ ac & ad+bc & bd \\
c^2 & 2cd & d^2
\end{array} \right).
\label{5.7}\end{aligned}$$ The link to $SO(2,1)$ can be seen in two (essentially equivalent) ways, either through $A$ or through $(X_{m}(q,p))$. We now outline both.
We introduce indices $\mu$, $\nu$, $\cdots$ going over values $0,\,3,\,1$ (in that sequence) and a three-dimensional Lorentz metric $g_{\mu \,\nu}={\rm diag}(+1,\,-1,\,-1)$. This metric and its inverse $g^{\mu\,\nu}$ are used for lowering and raising Greek indices. The defining representation of the proper homogeneous Lorentz group $SO(2,1)$ is then: $$\begin{aligned}
SO(2,1)=\{ \Lambda =({\Lambda^{\mu}}_{\nu})=3\times3 {\rm\,\,
real\,\,matrix\,\,}&|&\,\,{\Lambda^{\mu}}_{\nu}\,\Lambda_{\mu \lambda}
\equiv g_{\mu
\tau}\,{\Lambda^{\mu}}_{\nu}\,{\Lambda^{\tau}}_{\lambda}=g_{\nu
\lambda},\nonumber \\
&&~~~{\rm det}\,\Lambda =+1,\,\,\,{\Lambda^{0}}_{0}\geq 1 \}.
\label{5.8}\end{aligned}$$ This is a three-parameter noncompact Lie group. Now expand $A=\tilde{\Omega}^{\left(\frac{1}{2},\,\frac{1}{2}
\right)}(\hat{\rho})$ in terms of Pauli matrices as follows: $$\begin{aligned}
A=\tilde{\Omega}^{\left(\frac{1}{2},\,\frac{1}{2}
\right)}(\hat{\rho}) = x^{\mu}\,\sigma_{\mu} -
\frac{\hbar}{2}\,\sigma_2 =
\left(\begin{array}{cc}
x^{0}+x^{3} & x^{1} \\
x^{1} & x^{0} - x^{3}
\end{array}\right) - \frac{\hbar}{2}\,\sigma_2 .
\label{5.9}\end{aligned}$$ From Eqs.(\[3.6\],\[4.28\]) we have (indicating $\hat{\rho}$ dependences): $$\begin{aligned}
x^{0}(\hat{\rho})=\frac{1}{2}((\Delta q)^2 + (\Delta p)^2),\,\,\,
x^{3}(\hat{\rho})= \frac{1}{2}((\Delta q)^2 - (\Delta p)^2),\,\,\,
x^{1}(\hat{\rho})=\Delta(q,p)\,.
\label{5.10}\end{aligned}$$ Then the transformation rule for $A$ in Eq.(\[5.4\]), combined with $S\,\sigma_2 \,S^T = \sigma_2$, leads to a rule for $x^{\mu}(\hat{\rho})$: $$\begin{aligned}
&& \hat{\rho} \rightarrow
\overline{U}(S)\,\hat{\rho}\,\overline{U}(S)^{-1}\,\,\Rightarrow \,\,
A \rightarrow S\,A\,S^{T}\,\,\Rightarrow \,\, x^{\mu}(\hat{\rho})
\rightarrow {\Lambda^{\mu}}_{\nu}(S)\,x^{\nu}(\hat{\rho}), \nonumber \\
\Lambda(S)&=&\left( \begin{array}{ccc}
\frac{1}{2}(a^2 + b^2 + c^2 + d^2) &\frac{1}{2}(a^2 - b^2 + c^2 - d^2)
& ab + cd \\
\frac{1}{2}(a^2 + b^2 - c^2 - d^2) &\frac{1}{2}(a^2 - b^2 - c^2 + d^2)
& ab-cd \\
ac+bd & ac-bd & ad+bc
\end{array} \right) \,\,\in SO(2,1)\,.
\label{5.11}\end{aligned}$$ Thus $x^{\mu}(\hat{\rho})$ transforms as a Lorentz three-vector, and the associated invariant is seen to be $$\begin{aligned}
x^{\mu}(\hat{\rho})\,x_{\mu}(\hat{\rho})=g_{\mu \nu}
\,x^{\mu}(\hat{\rho})\,x^{\nu}(\hat{\rho}) =
(\Delta q)^2\,(\Delta p)^2 - (\Delta(q,p))^2 \geq \frac{\hbar^2}{4},
\label{5.12}\end{aligned}$$ so the Schrödinger-Robertson UP implies the geometrical statement that $x^{\mu}(\hat{\rho})$ is positive time-like.
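That $\Lambda(S)$ of Eq.(\[5.11\]) indeed lies in $SO(2,1)$ for every $S$ with $ad-bc=1$ can be confirmed symbolically; the following short SymPy sketch verifies the metric-preservation part of the claim:

```python
# Symbolic check that Lambda(S) of Eq.(5.11) preserves g = diag(1, -1, -1)
# whenever ad - bc = 1 (indices ordered 0, 3, 1 as in the text).
from sympy import Matrix, Rational, diag, simplify, symbols

a, b, c, d = symbols('a b c d')
h = Rational(1, 2)
Lam = Matrix([
    [h*(a**2 + b**2 + c**2 + d**2), h*(a**2 - b**2 + c**2 - d**2), a*b + c*d],
    [h*(a**2 + b**2 - c**2 - d**2), h*(a**2 - b**2 - c**2 + d**2), a*b - c*d],
    [a*c + b*d,                     a*c - b*d,                     a*d + b*c]])
g = diag(1, -1, -1)

# impose the symplectic condition d = (1 + b c)/a and simplify
residual = simplify((Lam.T * g * Lam - g).subs(d, (1 + b*c)/a))
assert residual.is_zero_matrix                   # Lambda^T g Lambda = g
print("Lambda(S)^T g Lambda(S) = g verified for ad - bc = 1")
```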
The matrices $K^{(1)}(S)$ by which $\hat{X}_{m}$ transform under $Sp(2,\,R)$ are related by a fixed similarity transform to the $\Lambda(S)$ above. If in terms of classical variables we pass from the components $X_{m}(q,p)$ in Eq.(\[5.7\]) to a new set of components $X^{\mu}(q,p)$ by $$\begin{aligned}
(X^{\mu}(q,p))&=& \left( \begin{array}{c}
\frac{1}{2}(q^2 + p^2) \\ \frac{1}{2}(q^2 -p^2) \\qp
\end{array} \right) =
M\,\left( \begin{array}{c}
q^2 \\ qp \\ p^2
\end{array} \right), \nonumber \\
X^{\mu}(q,p)&=& {M^{\mu}}_{m}\,X_{m}(q,p),\,\,\,\,X_{m}(q,p)=M^{-1}_{m
\mu}\,X^{\mu}(q,p), \nonumber \\
M&=& ({M^{\mu}}_{m})=\left(
\begin{array}{rrr}
\frac{1}{2} & ~0 & \frac{1}{2} \\
\frac{1}{2} & ~0 & - \frac{1}{2} \\
0 & ~1 & 0
\end{array}
\right),\,\,\,\, M^{-1}=(M^{-1}_{m \mu})=
\left( \begin{array}{rrr}
1 & 1 & ~0 \\
0 & 0 & ~1 \\
1 & -1 & ~0
\end{array} \right),
\label{5.13}\end{aligned}$$ then in place of Eq.(\[5.7\]) we have $$\begin{aligned}
\left( \begin{array}{c}q \\p \end{array} \right)
\rightarrow S\,\left( \begin{array}{c}q \\p \end{array} \right)
\Rightarrow X^{\mu}(q,p)&\rightarrow &{M^{\mu}}_{m} \, K^{(1)}_{mm'}(S)\,
M^{-1}_{m' \nu}\,X^{\nu}(q,p) \nonumber \\
&=& {\Lambda^{\mu}}_{\nu}(S)\,X^{\nu}(q,p), \nonumber \\
K^{(1)}(S)&=& M^{-1}\,\Lambda(S)\,M.
\label{5.14}\end{aligned}$$ At the operator level we have $$\begin{aligned}
&&\hat{X}^{0}=\frac{1}{2}(\hat{q}^2 + \hat{p}^2),\,\,\,\hat{X}^{3}=
\frac{1}{2}(\hat{q}^2 -
\hat{p}^2),\,\,\,\hat{X}^1=\frac{1}{2}\{\hat{q},\, \hat{p} \},\nonumber \\
&& \hat{X}^{\mu}= {M^{\mu}}_{m}\,\hat{X}_{m},
\label{5.15}\end{aligned}$$ and, as a consequence of Eq.(\[4.13\]), the twin equivalent transformation laws: $$\begin{aligned}
S \in Sp(2,\,R)\,:~~&&
\overline{U}(S)^{-1}\,\hat{X}_{m}\,\overline{U}(S)=K^{(1)}_{mm'}(S)
\,\hat{X}_{m'} , \nonumber \\
&&\overline{U}(S)^{-1}\,\hat{X}^{\mu}\,\overline{U}(S)={\Lambda^{\mu}}_{\nu}(S)
\,\hat{X}^{\nu}.
\label{5.16}\end{aligned}$$ The upshot is that the matrices $K^{(1)}(S)$ are just the ‘ordinary’ homogeneous Lorentz transformation matrices $\Lambda(S)$ in a ‘tilted’ basis. The metric preserved by them is easily found though unfamiliar: $$\begin{aligned}
&&K^{(1)}(S)\,g_K \,K^{(1)}(S)^T = g_K , \nonumber \\
&& g_{K}=M^{-1}\,g\, (M^{-1})^T =
\left( \begin{array}{ccc}
0 & 0 & 2 \\ 0 &-1 & 0 \\ 2 & 0 &0
\end{array} \right).
\label{5.17}\end{aligned}$$ This enables us to use the nomenclature and geometrical features of three-dimensional Minkowski space even while working with operators $\hat{X}_{m}$ and transformation matrices $K^{(1)}(S)$.
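The similarity relation (\[5.14\]) and the tilted metric (\[5.17\]) can likewise be checked numerically (the particular values of $a,\,b,\,c$ below are arbitrary; $d$ is fixed by $ad-bc=1$):

```python
# Numerical check of Eq.(5.14), K1(S) = M^{-1} Lambda(S) M, and of the
# tilted metric relation (5.17), for a sample S with ad - bc = 1.
import numpy as np

a, b, c = 1.3, 0.4, -0.7
d = (1 + b * c) / a                              # enforces det S = 1

K1 = np.array([[a*a, 2*a*b, b*b],
               [a*c, a*d + b*c, b*d],
               [c*c, 2*c*d, d*d]])               # Eq.(5.7)
Lam = np.array([
    [(a*a + b*b + c*c + d*d)/2, (a*a - b*b + c*c - d*d)/2, a*b + c*d],
    [(a*a + b*b - c*c - d*d)/2, (a*a - b*b - c*c + d*d)/2, a*b - c*d],
    [a*c + b*d,                 a*c - b*d,                 a*d + b*c]])
M = np.array([[0.5, 0.0, 0.5], [0.5, 0.0, -0.5], [0.0, 1.0, 0.0]])
Minv = np.array([[1.0, 1.0, 0.0], [0.0, 0.0, 1.0], [1.0, -1.0, 0.0]])
gK = np.array([[0.0, 0.0, 2.0], [0.0, -1.0, 0.0], [2.0, 0.0, 0.0]])

assert np.allclose(K1, Minv @ Lam @ M)           # Eq.(5.14)
assert np.allclose(K1 @ gK @ K1.T, gK)           # Eq.(5.17)
print("Eqs.(5.14) and (5.17) verified")
```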
Now we proceed to analyse the three matrices $A$, $B$, $C$ and the combination $B- C\,A^{-1}\,C^{\dagger}$. (We have already parametrised $A$ in Eqs.(\[5.9\],\[5.10\])). Using Eqs.(\[5.2\]), their matrix elements are $$\begin{aligned}
A_{mm'}&=& \langle\hat{\xi}_{m}\,\hat{\xi}_{m'} \rangle -
\langle\hat{\xi}_{m} \rangle \,\langle \hat{\xi}_{m'} \rangle
\nonumber \\
&=& \langle \hat{X}_{m+m'} \rangle -
\langle\hat{\xi}_{m} \rangle \,\langle \hat{\xi}_{m'} \rangle
+i\,\frac{\hbar}{2}\,(-1)^{m-\frac{1}{2}}\,\delta_{m,\,-m'} \nonumber \\
&=&(x^{\mu}\,\sigma_{\mu})_{mm'} +i
\,\frac{\hbar}{2}\,(-1)^{m-\frac{1}{2}}\,\delta_{m,\,-m'}\,; \nonumber
\\
B_{mm'}&=& \langle\hat{X}_{m}\,\hat{X}_{m'} \rangle -
\langle\hat{X}_{m} \rangle \,\langle \hat{X}_{m'} \rangle
\nonumber \\
&=& \langle \hat{T}_{2,m+m'} \rangle +
\frac{\hbar^2}{4}\,(-1)^{m}\,\delta_{m,-m'}-
\langle\hat{X}_{m} \rangle \,\langle \hat{X}_{m'} \rangle + i\,\hbar
\,(m-m')\,\langle \hat{X}_{m+m'} \rangle ; \nonumber \\
C_{mm'}&=& \langle\hat{X}_{m}\,\hat{\xi}_{m'} \rangle -
\langle\hat{X}_{m} \rangle \,\langle \hat{\xi}_{m'} \rangle
\nonumber \\
&=& \langle \hat{T}_{\frac{3}{2},m+m'} \rangle -
\langle\hat{X}_{m} \rangle \,\langle \hat{\xi}_{m'} \rangle
- i\,\frac{\hbar}{2}\,(-1)^{m'-\frac{1}{2}}\,\langle \hat{\xi}_{m+m'} \rangle.
\label{5.18}\end{aligned}$$ In each of these expressions, the possible values for $m$, $m^{\, \prime}$ are evident from the context. We now note an important fact in respect of the final forms of all three expressions: apart from explicit appearances of $i$ in the last terms, [*all other quantities are real*]{}. This allows us to easily separate each of $A$, $B$, $C$ into real and imaginary parts, which in the cases of $A$ and $B$ are respectively symmetric and antisymmetric in $m$ and $m^{\, \prime}$. \[This is already seen in Eq.(\[5.9\]) for $A$\]. We write these as follows: $$\begin{array}{rclc}
A &=& A_{1}+ i\,A_{2}, & \\
(A_{1})_{mm'}&=& \langle\hat{X}_{m+m'} \rangle -
\langle\hat{\xi}_{m} \rangle \,\langle \hat{\xi}_{m'} \rangle
=(x^{\mu}\,\sigma_{\mu})_{mm'}\,,
& \\
(A_{2})_{mm'}&=& \,\frac{\hbar}{2}\,(-1)^{m-\frac{1}{2}}\,\delta_{m,\,-m'};
& ~~(a) \\
B&=&B_{1} + i\,B_{2}, & \\
(B_{1})_{mm'}&=& \langle \hat{T}_{2,m+m'} \rangle +
\frac{\hbar^2}{4}\,(-1)^{m}\,\delta_{m,-m'}-
\langle\hat{X}_{m} \rangle \,\langle \hat{X}_{m'} \rangle, & \\
(B_{2})_{mm'} & =& \,\hbar
\,(m-m')\,\langle \hat{X}_{m+m'} \rangle ; & ~~(b) \\
C&=&C_{1}+ i\,C_{2}; & \\
(C_{1})_{mm'}&=& \langle \hat{T}_{\frac{3}{2},m+m'} \rangle -
\langle\hat{X}_{m} \rangle \,\langle \hat{\xi}_{m'} \rangle, & \\
(C_{2})_{mm'}&=& - \,\frac{\hbar}{2}\,(-1)^{m'-\frac{1}{2}}\,\langle
\hat{\xi}_{m+m'} \rangle\,; & \\
C^{\dagger}&=& C_{1}^{T} - i\,C_{2}^{T}\,. & ~~(c)
\end{array}
\label{5.19}$$ To deal similarly with $B - C\,A^{-1} \,C^{\dagger}$, we need an expression for $A^{-1}$. We will assume the generic situation in which $A$ is nonsingular, $$\begin{aligned}
{\rm det}\,A \equiv \kappa^{-1}=x^{\mu}\,x_{\mu} - \frac{\hbar^2}{4} > 0,
\label{5.20}\end{aligned}$$ so that $$\begin{aligned}
A^{-1}&=& \kappa\, (x^0\, 1\!\!1 - x^3 \sigma_3 -x^1 \sigma_1 + \frac{\hbar}{2}\,\sigma_2)
\nonumber \\
&=& \kappa(\tilde{x}^{\mu}\,\sigma_{\mu} + \frac{\hbar}{2}\,\sigma_2),
\nonumber \\
\tilde{x}^{\mu}&=& (x^0,\,-x^3,\,-x^1).
\label{5.21}\end{aligned}$$ The transformation law for $A^{-1}$ under $S \in Sp(2,\,R)$ given in Eq.(\[5.5\]) is different from (though equivalent to) the law for $A$. Thus, while the $\tilde{x}^{\mu}$ do follow a definite (i.e., well-defined tensorial[@simon84]) transformation law, there are some differences (in signs) compared to the law followed by $x^{\mu}$. Clearly the two terms in Eq.(\[5.21\]) are, as they stand, the real symmetric and the pure imaginary antisymmetric parts of $A^{-1}$. We can now handle $B- C\, A^{-1}\,C^{\dagger}$ in the same manner as above: $$\begin{aligned}
B - C\,A^{-1}\,C^{\dagger} &=& B_{1}+ i\,B_{2}- \kappa (C_{1}+
i\,C_{2})\,(\tilde{x}\cdot \sigma
+\frac{\hbar}{2}\,\sigma_2)\,(C_{1}^{T}-i\,C_{2}^{T}) \nonumber \\
&=&V^{({\rm eff})}+ \frac{i}{2} \,\omega^{({\rm eff})}, \nonumber \\
V^{({\rm eff})}&=& B_{1} - \kappa(C_{1}\,\tilde{x}\cdot\sigma\,C_{1}^{T}
+C_{2}\,\tilde{x}\cdot\sigma C_{2}^{T} + i \,\frac{\hbar}{2}
C_{2}\,\sigma_{2}\,C_{1}^{T} -i\,\frac{\hbar}{2}\,
C_{1}\,\sigma_2\,C_{2}^{T}), \nonumber \\
\frac{1}{2}\,\omega^{({\rm eff})}&=& B_{2} -
\kappa(C_{2}\,\tilde{x}\cdot\sigma\,C_{1}^{T}
- C_{1}\,\tilde{x}\cdot\sigma C_{2}^{T} - i \,\frac{\hbar}{2}
C_{1}\,\sigma_{2}\,C_{1}^{T} -i\,\frac{\hbar}{2}\,
C_{2}\,\sigma_2\,C_{2}^{T}). ~~
\label{5.22}\end{aligned}$$ This decomposition is in the spirit and notation of Eq.(\[2.18\]) of the general framework. However, $V^{({\rm eff})}$ and $\omega^{({\rm eff})}$ do not correspond any longer to expectation values of simple anticommutators and commutators among relevant operators, as was the case in Eqs.(\[2.18\],\[4.18\],\[4.20\]).
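The explicit inverse (\[5.21\]) entering $V^{({\rm eff})}$ and $\omega^{({\rm eff})}$ is easy to confirm numerically ($\hbar=1$; the values of $x^0,\,x^3,\,x^1$ below are illustrative, chosen so that ${\rm det}\,A > 0$):

```python
# Check Eqs.(5.20)-(5.21): A = x^mu sigma_mu - (hbar/2) sigma_2 and its
# inverse kappa (x^0 - x^3 sigma_3 - x^1 sigma_1 + (hbar/2) sigma_2),
# with kappa^{-1} = x.x - hbar^2/4  (hbar = 1, sample x^mu, det A > 0).
import numpy as np

hbar = 1.0
s1 = np.array([[0.0, 1.0], [1.0, 0.0]])
s2 = np.array([[0.0, -1j], [1j, 0.0]])
s3 = np.array([[1.0, 0.0], [0.0, -1.0]])
I2 = np.eye(2)

x0, x3, x1 = 1.2, 0.3, 0.4
A = x0 * I2 + x3 * s3 + x1 * s1 - 0.5 * hbar * s2
kappa = 1.0 / (x0**2 - x3**2 - x1**2 - 0.25 * hbar**2)   # Eq.(5.20)
Ainv = kappa * (x0 * I2 - x3 * s3 - x1 * s1 + 0.5 * hbar * s2)

assert np.allclose(A @ Ainv, I2)                 # Eq.(5.21) confirmed
print("kappa^{-1} = det A =", np.round(1.0 / kappa, 6))
```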
Both $V^{({\rm eff})}$ and $\omega^{({\rm eff})}$ are real three-dimensional matrices with elements $V^{({\rm eff})}_{m m'}$, $\omega^{({\rm eff})}_{m m'}$, where $m,\,m^{\, \prime} = 1,\,0,\,-1$; and they are respectively symmetric and antisymmetric. It does not seem possible to simplify the expressions (\[5.22\]) to any significant extent, as they are already expressed in terms of the independent real expectation values $\langle \hat{\xi}_m \rangle$, $\langle \hat{X}_{m} \rangle$, $\langle \hat{T}_{\frac{3}{2}, m} \rangle$, $\langle \hat{T}_{2,m}
\rangle$ which are the moments of the Wigner distribution $W(q,\, p)$ of orders up to and including the fourth. Under action by $S\in
Sp(2,\,R)$ we have from Eq.(\[5.6\]): $$\begin{aligned}
\hat{\rho}\rightarrow \overline{U}(S)\,\hat{\rho}\, \overline{U}(S)^{-1}\,\,
&\Rightarrow& \,\,V^{\rm (eff)} \rightarrow K^{(1)}(S)\,V^{\rm
(eff)}\,K^{(1)}(S)^{T}, \nonumber \\
&& \,\,\omega^{\rm (eff)} \rightarrow K^{(1)}(S)\,
\omega^{\rm (eff)}\,K^{(1)}(S)^{T}.
\label{5.23}\end{aligned}$$ The additional uncertainty relation involving moments up to fourth order, going beyond the Schrödinger-Robertson UP (\[3.7\],\[4.28\]), reads \[in the generic case ${\rm det}\,A > 0$\]: $$\begin{aligned}
V^{\rm (eff)} + \frac{i}{2}\,\omega^{\rm (eff)} \geq 0,
\label{5.24}\end{aligned}$$ which is an $SO(2,1)$ covariant statement by virtue of Eq.(\[5.23\]).
For further analysis it is rather awkward to work with the $SO(2,1)$ matrices and the Lorentz metric in the forms $K^{(1)}(S)$, $g_{K}$; we therefore pass to the ‘standard’ forms via the matrices $M$, $M^{-1}$ of Eq.(\[5.13\]): $$\begin{aligned}
V^{\rm (eff)\,\mu \nu} ={M^{\mu}}_{m}\,{M^{\nu}}_{m'}\,V^{\rm (eff)}_{m
m'}, \nonumber \\
\omega^{\rm (eff)\,\mu \nu} = {M^{\mu}}_{m}\,{M^{\nu}}_{m'}\,\omega^{\rm (eff)}_{m
m'}\,,
\label{5.25}\end{aligned}$$ which are congruences. Then the $Sp(2,\,R)$ or $SO(2,\,1)$ actions (\[5.23\]) appear as: $$\begin{aligned}
&&V^{\rm (eff)\,\mu \nu} \rightarrow
{\Lambda^{\mu}}_{\mu'}(S)\,{\Lambda^{\nu}}_{\nu'}(S)\, V^{\rm (eff)\, \mu'
\nu'}, \nonumber \\
&&\omega^{\rm (eff)\,\mu \nu} \rightarrow
{\Lambda^{\mu}}_{\mu'}(S)\,{\Lambda^{\nu}}_{\nu'}(S)\, \omega^{\rm (eff)\, \mu'
\nu'},
\label{5.26}\end{aligned}$$ and the condition (\[5.24\]) becomes: $$\begin{aligned}
(V^{\rm (eff) \,\mu \nu}) + \frac{i}{2}\,(\omega^{\rm (eff)\, \mu
\nu}) \geq 0.
\label{5.27}\end{aligned}$$ While $V^{\rm (eff)\,\mu \nu}$ transforms as a symmetric second-rank $SO(2,1)$ tensor, $\omega^{\rm (eff)\,\mu \nu}$ is an antisymmetric second-rank tensor, which by the use of the Levi-Civita invariant tensor is the same as a three-vector. Thus we can write, with $\epsilon^{031}=\epsilon_{031} =+1$, $$\begin{aligned}
\omega^{\rm (eff)\,\mu \nu} &=& \epsilon^{\mu \nu \lambda}\, a_{\lambda},
\nonumber \\
(\omega^{\rm (eff)\,\mu \nu})&=& \left( \begin{array}{ccc}
0 & a_1 & -a_3 \\
-a_1 & 0 & a_0 \\
a_3 & -a_0 & 0
\end{array} \right),
\label{5.28}\end{aligned}$$ with transformation law $$\begin{aligned}
a^{\mu} \rightarrow {\Lambda^{\mu}}_{\nu}(S)\, a^{\nu}.
\label{5.29}\end{aligned}$$ Of course, $V^{\rm (eff) \,\mu \nu}$ itself is made up of two irreducible parts: the symmetric second-rank ‘trace-free’ part belonging to the $SO(2,\,1)$ representation $K^{(2)}(S)$, and the $SO(2,\,1)$-invariant trace, which is a scalar.
We now appeal to a remarkable result[@RSSCVS], which is similar in spirit to the Williamson theorem for $Sp(2n,\,R)$ quoted in Section III. It states that if $V^{\rm (eff)\,\mu \nu}$, transforming as in Eq.(\[5.26\]), is positive definite, it can be brought to diagonal form by a suitable choice of $\Lambda \in SO(2,1)$; however, in general the resulting diagonal values are not the eigenvalues of the initial matrix. This diagonal form may be called the ‘SCS normal form’ of $V^{\rm (eff)}$, which in the generic case is unique. Passing to this normal form of $V^{\rm (eff)}$, and transforming $\omega^{\rm (eff)}$ as well by the same (generically unique) $\Lambda \in SO(2,1)$, these matrices appear as $$\begin{aligned}
V^{\rm (eff)} \rightarrow \left(\begin{array}{ccc}
v^{00} & 0 & 0 \\ 0 & v^{33} & 0 \\ 0 & 0 & v^{11}
\end{array} \right),\,\,\,
\omega^{\rm (eff)}\rightarrow \left( \begin{array}{ccc}
0 & -b^1 & b^3 \\ b^1 & 0 & b^0 \\-b^3 & -b^0 &0
\end{array} \right),
\label{5.30}\end{aligned}$$ with all the quantities $v^{00}$, $v^{33}$, $v^{11}$, $b^0$, $b^3$, $b^1$ being real $SO(2,1)$ (and $Sp(2,\,R)$) invariants. The uncertainty relation (\[5.27\]) expressed in terms of these invariants, and in its maximally simplified form thanks to the SCS theorem, is $$\begin{aligned}
\left(\begin{array}{ccc}
v^{00} & 0 & 0 \\ 0 & v^{33} & 0 \\ 0 & 0 & v^{11}
\end{array} \right) +\frac{i}{2}\,
\left( \begin{array}{ccc}
0 & -b^1 & b^3 \\ b^1 & 0 & b^0 \\-b^3 & -b^0 &0
\end{array} \right)\geq 0.
\label{5.31}\end{aligned}$$
As an (admittedly elementary) example of the discussion of this Section, we consider the Fock states $|n \rangle$, $n \geq 0$. The $(\hat{q},\,\hat{p})$ — $(\hat{a},\,\hat{a}^{\dagger})$ relations are $$\begin{aligned}
\hat{a} = (\hat{q} + i \hat{p})/\sqrt{2 \hbar}, ~~~ \hat{a}^{\dagger} = (\hat{q} - i \hat{p})/\sqrt{2 \hbar},
\label{5.32}\end{aligned}$$ so both $\hat{q}$ and $\hat{p}$ have dimensions $\hbar^{1/2}$. In the Fock states $|n\rangle$, by parity arguments we have $$\begin{aligned}
\langle n | \hat{\xi}_m |n\rangle = \langle n | \hat{T}_{3/2,m}| n
\rangle =0.
\label{5.33}\end{aligned}$$ For $\hat{X}_{m}, \, \hat{T}_{2,m}$ explicit calculations give: $$\begin{aligned}
\langle n | \hat{X}_m | n \rangle &= \hbar (n+{{\frac{1}{2}}}) (1,\,0,\,1),
~~ m = 1,\,0,\,-1;\nonumber\\
\langle n | \hat{T}_{2,m} | n \rangle &= \frac{1}{2} \hbar^2 (n^2 + n+
{{\frac{1}{2}}}) (3,\,0,\,1,\,0,\,3), ~~ m = 2,\,1,\,0,\,-1,\,-2.
\label{5.34}\end{aligned}$$ Then the matrices $A,\, B,\, C$ of Eq. follow easily: $$\begin{aligned}
\begin{array}{rclc}
\left(A_{mm^{\,'}}\right) &=& \hbar (n+{{\frac{1}{2}}}) 1\!\!1
- \frac{\hbar}{2}\, \sigma_2, & \\
x^0 &=& \hbar (n+{{\frac{1}{2}}}), ~~ x^3 = x^1 =0; & (a)\\
\left(B_{mm^{\,'}}\right) &=& \frac{\hbar^2}{2} (n^2 + n +1
) \left( \begin{array}{rrr}1& ~0&-1 \\ 0&1&0 \\ -1&0&1 \end{array} \right) + i \hbar^2
(n + {{\frac{1}{2}}}) \left( \begin{array}{rrr} 0&1&~0 \\
-1&0 &1 \\ 0 & -1 & 0\end{array} \right); & ~~(b) \\
\left( C_{mm^{\,'}}\right) &=& 0. & (c)
\end{array}
\label{5.35}\end{aligned}$$ Therefore the combination $B - CA^{-1} C^{\dagger} = B$, and from Eq., $$\begin{aligned}
\left(V^{(\rm eff)}_{mm^{\,'}} \right) &= \frac{\hbar^2}{2} (n^2 + n+
1) \left( \begin{array}{rrr} 1& ~0&-1 \\ 0&1&0 \\ -1&0&1\end{array} \right), \nonumber\\
\frac{1}{2}\left(\omega^{(\rm eff)}_{mm^{\,'}} \right) &=
\hbar^2 (n+{{\frac{1}{2}}})
\left( \begin{array}{rrr} 0&1& ~0 \\ -1&0 &1 \\ 0 & -1 & 0\end{array} \right).
\label{5.36}\end{aligned}$$ Transforming to the standard $SO(2,1)$ tensor components by the congruence transformation we find: $$\begin{aligned}
\left(V^{(\rm eff)\,\mu\nu} \right) &= \frac{\hbar^2}{2} (n^2 + n+
1) \left( \begin{array}{ccc} 0&~0~&0 \\ 0&1&0 \\ 0&0&1\end{array} \right), \nonumber\\
\frac{1}{2}\left(\omega^{(\rm eff)\,\mu\nu} \right) &=
\hbar^2 (n+{{\frac{1}{2}}}) \left( \begin{array}{rrr} 0&0&~0 \\ 0&0 &1 \\ 0 & -1 & 0\end{array} \right).
\label{5.37}\end{aligned}$$ As expected, both these matrices are invariant under the $SO(2)$ subgroup of $SO(2,\,1)$, as the Fock states are eigenstates of the phase space rotation generator $\hat{a}^\dagger\hat{a}$.
We see that $\left(V^{(\rm eff)\,\mu\nu} \right)$ is already in the $SCS$ normal form, and as the eigenvalues of $\left(V^{(\rm
eff)\,\mu\nu} \right) + \frac{\,i\,}{2}\, \left(\omega^{(\rm eff)\,\mu\nu} \right)$ are $\,0,\, \frac{\hbar^2}{2}(n^2+n+1) \pm \hbar^2 (n+ {{\frac{1}{2}}})$, i.e., $\,0,\,
\frac{\hbar^2}{2}(n+1)(n+2),\, \frac{\hbar^2}{2} n(n-1)$, the uncertainty relation is clearly respected; indeed it is saturated!
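This saturation is easy to verify numerically. The following sketch (our own check, with $\hbar=1$) assembles the matrices of Eq. (\[5.37\]) and confirms that the eigenvalues of $V^{\rm (eff)} + \frac{i}{2}\,\omega^{\rm (eff)}$ are exactly $0$, $\frac{1}{2}n(n-1)$ and $\frac{1}{2}(n+1)(n+2)$:

```python
# Illustrative check (hbar = 1): for the Fock state |n>, the matrices of
# Eq. (5.37) saturate the uncertainty relation (5.27).
import numpy as np

def fock_uncertainty_eigenvalues(n):
    a = 0.5 * (n**2 + n + 1)                  # prefactor of V^(eff)
    b = n + 0.5                               # prefactor of (1/2) omega^(eff)
    V = a * np.diag([0.0, 1.0, 1.0])
    half_omega = b * np.array([[0.0, 0.0, 0.0],
                               [0.0, 0.0, 1.0],
                               [0.0, -1.0, 0.0]])
    # V + (i/2) omega is Hermitian; the UP demands its eigenvalues be >= 0
    return np.sort(np.linalg.eigvalsh(V + 1j * half_omega))
```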
Lorentz geometry and the Schrödinger-Robertson UP
=================================================
The original Schrödinger-Robertson UP has a very interesting character when viewed in the Wigner distribution language, bringing out the role of the group $SO(2,\,1)$ in a rather striking manner. This seems worth exploring in some detail.
For a given state $\hat{\rho}$ with Wigner distribution $W(q,\,p)$, the means are $$\begin{aligned}
\overline{q} = \int\int dqdp\,q\,W(q,\,p)\,, ~~~
\overline{p} = \int\int dqdp\,p\,W(q,\,p)\,.
\label{6.1}\end{aligned}$$ Referring to Eq., at each point $(q,\,p)$ in the phase plane we define the $SO(2,\,1)$ three-vector (a displaced form of $(X^{\mu}(q,p))$ in Eq.) $$\begin{aligned}
(\,X^\mu(q,\,p)\,) = \left(\begin{array}{c}
\frac{1}{2}\,[\,(q - \overline{q})^2 + (p - \overline{p})^2\,]\\
\frac{1}{2}\,[\,(q - \overline{q})^2 - (p - \overline{p})^2\,]\\
(q - \overline{q})(p - \overline{p})
\end{array}
\right),
\label{6.2}\end{aligned}$$ which (except at $q = \overline{q},\,p=\overline{p}$) is pointwise positive light-like. The elements of the variance matrix $V$ in Eqs.($3.6, \, 4.29$) are obtained by ‘averaging’ this three-vector over the phase plane with the quasiprobability $W(q,p)$ as ‘weight’ function, resulting in the three-vector $x^{\mu}(\hat{\rho})$ of Eq.: $$\begin{aligned}
(x^{\mu}(\hat{\rho})) =
\left( \begin{array}{c}
\frac{1}{2}\, [\,(\triangle q )^2+(\triangle p)^2\,]\\
\frac{1}{2}\, [\,(\triangle q )^2-(\triangle p)^2\,]\\
\triangle (q,p)
\end{array}
\right) = \int \int dqdp\, W(q,p)\, (\,X^\mu(q,\,p)\,).
\label{6.3}\end{aligned}$$ Given that $W(q,p)$ can in principle be negative over certain regions of the phase space, this ‘averaging’ could have led to a result which need not be either time-like or light-like positive. However the Schrödinger-Robertson UP assures us that in fact the result has to be a time-like positive three-vector, thus implying a subtle limit on the extent to which $W(q,p)$ could become negative. In fact it specifies that the three-vector obtained as a result of the ‘averaging’ must be within or on the positive time-like (solid) hyperboloid $\sum_{\mu} x^{\mu}(\hat{\rho}) x_{\mu}(\hat{\rho}) \geq \hbar^2/4$ corresponding to ‘squared mass’ $\hbar^2/4$ presented in Eq.. On the other hand, while pointwise nonnegativity of $W(q,p)$ will certainly ensure that the averaging in Eq. takes $\left( x^{\mu}(\hat{\rho})\right)$ inside the time-like positive cone, it will not itself ensure that it is taken all the way inside the said hyperboloid. To ensure the latter, $W(q,p)$ should have a threshold effective spread. Thus, pointwise nonnegativity is neither a necessary nor sufficient requirement to ensure ‘Wigner quality’ on $W(q,p)$ as is known from other considerations.
The argument above has been presented in such a way that the interpretation in terms of Lorentz geometry in $2+1$ dimensions is obvious. However, comparing Eqs. and , we see that it could be expressed equally well as follows. At each point $(q,p)$ in the phase plane we define a $2\times 2$ real symmetric matrix $$\begin{aligned}
V(q,p) = \left( \begin{array}{c} q -\overline{q}
\\ p-\overline{p} \end{array} \right) \,
\begin{array}{c}
\left( \begin{array}{cc} q-\overline{q} & p-\overline{p} \end{array}
\right) \\ ~~~~
\end{array} .
\label{6.4}\end{aligned}$$ Pointwise (except at $q = \overline{q},\, p = \overline{p}$ ) this is proportional to a one-dimensional projection matrix, and in particular it has vanishing determinant. After ‘averaging’ with $W(q,p)$ as weight function, however, we obtain the $2 \times 2$ variance matrix $V$ in Eq.: $$\begin{aligned}
V = \int \int dp\,dq \, W(q,p) V(q,p) = \left( \begin{array}{cc}
(\triangle q)^2 & \triangle(q,p) \\
\triangle(q,p) & (\triangle p)^2
\end{array}
\right),
\label{6.5}\end{aligned}$$ and now the Schrödinger-Robertson UP shows that $V$ is non-singular and has determinant bounded below by the ‘squared mass’ $\hbar^2/4$.
In this form, just like the Schrödinger-Robertson UP, this geometrical picture based on the Wigner distribution language generalises in both directions—second order moments for a multi-mode system, and higher order moments for a single-mode system. As an example of the former, consider a two-mode system for simplicity. The classical phase space variables are $\xi_a$ and the hermitian quantum operators obeying Eq. are $\hat{\xi}_a$, for $a=1,\cdots,4$. Given a two-mode state $\hat{\rho}$, we pass to its Wigner distribution $W(\xi)$ (something we did not do in Section III) and compute the means $$\begin{aligned}
\langle \hat{\xi}_a \rangle = {\rm Tr}(\hat{\rho}\,\hat{\xi}_a) = \int
d^4 \xi \, \xi_a W(\xi) = \overline{\xi}_a,\, a = 1,\cdots,4.
\label{6.6}\end{aligned}$$ Then, generalising Eq. above, at each point $\xi$ in the 4-dimensional phase space we define a real symmetric $4\times
4 $ matrix $$\begin{aligned}
V(\xi) &= (V_{ab}(\xi)) = ((\xi_a - \overline{\xi}_a)(\xi_b -
\overline{\xi}_b)) = x(\xi) x(\xi)^T,\nonumber\\
x_a(\xi) &= \xi_a - \overline{\xi}_a.
\label{6.7}\end{aligned}$$ At each point $\xi$ (except at $\xi = \overline{\xi}$) we have here a real symmetric positive semidefinite matrix $V(\xi)$ which is essentially a one-dimensional projection matrix: the eigenvalues of $V(\xi)$ are $x(\xi)^T x(\xi),0,0,0$. The variance matrix $V$ for the state $\hat{\rho}$ is then obtained by ‘averaging’ $V(\xi)$ using the real normalised quasiprobability $W(\xi)$: $$\begin{aligned}
V = \int d^4 \xi \, W(\xi) V(\xi).
\label{6.8}\end{aligned}$$ Since in general $W(\xi)$ can assume negative values at some places in phase space, it may appear at first sight that some of the properties of $V(\xi)$ described above may be lost by the ‘averaging’ process leading to $V$. However the UP guarantees that this will not happen; indeed by Lemma 2, Section II, in Eq., $V$ is seen to be positive definite. Quantitatively we have the following situation: Williamson’s theorem assures us that under the congruence transformation by a suitable $S_0 \in Sp(4, {\cal R})$, $V $ is taken to a diagonal form: $$\begin{aligned}
V_0 = S_0 V S_0^T = diag(\kappa_1,\kappa_1,\kappa_2,\kappa_2),
~~\kappa_{1,2} >0.
\label{6.9}\end{aligned}$$ The congruence transformation becomes a similarity transformation on $V \beta^{-1} $[@dutta94], since: $$\begin{aligned}
S \in Sp(4,{\cal R}) \,: V^{\,'} = SVS^T \leftrightarrow V^{\,'}
\beta^{-1} = S V \beta^{-1} S^{-1}.
\label{6.10}\end{aligned}$$ Applying this to the transition $V \to V_0$ we see that as $$\begin{aligned}
V_0 \beta^{-1} = -i \left( \begin{array}{cc}
\kappa_1 \sigma_2 &0 \\
0& \kappa_2 \sigma_2
\end{array} \right) ,
\label{6.11}\end{aligned}$$ the eigenvalues of $i V \beta^{-1}$ are $\pm \kappa_1, \, \pm
\kappa_2 $; and the UP ensures that $\kappa_{1,2} \geq
\hbar/2$. The $\kappa$’s themselves are determined, up to an interchange, by the $Sp(4,{\cal R})$ invariant traces $$\begin{aligned}
{\rm Tr}(V\beta^{-1})^2 = -2(\kappa_1^2 + \kappa_2^2),\nonumber\\
{\rm Tr}(V\beta^{-1})^4 = 2(\kappa_1^4 + \kappa_2^4).
\label{6.12}\end{aligned}$$ The manner in which the geometrical picture, and the constraint on the extent to which $W(\xi)$ can become negative, both generalise in going to the multi-mode situation is now clear.
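Numerically, Eqs. (\[6.10\])–(\[6.12\]) give a convenient recipe for the symplectic eigenvalues. In the sketch below (our own illustration, with $\hbar=1$ and the mode ordering $\xi=(q_1,p_1,q_2,p_2)$ assumed for $\beta$), the $\kappa$'s are extracted as the moduli of the eigenvalues of $iV\beta^{-1}$, and are verified to be invariant under a symplectic congruence $V \to SVS^T$:

```python
# Symplectic (Williamson) eigenvalues of a two-mode variance matrix V,
# computed as the moduli of the eigenvalues of i V beta^{-1}, cf. Eq. (6.11).
import numpy as np

J = np.array([[0.0, 1.0], [-1.0, 0.0]])        # single-mode symplectic metric
beta = np.block([[J, np.zeros((2, 2))],
                 [np.zeros((2, 2)), J]])
beta_inv = -beta                                # since beta^2 = -1

def symplectic_eigenvalues(V):
    # eigenvalues of i V beta^{-1} come in pairs +-kappa_j; keep one of each
    ev = np.sort(np.abs(np.linalg.eigvals(1j * V @ beta_inv)))
    return ev[::2]

# Williamson normal form V0 = diag(k1, k1, k2, k2) with k1 = 0.7, k2 = 1.9
V0 = np.diag([0.7, 0.7, 1.9, 1.9])

# a symplectic S: squeezing of mode 1 composed with a rotation of mode 1
c, s = np.cos(0.4), np.sin(0.4)
R = np.block([[np.array([[c, s], [-s, c]]), np.zeros((2, 2))],
              [np.zeros((2, 2)), np.eye(2)]])
S = np.diag([2.0, 0.5, 1.0, 1.0]) @ R
```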
A qualitatively similar situation (even if algebraically more involved) obtains when we generalise in the other direction—to higher order moments for a single-mode system, and their uncertainty relations handled in the Wigner distribution language. Limiting ourselves to the moments up to fourth order, we are concerned in the notation of Eq. with the uncertainty relation $$\begin{aligned}
\tilde{\Omega}^{(1)}(\hat{\rho}) \geq 0
\label{6.13}\end{aligned}$$ contained in the hierarchy, and its rendering in the Wigner distribution language. Combining the notations of Sections II and V, we have a set of five real phase space functions $A_{a}(q,p)$, $a=1,2,\cdots,5$, and their hermitian operator counterparts in the Weyl sense: $$\begin{aligned}
(A_a(q,p)) &= (q,\,p,\,q^2,\,qp,\,p^2)^T;\nonumber\\
(\hat{A}_a) &= ((A_a(q,p))_W) =
(\hat{q},\,\hat{p},\,\hat{q}^2,\,\frac{1}{2} \{\hat{q},\hat{p}\},\,\hat{p}^2)^T,
\label{6.14}\end{aligned}$$ a listing of the components $\hat{\xi}_m$, $\hat{X}_m$. In a given state $\hat{\rho}$ with Wigner distribution $W(q,p)$ we have the means $$\begin{aligned}
\langle \hat{A}_a \rangle = {\rm Tr}(\hat{\rho} \hat{A}_a) = \int \int
dp\,dq \, W(q,p) A_a(q,p) = \overline{A}_a.
\label{6.15}\end{aligned}$$ To calculate the elements of $\tilde{\Omega}^{(1)}(\hat{\rho})$ we need to deal with the products $\hat{A}_a \hat{A}_b $. For these, using Eq. we find: $$\begin{aligned}
\hat{A}_a \hat{A}_b &= (A_a(q,p)A_b(q,p))_W +
(C_{ab}(q,p))_W,\nonumber\\
(C_{ab}(q,p)) &= \left( \begin{array}{ccccc}
0 & \frac{i\hbar}{2} &~ 0 ~& \frac{i\hbar q }{2} & \frac{i\hbar p }{2}\\
-\frac{i\hbar}{2} & 0 &~ -\frac{i\hbar q}{2} ~& -\frac{i\hbar p}{2} & 0
\\
0 & \frac{i\hbar q }{2} &~ 0~& i\hbar q^2 &~ -\frac{\hbar^2}{2} + 2i \hbar
q p ~\\
-\frac{i\hbar q}{2} & \frac{i\hbar p}{2} &~ -i\hbar q^2 ~&
\frac{\hbar^2}{4} & i\hbar p^2 \\
-\frac{i\hbar p}{2} & 0 &~~-\frac{\hbar^2}{2} - 2i \hbar
q p~ ~& -i \hbar p^2 & 0
\end{array}
\right).
\label{6.16}\end{aligned}$$ (We note that the real symmetric part of the matrix $C(q,p)$ is $-\frac{\hbar^2}{4} g_K$ in the lower $3 \times 3$ block, where $g_K$ is the tilted form of the $(2+1)$ Lorentz metric in Eq.). With these ingredients and referring to the general structure we have the expression for $\tilde{\Omega}^{(1)}(\hat{\rho})$ in the Wigner distribution language: $$\begin{aligned}
\tilde{\Omega}^{(1)}(\hat{\rho}) &=
\left(\tilde{\Omega}^{(1)}_{ab}(\hat{\rho})\right) = \left({\rm Tr}(\hat{\rho}(\hat{A}_a - \langle \hat{A}_a \rangle)(\hat{A}_b - \langle \hat{A}_b \rangle)) \right) \nonumber\\
&= \left({\rm Tr}(\hat{\rho}\,(\,(A_a(q,p)A_b(q,p))_W -
\overline{A}_a\overline{A}_b + (C_{ab}(q,p))_W)\,)\right) \nonumber\\
&= \int \int dp dq \,W(q,p)\left( x(q,p) x(q,p)^T + (C_{ab}(q,p))\right),\nonumber\\
x(q,p)^T&=(q-\overline{q},\,p-\overline{p},\,q^2-\overline{q^2},\,
qp-\overline{qp},\,p^2-\overline{p^2} ).
\label{6.17}\end{aligned}$$ At each point $(q,p)$ in the phase plane, we have essentially a one-dimensional projector $x(q,p) x(q,p)^T$, together with a five-dimensional hermitian matrix $(C_{ab}(q,p))$ with elements involving $\hbar$ and $\hbar^2$ terms. The uncertainty relation demands that the phase plane ‘average’ of this expression (hermitian matrix) with $W(q,p)$ as weight function be nonnegative. After this ‘averaging’, the leading term is no longer a one-dimensional projector; moreover, since the pure imaginary antisymmetric part coming from this part of $C(q,p)$ is singular, Lemma 2 of Section II does not apply to the real symmetric part of $\tilde{\Omega}^{(1)}(\hat{\rho})$. In any event, the uncertainty relation again constrains the extent to which $W(q,p)$ can become negative.
Concluding Remarks
==================
In this paper we have set up a systematic procedure to obtain covariant uncertainty relations for general quantum systems. It applies equally well to continuous variable systems and to systems described by finite-dimensional Hilbert spaces, and even to systems based on the tensor product of the two. The procedure consists of two ingredients: the choice of a collection of observables, and the action of unitary symmetry operations on them. We have shown that the uncertainty relations are automatically covariant—preserved in content—under every symmetry operation.
We have applied this to two important special cases: the fluctuations and covariances in coordinates and momenta of an $n$-mode canonical system; and to the set of all hermitian operator ‘monomials’ in canonical variables $\hat{q},\,\hat{p}$ of a single mode system. These are both generalisations of the Schrödinger-Robertson UP in two distinct directions. The latter generalisation has been treated for definiteness using the Wigner distribution method.
We hope to have set up a robust yet flexible formalism which can be applied to all quantum systems, in particular to composite, for instance bipartite, systems. In such a case, by judicious choices of the operator sets $\{\hat{A}_a \}$ of Section II, one can devise tests for entanglement, exhibiting covariance under corresponding local symmetry operations. If for a bipartite system the operator $\hat{\rho}^{\,{\rm PT}}$[@peres96], arising from partial transpose of a physical state $\hat{\rho}$, violates any uncertainty relation, the presence of entanglement in $\hat{\rho}$ follows[@recent3; @solomonnpt12]. A systematic analysis along these lines of higher order moments in the bipartite multi-mode case will be presented elsewhere, keeping in mind that our general methods are applicable for both discrete and continuous variable systems, and even to composite systems consisting of either or both types as subsystems.
W. Heisenberg, Z. Phys. [**33**]{}, 879 (1925); E. Schrödinger, Annalen der Physik, (Leipzig), 361 (1926).
M. Born, Z. Phys. [**37**]{}, 863 (1926).
W. Heisenberg, Z. Phys. [**43**]{}, 172 (1927).
N. Bohr, Naturwissenschaften [**16**]{}, 245 (1928).
E. H. Kennard, Z. Phys. [**44**]{}, 326 (1927); H. P. Robertson, Phys. Rev. [**34**]{}, 163 (1929); E. Schrödinger, Sitzungsberichte Preus. Acad. Wiss. (Berlin), Phys.-Math. Klasse [**19**]{}, 296 (1930); H. P. Robertson, Phys. Rev. [**46**]{}, 794 (1934).
W. Heisenberg, [*The Physical Principles of the Quantum Theory*]{} (University of Chicago Press, Chicago, 1930).
N. Bohr, Nature [**121**]{}, 580 (1928).
L. Mandelstam and I. Tamm, J. Phys. (USSR) [**9**]{}, 249 (1945); P. Carruthers, and M. M. Nieto, Rev. Mod. Phys. [**40**]{}, 411 (1968); F. J. Narcowich and R. F. O’Connell, , 1 (1986); G. B. Folland and A. Sitaram, J. Fourier Analysis and Applications [**3**]{}, 207 (1997); D. A. Trifonov, J. Phys. A: Math. Gen. [**33**]{}, L299 (2000).
See for example W. C. Prince and S. S. Chissick, Eds., [*The uncertainty principles and foundations of quantum mechanics: A Fifty Years Survey*]{} (John Wiley and Sons Ltd., 1977).
I. Bialynicki-Birula and J. Mycielski, Commun. Math. Phys. [**44**]{}, 129 (1975); M. Krishna and K. R. Parthasarathy, The Indian Journal of Statistics, Series A [**64**]{}, 842 (2002); S. Wehner and A. Winter, New Journal of Physics [**12**]{}, 025009 (2010); I. Bialynicki-Birula and L. Rudnicki, [*Statistical Complexity*]{} (Springer 2011), Ed. K. D. Sen, Ch. 1.
S. L. Braunstein and C. M. Caves, , 3439 (1994); M. J. W. Hall, , 3307 (1995); J. Oppenheim and S. Wehner, Science [**330**]{}, 1072 (2010); M. H. Partovi, , 052117 (2011).
R. Simon, E. C. G. Sudarshan, and N. Mukunda, , 3028 (1988).
R. Simon, N. Mukunda, and B. Dutta, , 1567 (1994).
J. Solomon Ivan, N. Mukunda, and R. Simon, J. Phys. A: Math. Theor. [**45**]{}, 195305 (2012).
R. A. Horn and C. R. Johnson, [*Matrix Analysis*]{} (Cambridge University Press, 1985), p. 472.
For discussions of this situation see for instance A. Fine, J. Math. Phys. [**23**]{}, 1306 (1982); R. F. Streater, J. Math. Phys. [**41**]{}, 3556 (2000).
R. Simon, E. C. G. Sudarshan, and N. Mukunda, , 2419 (1985); R. Simon, E. C. G. Sudarshan, and N. Mukunda, , 3868 (1987); R. Simon and N. Mukunda, J. Opt. Soc. Am. A [**17**]{}, 2440 (2000).
Arvind, B. Dutta, N. Mukunda, R. Simon, Pramana [**45**]{} 6, 471 (1995).
R. G. Littlejohn, Phys. Rep. [**138**]{}, 193 (1986).
R. Simon, , 2726 (2000).
J. Williamson, Amer. J. Math. [**58**]{}, 141 (1936).
R. Simon, S. Chaturvedi, and V. Srinivasan, J. Math. Phys. [**40**]{}, 3632 (1999).
K. E. Cahill and R. J. Glauber, Phys. Rev. [**177**]{}, 1857, 1882 (1969); G. S. Agarwal and E. Wolf, , 2161, 2187 (1970).
R. F. O’Connell and E. P. Wigner, Phys. Lett. A [**83**]{}, 145 (1981); M. Hillery, R. F. O’Connell, M. O. Scully, and E. P. Wigner, Phys. Rep. [**106**]{}, 121 (1984).
A. R. Edmonds, [*Angular Momentum in Quantum Mechanics*]{} (Princeton University Press, Princeton, 1957), p. 45, Eq. 3.6.11.
R. Simon, E. C. G. Sudarshan, and N. Mukunda, , 3273 (1984).
An extensive analysis of $Mp(2)$ can be found in R. Simon and N. Mukunda, The two-dimensional symplectic and metaplectic groups and their universal cover, [*Symmetries in Science*]{} VI Ed. B. Gruber (Plenum, New York) p. 659 (1993).
A. Peres, Phys. Rev. Lett. [**77**]{}, 1413 (1996).
A. Biswas and G.S. Agarwal, New. J. Phys. [**7**]{}, 211 (2005); E. Shchukin and W. Vogel, , 230502 (2005); J. S. Ivan, S. Chaturvedi, E. Ercolessi, G. Marmo, G. Morandi, N. Mukunda, and R. Simon, , 032118 (2011).
J. S. Ivan, N. Mukunda, and R. Simon, Quant. Inf. Process. [**11**]{}, 873 (2012).
---
abstract: 'A periodically inhomogeneous Schrödinger equation is considered. The inhomogeneity is reflected through a non-uniform coefficient of the linear and nonlinear terms in the equation. Due to the periodic inhomogeneity of the linear term, the system may admit spectral bands. When the oscillation frequency of a localized solution resides in one of the finite band gaps, the solution is a gap soliton, characterized by the presence of infinitely many zeros in its spatial profile. Recently, it has been shown how to construct such gap solitons through a composite phase portrait. By exploiting the phase-space method and combining it with a topological argument, it is shown that the instability of a gap soliton can be described by the phase portrait of the solution. Surface gap solitons at the interface between a periodic inhomogeneous and a homogeneous medium are also discussed. Numerical calculations accompanying the analytical results are presented.'
address:
- |
Department of Mathematics, University of North Carolina at Chapel Hill,\
Chapel Hill, NC 27599
- |
School of Mathematical Sciences, University of Nottingham, University Park,\
Nottingham NG7 2RD, UK
- |
Department of Mathematics and Statistics, University of Sydney,\
Sydney, NSW 2006 Australia
author:
- 'R. Marangell'
- 'H. Susanto'
- 'C.K.R.T. Jones'
title: Unstable gap solitons in inhomogeneous Schrödinger equations
---
Introduction
============
A homogeneous nonlinear system may admit localized solutions with a natural frequency residing in the first (semi-infinite) band gap of the corresponding linear system. When there is a periodic non-uniformity in the linear system, additional finite band gaps are formed and the nonlinear system can admit a novel type of soliton known as a gap soliton [@denz09]. One main characteristic of a gap soliton is the infinite number of zeros in the profile of the solution, inheriting a characteristic of Bloch waves. Gap solitons are intensively studied in, among other fields, nonlinear optics [@acev00] and Bose-Einstein condensates [@kevr08]. Experimental observations of gap solitons in the one-dimensional setting are reported, e.g., in [@chen87; @eggl96; @eier04; @mand03; @mand04; @rosb06].
Depending on the particular underlying assumptions and specific limits, gap solitons have been studied analytically through several different approaches. The first theoretical approach is coupled-mode theory, which is based on a decomposition of the wave field into a forward and a backward propagating wave [@volo81; @chen87; @ster94]. The applicability and justification of the method can be seen in [@good01; @peli07; @peli08]. The stability of gap solitons in this approach has been studied analytically in [@malo94; @ross98; @bara98]. The second formal approximation to gap solitons is the so-called tight-binding approximation, which leads to a discrete nonlinear Schrödinger equation (DNLS) [@kevr09]. In this approach, a gap soliton can be related to the ‘ordinary’ soliton through the so-called staggering transformation. The existence and stability of discrete solitons in the uncoupled limit of this approach have been discussed in [@peli05]. The third analysis of gap solitons is based on the approximation in which the eigenfrequency of the localized modes is close to one of the edges of the finite band gaps [@iizu94; @iizu97; @peli04; @yang10]. In this case, the envelope of the gap solitons is described by the nonlinear Schrödinger equation. It is shown in [@peli04] that gap solitons suffer at least from an oscillatory instability because they possess internal modes.
Relatively recently, another analytical method was proposed by Kominis et al. [@komi06; @komi06_2; @komi07], employing a phase-space method for the construction of analytical solitary waves. Even though the method is rather limited to piecewise-constant coefficients, it was shown to be effective in obtaining various types of localized modes belonging to gap solitons. For this method, stability results have so far only been obtained through numerical simulations.
The phase-space method proposed in [@komi06; @komi06_2; @komi07] is similar to that used in our recent work [@rmckrtjhs10], where it was shown that the profile of a solution in phase space can be used to describe its instability. The method was based on the topological argument developed in [@ckrtj88]. Here, we propose to apply a similar method to determine the stability of gap solitons obtained through the phase-space method [@komi06; @komi06_2; @komi07]. Despite this similarity, the problem is nontrivial: the topological argument in [@ckrtj88] has so far been immediately applicable only to nonlinear systems with a finite inhomogeneity (see [@rmckrtjhs10] and references therein). By specifically constructing the solutions, we show that the argument is also useful for studying gap solitons. In addition to inhomogeneities occupying the infinite domain, the so-called surface gap solitons sitting at the interface between inhomogeneities in a semi-infinite domain and a homogeneous region [@rosb06; @smir06; @komi07] will also be studied. Our results complement the numerical results on the stability of surface gap solitons studied recently, e.g., in [@dohn08; @blan11].
The paper is organized as follows. In Section 2, the governing equations are discussed and the corresponding linear eigenvalue problem is derived. The construction of gap solitons using the phase-space method is briefly explained. The instability of gap solitons is studied analytically in Section 3 using the topological argument. In Section 4, the linear eigenvalue problem for several gap solitons is solved numerically, and agreement with the analytical results of the previous section is obtained. In the same section, the instability of surface gap solitons is also discussed. We conclude the paper in Section 5.
Mathematical model
==================
We consider the following governing system of differential equations $$\begin{array}{lll}
i\Psi_{t}+\Psi_{xx}+|\Psi|^2\Psi=V\Psi && x \in U_O := \R \setminus U_I\\
i\Psi_{t}+\Psi_{xx}-\eta |\Psi|^2\Psi=0 && x \in U_I
\end{array}
\label{gov1}$$ where the ‘outer’ equation has a focusing-type nonlinearity, the ‘inner’ equation can be defocusing ($\eta>0$) or linear ($\eta=0$), and $U_O, U_I$ are disjoint sets of intervals to be specified later.
To study standing waves of (\[gov1\]), we pass to a rotating frame and consider solutions of the form $\Psi(x,t) = e^{-i \w t} \psi(x,t)$. We then have $$\begin{array}{lll}
i\psi_{t}+\psi_{xx}+|\psi|^2\psi=(V-\w) \psi&& x \in U_O,\\
i\psi_{t}+\psi_{xx}-\eta |\psi|^2\psi=-\w \psi && x \in U_I.
\end{array}
\label{gov2}$$ Standing wave solutions of (\[gov1\]) will be steady-state solutions to (\[gov2\]). We consider real, $t$-independent solutions $u(x)$ to the ODE: $$\begin{array}{ccccc}
u_{xx} &=& (V-\w )u - u^3 & & x \in U_O, \\
u_{xx} &=& -\w u + \eta u^3 & & x \in U_I.
\end{array} \label{stat1}$$ To obtain solutions that decay to 0 as $x \to \pm \infty$, the condition that $V-\w > 0$ is required, with $\w \in \R_+$. We will also require that $u_x \to 0$ as $x \to \pm \infty$. To establish the instability of a standing wave solution we linearize (\[gov2\]) about a solution to (\[stat1\]). Writing $\psi=u(x) + \epsilon\left((r(x)+is(x))e^{\lambda t} + (r(x)^\star+is(x)^\star) e^{\lambda^\star t}\right)$ and retaining terms linear in $\epsilon$ leads to the eigenvalue problem $$\lambda\left(\begin{array}{cc} r \\ s \end{array}\right) = \left(\begin{array}{cc} 0 & D_{-} \\ - D_{+} & 0 \end{array}\right) \left(\begin{array}{cc} r \\ s \end{array}\right) = M \left(\begin{array}{cc} r \\ s \end{array}\right),
\label{eq:linear}$$ where the linear operators $D_{+}$ and $D_{-}$ are defined as $$\begin{aligned}
D_{+} &= \left\{\begin{array}{ll}
\frac{\partial^2}{\partial x^2} - (V- \w ) + 3u^2, & x \in U_O, \\
\frac{\partial^2}{\partial x^2} + \w - 3\eta u^2, & x \in U_I,
\end{array}\right.
\label{eq:plusoperator}\\
D_{-} &= \left\{\begin{array}{ll}
\frac{\partial^2}{\partial x^2} - (V- \w ) + u^2, & x \in U_O, \\
\frac{\partial^2}{\partial x^2} + \w - \eta u^2, & x \in U_I.
\end{array}\right.
\label{eq:minusoperator}\end{aligned}$$ It is then clear that the presence of an eigenvalue of $M$ with positive real part implies instability.
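The eigenvalue problem (\[eq:linear\]) is straightforward to discretize. The following sketch (our own, not from the paper) uses second-order finite differences in the simplest configuration where $U_I=\emptyset$, so that the outer equation alone carries the explicit soliton $u=\sqrt{2(V-\w)}\,\mathrm{sech}(\sqrt{V-\w}\,x)$; the parameter values are illustrative. It checks that $D_-$ annihilates $u$ (up to discretization error) and that $M$ anticommutes with $\mathrm{diag}(I,-I)$:

```python
# Finite-difference assembly of D_+, D_- and M from (eq:linear) for the
# homogeneous case U_I empty, linearizing about the explicit outer soliton.
import numpy as np

V, w = 2.0, 1.0
mu = V - w
N, L = 400, 20.0
x = np.linspace(-L, L, N)
h = x[1] - x[0]
u = np.sqrt(2 * mu) / np.cosh(np.sqrt(mu) * x)    # outer soliton

# second-order finite-difference Laplacian with Dirichlet conditions
D2 = (np.diag(np.ones(N - 1), 1) - 2 * np.eye(N)
      + np.diag(np.ones(N - 1), -1)) / h**2
Dp = D2 + np.diag(-mu + 3 * u**2)                 # D_+ on U_O
Dm = D2 + np.diag(-mu + u**2)                     # D_- on U_O; Dm u = 0

Z = np.zeros((N, N))
M = np.block([[Z, Dm], [-Dp, Z]])
T = np.block([[np.eye(N), Z], [Z, -np.eye(N)]])   # T M T = -M exactly
```

Since $TMT^{-1}=-M$ and $M$ is real, the eigenvalues occur in quadruplets $\lambda$, $-\lambda$, $\lambda^\star$, $-\lambda^\star$, so an eigenvalue with positive real part always signals instability.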
In [@komi06] a gap soliton was constructed via a method of superimposing the phase portraits of the ‘outer’ system: $$u_x = y, \qquad
y_x = (V-\omega)u-u^3,
\label{eq:outer}$$ and the ‘inner’ one: $$u_x = y, \qquad
y_x = -\omega u+\eta u^3.
\label{eq:inner}$$ We can view the composite picture as a single, non-autonomous system with phase plane given by: $$\begin{array}{lll}
u_x = y, \\
y_x = \left\{\begin{array}{lll}
(V-\omega)u-u^3, & x \in U_O, \\
-\omega u+\eta u^3, &x \in U_I.
\end{array}\right.
\end{array}
\label{fiberdyn}$$
In the phase plane of (\[eq:outer\]), the outer system admits a soliton solution, given by the equation: $$\label{eq:homo}
y^2 = (V - \w) u^2 - \frac{u^4}{2},$$ while solution curves of the inner system are given by $$\label{eq:innerC}
y^2 = - \w u^2 + \frac{\eta u^4}{2} + C.$$ The inner system (\[eq:inner\]) admits a heteroclinic orbit in the phase plane given by $C=\w^2/2$. The solutions we are interested in will travel in the phase plane along the homoclinic orbit of the outer system described by (\[eq:homo\]) and then ‘flip’ to the inner system as $x$ passes through $U_I$, and then ‘flip’ back to the outer system along the homoclinic orbit, repeating the process for each of the components of $U_I$ (see [@komi06]).
Let $U_S$ be the collection of intervals $U_S =[0,x_0)\cup (x_1,x_2) \cup (x_3,x_4) \cup \ldots$. In the case of a gap soliton, $U_I = -U_S \cup U_S$, the number of components of $U_I$ is infinite, and the $x_i$’s are chosen so that the soliton travels from $(u_0,y_0)$ along the inner system to $(-u_0, -y_0)$. This is a key ingredient in the construction of the soliton, and will play a large role in establishing instability. In [@komi06], the inner system is linear, and the length of the interval $(x_{2k-1}, x_{2k})$ can be determined as ${\pi}/{\sqrt{\w}}$. Here, we do not require that the inner system be linear; however, we do require that the $x_i$’s be chosen so that if $i\geq 1$, the soliton travels from $(u_0,y_0)$ on the homoclinic orbit along the inner system to $(-u_0,-y_0)$, which is also on the homoclinic orbit.
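The ‘flip’ just described can be checked by direct integration. In the sketch below (our own illustration with assumed values $V=2$, $\w=1$ and a linear inner medium, $\eta=0$, as in [@komi06]), a trajectory started at a point $(u_0,y_0)$ on the outer homoclinic orbit (\[eq:homo\]) is propagated through an inner interval of length $\pi/\sqrt{\w}$ and lands at $(-u_0,-y_0)$, which again lies on the homoclinic orbit:

```python
# Integrate the linear inner flow (eq:inner, eta = 0) over a length
# pi/sqrt(w), starting on the outer homoclinic orbit (eq:homo).
import numpy as np

V, w = 2.0, 1.0

def inner_rhs(z):
    # u_x = y, y_x = -w u
    u, y = z
    return np.array([y, -w * u])

def rk4(f, z, h, nsteps):
    # classical fourth-order Runge-Kutta integrator
    for _ in range(nsteps):
        k1 = f(z)
        k2 = f(z + 0.5 * h * k1)
        k3 = f(z + 0.5 * h * k2)
        k4 = f(z + h * k3)
        z = z + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return z

u0 = 0.8
y0 = np.sqrt((V - w) * u0**2 - u0**4 / 2)      # point on the homoclinic
length = np.pi / np.sqrt(w)                    # inner interval length
n = 2000
u1, y1 = rk4(inner_rhs, np.array([u0, y0]), length / n, n)
```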
In Figure \[fig:gapsoliton\], we plot an example of a gap soliton of the governing equation (\[gov1\]) for parameter values that will be explained in Section \[numeric\]. One can notice the main characteristic of gap solitons in the plot, namely the infinite number of zeros in the soliton profile.
Instability Results
===================
To show instability of the standing waves, we will show that the matrix $M$ from above has a real positive eigenvalue. This is done by applying the main theorem of [@ckrtj88]. In [@rmckrtjhs10], systems like (\[gov1\]) were considered with $U_I = (-L,L)$, for some real number $L$. One can show that the following quantities are well defined (see for example [@ckrtj88], and the references therein): $$\begin{aligned}
P &=& \textrm{ the number of positive eigenvalues of } D_{+} \\
Q &=& \textrm{ the number of positive eigenvalues of } D_{-}.\end{aligned}$$ We then have the following:
If $P-Q \neq 0, 1$, then there is a real positive eigenvalue of the operator $M$. \[th:ckrtj88\]
From Sturm–Liouville theory, $P$ and $Q$ can be determined by considering solutions of $D_+ v = 0$ and $D_- v = 0$, respectively. In fact, they are given by the number of zeros of the associated solution $v$. Notice that $D_- v = 0$ is satisfied by the standing wave itself, and that $D_+ v = 0$ is the equation of variations of the standing wave equation. It follows that: $$\label{eq:pandq}
\begin{array}{lll}
&&Q = \textrm{ the number of zeros of the standing wave } u. \\
&&P = \textrm{ the number of zeros of a solution to the variational equation along $u$. }
\end{array}$$
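In practice, both counts reduce to counting sign changes of a numerically sampled function. A minimal helper for this bookkeeping (the sampled function below is only an illustration, not one of the solutions in the paper):

```python
import numpy as np

def count_zeros(v):
    """Count zero crossings (sign changes) of a function sampled on a grid."""
    s = np.sign(v)
    s = s[s != 0]               # discard samples landing exactly on zero
    return int(np.sum(s[:-1] * s[1:] < 0))

# Illustration: sin(x) on (0, 3.5*pi) crosses zero at pi, 2*pi, and 3*pi.
x = np.linspace(0.1, 3.5 * np.pi, 2000)
n = count_zeros(np.sin(x))
```

Applied to a sampled standing wave this gives $Q$, and applied to a sampled solution of the variational equation it gives $P$.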
For gap solitons, it is not immediately clear how to apply Theorem \[th:ckrtj88\] above, since in this case both $P$ and $Q$ are infinite. The idea presented in this paper is to build approximations to a gap soliton using more and more intervals of $U_I$, for which the quantity $P-Q$ remains constant. To this end, define $S_0 = [0, x_0)$ and $S_n = [0,x_0) \cup (x_1,x_2) \cup (x_3, x_4) \cup \ldots \cup (x_{4n-1},x_{4n})$, where $(x_i, x_{i+1}) \subseteq U_S$. Thus $S_n$ adds two more components for each $n$. Then we can define $U_n = -S_n \cup S_n$, and we let $f_n$ be a solution to the ODE $$\begin{array}{ccccc}
f_{xx} &=& (V-\w )f - f^3, & & x \in \R \setminus U_n, \\
f_{xx} &=& -\w f + \eta f^3, & & x \in U_n.
\end{array} \label{stat2}$$ Thus for example $f_0$ would be the solution to $$\begin{array}{ccccc}
f_{xx} &=& (V-\w )f - f^3, & & |x|\geq x_0, \\
f_{xx} &=& -\w f + \eta f^3, & & |x| < x_0,
\end{array} \label{stat3}$$ while $f_1$ would be a solution to $$\begin{array}{ccccc}
f_{xx} &=& (V-\w )f - f^3, && x \notin (-x_4,-x_3) \cup (-x_2,-x_1) \cup (-x_0, x_0) \cup (x_1,x_2) \cup (x_3,x_4)\\
f_{xx} &=& -\w f + \eta f^3, && x \in (-x_4,-x_3) \cup (-x_2,-x_1) \cup (-x_0, x_0) \cup (x_1,x_2) \cup (x_3,x_4).
\end{array} \label{stat4}$$ A gap soliton can then be realized as the limit of the successive $f_n$’s (in a variety of norms, in particular in the $L^2$ and $H^1$ norms). In Figure \[fig:successivegapsolitons\] we present a plot of $f_n$, $n=0,1,2$, approximating the gap soliton in Figure \[fig:gapsoliton\].
We have the following theorem:
\[th:main\] The quantity $P-Q$ is the same for all $f_i$ described above. Thus if $f_0$ is unstable then so is $f_n$ for all $n$. Further, if $f_0$ is unstable, then so is the gap soliton $f$ obtained in the limit.
The key idea is to use the interpretation of $P$ and $Q$ given in (\[eq:pandq\]), namely the number of zeros of the solution $f$ and the number of zeros of the solution to the variational equation along $f$, for the partial solution, defined on $(x_i, x_{i+4})$, of the ODE below: $$\begin{array}{ccccc}
f_{xx} &=& (V-\w )f - f^3, & & x \in (x_{i+1},x_{i+2}) \cup (x_{i+3}, x_{i+4}) \\
f_{xx} &=& -\w f + \eta f^3, & & x \in (x_i,x_{i+1}) \cup (x_{i+2},x_{i+3}).
\end{array} \label{stat5}$$
![A sketch of a phase portrait of the partial solution to equation (\[stat5\]). The points $a_i$ correspond to the points $(f(x_{i-1}),f_x(x_{i-1}))$ in the phase plane.[]{data-label="fig:gapdetour"}](sketch)
The number $Q$ is straightforward to calculate. We make the geometric observation, as in [@rmckrtjhs10], that $P$, the number of zeros of a solution to the equation of variations along $f$, can be found by determining the number of times that a vector must pass through the vertical as the base point ranges over the entire orbit. It turns out that for the solution of (\[stat5\]) defined above, the rotation of a vector by the equation of variations is the same (mod $2\pi$) as if the base point had traveled along only the outer homoclinic orbit.
\[ex:lin\] To better illustrate this last point, we first consider the case when both the inner and outer systems are linear. That is, we have the following systems of linear, constant coefficient equations $$\begin{aligned}
\begin{pmatrix} u \\ y \end{pmatrix}_x & = &
\begin{pmatrix} 0 & 1 \\ (V-\w) & 0 \end{pmatrix} \begin{pmatrix} u \\ y \end{pmatrix},\, \textrm{ when } x \in (x_{i+1}, x_{i+2}) \cup (x_{i+3},x_{i+4}) \label{eq:outersim} \\
& = & \begin{pmatrix} 0 & 1 \\ -\w & 0 \end{pmatrix} \begin{pmatrix} u \\ y \end{pmatrix},\, \textrm{ when } x \in (x_i, x_{i+1}) \cup
(x_{i+2},x_{i+3}) \label{eq:innersim}\end{aligned}$$ The solution to the above equations can be written explicitly. Further, because we are in the linear case, the equation of variations along a solution is the same as the equation itself (\[eq:outersim\], \[eq:innersim\]).
Guided by the geometry of the phase plane, we let $\Phi_1(a,b)$ denote a fundamental solution matrix to the equation of variations of the outer system of equations (\[eq:outersim\]) along a solution to (\[eq:outersim\]) which travels from point $a$ to point $b$ in the phase plane. That is, let $(u(x),y(x))$ be a solution to (\[eq:outersim\]), considered on the interval $(x_j,x_k)$. Then set $a := (u(x_j), y(x_j))$ and $b :=(u(x_k),y(x_k))$, and define $\Phi_1(a,b)$ to be a fundamental solution matrix of the equation of variations to the outer system, along the path $(u(x), y(x))$ with $x \in (x_j, x_k)$.
Similarly, let $\Phi_2(a,b)$ be a fundamental solution matrix to the equation of variations of the inner system (\[eq:innersim\]), along a solution to (\[eq:innersim\]) evolving from point $a$ to point $b$. We denote by $a_0, a_1, a_2, a_3$ the points in the phase plane of (\[eq:outersim\], \[eq:innersim\]) where the solution switches between the two systems, and by $a_4$ the point where we stop evolving (see Figure \[fig:gapdetour\]), and we let $\begin{pmatrix} \zeta_0 \\ \xi_0 \end{pmatrix}$ be a pair of initial conditions in the tangent plane to $\R^2$ at the point $a_0$. We have that a solution to the equation of variations along the orbit from $a_0$ to $a_1$ to $a_2$ to $a_3$ to $a_4$ can be described as $$\Phi_1(a_3,a_4) \Phi_2(a_2,a_3)\Phi_1(a_1,a_2)\Phi_2(a_0,a_1) \begin{pmatrix} \zeta_0 \\ \xi_0 \end{pmatrix}.$$ It turns out that modulo $2 \pi$, $$\label{eq:rotid}
\Phi_1(a_3,a_4) \Phi_2(a_2,a_3)\Phi_1(a_1,a_2)\Phi_2(a_0,a_1) \begin{pmatrix} \zeta_0 \\ \xi_0 \end{pmatrix} = \Phi_1(a_0,a_4) \begin{pmatrix} \zeta_0 \\ \xi_0 \end{pmatrix}.$$ The equality in equation (\[eq:rotid\]) can be verified by solving the appropriate systems. Another way to see the effect is the following. As the base point evolves under equations (\[eq:outersim\], \[eq:innersim\]) from $a_i$ to $a_{i+1}$, we can consider the aggregate effect of $\Phi_j(a_i, a_{i+1})$ on a tangent vector $\begin{pmatrix} \zeta_0 \\ \xi_0 \end{pmatrix}$, as a linear map from $\R^2 \to \R^2$, by simply determining where a tangent vector at $a_i$ gets sent when the base point is at $a_{i+1}$. That is, we are considering $\Phi_j(a_i,a_{i+1})$ as a map from the tangent plane of $\R^2$ at the point $a_i$ to the tangent plane of $\R^2$ at the point $a_{i+1}$. This will give us the total rotation of a tangent vector modulo $2 \pi$ as we travel from point $a_i$ to point $a_{i+1}$ along the orbit. The key observation is that for $\Phi_2(a_0,a_1)$ and $\Phi_2(a_2,a_3)$, this map will be minus the identity, $-\id$. That is, viewing $\Phi_2(a_j,a_{j+1})$, $j = 0,2$, as a map between tangent spaces of $\R^2$, $\Phi_2(a_j,a_{j+1}):T_{a_j}\R^2 \to T_{a_{j+1}}\R^2$, $j = 0,2$, we have $\Phi_2(a_j,a_{j+1}) = -\id$. Moreover, by considering $\Phi_2(a_j,a_{j+1})$ in this way, we are just measuring the effect of rotation by $\Phi_2(a_j,a_{j+1})$ on an initial tangent vector modulo $2 \pi$, and we have that $$\label{eq:negid}\begin{array}{lll}
&&\Phi_1(a_3,a_4) \Phi_2(a_2,a_3)\Phi_1(a_1,a_2)\Phi_2(a_0,a_1) \begin{pmatrix} \zeta_0 \\ \xi_0 \end{pmatrix}\\
&&= (-\id)^2 \Phi_1(a_3,a_4) \Phi_1(a_1,a_2) \begin{pmatrix} \zeta_0 \\ \xi_0 \end{pmatrix} \\
&&= \Phi_1(a_0, a_4) \begin{pmatrix} \zeta_0 \\ \xi_0 \end{pmatrix},\end{array}$$ where the last equality follows from the facts that $a_0 = -a_1$, $a_2=-a_3$, the outer system of equations (\[eq:outersim\]) is symmetric about the origin, and the group property of variational flows.
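The identity (\[eq:negid\]) is easy to verify numerically in the linear case. The sketch below, assuming the illustrative values $V=1$, $\w=0.5$ and the inner half-period $\pi/\sqrt{\w}$ from [@komi06], checks that the inner flow over one inner interval is $-\id$, and that the detour composition then agrees with the purely outer flow:

```python
import numpy as np
from scipy.linalg import expm

V, w = 1.0, 0.5                                 # illustrative parameters
A_out = np.array([[0.0, 1.0], [V - w, 0.0]])    # outer linear system (eq:outersim)
A_in  = np.array([[0.0, 1.0], [-w, 0.0]])       # inner linear system (eq:innersim)

half_period = np.pi / np.sqrt(w)                # length of each inner interval
Phi2 = expm(A_in * half_period)                 # inner flow over one inner interval

t1, t2 = 0.7, 1.3                               # arbitrary outer travel times
lhs = expm(A_out * t2) @ Phi2 @ expm(A_out * t1) @ Phi2
rhs = expm(A_out * (t1 + t2))                   # purely outer flow, same outer time
```

Here `Phi2` comes out as $-\id$ exactly as claimed, so the two factors of $-\id$ cancel and `lhs` matches `rhs`.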
We are now ready to state the main lemma used in the proof of Theorem \[th:main\].
\[lem:main\] Redefine $\Phi_1(a,b)$ and $\Phi_2(a,b)$ as in the above example, but instead of using the linear ODE, let them be the fundamental solution matrices to the equations of variations along solutions to the inner and outer systems given in the nonlinear equation (\[stat5\]): $$\begin{array}{ccccc}
f_{xx} &=& (V-\w )f - f^3, & & x \in (x_{i+1},x_{i+2}) \cup (x_{i+3}, x_{i+4}), \\
f_{xx} &=& -\w f + \eta f^3, & & x \in (x_i,x_{i+1}) \cup (x_{i+2},x_{i+3}).
\end{array}$$ Likewise, let $a_j$ be defined analogously for the points in the phase plane of the nonlinear equation where the orbit switches between the inner and outer systems. Also, let $\displaystyle \begin{pmatrix} \zeta_0 \\ \xi_0 \end{pmatrix}$ be an initial condition to the equation of variations along a solution to (\[stat5\]) in the tangent plane to $\R^2$ at $a_0$. Then we have the following: $$\label{eq:rotidnl}
\Phi_1(a_3,a_4) \Phi_2(a_2,a_3)\Phi_1(a_1,a_2)\Phi_2(a_0,a_1) \begin{pmatrix} \zeta_0 \\ \xi_0 \end{pmatrix} = \Phi_1(a_0,a_4) \begin{pmatrix} \zeta_0 \\ \xi_0 \end{pmatrix}.$$
The same reasoning that was used in the example (the linear case) can be used to prove Lemma \[lem:main\] (the nonlinear case). The only difference is that, in order to determine the aggregate effect of the inner system on an initial tangent vector, some more care must be taken with the matrices $\Phi_2(a_i, a_{i+1})$. Write the equation of variations of the outer system as $$\label{eq:outrsys}
\begin{pmatrix} \zeta \\ \xi \end{pmatrix}_x = \begin{pmatrix} 0 & 1 \\ -3u_1^2 + V- \w & 0 \end{pmatrix} \begin{pmatrix} \zeta \\ \xi \end{pmatrix}, \,\textrm{ when } x \in (x_{i+1},x_{i+2}) \cup (x_{i+3},x_{i+4}),$$ where $u_1(x)$ is the solution of the outer system with $\lim_{x\to\pm \infty} u_1(x) = \lim_{x\to\pm \infty} u_1'(x) = 0$. Write the equation of variations of the inner system as $$\label{eq:innersys}
\begin{pmatrix} \zeta \\ \xi \end{pmatrix}_x = \begin{pmatrix} 0 & 1 \\ 3\eta u_2^2 - \w & 0 \end{pmatrix} \begin{pmatrix} \zeta \\ \xi \end{pmatrix}, \,\textrm{ when } x \in (x_i,x_{i+1}) \cup (x_{i+2},x_{i+3}),$$ where $u_2(x)$ satisfies the appropriate conditions for the orbit. Here is where the appropriate choices of the $x_i$’s come into play. In the linear case, the $x_i$’s were chosen so that the length of an interval in $U_I$ was $\frac{\pi}{\sqrt{\w}}$. Here we choose the $x_i$’s so that the length of an interval in $U_I$ is such that we not only return to the homoclinic orbit, but, if we leave the homoclinic orbit at the point $(u_0,y_0)$, we return to it at the point $(-u_0,-y_0)$. This allows us to determine the effect of the rotation (modulo $2\pi$) by the flow associated to the equation of variations along the partial orbit $(u_2(x),y_2(x))$. In fact, we claim that exactly the same is true as in the linear case. If $B$ is the linear map from the tangent space at $a_0$ (respectively $a_2$) to the tangent space at $a_1$ (respectively $a_3$), then $B = -\id$. To see this we write out $B$ in a suitable basis $\vec{v}_1, \vec{v}_2$ of the tangent space at $a_0$. One obvious choice of a basis vector is the tangent vector to the inner orbit. Given equation (\[eq:innersys\]) and the fact that the orbit travels from $(u_0,y_0)$ to $(-u_0,-y_0)$, if $\vec{v}_1$ is the vector tangent to the inner orbit at $a_0$ (or $a_2$), then under $B$ we have $\vec{v}_1 \mapsto -\vec{v}_1$. This means that $B$ has the form: $$B = \begin{pmatrix} -1 & b_{1,2} \\ 0 & b_{2,2} \end{pmatrix},$$ where $b_{i,j}$ are the coefficients of the linear combination of $\vec{v}_1$ and a suitably chosen $\vec{v}_2$. Now we appeal to two facts about the matrix $B$ which are evident from its definition. The first is that $B$ must be orientation preserving.
This is an elementary consequence of the fact that it is the matrix of a flow (see for example [@perko01]). This means that $b_{2,2}$ must be negative. The second fact is that since $B$ corresponds to the matrix of the equation of variations traveling halfway along the periodic orbit given by $(u_2(x),y_2(x))$ (because we chose our $x_i$’s so it would be that way), we must have $B^2 = \id$. But this means that $b_{1,2} = 0$ and $b_{2,2} = -1$, so the matrix $B$ itself is $-\id$. Now we simply repeat the computation done in equation (\[eq:negid\]), and the proof of Lemma \[lem:main\] is complete.
We are now ready to complete the proof of Theorem \[th:main\].
Recall that $f_n$ as constructed is the solution to the ODE (\[stat2\]). We let $P_n$ and $Q_n$ denote the counts $P$ and $Q$ for $f_n$, respectively. Lemma \[lem:main\] shows that in passing from $f_{n-1}$ to $f_n$ the counts $P$ and $Q$ increase by the same amount, and so the quantity $P_n-Q_n$ is the same for all $f_n$; in particular, it is equal to $P-Q$ for $f_0$. This completes the first part of the proof of Theorem \[th:main\].
In order to determine the instability of the limit soliton we must proceed topologically using the methods developed in the proof of the main theorem of [@ckrtj88].
We have already discussed that in $H^1$, $f_n \to f$, a solution to $$\begin{array}{ccccc}
f_{xx} &=& (V-\w )f - f^3, & & x \in \R \setminus U_I, \\
f_{xx} &=& -\w f + \eta f^3, & & x \in U_I.
\end{array} \label{top2}$$
Following [@ckrtj88] we can associate to each solution $f_n$ a curve $\gamma_n(x)$, and to $f$ a curve $\gamma(x)$ in $\Lambda(2)$ the space of Lagrangian planes in $\R^4$.
This is done as follows. Let $\Phi^n_{L_+}(x)$ denote the evolution operator of the ODE corresponding to the equation of variations of the ODE (\[stat2\]) along the solution $f_n$. Likewise, let $\Phi_{L_+}(x)$ denote the evolution operator of the ODE corresponding to the equation of variations of the ODE (\[top2\]) along the solution $\displaystyle f = \lim_{n \to \infty} f_n$. Thus, if $\displaystyle \begin{pmatrix} v_0 \\ w_0 \end{pmatrix}$ is a pair of initial conditions at $x = 0$, then for any $x \in \R$ we have that the evolution of $\displaystyle \begin{pmatrix} v_0 \\ w_0 \end{pmatrix}$ under the equation of variations along $f$, respectively $f_n$, will be given by $\displaystyle \begin{pmatrix} \Phi_{L_+}(x) \cdot v_0 \\ \Phi_{L_+}(x)\cdot w_0 \end{pmatrix}$, respectively $\displaystyle \begin{pmatrix}
\Phi_{L_+}^n(x) \cdot v_0 \\\Phi_{L_+}^n(x) \cdot w_0 \end{pmatrix}$.
We remark that the initial conditions $\displaystyle \begin{pmatrix} v_0 \\ w_0 \end{pmatrix} $ will be the same for each $f_n$ as well as for $f$.
Again appealing to [@ckrtj88], we can explicitly write the curves $\gamma_n(x)$ and $\gamma(x)$ in the space of Lagrangian planes $\Lambda(2) \approx U(2)/O(2)$. This is given by $$\gamma_n(x) = \begin{pmatrix} e^{i \theta_{1,n}(x)} & 0 \\ 0 & e^{i \theta_{2,n}(x)} \end{pmatrix},$$ where $$\theta_{1,n} = 2 \arctan(\frac{\Phi_{L_+}^n(x) \cdot w_0}{\Phi_{L_+}^n(x) \cdot v_0}) \textrm{ and, } \theta_{2,n} = -2 \arctan(\frac{f_n'(x)}{f_n(x)}),$$ and $$\gamma(x) = \begin{pmatrix} e^{i \theta_1(x)} & 0 \\ 0 & e^{i \theta_2(x)} \end{pmatrix},$$ where $$\theta_1 = 2 \arctan(\frac{\Phi_{L_+}(x) \cdot w_0}{\Phi_{L_+}(x) \cdot v_0}) \textrm{ and, } \theta_2 = -2 \arctan(\frac{f'(x)}{f(x)}).$$ Now we observe that the curves $\gamma_n(x)$ and $\gamma(x)$ actually lie on a torus contained in $\Lambda(2)$.
It was established in [@ckrtj88] that because $f_n$ and $f$ are solutions corresponding to homoclinic orbits in the phase plane of equations (\[stat2\]) and (\[top2\]), the curves $\gamma_n(x)$ and $\gamma(x)$ have well defined end points. Let $\mu_{-,n}$, $\mu_{+,n}$ be the endpoints in $\Lambda(2)$ of $\gamma_n(x)$. That is, let $$\lim_{x \to -\infty} \gamma_n(x) = \mu_{-,n} \textrm{ and, } \lim_{x \to \infty} \gamma_n(x) = \mu_{+,n},$$ and set $$\lim_{x \to -\infty} \gamma(x) = \mu_{-} \textrm{ and, } \lim_{x \to \infty} \gamma(x) = \mu_{+}.$$ Further, because $f_n \to f$ and by Lemma \[lem:main\], we have that $\mu_{-,n} = \mu_{-}$ and $\mu_{+,n} = \mu_{+}$ for all $n$. In the previously introduced coordinates on the torus in $\Lambda(2)$, this means that the limits of $\theta_{1,n}$, $\theta_{2,n}$ are equal to the limits of $\theta_1(x)$ and $\theta_2(x)$ as $x \to \pm \infty$. Moreover, it is easy to calculate explicitly that $$\theta_{1}(x) \to 2 \arctan(\sqrt{V-\w}) := \theta_- \textrm{ and,} \quad \theta_2(x) \to - \theta_-$$ as $x \to -\infty$.
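The limit $\theta_2(x) \to -\theta_-$ can be checked against the explicit outer homoclinic solution. With the illustrative values $V=1$, $\w=0.5$, the outer equation $f_{xx} = (V-\w)f - f^3$ has the homoclinic solution $f(x) = \sqrt{2(V-\w)}\,\operatorname{sech}(\sqrt{V-\w}\,x)$ (a standard computation, used here only as a sanity check):

```python
import numpy as np

V, w = 1.0, 0.5
k = np.sqrt(V - w)

def f(x):
    """Outer homoclinic solution f(x) = sqrt(2(V-w)) * sech(k x)."""
    return np.sqrt(2.0) * k / np.cosh(k * x)

def fp(x):
    """Its derivative f'(x)."""
    return -np.sqrt(2.0) * k**2 * np.tanh(k * x) / np.cosh(k * x)

theta_minus = 2.0 * np.arctan(np.sqrt(V - w))
theta2_far = -2.0 * np.arctan(fp(-30.0) / f(-30.0))   # theta_2 far to the left
```

Evaluating `theta2_far` well into the left tail reproduces $-\theta_-$ to high accuracy.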
Still following the outline laid out in [@ckrtj88], we denote by a tilde the lift of a point (or curve) in the torus embedded in $\Lambda(2)$ to its corresponding point in the universal cover of the torus, $\R^2$. We will parametrize the universal cover of the torus in the obvious way. Without loss of generality, all of the $\mu_{-,n}$’s and $\mu_{-}$ can be lifted to the same point $\tilde{\mu}_- = (\theta_-, -\theta_-)$. It was shown in [@ckrtj88] that for each $n$, $\mu_{+,n}$ lifts to the point $\tilde{\mu}_{+,n} = ( \pm \theta_-, \theta_- + (P-Q) 2 \pi)$. Thus Lemma \[lem:main\] implies that each $\mu_{+,n}$ lifts to the same point $\tilde{\mu}_{+,0} = (\pm \theta_-, \theta_- + 2 \pi k )$, where $k = P-Q$.
Next we observe that since $f_n \to f$ pointwise, $\gamma_n(x) \to \gamma(x)$ pointwise in the torus inside $\Lambda(2)$; the compactness of the torus and of $\Lambda(2)$ then means that the end point $\mu_+$ must lift to the same point in the cover as $\mu_{+,0}$. Thus we have $\tilde{\mu}_+ = (\pm \theta_- , \theta_- + 2 \pi k )$.
Finally, it was shown in [@ckrtj88] that if $|k| \neq 0, 1$, then the corresponding soliton underlying the curve $\gamma$ is unstable. This completes the proof of Theorem \[th:main\].
The proof of Theorem \[th:main\] may also be couched in the language of fixed end point homotopy classes. There are several ways to define such classes; see for example [@robsal93] or [@abbond01], and the references therein. In this context, Theorem \[th:main\] establishes that the fixed end-point homotopy class of the curve $\gamma$ is the same as those of the curves $\gamma_n(x)$. An immediate consequence of this observation is that in $\Lambda(2)$, it is possible to continuously deform the curves $\gamma$ and $\gamma_n$ all to the curve $\gamma_0$.
\[remark\] One can also consider so-called [*surface*]{} gap solitons, and obtain exactly the same results as in Theorem \[th:main\]. Mathematically, a surface gap soliton is a solution to equation (\[top2\]) with the chosen intervals $U_I$ replaced by $U_S$, defined earlier. In this case, we consider a sequence of functions $f_n$ which are solutions to equation (\[stat2\]), but with $U_n$ replaced by $S_n$. Then $f_n \to f$, a solution to (\[top2\]) with the appropriate replacements. Lemma \[lem:main\] holds, as well as Theorem \[th:main\], and the techniques used in each are identical. Thus if we start with an unstable solution, then the surface gap soliton that we obtain in the limit will also be unstable. (See below for a further discussion of surface gap solitons.)
Numerical Solutions and Discussion {#numeric}
==================================
We have solved the time-independent equation (\[stat1\]) numerically, using a spectral difference method to approximate the Laplacian $u_{xx}$. Once a solution is obtained, the corresponding eigenvalue problem (\[eq:linear\]) is solved using a MATLAB routine. The time-dependent equation (\[gov1\]) is integrated numerically using a fourth-order Runge–Kutta method. Throughout the paper, we consider the parameter values $$V=1,\,\omega=0.5.$$
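As a simpler illustration of the stationary problem (the paper itself uses a spectral discretization; the grid, domain, and tolerances below are illustrative choices, not the paper's), one can solve the pure outer equation $f_{xx}=(V-\w)f-f^3$ with second-order central differences and Newton's method, starting from the known $\operatorname{sech}$ profile:

```python
import numpy as np

V, w = 1.0, 0.5
L, N = 20.0, 801                        # domain [-L, L] and grid size (illustrative)
x = np.linspace(-L, L, N)
h = x[1] - x[0]

# Second-order central-difference Laplacian with zero Dirichlet boundaries.
D2 = (np.diag(np.full(N - 1, 1.0), -1) - 2.0 * np.eye(N)
      + np.diag(np.full(N - 1, 1.0), 1)) / h**2

def residual(f):
    """F(f) = f'' - (V - w) f + f^3, the outer stationary equation rearranged."""
    return D2 @ f - (V - w) * f + f**3

k = np.sqrt(V - w)
f = np.sqrt(2.0) * k / np.cosh(k * x)   # exact continuum profile as initial guess

for _ in range(10):                     # Newton iteration on the discrete system
    J = D2 - (V - w) * np.eye(N) + 3.0 * np.diag(f**2)
    f = f - np.linalg.solve(J, residual(f))

res_norm = np.max(np.abs(residual(f)))
```

Newton converges in a few steps to the discrete soliton, which differs from the continuum $\operatorname{sech}$ profile only by the $O(h^2)$ discretization error.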
First, we study Equation (\[gov1\]) with $$\eta=\left\{
\begin{array}{lll}
1,\quad x\in (-x_0,x_0),\\
0,\quad x\in (x_{2n+1},x_{2n+2}),\,(-x_{2n+2},-x_{2n+1}),
\end{array}
\right.
\label{eta1}$$ where $x_0=2,\,x_{2n+1}-x_{2n}=1,\,x_{2n+2}-x_{2n+1}=\pi/\sqrt{\omega}$ and $ n=0,1,2,\dots$. A gap soliton for the above periodic inhomogeneity is depicted in Figure \[fig:gapsoliton\].
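The breakpoints $x_n$ of the inhomogeneity (\[eta1\]) are straightforward to generate; a small sketch with the values from the text ($x_0=2$, pieces of length $1$ alternating with pieces of length $\pi/\sqrt{\omega}$):

```python
import numpy as np

w = 0.5
x0 = 2.0
inner_len = np.pi / np.sqrt(w)             # length of each eta = 0 interval

def breakpoints(n_pairs):
    """Return [x_0, x_1, x_2, ...] with x_{2n+1} - x_{2n} = 1 and
    x_{2n+2} - x_{2n+1} = pi/sqrt(w), matching (eta1)."""
    xs = [x0]
    for _ in range(n_pairs):
        xs.append(xs[-1] + 1.0)            # piece of length 1
        xs.append(xs[-1] + inner_len)      # piece of length pi/sqrt(w)
    return xs

xs = breakpoints(2)
```

The negative breakpoints follow by the mirror symmetry $x \mapsto -x$ in (\[eta1\]).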
Theorem \[th:main\] implies that to determine the instability of the gap soliton, it suffices to determine the instability of the corresponding solution $f_0$ shown in panel (a,b) of Figure \[fig:successivegapsolitons\]. As discussed in [@rmckrtjhs10], the positive solution $f_0$ is unstable, with $P=2$ and $Q=0$. We plot $\lambda_+$, i.e. the eigenvalues of the operator $D_+$, in Figure \[fig:lambda\_p\]. As shown in the figure, for $f_0$ there are two positive eigenvalues of $D_+$, i.e. $P=2$. The matrix $M$ in (\[eq:linear\]) for the solution has one pair of real eigenvalues [@rmckrtjhs10] in agreement with Theorem \[th:ckrtj88\].
According to Lemma \[lem:main\], $f_n$ must have the same value of $P-Q$ as $f_0$. In the same figure, we see that $f_1$ and $f_2$ have $P=6$ and $P=10$, respectively. Considering the fact from Figure \[fig:successivegapsolitons\] that $f_1$ and $f_2$ have $Q=4$ and $Q=8$, respectively, we indeed obtain $P-Q=2$ for both $f_1$ and $f_2$. Using the lemma, one obtains $P-Q=2$ for $\lim_{n\to\infty}f_n$. Using Theorem \[th:main\], one can conclude that the gap soliton in Figure \[fig:gapsoliton\] is unstable. We depict in Figure \[fig:lambda\](a) the eigenvalue structure of the gap soliton in the complex plane. While the corresponding $f_0$ has one pair of real eigenvalues [@rmckrtjhs10], the gap soliton has several pairs of unstable eigenvalues. Nonetheless, one can easily notice that there is only one pair of real eigenvalues, similarly to $f_0$ [@rmckrtjhs10]. The time dynamics of the solution is shown in panel (b) of the same figure, where the typical instability takes the form of a dissociation of the solution.
Next, we study Equation (\[gov1\]) with $$\eta=\left\{
\begin{array}{lll}
1,\quad x\in (-x_0,x_0),\\
0,\quad x\in (x_{2n+1},x_{2n+2}),
\end{array}
\right.
\label{eta2}$$ for the same values of $x_n$, $n=0,1,2,\dots$, as above. The only difference with $\eta$ defined in Equation (\[eta1\]) is that the present periodic inhomogeneity only occupies the $x>0$-region. In this case, we will have surface gap solitons sitting at the interface between a homogeneous and a periodically inhomogeneous region. A corresponding surface gap soliton of that in Figure \[fig:gapsoliton\] and one of its successive approximations $f_1$ are shown in Figure \[fig:surface\]. The $f_0$ approximation of the soliton is nothing else but that shown in Figure \[fig:successivegapsolitons\](a).
Using Theorem \[th:main\] and Remark \[remark\], one can expect that in this case $P-Q=2$. Plotted in Figure \[fig:surface\_stability\](a) are the positive eigenvalues of $D_+$, i.e. $\lambda_+$. The positive eigenvalues $\lambda_+$ of $f_0$ are the same as before, i.e. $P=2$. For $f_1$ and $f_2$, from Figure \[fig:surface\_stability\](a) one can deduce that $P=4$ and $P=6$, respectively, with $Q=2$ and $Q=4$. Hence, the limiting quantity $P-Q$ of the surface gap soliton is the same as that of the gap soliton in Figure \[fig:gapsoliton\], i.e. $P-Q=2$. Shown in Figure \[fig:surface\_stability\](b) is the eigenvalue structure of the surface gap soliton; as expected, one again obtains one pair of real eigenvalues, similarly to the gap soliton depicted in Figure \[fig:lambda\](a). We plot the time dynamics of the surface gap soliton in Figure \[fig:surface\_evol\].
Conclusion
==========
We have considered a nonlinear Schrödinger equation with a periodic inhomogeneity, both on the infinite and on the semi-infinite domain. Specifically, we have studied the instability of gap solitons admitted by the system. We have established a proof that if the periodic inhomogeneity is arranged in a particular way, such that the parts of the solution belonging to closed trajectories in phase space traverse half the period of those trajectories, then the soliton inherits the instability of the corresponding solution with finitely many inhomogeneous intervals. The analytical study is based on the application of a topological argument developed in [@ckrtj88].
It is natural to extend the study to the case when the solutions are localized, but do not tend to the uniform zero solution (see, e.g., [@komi06_2]). We propose to study the (in)stability of such solitons in future work, using analytical methods similar to those presented herein.
[999]{} A. Abbondandolo. *Morse Theory for Hamiltonian Systems.* Pitman Research Notes in Mathematics, vol. 425, Chapman and Hall, London, 2001.
A.B. Aceves, *Optical gap solitons: Past, present, and future; theory and experiments*, Chaos 10, 584 (2000).
C. Denz, S. Flach, Yu.S. Kivshar, *Nonlinearities in Periodic Structures and Metamaterials*, Volume 150 (Springer, 2009).
I. V. Barashenkov, D. E. Pelinovsky, and E. V. Zemlyanaya, *Vibrations and Oscillatory Instabilities of Gap Solitons*, Phys. Rev. Lett. 80, 5117 (1998).
E. Blank and T. Dohnal, *Families of Surface Gap Solitons and their Stability via the Numerical Evans Function Method*, to appear in SIAM J. Appl. Dyn. Syst.
W. Chen and D. L. Mills, *Gap solitons and the nonlinear optical response of superlattices*, Phys. Rev. Lett. 58, 160 (1987).
A. De Rossi, C. Conti, and S. Trillo, *Stability, Multistability, and Wobbling of Optical Gap Solitons*, Phys. Rev. Lett. 81, 85 (1998).
C. M. de Sterke and J. E. Sipe, in Progress in Optics, edited by E. Wolf (North-Holland, Amsterdam, 1994), Vol. XXXIII, pp. 203–260.
T. Dohnal and D. Pelinovsky, *Surface Gap Solitons at a Nonlinearity Interface*, SIAM J. Appl. Dyn. Syst. 7, 249-264 (2008).
B.J. Eggleton, R. E. Slusher, C. M. de Sterke, P.A. Krug, and J. E. Sipe, *Bragg Grating Solitons*, Phys. Rev. Lett. 76, 1627 (1996).
B. Eiermann, Th. Anker, M. Albiez, M. Taglieber, P. Treutlein, K.-P. Marzlin, and M. K. Oberthaler, *Bright Bose-Einstein Gap Solitons of Atoms with Repulsive Interaction*, Phys. Rev. Lett. 92, 230401 (2004).
R.H. Goodman, M.I. Weinstein, and P.J. Holmes, *Nonlinear propagation of light in one-dimensional periodic structures*, J. Nonlinear Science 11, 123–168 (2001).
T. Iizuka, *Envelope Soliton of the Bloch Wave in Nonlinear Periodic Systems*, J. Phys. Soc. Jpn. 63, 4343 (1994).
T. Iizuka and M. Wadati, *Grating Solitons in Optical Fiber*, J. Phys. Soc. Jpn. 66, 2308 (1997).
C. K. R. T. Jones, *Instability of standing waves for non-linear Schrödinger-type equations*, Ergodic Theory and Dynamical Systems **8\*** (1988), 119–138.
C. K. R. T. Jones, R. Marangell, and H. Susanto, *Localized standing waves in inhomogeneous Schrödinger equations*, Nonlinearity **23** (2010), 2059.
P.G. Kevrekidis, *The discrete nonlinear Schrödinger equation: mathematical analysis, numerical computations and physical perspectives*, Volume 232 (Springer, 2009).
P.G. Kevrekidis, D.J. Frantzeskakis, R. Carretero-González (Eds.), *Emergent nonlinear phenomena in Bose-Einstein condensates: theory and experiment*, Volume 45 (Springer, 2008).
Y. Kominis, *Analytical solitary wave solutions of the nonlinear Kronig–Penney model in photonic structures*, Phys. Rev. E 73, 066619 (2006).
Y. Kominis and K. Hizanidis, *Lattice solitons in self-defocusing optical media: analytical solutions of the nonlinear Kronig–Penney model*, Opt. Lett. 31, 2888-2890 (2006).
Y. Kominis, A. Papadopoulos, and K. Hizanidis, *Surface solitons in waveguide arrays: Analytical solutions*, Opt. Express 15, 10041-10051 (2007).
B.A. Malomed and R.S. Tasgal, *Vibration modes of a gap soliton in a nonlinear optical medium*, Phys. Rev. E 49, 5787 (1994).
D. Mandelik, H. S. Eisenberg, Y. Silberberg, R. Morandotti, and J. S. Aitchison, *Band-Gap Structure of Waveguide Arrays and Excitation of Floquet-Bloch Solitons*, Phys. Rev. Lett. 90, 053902 (2003).
D. Mandelik, R. Morandotti, J. S. Aitchison, and Y. Silberberg, *Gap Solitons in Waveguide Arrays*, Phys. Rev. Lett. 92, 093904 (2004).
D.E. Pelinovsky, P.G. Kevrekidis and D.J. Frantzeskakis, *Stability of discrete solitons in Nonlinear Schrodinger Lattices*, Physica D 212, 1-19 (2005).
D. Pelinovsky and G. Schneider, *Justification of the coupled-mode approximation for a nonlinear elliptic problem with a periodic potential*, Applicable Analysis 86, 1017–1036 (2007).
D. Pelinovsky and G. Schneider, *Moving gap solitons in periodic potentials*, Mathematical Methods in the Applied Sciences 31, 1739–1760 (2008).
D.E. Pelinovsky, A.A. Sukhorukov, and Yu.S. Kivshar, *Bifurcations and stability of gap solitons in periodic potentials*, Phys. Rev. E 70, 036618 (2004).
L. Perko, *Differential equations and dynamical systems*, 3rd ed., Texts in Applied Mathematics, no. 7, Springer, 2001.
J. Robbin and D. Salamon, *The Maslov index for paths*, Topology, Volume 32, Number 4, pp. 827–844 (1993).
C.R. Rosberg, D.N. Neshev, W. Krolikowski, A. Mitchell, R.A. Vicencio, M. I. Molina, and Yu. S. Kivshar, *Observation of Surface Gap Solitons in Semi-Infinite Waveguide Arrays*, Phys. Rev. Lett. 97, 083901 (2006).
E. Smirnov, M. Stepic, C. E. Ruter, D. Kip, and V. Shandarov, *Observation of staggered surface solitary waves in one-dimensional waveguide arrays*, Opt. Lett. 31, 2338-2340 (2006).
Yu. V. Volovshchenko, Yu. N. Ryzhov, and V. E. Sotin, Zh. Tekh. Fiz. 51, 902 (1981) (in Russian) \[Sov. Tech. Phys. Lett. 26, 541 (1981)\].
J. Yang, *Nonlinear Waves in Integrable and Nonintegrable Systems* (SIAM, 2010).
---
bibliography:
- 'tree\_bayes\_est.bib'
---
[**Bayes estimators for phylogenetic reconstruction**]{}
[P.M. Huggins$^1$, W. Li$^{2}$, D. Haws$^{3}$, T. Friedrich$^{3}$, J. Liu$^{2}$, and R. Yoshida$^{3}$\
$^1$[*Lane Center for Computational Biology (Carnegie Mellon University)*]{}\
[*Mellon Institute Building 4400 Fifth Avenue Pittsburgh, PA 15213*]{}\
$^2$[*Department of Computer Science, The University of Kentucky, Lexington, KY, 40506-0046237*]{}\
$^3$[*Department of Statistics, University of Kentucky, Lexington, KY 40526-0027*]{}\
PMH, WL, and RY contributed equally to this work]{}
Corresponding author: Ruriko Yoshida,\
Department of Statistics, University of Kentucky, Lexington, KY 40526-0027\
phone:(859) 257-5698, Fax:(859) 323-1973\
email:[ruriko.yoshida@uky.edu](ruriko.yoshida@uky.edu),\
[*Abstract.–*]{} Tree reconstruction methods are often judged by their accuracy, measured by how close they get to the true tree. Yet most reconstruction methods, such as maximum likelihood (ML), do not explicitly maximize this accuracy. To address this problem, we propose a Bayesian solution. Given tree samples, we propose finding the tree estimate which is closest on average to the samples. This “median” tree is known as the Bayes estimator (BE). The BE literally maximizes posterior expected accuracy, measured in terms of closeness (distance) to the true tree. We discuss a unified framework of BE trees, focusing especially on tree distances which are expressible as squared Euclidean distances. Notable examples include the Robinson–Foulds distance, the quartet distance, and the squared path difference. Using simulated data, we show that Bayes estimators can be efficiently computed in practice by hill climbing. We also show that Bayes estimators achieve higher accuracy compared to maximum likelihood and neighbor joining.
[*key words*]{}: Bayes estimator, consensus tree, path difference metric, phylogenetic inference.
[<span style="font-variant:small-caps;">Introduction</span>]{}
When a large phylogeny is reconstructed from sequence data, it is typically expected that the reconstructed tree is at least slightly wrong, i.e. slightly different from the true tree. We refer to the difficulty in accurately reconstructing phylogenies as [*tree uncertainty*]{}.
Tree uncertainty is a pervasive issue in phylogenetics. To help cope with tree uncertainty, bootstrapping and Bayesian sampling methods provide a collection of possible trees instead of a single tree estimate. Using bootstrapping or Bayesian sampling, one common practice is to identify highly supported tree features (e.g. splits) which occur in almost all the tree samples. Highly supported features are regarded as likely features of the true tree.
0.5in Similarly, in simulation studies it is common to judge reconstruction methods based on how [*close*]{} they get to the true tree ([@Desper2003]). Closeness to the true tree can be measured in many different ways. One popular measure of closeness is the Robinson–Foulds (RF) distance (also known as symmetric difference).
0.5in These customary practices reflect a common view that when tree uncertainty is likely, a good reconstruction method ought to at least find a tree which is close to the true tree. For example, if multiple trees have high likelihood, then a good tree estimate should be an “accurate representative” of the high likelihood trees. Yet, reconstruction methods like maximum likelihood (ML) are not directly designed to achieve this goal. This leads us to ask whether reconstruction accuracy (i.e. closeness to the true tree) can be improved, by attempting to directly optimize accuracy instead of likelihood.
0.5in Even though the true tree is unknown, we can still optimize reconstruction accuracy using a Bayesian approach. In the Bayesian view, the true tree is a random variable $T$ distributed according to the posterior distribution $P(T \, | \, D)$, where $D$ is input data such as sequence data. If $d()$ measures distance between trees, and $T'$ is a tree estimate, then the expected distance between $T'$ and the true tree is ${\mathbb E}_{T \sim P(T \, | \, D)} d(T,T')$. Thus, to maximize reconstruction accuracy, we should choose our tree estimate to be $T^* = \hbox{argmin}_{T'} {\mathbb E}_{T \sim P(T \, | \, D)} d(T,T')$ where $T^*$ is known as a [*Bayes estimator*]{}.
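0.5in To make the definition concrete, the empirical version of this minimization can be sketched in a few lines of Python. The candidate set, sample vectors, and names below are purely illustrative; in practice the candidates would be tree topologies and $d()$ a tree distance.

```python
# Illustrative sketch: pick the empirical Bayes estimator from a candidate
# set by minimizing the average squared euclidean distance to the samples.
# All "embeddings" here are plain tuples standing in for v(T).

def sq_dist(a, b):
    # squared euclidean distance between two embedding vectors
    return sum((x - y) ** 2 for x, y in zip(a, b))

def bayes_estimator(candidates, samples):
    # candidates, samples: lists of embedding vectors
    return min(candidates,
               key=lambda c: sum(sq_dist(c, s) for s in samples) / len(samples))

# toy example: three samples clustered near (0, 1)
samples = [(0, 1), (0, 1), (1, 1)]
candidates = [(0, 1), (1, 0)]
best = bayes_estimator(candidates, samples)
```

Here the minimizer is found by exhaustive search over the candidates; the hill climbing discussed later replaces this exhaustive scan.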
0.5in Many popular distances between trees can be easily expressed as a squared euclidean distance, after embedding trees in an appropriately chosen vector space. Important examples include Robinson–Foulds distance (symmetric difference), quartet distance, and the squared path difference. In this paper, we focus on squared euclidean distances.
0.5in In statistical decision theory, Bayes estimators under squared euclidean distance are well understood and have nice properties. For example, under a squared euclidean distance, the Bayes estimator minimizes the distance to the mean of the posterior. This gives the result in [@Holder2008]: The majority-rule consensus tree is the Bayes estimator, if closeness between trees is defined by Robinson-Foulds distance. We also derive a closely related result for quartet distance: Under quartet distance, the Bayes estimator tree is equivalent to a weighted quartet puzzling problem.
0.5in In general, computing Bayes estimators is at least as hard as computing ML trees. Hill climbing techniques are popular and effective heuristics for hard tree optimization problems such as ML. Thus we propose hill climbing to compute Bayes estimator trees as well. For squared euclidean distances, each hill climbing step is quite fast, comparable to a traditional ML hill climbing step.
0.5in We provide a simulation study of Bayes estimators using the path difference metric. We use hill climbing with nearest neighbor interchange (NNI) moves to find Bayes estimators. We observe that hill climbing is fast in practice, after the preprocessing step of sampling the posterior on trees. More importantly, we observe that Bayes estimator trees are more accurate on average, compared to ML and neighbor joining (NJ). These results comprise an encouraging pilot study of Bayes estimators. We conclude by discussing improvements and directions for future work developing Bayes estimators for phylogeny.
[<span style="font-variant:small-caps;">Bayes estimators and squared euclidean distance</span>]{}
0.5in Let $D$ denote a collection of homologous sequences from $n$ species. Many evolutionary models exist which express $P(D \, | \,T, \theta)$ in terms of an underlying phylogenetic tree $T$ on the $n$ species and evolutionary rate parameters $\theta$. Given such a model, and observed sequence data $D$, there are two main methods for sampling trees $T$ which could have generated $D$:
- The Bayesian method, which declares a prior $P(T)$ on tree topologies, and uses sampling techniques such as Markov chain Monte Carlo (MCMC) to approximately sample from $P(T \, | \, D) \varpropto P(T) P(D \, | \, T)$,
- The bootstrap method, which creates hypothetical datasets $D_i$ by bootstrapping columns from an alignment of $D$, and then computes a tree $T_i = T(D_i)$ for each $D_i$ by applying a tree reconstruction method such as ML or NJ.
0.5in The notation $P(T | D)$ is not entirely appropriate for the distribution on trees obtained by the bootstrap method. Nevertheless, for convenience we will use the notation $P(T | D)$ for the obtained distribution, regardless of whether the Bayesian or bootstrap method is used.
0.5in Given a measure of dissimilarity (or [*distance*]{}) $d(T, T')$ between phylogenetic trees on $n$ taxa, the [*(posterior) expected loss*]{} associated with a tree $T'$ is ${\mathbb E} d(T, T')$, where the expectation is taken over $T$, distributed as $P(T | D)$. We write $\rho(T')$ for the expected loss. The [*Bayes estimator*]{} $T^*$ minimizes the expected loss: $$T^* = \hbox{argmin}_{T'} \, \rho(T')$$ In other words, regarding the true tree $T$ as a random variable distributed as $P(T | D)$, the Bayes estimator is the tree $T^*$ which is closest to $T$ on average.
0.5in Bayes estimators are a common tool in statistical optimization and decision theory [@Berger1985]. Given a finite sample $T_1, \ldots, T_N$ from $P(T | D)$, the [*empirical expected loss*]{} is $\hat{\rho}(T') = \frac{1}{N} \sum_{i=1}^N d(T',T_i)$, and the empirical Bayes estimator is the tree that minimizes the empirical expected loss. In this paper we will focus on empirical Bayes estimators for a given sample, and so we will simply say “Bayes estimator” when we mean the empirical Bayes estimator.
0.8cm
[[*Squared euclidean distances between trees*]{}]{}
0.5in Let $\mathcal{T}_n$ be the space of trees on $n$ taxa. We call $d(\cdot,\cdot)$ a [*squared euclidean distance*]{} if there is a function $v:\mathcal{T}_n \to {\mathbb R}^m$ for some $m$, such that $$d(T,T') = || v(T) - v(T') ||^2.$$ We call $v()$ a (vector space) embedding. Recall that for two vectors $a = (a_1, \ldots, a_m)$, $b = (b_1, \ldots, b_m)$ in ${\mathbb R}^m$, we have $||a||^2 = \sum_{i=1}^m a_i^2$, and $|| a - b ||^2 = ||a||^2 + ||b||^2 - 2(a \cdot b)$ where $a \cdot b$ denotes the dot product $\sum_{i=1}^m a_i b_i$.
0.5in Many popular distances between trees are squared euclidean distances. Below we list several such distances, all of which were studied in [@Steel1993]. For each distance, we illustrate the vector space embedding and the distance using the two trees $T_1$ and $T_2$ shown in Figure \[fig3\] (no branch lengths) and Figure \[fig4\] (branch lengths).
Let $S(T)$ denote the set of splits induced by a tree $T$. The (normalized) Robinson-Foulds distance [@Robinson1981] $d_{RF}(T,T')$ is half the size of the symmetric difference $(S(T) - S(T')) \cup (S(T') - S(T))$. The Robinson–Foulds distance can also be realized as the squared euclidean distance $$d_{RF}(T',T) = \frac{1}{2} || v_{RF}(T) - v_{RF}(T') ||^2$$ where $v_{RF}: {\mathcal T}_n \to {\mathbb R}^{2^{n-1}-1}$ maps tree $T$ to the 0/1 vector $v_{RF}(T)$ whose nonzero entries correspond to splits in $T$. For example, for the trees $T_1$ and $T_2$ in Figure \[fig3\], we have $$v_{RF}(T_1) = (1,1,1,1,1,1,0,0,0,0,0,1,0,0,0),$$ $$v_{RF}(T_2) = (1,1,1,1,1,1,0,0,1,0,0,0,0,0,0),$$ and $$d_{RF}(T_1,T_2) = \frac{1}{2} || v_{RF}(T_1) - v_{RF}(T_2) ||^2 = 1.$$ Here the coordinates of $v_{RF}(T_1)$ and $v_{RF}(T_2)$ are given by $$\begin{gathered}
\Big(\{A\},\{B\},\{C\},\{D\},\{E\},\{A,B\},\{B,C\},\{A,C\},\{C,D\},\\
\{B,D\},\{A,D\},\{D,E\},\{C,E\},\{B,E\},\{A,E\}\Big)\end{gathered}$$ where for example $\{B,D\}$ corresponds to the partition $\{\,\{B,D\},\{A,C,E\}\,\}$.
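0.5in As a sanity check, the worked Robinson–Foulds example above can be verified directly from the two split vectors (copied verbatim from the text):

```python
# Check of the worked Robinson-Foulds example: the split vectors v_RF(T1)
# and v_RF(T2) below are copied from the text; the distance should be 1.
v1 = (1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0)
v2 = (1, 1, 1, 1, 1, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0)

d_rf = 0.5 * sum((a - b) ** 2 for a, b in zip(v1, v2))
```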
Let $Q(T)$ denote the set of quartets induced by a tree $T$. The quartet distance [@Estabrook1985] $d_Q(T,T')$ is half the size of the symmetric difference $(Q(T) - Q(T')) \cup (Q(T') - Q(T))$. Analogous to the Robinson–Foulds distance, $d_Q$ can be realized as a squared euclidean distance, $$d_{Q}(T',T) = \frac{1}{2} || v_{Q}(T) - v_{Q}(T') ||^2$$ where $v_{Q}: {\mathcal T}_n \to {\mathbb R}^{3 {n \choose 4}}$ maps tree $T$ to the 0/1 vector $v_{Q}(T)$ whose nonzero entries correspond to quartets in $T$. For example, for the trees $T_1$ and $T_2$ in Figure \[fig3\], we have $$v_{Q}(T_1) = (1,0,0,1,0,0,1,0,0,1,0,0,1,0,0),$$ $$v_{Q}(T_2) = (1,0,0,0,0,1,1,0,0,0,0,1,1,0,0),$$ and $$d_{Q}(T_1,T_2) = \frac{1}{2} || v_{Q}(T_1) - v_{Q}(T_2) ||^2 = 2.$$ Here the coordinates of $v_{Q}(T_1)$ and $v_{Q}(T_2)$ are given by following cherry groupings (two leaves with the same parent node) [$$\begin{gathered}
\Big(\, \{AB,CD\}, \{AC,BD\}, \{AD,BC\}, \{BC,DE\}, \{BD,CE\}, \{BE,CD\},\{AB,CE\}, \{AC,BE\}, \\
\{AE,BC\}, \{AC,DE\}, \{AD,CE\}, \{AE,CD\}, \{AB,DE\}, \{AD,BE\}, \{AE,BD\}\, \Big).\end{gathered}$$ ]{}
For $T \in {\mathcal T}_n$, let $D_T \in {\mathbb R}^{n \choose 2}$ be the vector of pairwise distances between leaves in $T$. The [*squared dissimilarity map distance*]{} is defined as $d_{D}(T',T) = ||D_T - D_{T'}||^2$. The dissimilarity map distance is perhaps one of the oldest studied, see e.g. [@Buneman1971]. For example, for the trees $T_1$ and $T_2$ in Figure \[fig4\], we have $$D_{T_1} = (5.3,9.0,15.2,12.4,6.1,12.3,9.5,10.8,8.0,8.0),$$ $$D_{T_2} = (3.5,11.3,13.2,10.9,12.0,13.9,11.6,7.1,7.0,8.9),$$ and $$d_{D}(T_1,T_2) = || D_{T_1} - D_{T_2} ||^2 = 72.06.$$ Here the coordinates of $D_{T_1}$ and $D_{T_2}$ are given by $$\Big( D_{1,2}, D_{1,3}, D_{1,4}, D_{1,5}, D_{2,3}, \ldots, D_{4,5} \Big),$$ where $D_{i,j}$ is the sum of branch lengths of the path from leaf $i$ to leaf $j$.
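0.5in The arithmetic in this example is easy to check numerically; the two dissimilarity vectors below are copied from the text:

```python
# Check of the dissimilarity map example: entries copied from the text;
# the squared distance should equal 72.06 up to floating point error.
D1 = (5.3, 9.0, 15.2, 12.4, 6.1, 12.3, 9.5, 10.8, 8.0, 8.0)
D2 = (3.5, 11.3, 13.2, 10.9, 12.0, 13.9, 11.6, 7.1, 7.0, 8.9)

d_D = sum((a - b) ** 2 for a, b in zip(D1, D2))
```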
\[ex2\] The distances $d_{RF}(T,T')$ and $d_Q(T,T')$ are [*topological*]{} distances, i.e. they only depend on the topologies of $T,T'$, and not edge lengths. The dissimilarity map distance does depend on edge lengths, but it has a natural topological analog called the [*path difference metric.*]{} The squared path difference is $$d_{p}(T',T) = || v_{p}(T) - v_{p}(T') ||^2$$ where $v_{p}(T) \in {\mathbb R}^{n \choose 2}$ is the integer vector whose $ij$th entry counts the number of edges between leaves $i$ and $j$ in $T$. Path difference was studied in [@Steel1993]. Note that in our notation, we have squared the norm, whereas [@Steel1993] defined $d_{p}(T',T) = || v_{p}(T) - v_{p}(T') ||$.
For example, for the trees $T_1$ and $T_2$ in Figure \[fig3\], we have $$v_{p}(T_1) = (2,3,4,4,3,4,4,3,3,2),$$ $$v_{p}(T_2) = (2,4,4,3,4,4,3,2,3,3),$$ and $$d_{p}(T_1,T_2) = || v_{p}(T_1) - v_{p}(T_2) ||^2 = 6.$$ Here the coordinates of $v_{p}(T_1)$ and $v_{p}(T_2)$ are given by $$\Big( v_{1,2}, v_{1,3}, v_{1,4}, v_{1,5}, v_{2,3}, \ldots, v_{4,5} \Big),$$ where $v_{i,j}$ is the number of edges between leaves $i$ and $j$.
0.5in The above examples highlight the fact that many combinatorial distances can be interpreted as squared euclidean distances. Under a squared euclidean distance, the Bayes estimator is the tree whose embedding lies closest to the posterior mean. More specifically, if $d(T,T') = || v(T) - v(T') ||^2$ is a squared euclidean distance, then $${\rho}(T') = ||v(T') - {\mu}||^2 + Var,$$ where ${\mu} = {\mathbb E} [v(T)]$, ${\mu}_2 = {\mathbb E}[ \, ||v(T)||^2 \, ]$, and $Var = {\mu}_2 - ||{\mu}||^2$ does not depend on $T'$.
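0.5in This decomposition can be checked numerically on toy data; the random integer "embeddings" below merely stand in for tree embedding vectors:

```python
import random

# Numerical check (toy data) of the decomposition
#   rho(T') = ||v(T') - mu||^2 + Var,  with  Var = mu2 - ||mu||^2,
# where mu is the mean embedding and mu2 the mean squared norm.
random.seed(0)
samples = [tuple(random.randint(0, 3) for _ in range(4)) for _ in range(50)]
t_prime = (1, 2, 0, 3)

def sq(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

# direct average of squared distances to the samples
rho_direct = sum(sq(t_prime, s) for s in samples) / len(samples)

# decomposition via the mean vector and the variance term
m = len(t_prime)
mu = tuple(sum(s[i] for s in samples) / len(samples) for i in range(m))
mu2 = sum(sum(x * x for x in s) for s in samples) / len(samples)
var = mu2 - sum(x * x for x in mu)
rho_decomposed = sq(t_prime, mu) + var
```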
0.5in For example, under the Robinson–Foulds distance, the Bayes estimator is obtained by projecting the vector of split frequencies $\mu_{RF} = {\mathbb E} v_{RF}(T)$ onto the nearest 0/1 vector $v_{RF}(T^*) \in \{ v_{RF}(T')\}_{T'} \subset \{0,1\}^{2^{n-1}-1}$. If we relax this problem, and simply project $\mu_{RF}$ onto the nearest 0/1 vector $v^* \in \{0,1\}^{2^{n-1}-1}$, then we see $v^*$ is obtained by rounding all entries in $\mu_{RF}$ to the nearest integer 0 or 1. In other words $v^* = v_{RF}(T^*)$ where $T^*$ is the consensus tree. Thus we have the result in [@Holder2008]: the consensus tree is the Bayes estimator for Robinson-Foulds distance.
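0.5in A minimal sketch of this relaxed projection, on illustrative 0/1 split vectors: rounding each split frequency to the nearest 0/1 value keeps exactly the splits appearing in more than half of the samples, i.e. the majority-rule splits.

```python
# Sketch: round each split frequency to 0 or 1, keeping exactly the
# majority splits. The 0/1 split vectors below are illustrative only.
samples = [(1, 1, 0, 1),
           (1, 0, 0, 1),
           (1, 1, 0, 0)]

n = len(samples)
freqs = [sum(s[i] for s in samples) / n for i in range(len(samples[0]))]
v_star = tuple(1 if f > 0.5 else 0 for f in freqs)
```

(With an even number of samples, a frequency of exactly 1/2 is a tie and the rounding is arbitrary; the odd sample size above avoids this.)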
0.5in In our view, projecting a point (e.g. input dissimilarity map) to a nearby tree is a geometric analog of a Bayes estimator. Indeed, distance-based tree reconstruction methods can be loosely regarded as “projections” of an input dissimilarity map $D \in {\mathbb R}^{n \choose 2}$ onto a tree metric $D_T = D - \epsilon$, where $\epsilon$ is “small” according to some norm. The geometry of distance-based tree reconstruction methods has been studied before, see [@kord2009; @yoshida2008; @Mihaescu2007].
0.8cm
[<span style="font-variant:small-caps;">Relation between Bayes estimators and existing reconstruction methods</span>]{}
[[*Quartet puzzling*]{}]{}
0.5in Under the quartet distance $d_Q(T,T') = ||v_Q(T) - v_Q(T')||^2$, the Bayes estimator is the tree $T^*$ which minimizes $|| v_Q(T) - \mu_Q ||^2$, where $\mu_Q = {\mathbb E} v_Q(T)$ is the vector of posterior quartet frequencies. Since $||v_Q(T)||^2 = {n \choose 4}$ for all trees on $n$ taxa, we have $$|| v_Q(T) - \mu_Q ||^2 = {n \choose 4} + ||\mu_Q||^2 - 2 v_Q(T) \cdot \mu_Q = (constant) - 2 v_Q(T) \cdot \mu_Q$$ and so the Bayes estimator $T^*$ can be equivalently defined as $T^* = \hbox{argmax}_T \, \mu_Q \cdot v_Q(T)$. Maximizing $\mu_Q \cdot v_Q(T)$ is a [*weighted quartet puzzling*]{} problem: Given a set of weights $\mu_Q$ on quartets, find a compatible set of quartets of maximal weight. If all quartet weights are 0/1, then we obtain the traditional quartet puzzling problem [@Strimmer1996].
0.5in Analogous to split frequencies and the consensus tree, we can use a sample of trees to estimate quartet frequencies, and then apply weighted quartet puzzling to find the Bayes estimator tree. In general though, quartet puzzling (and hence weighted quartet puzzling) is NP-hard [@Steel1992]. However, there has been considerable progress toward solving large instances: see [@warnow; @Snir2009] for example. In our case, the weights $\mu_Q$ have special structure since they are realizable by a collection of trees; this might make the weighted quartet puzzling we are considering here easier.
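0.5in The equivalence between minimizing the expected quartet distance and maximizing $\mu_Q \cdot v_Q(T)$ rests only on the candidate vectors having constant norm; a toy numerical check:

```python
# Toy check that, when all candidate vectors share the same norm,
# minimizing ||v - mu||^2 over candidates equals maximizing v . mu.
# The candidate vectors and weights are illustrative, not real quartets.
candidates = [(1, 0, 0, 1), (0, 1, 1, 0), (1, 0, 1, 0)]  # all have norm^2 == 2
mu = (0.9, 0.1, 0.4, 0.6)

def sq(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

by_min = min(candidates, key=lambda v: sq(v, mu))
by_max = max(candidates, key=lambda v: dot(v, mu))
```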
[[*Ordinary Least Squares (OLS) minimum evolution (ME)*]{}]{}
0.5in For the squared dissimilarity map distance, there is a striking similarity between Bayes estimators and the minimum evolution (ME) approach to phylogenetic reconstruction. ME methods are distance-based methods that have been extensively studied [@Holder2003; @Rzhetsky1993]. One of the earliest examples is Ordinary Least Squares (OLS) ME [@EDWARDS1963; @Desper02fastand]. OLS ME first estimates the branch lengths for each tree topology $T$ by minimizing $||D_T - D||^2$, where $D$ is the input dissimilarity map. Then the output tree topology $T^*$ is the topology whose sum of estimated branch lengths is minimal. If $D = D_T + \epsilon,$ where $D_T$ is a tree metric and $\epsilon$ comprises i.i.d. errors with mean $0$, then OLS ME is statistically consistent as a method to recover $D_T$.
0.5in There is however a key difference between OLS ME and minimizing the expected squared dissimilarity map distance. The input to OLS ME is a dissimilarity map presumed to be of the form $D = D_{T} + \epsilon$. In sharp contrast, the mean $\mu$ summarizes the posterior distribution on $D_T$, given input such as sequence data. Although $\mu$ could be viewed as a random variable whose distribution is governed by the true underlying tree $T$, the form of this distribution $P(\mu \, | T)$ is opaque and depends on the model of sequence evolution being used. Thus, while directly minimizing $||D_T - \mu ||^2$ produces the Bayes estimator $T^*$, it is not clear whether the minimum evolution approach (treating $\mu$ as a “perturbed tree metric”) is a sensible alternative.
0.8cm
[<span style="font-variant:small-caps;">Hill climbing optimization</span>]{}
0.5in Since the number of tree topologies on $n$ taxa grows exponentially in $n$, computing the Bayes estimator $T^*$ under a general distance function can be computationally hard. However, hill climbing techniques such as those used in ML methods [@phyml] often work quite well in practice for tree reconstruction. Hill climbing techniques can similarly be used to find local minima of the empirical expected loss.
0.5in Hill climbing requires a way to move from one tree topology to another. Three types of combinatorial tree moves are often used for this purpose: [*Nearest Neighbor Interchange (NNI)*]{}, [*Subtree-Prune-and-Regraft (SPR)*]{}, and [*Tree-Bisection-Reconnect (TBR)*]{} [@Semple2003]. SPR and TBR moves are more general than NNI: every NNI move is an SPR move, and every TBR move is a composition of at most two SPR moves. SPR and TBR moves endow each tree with $O(n^2)$ neighbors, whereas NNI moves produce a smaller set of $O(n)$ neighbors. See [@Allen2001] for details. [PHYML]{} uses NNI moves when hill climbing to quickly search for an ML tree [@phyml]. We follow their example and use NNI moves in our hill climbing.
0.5in For each proposed move $T^{current} \to T^{new}$ during hill climbing, $\hat{\rho}(T^{new})$ must be computed. A straightforward evaluation using the definition $\hat{\rho}(T^{new}) = \frac{1}{N} \sum_{i=1}^N d(T^{new}, T_i)$ requires $N$ evaluations of $d()$, where $N$ is the sample size. For squared euclidean distances the situation is often much better, since $\hat{\rho}(T^{new})$ can be re-expressed, up to an additive constant, as simply $\hat{\rho}(T^{new}) = || v(T^{new}) - \hat{\mu} ||^2$, where $\hat{\mu} = \frac{1}{N} \sum_{i=1}^N v(T_i)$ is the sample mean. Note $\hat{\mu}$ does not depend on the tree $T^{new}$, so it can be computed once at the beginning of hill climbing. Consequently, at each step we need only one distance evaluation. The computational expense of this evaluation depends on the choice of vector space embedding.
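0.5in The saving can be illustrated on toy vectors: the naive $N$-term score and the single-distance score differ only by a constant, so score differences (and hence the moves accepted during hill climbing) coincide.

```python
# Sketch of the speed-up: score a proposal by one distance to the
# precomputed sample mean instead of N distances to the samples.
# The two scores differ by a constant, so they rank proposals identically.
# The sample vectors below are illustrative toy embeddings.
samples = [(2, 3, 4), (2, 4, 4), (3, 3, 4)]
N = len(samples)
mu = tuple(sum(s[i] for s in samples) / N for i in range(3))

def sq(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def naive_score(v):                 # N distance evaluations
    return sum(sq(v, s) for s in samples) / N

def fast_score(v):                  # one distance evaluation
    return sq(v, mu)

t_a, t_b = (2, 3, 4), (3, 4, 4)
gap_naive = naive_score(t_a) - naive_score(t_b)
gap_fast = fast_score(t_a) - fast_score(t_b)
```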
[<span style="font-variant:small-caps;">Simulation study: Methods</span>]{}
0.5in For studying Bayes estimators, a natural first choice for distance between trees is Robinson–Foulds distance. But then the Bayes estimator is the consensus tree, which has been extensively studied, and is easy to compute from samples. We thus sought out other important distances besides Robinson–Foulds.
0.5in The dissimilarity map distance is one of the oldest distances for the comparison of trees, and lies at the foundation of distance-based reconstruction methods. Thus, the dissimilarity map and related distances are a natural choice for a case study of Bayes estimators. We specifically chose the (squared) path difference metric. The path difference metric $||v_{p}(T) - v_{p}(T')||$ is precisely the dissimilarity map distance $||D_T - D_{T'}||$ when all edge lengths in $T,T'$ are redefined to be 1. Setting all edge lengths to 1 prevents deemphasis of the shorter (presumably uncertain) edges. Intuitively, this emphasizes topological accuracy in the Bayes estimator. We believe this is a desirable property, and we are not the first to suggest its importance. The conclusion of [@Steel1993] states
> “The path difference metric, $d_p$, has several interesting features that suggest that it merits more study and consideration for use when studying evolutionary trees. These features will make it particularly attractive when studying large trees. $\cdots$ The $d_p$ metric may be the method of choice when trees are more dissimilar than expected by chance.”
0.5in Thus we chose the squared path difference as a case study. We think quartet distance would also be interesting, but believe that a study of Bayes estimators under quartet distance should include quartet puzzling methods, given the close connections outlined in the Quartet Puzzling section. We have therefore deferred study of quartet distance.
0.5in Under the path difference metric, trees are embedded in ${\mathbb R}^{n \choose 2}$. Using depth-first search on a tree $T$, the embedding vector $v(T)$ can be computed in $O(n^2)$ time. Euclidean distance in ${\mathbb R}^{n \choose 2}$ can also be computed in $O(n^2)$ time.
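0.5in A sketch of this computation in Python, using breadth-first search from each leaf. The adjacency list below encodes a hypothetical 5-taxon topology chosen to be consistent with the vector $v_p(T_1) = (2,3,4,4,3,4,4,3,3,2)$ given earlier; it is an assumption for illustration, not taken from the figure itself.

```python
from collections import deque

# Sketch: compute the path-difference embedding v_p(T) by BFS from each
# leaf, collecting pairwise edge counts in lexicographic leaf order.
tree = {
    'A': ['u'], 'B': ['u'], 'C': ['v'], 'D': ['w'], 'E': ['w'],
    'u': ['A', 'B', 'v'], 'v': ['u', 'C', 'w'], 'w': ['v', 'D', 'E'],
}
leaves = ['A', 'B', 'C', 'D', 'E']

def bfs_dists(graph, src):
    # edge-count distances from src to every node
    dist = {src: 0}
    q = deque([src])
    while q:
        x = q.popleft()
        for y in graph[x]:
            if y not in dist:
                dist[y] = dist[x] + 1
                q.append(y)
    return dist

def v_p(graph, leaves):
    # entries ordered (1,2), (1,3), ..., (n-1,n)
    vec = []
    for i, a in enumerate(leaves):
        d = bfs_dists(graph, a)
        vec.extend(d[b] for b in leaves[i + 1:])
    return tuple(vec)
```

One BFS per leaf costs $O(n)$ on a tree, giving the stated $O(n^2)$ total.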
[[*Simulated data*]{}]{}
0.5in For simulated data, we used the first $1000$ examples from the data set presented in [@phyml]. We briefly review the details of the data set. Trees on $40$ taxa were generated according to a Markov process. For each generated tree, 40 homologous sequences (no indels) of length $500$ were generated, under the Kimura two-parameter (K2P) model [@Kimura1980], with a transition/transversion ratio of $2.0$. Specifically, the Seq-Gen program [@seqgen] was used to generate the sequences. The data is available from the website <http://www.atgc-montpellier.fr/phyml/datasets.php>.
[*Reconstruction methods*]{}
0.5in For each set of homologous sequences $D$ in the simulated data, we used the software [MrBayes]{} [@Mrbayes] to obtain $15000$ samples from the posterior distribution $P(T | D)$. Specifically, we ran [MrBayes]{} under the K2P model, discarded the initial $25\%$ of samples as a burn-in, used a $50$ generation sample rate, and ran for $1,000,000$ generations in total.
0.5in We computed an ML tree estimate for each data set, using the hill climbing software [PHYML]{} [@phyml] as described in [@phyml]. We also computed an NJ tree using the software [PHYLIP]{} [@Felsenstein1989], using pairwise distances computed by [PHYLIP]{}.
0.5in We then used our in-house software to minimize the expected squared path difference distance by hill climbing. We performed hill climbing using NNI moves, along with various choices of starting trees. For starting trees we used the NJ tree, the ML tree, and five samples from $P(T | D)$ (NJ and ML trees were computed as described above). We also used the [MrBayes]{} tree sample which had the highest likelihood, which we call the “empirical MAP” tree.
0.5in We now briefly describe our hill climbing implementation. The input for the algorithm is a list of trees $T_1, \ldots, T_N$ sampled from $P(T \, | \, D)$, and an initial starting tree $T^0$. The pseudo-code is as follows:
- INPUT: Samples $T_1, \ldots, T_N$, and an initial tree $T^0$.
- OUTPUT: Local minimum $T^*$ of the empirical expected loss.
- PROCEDURE:
- BEGIN
- Compute and store $\hat{\mu}_{p} = \frac{1}{N} \sum_{i} v_{p}(T_i)$.
- Initialize $T^{*} = T^0$, and $\rho_{p}^* = || v_{p}(T^0) - \hat{\mu}_p ||^2$.
- DO:
- Pick an NNI neighbor $T^{new}$ of $T^{*}$.
- Compute $\rho_{p}^{new} = || v_{p}(T^{new}) - \hat{\mu}_p ||^2$.
- IF $\rho_{p}^{new} < \rho_{p}^*$:
- Set $T^* = T^{new}$ and $\rho_{p}^* = \rho_{p}^{new}$.
- END IF
- UNTIL $\rho_{p}^* \le \rho_{p}^{new}$ holds for all neighbors $T^{new}$ of $T^*$.
- Output $T^*$
- END
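0.5in The loop above translates directly into the following Python sketch. Real NNI enumeration on trees is omitted here; the `toy_neighbors` stand-in perturbs coordinates of an embedding vector merely to exercise the first-improvement logic.

```python
# Runnable sketch of the hill climbing loop above. `neighbors` abstracts
# the NNI move set; the toy version below perturbs an embedding vector.

def sq(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def hill_climb(t0, mu, neighbors, max_steps=1000):
    best, best_rho = t0, sq(t0, mu)
    for _ in range(max_steps):
        improved = False
        for t_new in neighbors(best):
            rho_new = sq(t_new, mu)
            if rho_new < best_rho:
                best, best_rho = t_new, rho_new
                improved = True
                break                 # first-improvement move
        if not improved:              # no neighbor improves: local minimum
            return best
    return best                       # alternative stop: step budget reached

def toy_neighbors(v):
    # change one coordinate by +/-1 (stand-in for NNI moves)
    for i in range(len(v)):
        for delta in (-1, 1):
            yield v[:i] + (v[i] + delta,) + v[i + 1:]

mu = (2.0, 3.0, 4.0)
result = hill_climb((0, 0, 0), mu, toy_neighbors)
```

With an actual NNI neighbor generator and $v_p$ embeddings in place of the toy pieces, `hill_climb` follows the pseudocode step for step.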
0.5in In practice, allowing the hill climbing algorithm to run until complete convergence might take too long. Thus, we included several alternative stopping criteria in the UNTIL statement. (For example, halt if a maximum number of loop iterations is reached.) In our simulation study, the algorithm always found a local minimum before halting. The source code, written in Java, is available at <http://cophylogeny.net/research.php>.
0.8cm
[<span style="font-variant:small-caps;">Simulation study: Results</span>]{}
[[*Comparing objective functions for tree reconstruction* ]{}]{}
0.5in In our distance-based framework, the canonical measure of reconstruction accuracy is the distance, $d_p(T^*, T^{true}) = || v_{p}(T^*) - v_{p}(T^{true})||^2$, between the true tree $T^{true}$ and the estimated tree $T^*$. When reconstructing a tree, ideally we would like to directly use distance to the true tree as the objective function. But obviously this is impossible unless $T^{true}$ is known. One obvious question is: How good are other objective functions, such as likelihood and $\hat{\rho}_{p}$, as proxies for $d_{p}(\cdot , T^{true})$? The relationships among objective functions are particularly important for nearly optimal trees.
0.5in We explored this question using the simulated data. For each of the $1,000$ data sets, we computed three scores for each of the $15,000$ [MrBayes]{} samples $T_i$, $i = 1, \ldots, 15000$. The three scores we investigated are 1) The observed frequency of the tree topology in [MrBayes]{} samples, 2) The empirical expected loss: $\hat{\rho}_{p}(T_i)$ $= ||v_{p}(T_i) - \frac{1}{15000} \sum_j v_{p}(T_j)||^2$, and 3) The actual distance to the true tree: $d_{p}(T_i, T^{true})$ $= || v_{p}(T_i) - v_{p}(T^{true})||^2$.
0.5in For each data set, we restricted our attention to the $25$ most frequent tree topologies. The number of samples $15,000$ was large enough so that the frequencies of the $25$ most probable tree topologies could be estimated fairly well in most cases. For the $25$ most probable topologies, we computed the Kendall-tau correlations between the three scores and recorded the results in Table \[table3\].
0.5in If there are no ties among the $25$ topologies under any of the scores, then the Kendall-tau has a natural interpretation: If $P(s_2(T) < s_2(T') \, | \, s_1(T) < s_1(T')) = p$ for a randomly drawn pair $T,T'$ of the $25$ topologies, then the Kendall-tau correlation is $2p - 1$ between the scores $s_1,s_2$. As Table \[table3\] shows, our proposed empirical expected loss $\hat{\rho}_{p}$ outperforms likelihood, as a proxy for the distance to the true tree.
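0.5in The stated relation between the Kendall-tau and $p$ is easy to confirm on illustrative score lists with no ties:

```python
from itertools import combinations

# Check of the stated relation: with no ties, the Kendall tau between two
# score lists equals 2p - 1, where p is the fraction of concordant pairs.
# The two score lists below are illustrative, with no tied values.
s1 = [0.9, 0.7, 0.5, 0.3, 0.1]
s2 = [0.8, 0.9, 0.4, 0.2, 0.3]

pairs = list(combinations(range(len(s1)), 2))
concordant = sum(1 for i, j in pairs
                 if (s1[i] - s1[j]) * (s2[i] - s2[j]) > 0)
p = concordant / len(pairs)
tau = (2 * concordant - len(pairs)) / len(pairs)   # (C - D) / total pairs
```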
[[*Performance of tree reconstruction methods* ]{}]{}
0.5in As described in (Simulation Study: Methods), for each simulated data set we computed NJ, ML, and empirical MAP trees. We then performed NNI-based hill climbing to optimize $\hat{\rho}_{p}$, using NJ/ML/MAP as starting trees as well as starts chosen randomly from [MrBayes]{} samples. We estimated the Bayes estimator (BE) tree by taking the best of five random starts.
0.5in Following [@phyml], we plotted the inaccuracy (path difference to true tree) of the NJ, ML, empirical MAP, and BE trees (Figure \[fig1\]). Notice we have reported the inaccuracy between trees $T,T'$ as the norm $||v_{p}(T) - v_{p}(T')||$, instead of the square $||v_{p}(T) - v_{p}(T')||^2$. We chose to do this so that the inaccuracy can be loosely interpreted as “average difference of number of edges between a typical pair of leaves.” In the plot, inaccuracy is plotted against the maximum unadjusted pairwise divergence in the sequence data. The unadjusted pairwise divergence between two sequences is the proportion of sites where both sequences differ.
0.5in We also give an analogous plot (Figure \[fig2\]), plotting the empirical expected loss $\hat{\rho}_{p}(T)$ for the various tree estimators. Note the true tree might not be the global optimum of $\hat{\rho}_{p}(T)$. Thus we included the true tree in the plot as well.
0.5in Tables \[table1\] and \[table2\] summarize the results of our NNI-based hill climbing when ML/NJ/empirical MAP trees are used as the starting tree. Note the ML tree (computed by [PHYML]{}) was obtained by NNI hill climbing optimizing the likelihood. Our hill climbing optimizes $\hat{\rho}_{p}$ instead, so it is possible an NNI move can improve the [PHYML]{} tree.
0.5in We indeed observed that NJ, ML, and empirical MAP trees can be improved by hill climbing; Tables \[table1\] and \[table2\] give the summary statistics. In particular, Table \[table1\] shows that our hill climbing algorithm improves the distance to the true tree. We find this particularly encouraging.
0.5in Using a Pentium dual core system running Red Hat Linux 4, each run of our hill climbing programs required between $1$ minute and $1.5$ minutes on average per example, depending on the starting tree. Using the NJ tree as the initial tree took longer on average, because more hill climbing steps were required to find a local optimum.
[<span style="font-variant:small-caps;">Discussion</span>]{}
0.5in For phylogenetic reconstruction, the Bayes estimator is a natural choice when recovering the true tree is unlikely, and one is content to find a tree which is “close” to the true tree. Here “close” is defined by a choice of distance between trees, e.g. Robinson–Foulds distance. The Bayes estimator directly maximizes its expected accuracy, measured in terms of closeness to the true tree. In contrast, ML optimizes likelihood instead of accuracy.
0.5in As observed in [@Holder2008], the popular consensus tree has a natural interpretation as the Bayes estimator which minimizes the expected Robinson–Foulds distance to the true tree. Thus, for the special case of Robinson–Foulds distance, Bayes estimators have actually been studied for quite some time.
0.5in As part of an exploratory simulation study, we showed that hill climbing can be used to find an empirical Bayes estimator in practice, given a sample of trees from the posterior distribution. In particular we used the squared [*path difference metric*]{} described in [@Steel1993]. Hill climbing optimization produced tree estimates which were closer to the true tree, outperforming NJ and ML. And in the majority of cases, hill climbing improved distance to the true tree, even when the initial tree was obtained by hill climbing optimization of the likelihood. We consider this very encouraging for future work on hill climbing approaches for Bayes estimators.
0.5in Systematists are best qualified to help choose which types of distances should be used to compare trees. On the theoretical front, some interesting new distances are being studied such as the geodesic distance [@citeulike:3063901; @Owen2008; @Owen2009; @Owen2009b]. We believe Bayes estimators (or “median trees”) under novel distances comprise an interesting direction for future mathematical research. We also think Bayes estimators under the classical quartet distance might be interesting, in light of the close connection to quartet puzzling.
0.5in In this paper we used NNI moves to apply the hill climbing algorithm. One could also try more general tree moves such as SPR or TBR, analogous to [@Gascuel2005a]. It would be interesting to study which tree moves give faster hill climbing convergence for Bayes estimators in practice. Similarly, exploration strategies such as Tabu search [@Glover1986Future-Paths-fo] or simulated annealing may give better performance.
0.5in For some vector space embeddings (e.g. the quartet embedding $v_Q()$), the embedding vectors for trees may be rather high-dimensional and non-sparse. Then it may be faster to use the naive definition $\hat{\rho}(T) = \frac{1}{N} \sum_{i=1}^N d(T,T_i)$ directly. Indeed, quartet distance $d_Q(T,T')$ can be computed in $O(n \log n)$ time for two trees on $n$ taxa [@Pedersen2001], which is much faster than operations on the vectors $v_Q(T), v_Q(T')$ which have dimension $O(n^4)$.
0.5in In this paper, we have focused on different types of tree [*features*]{} that can be used to define distance, e.g. splits or quartets. Systematists are particularly interested in splits. Thus, one could also study different ways to define a distance based on splits. For example, [@Holder2008] considered a generalized Robinson–Foulds distance that allows a specificity/sensitivity trade-off. We think another interesting way to modify Robinson–Foulds distance would be to make the distance more “local”. For example, one could define a transformed distance $d(T,T') = \min (d_{RF}(T,T'), K)$ for a given “ceiling” constant $K > 0$. Then the Bayes estimator could be interpreted as a “smoothed” ML tree, i.e. the ML tree after the likelihood has been smoothed by a local convolution. This smoothed ML tree could provide a nice compromise between ML trees and consensus trees.
0.5in Finally, we note that in our simulation study, ML trees were quite accurate. In fact, the ML tree was typically quite close to the Bayes estimator, in terms of NNI moves. Thus an ML (or approximate ML) tree might be quite useful as an initial guess for a Bayes estimator tree. Then, one could “polish” the MLE by using hill-climbing optimization of the expected loss.
[<span style="font-variant:small-caps;">Acknowledgments</span>]{}
0.5in The authors would like to thank D. Weisrock for the many useful comments which improved this paper. The second, the third, the fourth, and last authors are supported by NIH Research Project Grant Program (R01) from the Joint DMS/BIO/NIGMS Math/Bio Program (1R01GM086888-01 and 5R01GM086888-02).
Initial tree             Hill climbing improves        Hill climbing worsens        Avg drop in
                         distance to $T^{true}$?       distance to $T^{true}$?      distance to $T^{true}$
------------------------ ----------------------------- ---------------------------- ------------------------
ML tree                  380                           253                          $5.9$%
Empirical MAP tree       508                           185                          $17.9$%
NJ tree                  693                           229                          $39.6$%

Initial tree             Hill climbing improves        Hill climbing worsens        Avg drop in
                         $\hat{\rho}_{p}$?             $\hat{\rho}_{p}$?            $\hat{\rho}_{p}$
------------------------ ----------------------------- ---------------------------- ------------------------
ML tree                  690                           0                            $5.9$%
Empirical MAP tree       870                           0                            $8.6$%
NJ tree                  961                           0                            $20.3$%
$P(T_i)$ $\hat{\rho}_{p}(T_i)$ $d_{p}(T_i, T^{true})$
------------------------ ------------- ----------------------- ------------------------
$P(T_i)$ $\cdot \, $ 0.352 0.148
$\hat{\rho}_{p}(T_i)$ $\cdot$ 0.270
$d_{p}(T_i, T^{true})$ $\cdot$
: For each of the $1,000$ data sets, we computed three scores for each of the $15,000$ [MrBayes]{} samples $T_i$, $i = 1, \ldots, 15000$. The three scores we investigated are 1) The observed frequency of the tree topology in [MrBayes]{} samples, 2) The empirical expected distance $\hat{\rho}_{p}(T_i) = ||v_{p}(T_i) - \frac{1}{15000} \sum_j v_{p}(T_j)||^2$, and 3) The actual distance $d_{p}(T_i, T^{true}) = || v_{p}(T_i) - v_{p}(T^{true})||^2$.[]{data-label="table3"}
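The three scores in the table above can be computed directly from the matrix of sampled embedding vectors. A minimal sketch, where rows of `V` stand in (as hypothetical random data) for the $15{,}000$ [MrBayes]{} sample embeddings $v_p(T_i)$ and `v_true` for $v_p(T^{true})$:

```python
from collections import Counter
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical stand-ins for the real MrBayes sample embeddings.
V = rng.integers(0, 2, size=(15000, 30)).astype(float)
v_true = rng.integers(0, 2, size=30).astype(float)

# 1) Observed topology frequency P(T_i): count identical embedding vectors.
counts = Counter(map(tuple, V))
P = np.array([counts[tuple(v)] / len(V) for v in V])

# 2) Empirical expected distance rho_p(T_i) = ||v_p(T_i) - mean_j v_p(T_j)||^2.
center = V.mean(axis=0)
rho = np.sum((V - center) ** 2, axis=1)

# 3) Actual distance d_p(T_i, T^true) = ||v_p(T_i) - v_p(T^true)||^2.
d_true = np.sum((V - v_true) ** 2, axis=1)
```

With the real samples, correlating these three per-tree score vectors reproduces the entries of Table \[table3\].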
---
abstract: 'Self-similar properties of the ribosome in terms of the mass fractal dimension are investigated. We find that the 16S rRNA and the 30S subunit have fractal dimensions of 2.58 and 2.82, respectively, while the 50S subunit as well as the 23S rRNA has a mass fractal dimension close to 3, implying a compact three-dimensional macromolecule. This finding supports the dynamic and active role of the 30S subunit in protein synthesis, in contrast to the passive role of the 50S subunit.'
author:
- 'Chang-Yong Lee'
title: Mass Fractal Dimension of the Ribosome and Implication of its Dynamic Characteristics
---
The structure of biomolecules is important not only because structure dictates biological function, but also because it is the target of antibiotics. In this sense, finding characteristics of the three-dimensional shape of biomolecules is important for a better understanding of their biological functions and for associated applications in medicine. In the case of the ribosome [@ribosome], a large protein-RNA complex, it has been known that, in contrast to most cellular machines, ribosomal function relies heavily on the ribosomal RNA (rRNA), acting as a ribozyme, rather than on the protein components [@function]. In particular, protein synthesis is closely related to the dynamic structure of the ribosome, which is too complicated for direct study by physical methods. However, a careful study of the static structure from a quantitative perspective may reveal important aspects of its dynamic properties.
The structure of the ribosome has been investigated quantitatively much less than that of proteins, mainly due to the difficulty of obtaining a highly resolved structural conformation. In contrast to the ribosome, the geometric and self-similar properties of proteins have been studied extensively. It was reported that the relation between the average radius and the mass of protein chains can be described by a fractal dimension [@moret]. The mass fractal dimension of proteins was shown to lie near 3, suggesting a compact three-dimensional object [@elber]. However, more recent and extensive studies, including a statistical analysis in estimating the dimension, argue for smaller values that are consistent with the results of vibrational analyses of proteins [@enright]. These results suggest that a fractal dimension of less than 3 may be an intrinsic and universal characteristic of the protein chain.
Since the ribosome, a tightly packed macromolecule, was long considered too large for high-resolution structural analysis, quantitative studies of its structure proved difficult until recent progress in high-resolution crystallography. In fact, the ribosome and its subunits are the largest asymmetric molecules that have so far been resolved at the atomic level by crystallography. The 2.4 $\mathring{A}$ high-resolution structure of the 50S subunit from [*Haloarcula marismortui*]{} [@ban] and the 3.05 $\mathring{A}$ structure of the 30S subunit from [*Thermus thermophilus*]{} [@wimberly] provided the first detailed views of the structure of both ribosomal subunits; the intact 70S ribosome from [*Escherichia coli*]{} at 3.5 $\mathring{A}$ resolution [@schuwirth] revealed the features of the inter-subunit bridges. In addition, other X-ray crystal structures are available for the ribosome and its subunits [@harms; @schluenzen; @cate; @yusupov]. With these considerable advances in ribosome structure at the atomic level, we are now able to investigate quantitative characteristics of the ribosome from a statistical physics perspective.
In bacteria, the ribosome is a particle about 250 $\mathring{A}$ in diameter and consists mainly of two subunits, the 30S and the 50S, together forming the 70S ribosome. The unit “S” stands for Svedberg, a measure of the sedimentation rate. The 30S subunit plays a crucial role in decoding mRNA by monitoring base pairing between codon and anticodon, whereas the 50S subunit catalyzes peptide bond formation between the incoming amino acid and the nascent peptide chain [@ramakrishnan]. The 30S subunit, in turn, contains the 16S rRNA molecule in addition to about 20 different proteins, and the 50S subunit consists of the 5S and 23S rRNAs besides about 30 different proteins. The 16S and 23S rRNAs are composed of approximately 1500 and 3000 nucleotides, respectively, each of which consists of one of four bases (denoted A, C, G, and U) and a sugar-phosphate backbone.
With this structural information of the ribosome at the atomic level, we investigate the self-similar properties of the ribosome structure and their biological implications. We especially focus on the scale invariance, by estimating the mass fractal dimension, for the structures of the [*Thermus thermophilus*]{} 30S subunit including the 16S rRNA, and the [*Haloarcula marismortui*]{} 50S subunit including the 23S rRNA [@data]. It has been known that protein synthesis occurs in the context of the intact ribosome and that the moving parts of the ribosome enable the dynamic process of translation. In this sense, the function of the ribosome is closely related to its spatial conformation in the physiological medium. Thus, the mass fractal dimension analysis may help to reveal characteristics of the ribosome, especially its dynamics.
The mass fractal dimension, which can be used as a measure of the compactness, is defined as the number of monomers (atoms in our case), $N$, enclosed in a sphere of radius $R$. When a molecule has a fractal structure, it is expected that $$N \propto R^{D_{M}}~,$$ where $D_{M}$ is the mass fractal dimension. It can be estimated by plotting the number of all atoms contained inside concentric spheres of varying radius $R$ on a log-log scale.
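The estimation procedure just described amounts to a log-log regression of atom counts against sphere radius. A minimal sketch with synthetic coordinates (a uniformly filled ball, for which the fitted exponent should come out close to the Euclidean value 3); reading and filtering the actual PDB atom records is omitted:

```python
import numpy as np

def mass_fractal_dimension(coords, origin, radii):
    """Fit N(R) ~ R^D_M: count atoms inside concentric spheres of
    radius R around `origin` and return the log-log slope."""
    r = np.linalg.norm(coords - origin, axis=1)
    counts = np.array([(r <= R).sum() for R in radii])
    slope, _ = np.polyfit(np.log(radii), np.log(counts), 1)
    return slope

rng = np.random.default_rng(2)
# Synthetic test case: ~50,000 points uniformly filling a ball of radius
# 100 (rejection sampling), mimicking a compact 3D molecule.
pts = rng.uniform(-100, 100, size=(400000, 3))
pts = pts[np.linalg.norm(pts, axis=1) <= 100][:50000]
radii = np.linspace(20, 80, 13)  # stay between the finite-size limits
D = mass_fractal_dimension(pts, origin=np.zeros(3), radii=radii)
# D is close to 3 for this compact object
```

In the same spirit, the sensitivity check described below amounts to repeating this fit for the 27 origins obtained by shifting each coordinate of the geometric center by $-2$, $0$, or $+2~\mathring{A}$.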
To test the sensitivity of the result to the choice of the origin, we set the origin at the geometric center of each molecule and vary it by $\pm 2~\mathring{A}$ in the X, Y, and Z directions (thus 27 different origins) with respect to the coordinate system adopted from the PDB data. We ignore hydrogen atoms, since the X-ray crystal structures do not contain their geometric information; hydrogen atoms cannot be resolved except at very high resolution. Incidentally, this is also true for proteins; thus, most descriptions of the ribosome focus on the positions of the heavy atoms, such as C, N, O, and P. Note also that the PDB entry for the 30S subunit does not contain water molecules, whereas that for the 50S does. For a fair comparison, we exclude water molecules from the mass fractal calculation. Due to finite-size effects, there are both upper and lower size limits beyond which a macromolecule is no longer fractal.
molecule number of atoms $D_{M}$
------------- ----------------- -------------
16S rRNA 32514 2.58 (0.06)
30S subunit 51742 2.82 (0.07)
23S rRNA 59017 3.11 (0.07)
50S subunit 62673 3.07 (0.08)
: The number of atoms and the average mass fractal dimension $D_{M}$ for each molecule. For the 16S rRNA and the 30S subunit, the scaling property emerges in all 27 measurements, while for the 23S rRNA and the 50S subunit, respectively 17 and 19 out of 27 measurements show scaling properties. Numbers in parentheses are the standard deviations over the corresponding estimates.
We estimate the mass fractal dimension for the 16S rRNA, the 30S subunit, the 23S rRNA, and the 50S subunit with 27 different origins. The results are summarized in Table 1, and Figs. 1 and 2 present typical log-log plots of the enclosed “mass” $N$ as a function of the radius $R$. Note that the geometric origin and the center of mass are almost identical for all molecules, and evaluating the physical mass rather than the number of atoms does not affect the result. For the 16S rRNA and the 30S subunit, we are able to estimate the fractal dimension in all 27 cases, while for the 23S rRNA and the 50S subunit, respectively 17 and 19 cases out of 27 show the scaling behavior.
The mass fractal dimension reflects the molecule’s space-filling ability: the larger $D_{M}$, the more atoms are contained in a sphere of given radius. When the fractal dimension is less than 3, the structure has “empty” or “void” space. From the above results, we see that the mass fractal dimensions of both the 23S rRNA and the 50S subunit are close to 3, implying that these are compact three-dimensional collapsed objects. On the other hand, the average $D_{M}$ for the 16S rRNA and the 30S subunit are found to be 2.58 and 2.82, respectively, smaller than that of a completely compact three-dimensional collapsed polymer. This indicates that the mass inside a radius $R$ grows not with the Euclidean dimension as the exponent but with some smaller power. Thus, we find that both the 23S rRNA and the 50S subunit are more compact than either the 16S rRNA or the 30S subunit.
The fact that the 16S rRNA and the 30S subunit have mass fractal dimensions less than 3 leaves room for the 16S rRNA to move during protein synthesis, and can be related to the rigid-body motion of domains within the subunit. Many studies have found that it is the 30S subunit that moves during translocation. It was reported that the 30S subunit makes ratchet-like rotations relative to the large 50S subunit [@schuwirth; @frank], in particular rotational rigid-body motions of the “head” domain [@domain] within the 30S subunit [@schuwirth]. This reveals a high degree of flexibility between the head and the rest of the 30S subunit. Furthermore, not only are the domains forming the entrance channel, such as the “shoulder” and head domains of the 30S subunit, dynamic during decoding, but the exit channel formed between the “platform” and head domains is also known to be variable [@schuwirth; @frank; @spirin; @serdyuk]. Thus, it is the 30S subunit that is dynamic or variable, playing an active role, which is possible due to its fractal characteristics.
The 50S subunit, on the other hand, cannot make movements, due to its highly compact structure, except in peripheral regions. As an example, the L1 stalk, located in a peripheral region of the 50S subunit, makes a bifurcation [@frank] and moves toward the inter-subunit space, playing a pivotal role in the translation process [@valle].
In this paper, we investigated a symmetry embedded in the ribosome structure under a change of length scale via the mass fractal dimension. We found that the 30S and 50S subunits (as well as the 16S and 23S rRNAs) differ in their fractal dimensions: the 30S subunit and the 16S rRNA have fractal dimensions less than 3, while the 50S subunit and the 23S rRNA can be regarded as three-dimensional compact molecules. The fractality of both the 16S rRNA and the 30S subunit supports the dynamic nature of the ribosome in protein synthesis.
Although the power of the self-similarity approach to the ribosome structure lies in its simplicity and generality, it is also true that the detailed dynamic properties and their realization are not obvious, because the detailed properties of the ribosome that determine its function are averaged out. Nevertheless, the fractal property of the 30S subunit (and the 16S rRNA) provides partial, if not complete, evidence of its movement during translation.
This work was supported by the Korea Research Foundation Grant funded by the Korean Government (MOEHRD) (KRF-2005-041-H00052).
[99]{} For a general reference of the ribosome and its function, see, for example, [*Protein Synthesis and Ribosome Structure: Translating the Genome*]{}, edited by Knud H. Nierhaus and Daniel N. Wilson (Wiley-VCH, Weinheim, 2004); [*The Ribosome: Structure, Function, Antibiotics, and Cellular Interactions*]{}, edited by R. Garrett, S. Douthwaite, A. Liljas, A. Matheson, P. Moore, and H. Noller (ASM Press, Washington, DC, 2000). H. Noller, Science [**309**]{}, 1508 (2005); P. Nissen, J. Hansen, N. Ban, P. B. Moore, T. A. Steitz, Science [ **289**]{}, 920 (2000). M. A. Moret, J. G. V. Miranda, E. Nogueira, Jr., M. C. Santana, and G. F. Zebende, Phys. Rev. E [**71**]{}, 012901 (2005). R. Elber, in [*Fractal analysis of protein in The Fractal Approach to Heterogeneous Chemistry*]{}, edited by D. Avnir (John Wiley $\&$ Sons, New York, 1989), p. 407. Matthew B. Enright and David M. Leitner, Phys. Rev. E [**71**]{}, 011912 (2005); X. Yu and D. M. Leitner, J. Chem. Phys. [**119**]{}, 12673 (2003). N. Ban, P. Nissen, J. Hansen, P. Moore, T. Steitz, Science [**289**]{}, 905 (2000). B. Wimberly, D. Brodersen, W. Claemons, R. Morgan-Warren, A. Carter, C. Vonhein, T. Hartsch, and V. Ramakrishnan, Nature [**407**]{}, 327 (2000). B. Schuwirth, M. Borovinskaya, C. Hau, W. Zhang, A. Vila-Sanjurjo, J. Holton, J. Cate, Science [**310**]{}, 827 (2005). J. Harms, F. Schluenzen, R. Zarivach, A. Bashan, S. Gat, I. Agmon, H. Bartels, F. Franceschi, and A. Yonath, Cell [**107**]{}, 679 (2001). F. Schluenzen, A. Tocilj, R. Zarivach, J. Harms, M. Gluehmann, D. Janell, A. Bashan, H. Bartels, I. Agmon, F. Franceschi, and A. Yonath, Cell [**102**]{}, 615 (2000). J. H. Cate, M. M. Yusupov, G. Z. Yusupova, T. N. Earnest, H. F. Noller, Science [**285**]{}, 2095 (1999). M. Yusupov [*et al*]{}, Science [**292**]{}, 883 (2001). V. Ramakrishnan, Cell [**108**]{}, 557 (2002). These are the highest resolution results for each subunit. 
From anything worse than about 3.5 $\mathring{A}$ resolution, it would normally not be possible to construct an accurate model of a macromolecule. We also exclude the 5S rRNA from the analysis because it is too small for any statistical analysis. The structural information for the subunits can be found in the Protein Data Bank (PDB) at <http://www.rcsb.org/pdb/>. The accession numbers are 1J5E for the [*Thermus thermophilus*]{} 30S subunit including the 16S rRNA, and 1JJ2 for the [*Haloarcula marismortui*]{} 50S subunit including the 23S rRNA. J. Frank and R. Agrawal, Nature [**406**]{}, 318 (2000). The structure of the 16S rRNA is commonly organized into four domains of a few hundred nucleotides each: the head, the body, the platform, and the $3^{\prime}$ minor domain. A. Spirin, V. Baranov, G. Polubesov, I. Serdyuk, and R. May, J. Mol. Biol. [**194**]{}, 119 (1987). I. Serdyuk, V. Baranov, T. Tsalkova, D. Gulyamova, M. Pavlov, A. Spirin, and R. May, Biochimie [**74**]{}, 299 (1992). M. Valle, A. Zavialov, J. Sengupta, U. Rawat, M. Ehrenberg, and J. Frank, Cell [**114**]{}, 123 (2003).


---
abstract: 'The computation of interfacial free energies between coexisting phases (e.g. saturated vapor and liquid) by computer simulation methods is still a challenging problem due to the difficulty of an atomistic identification of an interface, and due to interfacial fluctuations on all length scales. The approach to estimate the interfacial tension from the free energy excess of a system with interfaces relative to corresponding single-phase systems does not suffer from the first problem but still suffers from the latter. Considering $d$-dimensional systems with interfacial area $L^{d-1}$ and linear dimension $L_z$ in the direction perpendicular to the interface, it is argued that the interfacial fluctuations cause logarithmic finite-size effects of order $\ln (L) / L^{d-1}$ and order $\ln (L_z)/L ^{d-1}$, in addition to regular corrections (with leading order $\operatorname{const}/L^{d-1}$). A phenomenological theory predicts that the prefactors of the logarithmic terms are universal (but depend on the applied boundary conditions and the considered statistical ensemble). The physical origin of these corrections are the translational entropy of the interface as a whole, “domain breathing” (coupling of interfacial fluctuations to the bulk order parameter fluctuations of the coexisting domains), and capillary waves. Using a new variant of the ensemble switch method, interfacial tensions are found from Monte Carlo simulations of $d=2$ and $d=3$ Ising models and a Lennard Jones fluid. The simulation results are fully consistent with the theoretical predictions.'
author:
- Fabian Schmitz
- Peter Virnau
- Kurt Binder
title: 'Logarithmic Finite-Size Effects on Interfacial Free Energies: Phenomenological Theory and Monte Carlo Studies'
---
Introduction {#sec: Introduction}
============
Interfacial phenomena are ubiquitous in the physics of condensed matter and materials science: nucleation of droplets [@1; @2; @3; @4; @5; @6; @7; @8; @9] in a supersaturated vapor (or nucleation of bubbles in an undersaturated liquid) is controlled by a competition between the free energy cost of forming an interface and gain in free energy (resulting from the fact that the stable phase has a lower free energy than the metastable one). Of course, related phenomena occur in more complex systems (crystal nucleation from the melt, formation of nematic or smectic droplets in fluids which can form liquid crystal phases etc.) and in various solid phases (nucleation of ferroelectric or ferromagnetic domains driven by appropriate fields, etc.). In complex fluids and biosystems heterogeneous structures (such as mesophases of strongly segregated block copolymers [@10]) are often maintained in thermal equilibrium, due to an interplay of various free energy contributions, one of them being an interfacial tension. Stable heterogeneous structures can also be stabilized in fluids due to the effect of confining walls, e.g. wetting layers [@11; @12; @13] and nanosystems [@14].
Thus, the prediction of the excess free energy due to an interface between coexisting phases is a basic task of statistical mechanics [@15; @16; @17; @18]. Although this has been recognized for a long time [@19], and mean-field type approaches have been developed and are widely used, e.g. [@20; @21; @22; @23], such theories are not based on firm ground: the existence of a well-defined “intrinsic interfacial profile” is doubtful [@15; @16; @17; @24; @25; @26]; an inevitable input is the free energy density of homogeneous states [@27] throughout the two-phase coexistence region, which is again a concept valid for systems with long-range forces [@5; @28; @29; @30], but ill-defined in the short-range case [@5; @9; @30; @31]. While the bulk phase behavior can often be accounted for rather well by mean-field type theories (apart from the neighborhood of critical points, of course, where the neglected long-wavelength fluctuations and the effects caused by them can be well accounted for by renormalization group theory [@32]), this is not the case for interfacial phenomena. Interfaces (between fluid phases) exhibit fluctuations on all length scales, and although their long-wavelength part (capillary waves [@33; @34; @35; @36; @37; @38]) is well understood, the interplay of short wavelengths with fluctuations in the bulk is not yet well understood [@26; @36; @37; @38]. Thus one cannot systematically improve the mean-field results by fluctuation corrections.
In view of this dilemma, the prediction of interfacial free energies by computer simulation methods [@18; @39; @40; @41; @42; @43; @44; @45; @46; @47; @48; @49; @50; @51; @52; @53; @54; @55; @56; @57; @58; @59; @60; @61; @62; @63; @64; @65; @66; @67; @68; @69; @70; @71; @72; @73; @74; @75; @76; @77; @78] is very important. For many model systems of statistical mechanics, computer simulation methods can very accurately predict the equation of state, and thermodynamic properties derivable from it [@79; @80; @81]. Of course, computer simulations deal with systems of finite size, and hence finite-size effects need to be carefully considered [@82; @83; @84; @85], in particular near critical points or when dealing with phase coexistence. However, finite-size scaling concepts for such problems have been established for a long time [@81; @82; @83; @84; @85] and are very successful [@79; @81].
Unfortunately, with respect to finite-size effects on interfacial phenomena the situation is less satisfactory, although the problem has also been considered for a long time [@84; @86; @87; @88; @89; @90; @121]. Therefore, the present paper takes up this task again, reconsidering the finite-size effects on interfacial tensions for archetypical model systems, such as the Ising model in $d=2$ and $d=3$ dimensions, and the Lennard-Jones fluid. Our work is based on several ingredients:
- By further adaptation of the recently developed ensemble switch method [@91; @92; @93], a computationally very efficient alternative to existing approaches has become available.
- The computational power of currently available computer hardware exceeds, by many orders of magnitude, what was available 20 to 30 years ago, when most previous studies of this problem were done and led to less conclusive results.
- Unlike most previous work, we vary both the linear dimension $L$ parallel to the interface and the linear dimension $L_z$ perpendicular to it systematically. We find that this aspect is crucial to unambiguously identify the sources of the various effects.
- We compare systematically the results obtained choosing different boundary conditions (e.g., periodic versus antiperiodic in the Ising model) and different ensembles (conserved or nonconserved density when we interpret the Ising system as lattice gas).
Due to these ingredients (i)-(iv), we have been able to discover a new mechanism of interfacial fluctuations (“domain breathing”), which has not been mentioned in the previous literature. Apart from the domain breathing mechanism, known effects such as the translational entropy of the interface and capillary wave effects also play a major role in our study.
As a disclaimer, we emphasize that some important aspects will not be studied in this work: we will not address the interesting crossover [@90] of these finite-size effects towards those associated with the critical point, where the interfacial tension vanishes; we also ignore the anisotropy of the interfacial tension, which is present also in the Ising model [@44; @63; @115] and very important when approaching (in $d=3$) the roughening transition [@94], or zero temperature in $d=2$. Of course, this anisotropy must not be ignored when one considers crystal-fluid interfaces [@65; @66; @67; @68; @69; @70; @71; @72; @73; @74; @75; @76; @77; @78]. We plan to study the latter in future work.
The outline of this paper is as follows: in Sec. \[sec: PhenomenologicalTheory\], we describe in detail (a brief summary was already presented in a Letter [@95]) the phenomenological theory of the logarithmic finite-size effects on interfacial tensions. In Sec. \[sec: ModelsAndSimulationMethods\], we briefly characterize the models that are studied, and describe the ensemble switch method that is used in the Monte Carlo simulations. Sec. \[sec: NumericalResults\] describes our numerical results for the $d=2$ and $d=3$ Ising model and a $d=3$ Lennard-Jones fluid, by which our theoretical predictions are tested. Sec. \[sec: Conclusion\] gives a summary and an outlook on open problems.
Phenomenological Theory of Finite-Size Effects on Interfacial Free Energies {#sec: PhenomenologicalTheory}
===========================================================================
System Geometry and Boundary Conditions {#sec: SystemGeometryAndBoundaryConditions}
---------------------------------------
For simplicity, in most of our discussions we shall focus on the ferromagnetic Ising system with nearest neighbor interactions of strength $J$, i.e. described by the Hamiltonian on a square or simple cubic lattice, $$\label{eq: IsingHamiltonian} \mathcal{H}=-J \sum\limits_{\langle i,j \rangle} \, S_i S_j - H\sum\limits_{i} S_i , \quad S_i =\pm 1,$$ where $\langle i, j\rangle $ denotes the sum over all nearest neighbor pairs, and $H$ is the magnetic field, which is set to zero throughout this work. We focus on coexisting phases, described for temperatures $T$ less than the critical temperature $T_c$ by states with positive or negative spontaneous magnetization, $\pm m_\text{coex}$. Motivated by the interpretation of the Ising magnet as a lattice gas model (where $S_i=-1$ means that the lattice site $i$ is empty, while $S_i=+1$ means that the lattice site $i$ is occupied by a particle), we denote the $(T,H)$ ensemble as “grandcanonical” (gc) and the $(T,m)$ ensemble as “canonical” (c). Here $m$ is defined as the magnetization per spin, $$\label{eq2}
m=\frac{1}{L_z L^{d-1}} \sum_i S_i \; ,$$ where we have already anticipated that we take a lattice of linear dimension $L_z$ in the $z$-direction, while the linear dimension in the other direction(s) is taken to be $L$. Remember that in the lattice gas version of the model, the density $\rho=(1 + m)/2$, and $H$ is related to the chemical potential difference relative to the chemical potential $\mu_\text{coex}$ where phase coexistence occurs, $H=(\mu-\mu_\text{coex})/2$.
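For concreteness, the Hamiltonian in Eq. \[eq: IsingHamiltonian\] and the magnetization per spin in Eq. \[eq2\] can be evaluated in a few lines. A minimal sketch for a $d=2$ periodic lattice (the lattice size and coupling are illustrative; $H=0$ as used throughout this work):

```python
import numpy as np

def energy_and_magnetization(S, J=1.0, H=0.0):
    """Nearest-neighbor Ising energy E = -J sum_<ij> S_i S_j - H sum_i S_i
    and magnetization per spin m, on a 2D lattice with periodic boundary
    conditions; np.roll counts each bond exactly once."""
    bonds = S * np.roll(S, 1, axis=0) + S * np.roll(S, 1, axis=1)
    E = -J * bonds.sum() - H * S.sum()
    m = S.mean()
    return E, m

L = 8
S = np.ones((L, L), dtype=int)   # fully ordered state, S_i = +1
E, m = energy_and_magnetization(S)
# ordered state: two bonds per site, so E = -2*J*L^2 and m = 1
```

In the lattice-gas reading, `m` converts to the density via $\rho = (1+m)/2$.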
Next we discuss the boundary conditions that are used to stabilize one or two interfaces between coexisting phases in the system. A very natural choice is the use of free surfaces with neighboring fixed spins in the $z$-direction: using the lattice spacing $a$ as unit of length, all spins in the plane (or row in $d=2$) $n=1$ are fixed at $S_i=+1$ and the spins in the plane $n=L_z$ are fixed at $S_i=-1$ (Fig. \[fig: BoundaryConditionsFreeSurfaces\]). Alternatively, we may use boundary magnetic fields $H_1 > 0$ in the plane (row) $n=1$ and $H_{L_z} =-H_1$ in the plane $n=L_z$, and spins in the planes $n=0$, $L_z +1$ are missing. In the remaining direction(s), periodic boundary conditions are used. This choice of boundary conditions is straightforwardly generalized to off-lattice systems which lack the special symmetry against spin reversal of the Ising model. E.g., for a Lennard-Jones fluid (or a polymer solution where the solvent is treated implicitly only [@55]), instead of the free surfaces with fixed spins, one uses two hard walls, where one wall is purely repulsive, favoring the vapor (or solvent-rich phase, in the case of the polymer solution) while the other wall has an attractive potential. Similar choices also apply when one studies systems containing a single solid-liquid interface [@96].
It is clear that the properties of the system near these free surfaces or walls differ from the bulk properties over some range, and so $L_z$ has to be chosen large enough so that the effect of an effective potential that the wall exerts on the interface becomes negligible. The effect of this potential becomes appreciable under conditions where the system in the thermodynamic limit would undergo a wetting transition, while for $L_z$ finite but $L \rightarrow \infty$ interface localization/delocalization transitions can occur [@97; @98; @99]. One must then make sure to work under conditions deep inside the phase where the interface is preferentially in the center of the system, near $z=L_z/2$, and never close to the walls.
This problem can be avoided for the Ising model (and other symmetric systems, e.g. a symmetric binary Lennard Jones mixture [@64]) by using the antiperiodic boundary condition (APBC), Fig. \[fig: BoundaryConditionsAPBC\], which is equivalent to the choice that spins in the planes $n=1$ and $n=L_z$ interact antiferromagnetically. Then the system retains its translational invariance in the $z$-direction.
However, perhaps the most frequently used choice is to use periodic boundary conditions in all directions, and focus on states of the system where both coexisting phases are present in the system, separated by two domain walls (Fig. \[fig: BoundaryConditionsPBC\]).
Note that we normally use $L_z$ larger than $L$ (sometimes it is advantageous to use $L_z \gg L$) but one has to be careful in not using a too large value of $L_z$: We wish to have a situation where in the grandcanonical ensemble systems with APBC (or with fixed spin boundary conditions) are dominated by states with two domains separated by a single interface (as anticipated in Fig. \[fig: BoundaryConditions\]) rather than by a larger even number of domains and hence a larger odd number of interfaces. Likewise, in the PBC case (Fig. \[fig: BoundaryConditionsPBC\]) the system in the grandcanonical ensemble will in fact be dominated by the pure phases $(m_+, m_-)$ without any interfaces, and the shown state with two interfaces (Fig. \[fig: BoundaryConditionsPBC\]) occurs as a rare fluctuation, but states with 4, 6 or more interfaces are comparatively negligible. In fact, for $L_z \rightarrow \infty$ at fixed $L$, the resulting quasi-one dimensional system splits into a sequence of infinitely many domains, the typical distance between domain walls (which is the correlation length of spin correlations in $z$-direction) is given by [@86] $$\label{eq3}
\xi_\parallel \propto w_L \exp (\gamma_\infty L^{d-1}) \quad ,$$ with $$\label{eq4}
w_L \propto \left\{
\begin{array}{ll}
\gamma^{-1/2}_\infty L ^{(3-d)/2} & d<3 \\
\gamma^{-1/2}_\infty \sqrt{\ln L} & d=3 \end{array}
\right.$$ where the length $w_L$ is the width of an interface with lateral dimension(s) $L$, and $\gamma_\infty$ is the interfacial tension in the limit $L \rightarrow \infty$. Here and in the following, the interfacial tension is always normalized by the thermal energy ${k_\text{B}}T$, ${k_\text{B}}$ being Boltzmann’s constant, and is therefore given in units of inverse ($d-1$)-dimensional area. In Eq. \[eq4\], the results from capillary wave broadening of the interface (see e.g. [@100]) have been anticipated. Strictly speaking, the prefactor in Eq. \[eq4\] for lattice systems is not $\gamma^{-1/2}_\infty$ but rather $\Gamma^{-1/2}$, where $\Gamma$ is the “interfacial stiffness” [@100], but this difference is not of interest here. We shall discuss Eqs. \[eq3\], \[eq4\] in later subsections; here we only emphasize that the simulations need to be carried out in the regime $L_z \ll \xi_\parallel$ in order to ensure that only states with one interface (Figs. \[fig: BoundaryConditionsFreeSurfaces\] and \[fig: BoundaryConditionsAPBC\]) or at most two interfaces (Fig. \[fig: BoundaryConditionsPBC\]) are sampled. Apart from the critical region (remember that $\gamma_\infty \rightarrow 0$ as $T \rightarrow T_c$ [@15; @17]), the exponential variation of $\xi_\parallel$ with the interfacial area $L ^{d-1}$ ensures that for reasonably large $L$ the length $\xi_\parallel$ is extremely large, and so the condition $L_z \ll \xi_\parallel$ is easily fulfilled. When one approaches the critical region, it is necessary to choose $L \gg \xi_b$, $\xi_b$ being the correlation length of order parameter fluctuations in the bulk. We also observe that sampling the order parameter distribution $P_{L, L_z}(m)$ in the grandcanonical ensemble using PBC (Fig. \[fig: BoundaryConditionsPBC\]) can also serve as a check that one works in the proper regime of $L$ and $L_z$ (Fig. \[fig: ProbDistributions\]).
For studies of the interfacial tension, the distribution must have two sharp peaks at $m=\pm m_\text{coex}$ and a flat (essentially horizontal) minimum near $m=0$, with $P_{L,L_z} (m\approx 0)$ many orders of magnitude smaller than $P_{L, L_z}(\pm m_\text{coex})$; note the logarithmic scale of the ordinate in Fig. \[fig: ProbDistributions\]: If the minimum is shallow and rounded, we can conclude that $L$ is not large enough; if instead of a minimum we observe a broad maximum near $m=0$, we can conclude that for the chosen value of $L$ the perpendicular linear dimension $L_z$ is too large, and states with more than two domain walls contribute [@101; @102]. In Fig. \[fig: ProbDistributionsLargeLz\], where we have deliberately chosen a small value of $L$ ($L=6$), one can recognize that already for $L_z=48$, there is a flat local maximum at $\rho=0.5$, rather than a minimum, because the sampling is “contaminated” by states with 4 (rather than only 2) interfaces; for $L_z=96$ and $192$, this effect is so pronounced that the method based on the analysis of $P_{L,L_z}(\rho=0.5)$ is inapplicable. For $L_z=384$, we have multi-domain states. As will be discussed below, the actual dependence of $P_{L, L_z}(m \approx 0)$ on $L$ and $L_z$ contains the desired information on the interfacial tension [@18; @39; @45; @46; @49; @51; @52; @53; @54; @56; @57; @58; @59; @62; @64], but only if states with more than two domains make negligible contributions.
Translational entropy of the whole interface
--------------------------------------------
When we consider an Ising chain at low temperatures, the correlation length is very large, $\xi \approx \exp (2J/{k_\text{B}}T)/2$ [@103], and the associated free energy per spin is $F \approx -J-{k_\text{B}}T \exp (- 2 J/{k_\text{B}}T)$. The state of the system can be characterized by a sequence of large domains of parallel spins, with an average size [@104] $2 \xi$, separated by “interfaces” where the spin orientation changes. Thus, the system can be viewed as a dilute gas of randomly distributed interfaces. The energy cost to create such an interface is $2J$, and the gain in (translational) entropy is ${k_\text{B}}\exp (-2 J/{k_\text{B}}T)$.
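To make these numbers concrete, the low-temperature asymptotics $\xi \approx \exp(2J/{k_\text{B}}T)/2$ can be checked against the exact transfer-matrix result for the chain. The following short Python sketch is our own illustration (not part of the simulation code used in this work); `K` denotes $J/{k_\text{B}}T$:

```python
import math

def xi_exact(K):
    # 1d Ising chain at H=0: the transfer-matrix eigenvalues are
    # 2*cosh(K) and 2*sinh(K), so xi = 1/ln(lambda_max/lambda_min)
    return 1.0 / math.log(1.0 / math.tanh(K))

def xi_low_T(K):
    # low-temperature asymptotics quoted in the text: xi ~ exp(2K)/2
    return 0.5 * math.exp(2.0 * K)

for K in (1.0, 2.0, 3.0):   # K = J/(k_B T)
    print(K, xi_exact(K), xi_low_T(K))
```

Already for $K \gtrsim 1$ the two expressions agree to within about one percent.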
As is well known, and can be shown explicitly by transfer matrix methods [@83], this picture carries over to two-dimensional Ising strips of width $L$ (with PBC in the direction across the strip), where one finds $$\label{eq5}
\xi_\parallel \propto L^{1/2} \exp (L \gamma^{(d=2)}_\infty)$$ with $\gamma^{(d=2)} _\infty$ being the exactly known [@105] interface tension of the two-dimensional Ising model, normalized by ${k_\text{B}}T$ (and hence having the dimension of inverse length, the unit of length being the lattice spacing $a$) $$\label{eq6}
\gamma_\infty^{(d=2)} =\frac{2J}{{k_\text{B}}T} - \ln\left( \frac{1+\exp (-2J/{k_\text{B}}T)}{1-\exp(-2J/{k_\text{B}}T)} \right) \; .$$ Eq. coincides with the field-theoretic result Eq. in the case of $d=2$, as it should be. While the free energy cost of an interface in the Ising chain is $2J$, in the Ising strip it is $$\label{eq7}
F^\text{eff}_\text{int} = {k_\text{B}}T \gamma^{(d=2)}_\infty L + \frac{{k_\text{B}}T}{2} \ln\left( \frac{L}{\operatorname{const}} \right)$$ The logarithmic correction in this expression was interpreted by Fisher [@106] as a result of an effective repulsive interaction between interfaces due to their capillary wave excitations.
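Since Eq. gives $\gamma^{(d=2)}_\infty$ in closed form, it is straightforward to evaluate numerically. The snippet below is an illustrative sketch with our own function names; it also verifies that the tension vanishes at the Onsager critical coupling, $\sinh(2J/{k_\text{B}}T_c)=1$:

```python
import math

def gamma_2d(K):
    # exact 2d Ising interface tension (Eq. eq6), normalized by k_B*T;
    # K = J/(k_B T), result in units of inverse lattice spacings
    e = math.exp(-2.0 * K)
    return 2.0 * K - math.log((1.0 + e) / (1.0 - e))

# gamma vanishes at the Onsager critical point sinh(2*K_c) = 1
K_c = 0.5 * math.asinh(1.0)
print(gamma_2d(K_c))          # ~0 at criticality
print(gamma_2d(1.0 / 1.2))    # k_B T / J = 1.2, one of the simulated temperatures
```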
If we again view the Ising strip at low temperatures as a dilute gas of domain walls separating large domains of opposite order parameter, it is natural to ask what the free energy difference $\Delta F$ between a system with one domain wall on a length $L_z$ and a system in a monodomain configuration on the same length scale is. Taking the entropy gain of putting the interface anywhere on this scale $L_z$ into account, we conclude [@107; @108] $$\label{eq8}
\Delta F =F_\text{int} - {k_\text{B}}T \ln \left( \frac{L_z}{l_\text{int}} \right), \quad F_\text{int}={k_\text{B}}T \gamma ^{(d=2)}_\infty L$$ where we have normalized $L_z$ with some intrinsic length $l_\text{int}$ of the system, such that the ratio $L_z/l_\text{int}$ “counts” the number of distinct configurations containing one (coarse-grained) interface on the scale $L_z$. In the one-dimensional Ising chain, where no internal degrees of freedom are associated with the “kink” separating a domain of up spins from a domain of down spins, and the kink can appear between any two neighboring lattice sites, the length $l_\text{int}$ simply is the lattice spacing $a(=1)$. However, all the configurational degrees of freedom associated with an interface in higher dimensions are already included in $F_\text{int}$, and must not be included again in the translational entropy term in Eq. , to avoid double counting; thus we expect that $l_\text{int}$ will be much larger than the lattice spacing, and a plausible assumption is to identify $l_\text{int}$ with the interfacial width $w_L$, as written in Eqs. , , see also Fig. \[fig: TranslationalEntropy\]. From Eq. we conclude that $\Delta F=0$ for $L_z=L_{z,0}$, with $$\begin{gathered}
\label{eq9}
L_{z,0}=l_\text{int} \exp (F_\text{int}/{k_\text{B}}T)= l_\text{int} \exp (\gamma^{(d=2)}_\infty L) \\
=\exp (F^\text{eff}_\text{int}/{k_\text{B}}T)\end{gathered}$$ Thus, when we have a single interface in the system, an interpretation of correction terms as being due to repulsive interactions between interfaces lacks plausibility. If we instead adopt the interpretation of Fig. \[fig: TranslationalEntropySketch\], namely that we can work with non-interacting interfaces where an interface needs a space of extent $l_\text{int}=w_L$ in $z$-direction, any such problems are avoided, and Eq. is interpreted via Eq. as resulting from the translational entropy of the interface. We also note that Eq. is readily generalized to arbitrary dimension, by stating that the translational entropy gain of an interface in an $L^{d-1} \times L_z$ geometry causes a correction term to the interfacial tension $\gamma$ ($\gamma=\Delta F/L^{d-1}$), namely $$\label{eq10}
\Delta \gamma = - \frac{1}{L^{d-1}} \ln \left( \frac{L_z}{w_L} \right) \quad.$$ Recall that in the classical limit of quantum systems the length used for counting the states for the translational entropy is the thermal de Broglie wavelength. Here, we deal with purely classical statistical mechanics, hence the use of another physical length of the system, such as $w_L$, is more appealing. In $d=2$, the exact transfer matrix results show that in geometries such as Fig. \[fig: BoundaryConditionsFreeSurfaces\] and \[fig: BoundaryConditionsAPBC\], for large $L_z$ and large $L$ the interfacial tension can be written as $\gamma = \gamma_\infty + \Delta \gamma = \gamma_\infty - L^{-1} \ln(L_z/w_L)$, which implies that capillary wave effects are already fully accounted for through $w_L$ in Eq. .
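In practice, Eq. is used to correct a measured finite-size tension. A minimal sketch (function names are our own; the choice of $w_L$ as the counting length follows the discussion above):

```python
import math

def delta_gamma_translational(L, L_z, w_L, d=3):
    # Eq. eq10: entropy gain of translating a single interface in an
    # L^(d-1) x L_z box, counted in units of the interfacial width w_L
    return -math.log(L_z / w_L) / L ** (d - 1)

def gamma_infinity_estimate(gamma_L_Lz, L, L_z, w_L, d=3):
    # subtract the translational term from a finite-size estimate;
    # illustrative only -- further corrections (see below) may apply
    return gamma_L_Lz - delta_gamma_translational(L, L_z, w_L, d)
```

Since the correction is negative, the corrected estimate always lies above the raw finite-size value.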
Capillary wave effects continued {#sec: CapillaryWaveEffects}
--------------------------------
For the sake of completeness, we briefly recall what is known on the finite-size effects on the interfacial tension due to capillary waves [@84; @87; @88; @89; @90]. Ignoring the intrinsic interfacial structure, the interface is described by a function $z=h(x)$ in $d=2$ or $z=h(x,y)$ in $d=3$, respectively, that characterizes the dividing surface between the phases with opposite order parameter. Since overhangs are forbidden, a coarse-graining as implied in Fig. \[fig: TranslationalEntropySketch\] is anticipated. If one assumes additionally that $|{\text{d}}h(x)/{\text{d}}x|$ and $|\nabla h(x,y)|$ are very small, the Hamiltonian describing the capillary wave fluctuations is [@100] (again in units of the thermal energy ${k_\text{B}}T$ and ignoring the distinction between interfacial tension $\gamma_\infty$ and interfacial stiffness [@100])
$$\begin{aligned}
\mathcal{H}_\text{cw} &=\frac{\gamma_\infty}{2} \int {\text{d}}x \left|\frac{{\text{d}}h}{{\text{d}}x}\right|^2 &&(d=2) \label{eq11a} \\
\mathcal{H}_\text{cw} &= \frac{\gamma_\infty}{2} \int {\text{d}}x \int {\text{d}}y \left|\nabla h (x,y)\right|^2 &&(d=3) \;, \label{eq11b}\end{aligned}$$
respectively. Note that here the total interface tension $\gamma_\infty$ (that results in the thermodynamic limit) is taken [@88; @89], rather than some renormalized quantity. Introducing Fourier transforms $h_q$ of these height variables $h(x)$ or $h(x,y)$, one finds $$\label{eq12}
\mathcal{H}_\text{cw} =\frac{\gamma_\infty}{2} \frac{1}{(2 \pi)^{d-1}} \int {\text{d}}^{d-1} q \; q^2 |h_q|^2$$ and the resulting contribution to the free energy can be written in terms of path integrals $$\begin{gathered}
\label{eq13}
\Delta F =-{k_\text{B}}T \ln \int D h_q \int D h_q^* \\
\exp \left(-\frac{\gamma_\infty}{2} \frac{1}{(2 \pi)^{d-1}} \int {\text{d}}^{d-1} q \; q^2 |h_q|^2 \right)\end{gathered}$$ We now take into account that in a finite geometry with PBC in $x$, (or $x$ and $y$, respectively) directions reciprocal space is discrete, and hence Eq. becomes (in $d=2$) $$\begin{aligned}
\Delta F_\text{cw} &=-{k_\text{B}}T \ln \prod_\nu \int\limits_{ - \infty}^{+\infty} d h_\nu \int\limits_{ - \infty}^{+\infty} d h^*_\nu \exp \left(\frac{-\gamma_\infty}{2}\; q^2_\nu h ^*_\nu h_\nu\right) \nonumber \\
&=- {k_\text{B}}T \ln \prod_\nu \left(\frac{2 \pi}{\gamma_\infty q_\nu^2}\right) \label{eq14}\end{aligned}$$ where $q_\nu = \pm \nu \pi a/L, \, \nu=1, \ldots, N=L/a$. Of course, the term $\nu=0$ (corresponding to a uniform translation of the interface) needs to be omitted here. One can show that for large $L$ the resulting finite-size behavior is ($\Delta \gamma_\text{cw}=\Delta F_\text{cw}/L$) $$\label{eq15}
\Delta \gamma_\text{cw} = A + \frac{B}{L} \ln \left( \frac{L}{a} \right) + \frac{C}{L} \quad,$$ where the regular terms in $1/L$, namely $A$ and $C/L$, are dominated by the large $q$ behavior, while the singular logarithmic term is due to small wave numbers and its prefactor $B=1/2$ agrees with transfer matrix results quoted in Eq. . Since the capillary wave description is no longer reliable at large $q$, however, no conclusion on the value of the leading term (A) and the coefficient $C$ of the regular finite correction $(C/L)$ can be made. The situation is worse in $d=3$, however, where in an analogous calculation no singular term due to long wavelength capillary waves can be identified. Capillary wave corrections are then expected to have the form, to leading order, $$\label{eq16}
\Delta \gamma_\text{cw} =\frac{\operatorname{const}}{L ^{d-1}}$$ but the constant is not expected to be universal. We recall, however, that from the equipartition theorem one can conclude from Eq. that [@100] $$\label{eq17}
\langle |h_q |^2 \rangle =(\gamma_\infty q ^2 )^{-1}$$ and hence Eq. readily follows, since (in $d=2$) $$\label{eq18}
w^2_L =\langle h^2 (x) \rangle - \langle h (x) \rangle^2 \propto \frac{a}{\gamma_\infty} \int\limits^{2 \pi /a}_{2 \pi/L} \frac{{\text{d}}q}{q^2} \propto \frac{aL}{\gamma_\infty} \;,$$ while in $d=3$ one finds $$\label{eq19}
w^2_L \propto \frac{a^2}{\gamma_\infty} \int\limits^{2 \pi/a}_{ 2 \pi/L} \frac{{\text{d}}q}{q} \propto \frac{a^2}{\gamma_\infty} \ln \left(\frac{L}{a}\right) \quad .$$
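The scaling of Eq. can be checked directly by summing the discrete capillary-wave modes of a finite periodic interface. The sketch below is our own illustration; overall prefactors depend on the chosen Fourier conventions, so only the proportionality $w_L^2 \propto L$ in $d=2$ is meaningful here:

```python
import math

def w_sq_d2(L, gamma_inf=1.0, a=1.0):
    # discrete capillary-wave sum for the squared interfacial width in d=2:
    # modes q_nu = 2*pi*nu/L, nu = 1..L/a - 1 (uniform translation omitted),
    # each contributing <|h_q|^2> = 1/(gamma_inf * q^2)  (Eq. eq17);
    # normalization conventions affect the prefactor, not the L-scaling
    n_max = int(L / a) - 1
    return sum(1.0 / (gamma_inf * (2.0 * math.pi * nu / L) ** 2)
               for nu in range(1, n_max + 1)) / L

# w_L^2 grows linearly with L in d=2 (Eq. eq18):
for L in (16, 32, 64):
    print(L, w_sq_d2(L) / L)
```

The ratio $w_L^2/L$ approaches a constant as $L$ grows, as Eq. requires.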
Domain breathing {#sec: DomainBreathing}
----------------
We first consider a situation with APBC, so we have a single interface, but with conserved magnetization $m=0$. Then on average we have two equally large domains, with linear dimensions $L_z/2$ in $z$-direction each, of opposite magnetization. However, the magnetization densities $m_+$, $m_-$ in both domains can still fluctuate, and the position of the interface is not pinned but can fluctuate somewhat as well. We denote this shift of the interface due to a fluctuation by $\Delta$, and note the constraint that the total magnetization in the system is strictly fixed at $m=0$, to find $$\label{eq20}
\begin{split}
0 &= m L^{d-1} L_z \\
&= m_+ L ^{d-1} \left(\frac{L_z}{2} - \Delta \right) + m_-L^{d-1} \left(\frac{L_z}{2} +\Delta \right)
\end{split}$$ and hence $$\label{eq21}
\Delta= \frac{L_z}{2} \left(\frac{m_+ + m_-}{m_+ -m_-}\right) \approx \frac{\delta m_+ + \delta m_-}{2 m_\text{coex}}
\; \frac{L_z}{2}$$ where we used that the fluctuations $\delta m_+ = m_+ - m_\text{coex}$, $\delta m_-=m_-+ m_\text{coex}$ are very small. From general statistical thermodynamics we know that these fluctuations of the magnetization density in the bulk obey Gaussian distributions [@107] $$\label{eq22}
P_{L, L_z/2} (\delta m) \propto \exp \left[-\frac{1}{2} \frac{(\delta m)^2 L_z L ^{d-1}}{2 {k_\text{B}}T \chi_\text{coex}} \right]\quad,$$ where $\chi_\text{coex}$ is the susceptibility at the coexistence curve. Eq. is true both for $\delta m_+$ and $\delta m_-$, and these fluctuations in the two subvolumes of the system can occur independently of each other, so $\langle \delta m_+ \delta m_- \rangle =0$, while $ \langle \delta m^2_+ \rangle = \langle \delta m^2_- \rangle = {k_\text{B}}T\chi_\text{coex}/(L^{d-1} L_z/2)$. Hence we conclude from Eq. that $$\label{eq23}
\begin{split}
\left\langle \Delta ^2 \right\rangle &= \frac{L_z^2}{16 m^2_\text{coex}} \left[\left\langle\delta m_+^2 \right\rangle + \left\langle\delta m_-^2 \right\rangle \right] \\
&= \frac{{k_\text{B}}T \chi_\text{coex}}{4 m^2_\text{coex}} \frac{L_z}{L^{d-1}} \quad.
\end{split}$$ Thus, the typical length over which the interface position fluctuates is $$\label{eq24}
\sqrt{\langle \Delta^2 \rangle } =L_z ^{1/2} L^{-(d-1)/2} \frac{\sqrt{{k_\text{B}}T \chi_\text{coex}}}{2m_\text{coex}}$$ From this motion of the interface over a width $\sqrt{\langle \Delta ^2 \rangle}$, which we call “domain breathing”, we again get an entropy contribution, resulting in a correction of the interfacial tension $$\label{eq25}
\begin{split}
\Delta \gamma_\text{db} &= - \frac{1}{L ^{d-1}} \ln \left( \frac{\sqrt{\langle \Delta^2 \rangle}}{w_L}\right) \\
&= - \frac12 \frac{\ln L_z}{L ^{d-1}} + \frac{d-1}{2} \frac{\ln L}{L^{d-1}} + \frac{3-d}{2} \frac{\ln L}{L^{d-1}}+\frac{\operatorname{const}}{L^{d-1}} \;.
\end{split}$$ To simplify the notation, we assume here (and in the following) that the lengths $L, L_z$ are measured in some natural units (e.g. the lattice spacing $a$, in case of the Ising model) and hence dimensionless. Note that there is some ambiguity of interpretation possible. In our previous publication [@95], the length to normalize $\sqrt{\langle \Delta^2\rangle}$ was taken as the lattice spacing $a$, and then the capillary wave contribution $(3-d) \ln L/(2L^{d-1})$ must be added as an explicit further correction. However, when we use $w_L$ (as computed in Eq. or and , respectively) rather than $a$ to normalize $\sqrt{\langle \Delta^2\rangle}$, then the capillary wave effects are already fully taken care of. Fig. \[fig: DomainBreathing\] illustrates the occurrence of this “domain breathing” effect by configuration snapshots.
A special situation occurs in the case of the canonical ensemble for PBC. This is a very common situation, since no symmetry between the coexisting phases is required. Because the system exhibits translational invariance, the domains separated by the two walls can be translated along the $z$-axis as a whole. For this degree of freedom, a correction $-\ln L_z/L^{d-1}$ to the interfacial tension arises. In addition, the distance between the domain walls can fluctuate, according to the domain breathing effect, as described above, yielding an additional entropic term $-\frac{1}{2} \ln L_z/L^{d-1}$. Since there are two interfaces present in the system, the total correction $-(3/2) \ln L_z/L^{d-1}$ yields a correction of $-(3/4) \ln L_z/L^{d-1}$ per interface.
We also note that it is not necessary to fix the magnetization exactly at $m=0$ (or, in the case of a fluid that possibly lacks any symmetry between the coexisting liquid $(l)$ and the vapor $(v)$ phases, at a density $\rho=(\rho_l + \rho_v)/2)$. Rather it suffices to choose a state point where in the simulation box we have a clear slab configuration of phase coexistence. Also, in a system lacking symmetries between the coexisting phases, the distributions around $m_+$, $m_-$ are characterized by different “susceptibilities” $\chi^+_\text{coex}$, $\chi^-_\text{coex}$, but for the exponents of $L_z$ and $L$ in Eq. , this does not matter.
At this point, let us summarize the various logarithmic corrections found for the different choices of boundary conditions and ensembles: for the APBC(gc) case, we have a single interface that can freely translate (Fig. \[fig: TranslationalEntropy\], Eq. ). This yielded $$\Delta \gamma_{L,L_z}= - \frac{\ln L_z}{L ^{d-1}} + \frac{3-d}{2} \frac{\ln L}{L^{d-1}} + \frac{\operatorname{const}}{L^{d-1}} \;.$$ Due to the lack of conservation laws, there is no coupling of the bulk domain fluctuations and interfacial fluctuations via the domain breathing effect in this case, unlike the APBC(c) case, for which Eq. implies $$\Delta \gamma_{L,L_z} = - \frac{1}{2} \frac{\ln L_z}{L ^{d-1}} + \frac{\ln L}{L^{d-1}} + \frac{\operatorname{const}}{L^{d-1}} \;.$$ In the PBC(c), we have two interfaces, and we have both the above translational entropy contribution (when we translate the domains as a whole) and the domain breathing effect (considering the relative motion of the two domain walls against each other), and normalized per single interface this yields $$\Delta \gamma_{L,L_z} = - \frac{3}{4} \frac{\ln L_z}{L ^{d-1}} + \frac{5-d}{4}\frac{\ln L}{L^{d-1}} + \frac{\operatorname{const}}{L^{d-1}} \;.$$ Note that by normalizing domain wall motions consistently by $w_L$ rather than by $a$, capillary wave effects are automatically included.
Taking all logarithmic finite-size corrections (due to translational entropy, domain breathing, and capillary waves) together, it makes sense to write the result for the interfacial tension in the following general form $$\label{eq26}
\gamma_{L,L_z} =\gamma_\infty - x_\perp \frac{\ln L_z}{L^{d-1}} + x_\parallel \frac{\ln L}{L^{d-1}} + \frac{C}{L^{d-1}}$$ with some constant $C$ and two universal exponents $x_\perp$, $x_\parallel$ that depend on dimensionality $d$, type of boundary conditions (PBC, APBC) and statistical ensemble (grand canonical versus canonical). We present these constants $x_\perp$, $x_\parallel$ in Table \[tab: ScalingConstants\].
$d$ BC ensemble $x_\perp$ $x_\|$
----- -------------- ---------------- ----------- --------
2 antiperiodic grandcanonical $1$ $1/2$
3 antiperiodic grandcanonical $1$ $0$
2 antiperiodic canonical $1/2$ $1$
3 antiperiodic canonical $1/2$ $1$
2 periodic canonical $3/4$ $3/4$
3 periodic canonical $3/4$ $1/2$
: The universal constants $x_\perp$ and $x_\parallel$ in Eq. do not depend on details of the model such as particle interactions, but they rather depend on the dimensionality $d$, the boundary conditions (periodic or antiperiodic) and the ensemble (canonical or grandcanonical).[]{data-label="tab: ScalingConstants"}
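For reference, the entries of Table \[tab: ScalingConstants\] can be collected in a small lookup that evaluates the finite-size form of Eq. ; the dictionary keys and function signature are our own illustrative choices:

```python
import math

# universal exponents of Eq. eq26, transcribed from the table above;
# keys: (dimension d, boundary condition, statistical ensemble)
X_EXPONENTS = {
    (2, "apbc", "grandcanonical"): (1.0, 0.5),
    (3, "apbc", "grandcanonical"): (1.0, 0.0),
    (2, "apbc", "canonical"):      (0.5, 1.0),
    (3, "apbc", "canonical"):      (0.5, 1.0),
    (2, "pbc",  "canonical"):      (0.75, 0.75),
    (3, "pbc",  "canonical"):      (0.75, 0.5),
}

def gamma_finite_size(gamma_inf, C, L, L_z, d, bc, ensemble):
    # Eq. eq26: gamma_{L,L_z} = gamma_inf - x_perp*ln(L_z)/L^(d-1)
    #                          + x_par*ln(L)/L^(d-1) + C/L^(d-1)
    x_perp, x_par = X_EXPONENTS[(d, bc, ensemble)]
    A = L ** (d - 1)
    return gamma_inf - x_perp * math.log(L_z) / A + x_par * math.log(L) / A + C / A
```

Note that for the PBC(c) case in $d=2$, where $x_\perp=x_\parallel$, the logarithmic terms cancel whenever $L=L_z$.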
Models and simulation methods {#sec: ModelsAndSimulationMethods}
=============================
As stated already in Sec. \[sec: PhenomenologicalTheory\], the main emphasis of this study is on the Ising model (Eq. ), since (i) there is no source of inaccuracy due to insufficient knowledge of the conditions for which phase coexistence in the bulk occurs, as symmetry requires phase coexistence to occur at $H=0$, and (ii) in the case $d=2$ the surface tension is known exactly, Eq. , and so the concepts described in Sec. \[sec: PhenomenologicalTheory\], in particular Eq. , can be very stringently tested. In $d=2$, we have typically used $L=10, 20, 30$ and $40$, varying $L_z$ from $L_z=20$ to $L_z=200$ in order to test the $L_z$-dependence at fixed $L$ (Eq. ). In addition, at fixed $L_z=60$ and $120$, runs were made varying $L$ from $L=10$ to $L=L_z$ to test the $L$-dependence in Eq. . In $d=3$, we have used $L=6, 8, 10, 12,$ and $14$, varying $L_z$ from $L_z=20$ to $L_z=100$ for the test of Eq. , as well as $L_z=20, 40$ and $80$, varying $L$ from $L=10$ to $L=40$ for the test of Eq. . In the grandcanonical ensemble, all runs were performed simply with the standard single-spin-flip Metropolis algorithm [@81]. Since the simulations are performed far below the critical point (${k_\text{B}}T/J=1.2, 1.6$ and $2.0$ in $d=2$; ${k_\text{B}}T/J=3$ in $d=3$), the use of cluster algorithms [@81] would not provide any advantage. The canonical ensemble is realized via a spin exchange algorithm; by choosing two spins at random from the whole simulation box, rather than a pair of nearest-neighbor spins as in the standard spin exchange algorithm [@81], we avoid slow relaxation of long-wavelength magnetization fluctuations.
Special techniques are required when one wishes to sample the probability distribution $P_{L,L_z} (\rho)$, Fig. \[fig: ProbDistributions\], since it varies over many orders of magnitude. Straightforward use of the Metropolis algorithm (as originally attempted [@39]) would not give any useful data for our purposes. While previous work [@45; @46; @49] relied on the multicanonical Monte Carlo method, we found it here more convenient to use successive umbrella sampling [@109] which is more straightforward to implement. We recall that from $P_{L,L_z}(\rho)$ one can extract an estimate for the interfacial tension $\gamma_{L, L_z}$ as follows [@39] $$\label{eq27}
\gamma_{L,L_z}= \frac{1}{2 L^{d-1}} \ln \left(\frac{P_{L, L_z} (\rho_\text{coex})}{P_{L, L_z} (\rho_\text{min})}\right)\quad.$$ Here we use a notation which applies both to the lattice gas (where the density $\rho_\text{min}$ where the minimum of $P_{L, L_z}(\rho)$ occurs corresponds to a magnetization $m=0$ in the magnetic interpretation of the Ising model) and to fluids which may lack particular symmetries (then the minimum occurs for the density of the “rectilinear diameter”, $\rho_\text{min}=\rho_d=(\rho_v + \rho_l )/2$, $\rho_v$ and $\rho_l$ being the densities of the coexisting vapor and liquid phases). The physical interpretation of Eq. simply is that the probability to observe a state at $\rho_\text{min}$, in comparison to the probability to observe one of the pure phases at coexistence ($\rho_v$ or $\rho_l$, respectively) is down by a factor $\exp(-2 L^{d-1} \gamma_{L, L_z})$, due to the fact that we must have 2 interfaces of area $L^{d-1}$ (Fig. \[fig: BoundaryConditionsPBC\]). Note that although $P_{L,L_z} (\rho)$ is generated by carrying out a sampling (multicanonical or umbrella sampling) in the grandcanonical ensemble (at magnetic field $H=0$ or chemical potential $\mu=\mu_\text{coex}$, respectively), by taking out the probability strictly at $\rho=\rho_\text{min}$ the extracted interfacial tension $\gamma_{L,L_z}$ in Eq. corresponds to observations sampled in a canonical ensemble.
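A minimal sketch of how Eq. is evaluated from a tabulated distribution follows; the list-based interface and function name are our own illustrative choices (in production one would of course work with the full sampled histogram):

```python
import math

def gamma_from_histogram(P, rho, rho_coex, rho_min, L, d=3):
    # Eq. eq27: interfacial tension from the order-parameter distribution,
    # gamma = ln[P(rho_coex)/P(rho_min)] / (2 L^(d-1));
    # P and rho are parallel lists, nearest grid points are used
    def value_at(r):
        i = min(range(len(rho)), key=lambda j: abs(rho[j] - r))
        return P[i]
    return math.log(value_at(rho_coex) / value_at(rho_min)) / (2.0 * L ** (d - 1))
```

The factor $2 L^{d-1}$ in the denominator reflects the two interfaces of area $L^{d-1}$ present in the slab configuration.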
As a second model, representative for off-lattice fluids, we study the Lennard-Jones model in $d=3$ dimensions, where point particles interact with a potential $U_{LJ} (r)$, $r$ being the distance between the particles, $$\label{eq28}
U_{LJ} (r) = 4 \varepsilon\left[\left(\frac{\sigma}{r}\right)^{12} - \left(\frac{\sigma}{r}\right)^6 + Y \right], \quad r< r_c \;,$$ while $U_{LJ}(r > r_c)\equiv 0$. Here $\varepsilon$ is the strength and $\sigma$ the range of this potential, and the constant $Y$ is chosen such that $U_{LJ} (r)$ is continuous at the cutoff $r_c=2^{1/6} \cdot 2\sigma$. For this model, we choose units such that $\varepsilon=1$ and $\sigma=1$. A single temperature $T=0.78 T_c$ is used, for which $\gamma_\infty = 0.375(1)$ was already estimated in previous work [@110], using Eq. .
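The truncated-and-shifted potential of Eq. is easy to implement; the sketch below (our own illustration) determines $Y$ from the continuity condition $U_{LJ}(r_c)=0$ rather than hard-coding its value:

```python
def u_lj(r, eps=1.0, sigma=1.0):
    # truncated-and-shifted Lennard-Jones potential of Eq. eq28;
    # Y is fixed so that U is continuous (zero) at r_c = 2^(1/6) * 2*sigma
    r_c = 2.0 ** (1.0 / 6.0) * 2.0 * sigma
    if r >= r_c:
        return 0.0
    sr6_c = (sigma / r_c) ** 6
    Y = -(sr6_c ** 2 - sr6_c)          # cancels the unshifted value at r_c
    sr6 = (sigma / r) ** 6
    return 4.0 * eps * (sr6 ** 2 - sr6 + Y)
```

With this choice the potential is continuous at the cutoff, although the force still jumps there; this is the point addressed by the smoothened cutoff discussed below in connection with the pressure-tensor route.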
In order to be able to study also other choices of boundary conditions, as shown in Fig. \[fig: BoundaryConditionsFreeSurfaces\] and \[fig: BoundaryConditionsAPBC\], we have developed a new variant of the ensemble switch method [@91; @92; @93]. In this previous work [@91; @92; @93], a “mixed” system was created from a system confined between two parallel walls and a system with no walls, to extract the excess free energy due to the walls. In the present work, we extend this method by creating a mixed system from two systems at coexistence without interfaces and a system formed from these separate systems but now having interfaces (Fig. \[fig: EnsembleSwitchMethodSketch\]). The two separate systems have linear dimension $L_z/2$ in $z$-direction each, and are chosen such that one of them is in the state corresponding to $+ m_\text{coex}$, the other in the state corresponding to $-m_\text{coex}$. Both systems have periodic boundary conditions individually, and hence for this state (denoted as $\kappa =0$) there are no interfaces present. The system denoted as $\kappa=1$ has exactly the same degrees of freedom as the system denoted as $\kappa =0$, namely the $N=L^{d-1}L_z$ Ising spins which may take values $S_i=\pm 1$, and we work at the same thermodynamic conditions (e.g. total magnetization fixed at $m=0$ in the canonical ensemble, and same temperature $T$). The systems denoted as $\kappa=0$ and $\kappa=1$ differ only with respect to their boundary conditions: in both halves of the system $\kappa =0$ we have PBC over a distance $L_z/2$ already, while in the system $\kappa=1$ the two halves are joined, and a single PBC over the distance $L_z$ remains (in the $z$-direction). So the difference in free energies between both systems is related to the interface tension, $$\label{eq29}
\gamma_{L,L_z} = \frac{F(\kappa=1) - F (\kappa=0)}{2 L^{d-1} {k_\text{B}}T} \quad .$$ In order to find this free energy difference, it is useful to define a mixed system by $$\label{eq30}
\mathcal{H}(\kappa)= \kappa \mathcal{H}_1 + (1 - \kappa) \mathcal{H}_0 \, \quad 0 \leq \kappa \leq 1 \quad,$$ which is a perfectly permissible Hamiltonian for a Monte Carlo simulation (although clearly such a system can never be created by an experimentalist in his laboratory).
The free energy $F(\kappa)$ of the mixed system is defined by the standard relation from the Hamiltonian, $$\label{eq31}
F(\kappa)=-{k_\text{B}}T \ln \left( \operatorname{Tr}\left\{\exp[-\mathcal{H}(\kappa)/{k_\text{B}}T]\right\}\right),$$ but it is clear that for large $L$ the normalized free energy difference $[F(\kappa=1) - F(\kappa=0)]/{k_\text{B}}T$ can be huge, since we expect $\gamma_{L, L_z}$ to be of order unity. Such large free energy differences can be computed with sufficient accuracy by thermodynamic integration. In practice, the interval $0 \leq \kappa \leq 1$ is divided into $n_\kappa$ subintervals, separated by discrete values $\kappa_i$. In this work, we use $n_\kappa=1024$. Then the free energy difference $\Delta F_i=F(\kappa_{i+1} ) - F(\kappa_i)$ is obtained from a parallelized version of successive umbrella sampling, considering Monte Carlo moves $\kappa_i \rightarrow \kappa_{i+1}$ or vice versa, in addition to the sampling of the spin configuration. On each core, the system can switch between two adjacent values $\kappa_i$, $\kappa_{i+1}$ only, so one needs to use $n_\kappa$ cores. The desired free energy difference $\Delta F_i$ is simply determined by estimating the probabilities that the states with $\kappa_i$ or $\kappa_{i+1}$ are observed, $\Delta F_i={k_\text{B}}T \ln [P(\kappa_i)/P(\kappa_{i+1})]$.
An important technical aspect is that the set of points $\{\kappa_i\}$ need not be chosen equidistantly in the interval from zero to unity, but the location of these points can be chosen in a way which optimizes the accuracy of the thermodynamic integration. For the Ising model we have found it useful to choose $\kappa_i=\sin^2 (\pi i / (2n_\kappa))$. Note that this function yields more points $\kappa_i$ near $\kappa=0$ and $\kappa=1$, and this clearly is useful since the states for intermediate values of $\kappa$ only are needed for the thermodynamic integration, but have no direct physical significance. Figure \[fig: SketchKappaFunctions\] shows various choices for the mapping $i\to\kappa_i$.
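The mapping $\kappa_i=\sin^2 (\pi i / (2n_\kappa))$ and the accumulation of the free energy difference from the measured probability ratios can be sketched as follows; this is our own illustration, and the actual parallelized sampling is not reproduced here:

```python
import math

def kappa_schedule(n_kappa):
    # kappa_i = sin^2(pi*i/(2*n_kappa)), i = 0..n_kappa: clusters the
    # integration points near kappa = 0 and kappa = 1, where the physically
    # meaningful end states of the ensemble switch reside
    return [math.sin(math.pi * i / (2.0 * n_kappa)) ** 2
            for i in range(n_kappa + 1)]

def free_energy_difference(p_ratios, kT=1.0):
    # accumulate Delta F = sum_i kT * ln[P(kappa_i)/P(kappa_{i+1})]
    # from the sampled occupation-probability ratios of each subinterval
    return kT * sum(math.log(r) for r in p_ratios)
```

For $n_\kappa=1024$ the spacing near the endpoints is roughly three orders of magnitude finer than in the middle of the interval.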
A typical example of the free energy function $\Delta F(\kappa)$ is given in Fig. \[fig: betaFvsKappaNearKappa1\], comparing for the $d=2$ and $3$ Ising model three cases, namely APBC in the canonical and grandcanonical ensemble, as well as the PBC case (canonical ensembles). One sees that in general, the variation with $\kappa$ is slightly non-monotonic. However, since the height of this maximum of $\Delta F(\kappa)$ exceeds the final result ($\Delta F (\kappa=1))$ only by at most a few ${k_\text{B}}T $ (which is the unit of the ordinate scale), we do not think that entropic barriers for intermediate values of $\kappa$ provide a problem here. Of course, this aspect needs to be carefully checked for other models.
We have verified for the Ising model that this method, with the choice of PBC as indicated in Fig. \[fig: EnsembleSwitchMethodSketch\], yields results that are completely equivalent to the standard method of Eq. , as expected. But the advantage of the ensemble switch method (Fig. \[fig: EnsembleSwitchMethodSketch\]) is that it is not restricted to simple Ising systems, but can be applied to cases such as liquid-solid interfaces, for which an approach such as Eq. is difficult to apply: in fact, one cannot easily construct convenient reversible paths connecting the two pure phases (liquid and crystal in this case) in a simulation of a single system, where just the volume fraction of the crystal is continuously increased, unlike the case of the Ising model, where starting out at $m=-m_\text{coex}$ the volume fraction of the state with $m= +m_\text{coex}$ is gradually increased and hence $P_{L, L_z}(m)$ is sampled (Fig. \[fig: ProbDistributions\]). At this point, we mention that in the Ising model, too, entropic barriers associated with the droplet evaporation-condensation transition and the transition from circular droplets (in $d=2$) to slabs can, in principle, be a problem when one aims at very high accuracy [@111]; for the data in the present paper this problem was not yet important, but it is nevertheless useful to have an alternative method. Moreover, the ensemble switch method can also straightforwardly be applied when we use APBC in the $z$-direction: then the state with $\kappa=1$ has a single interface rather than two interfaces. In the APBC case, both canonical and grandcanonical ensembles can be implemented.
Of course, the limiting behavior for $L \rightarrow \infty$ and $L_z \rightarrow \infty$ must always yield the same interfacial tension, but since the nature of the finite-size corrections differs, it is useful to carry out simulations in different ensembles and/or with different choices of boundary conditions, and to verify that in practice one indeed converges to the same result. This is the strategy that we will follow in the next section.
For the computations presented in this paper, the total computing effort was of the order of 40 million single core hours of the Interlagos Opteron 6272 processor at the high-performance computer Mogon of the University of Mainz.
We emphasize that additional methods to estimate interfacial tensions from simulations, of course, exist. E.g. for off-lattice fluids a popular approach is based on the anisotropy of the pressure tensor $p_{\alpha \beta} (z)$ ($ \alpha,\beta=x,y,z)$ across an interface [@16; @41], $$\label{eq32}
\gamma_{L,L_z} =\frac{1}{2} \int\limits^{L_z/2}_{-L_z/2} {\text{d}}z \left[p_{zz} (z) - \frac{p_{xx} (z) + p_{yy} (z)}{2} \right]$$ where we have assumed a system with linear dimension $L_z$ and PBC in all directions, so that two interfaces contribute. Such simulations normally are done in the canonical ensemble, and we expect that the finite-size effects are of the same character as for the method based on Eq. . For temperatures close to the critical temperature, Eq. is computationally inconvenient, since the integrand is very small, and very accurate sampling is required. We expect that Eq. has an advantage at rather low temperatures, where the grandcanonical sampling of $P_{L, L_z} (\rho)$ becomes less efficient. Note, however, that for computing the pressure tensor $p_{\alpha \beta}(z)$ from the virial theorem one should avoid the sharp cutoff of the potential, as done in Eq. , and apply a smoothened cutoff to avoid jumps of the force at $r=r_c$.
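A sketch of this mechanical route, Eq. : given a sampled pressure-tensor profile on a grid of $z$ values, the integral can be done by the trapezoidal rule (function name and interface are our own illustrative choices):

```python
def gamma_from_pressure_profile(z, p_zz, p_xx, p_yy):
    # Eq. eq32: trapezoidal integration of the pressure-tensor anisotropy
    # p_zz - (p_xx + p_yy)/2 across the box; the leading factor 1/2
    # accounts for the two interfaces present with PBC
    total = 0.0
    for i in range(len(z) - 1):
        f_i = p_zz[i] - 0.5 * (p_xx[i] + p_yy[i])
        f_j = p_zz[i + 1] - 0.5 * (p_xx[i + 1] + p_yy[i + 1])
        total += 0.5 * (f_i + f_j) * (z[i + 1] - z[i])
    return 0.5 * total
```

In the bulk regions the anisotropy vanishes, so in practice only the grid points near the two interfaces contribute to the integral.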
A difficult issue is the finite-size effects associated with the use of Eq. or Eq. , respectively: one either observes the dependence of $\langle|h_q|^2\rangle$ on $q^2$ (Eq. ) or of $w^2_L$ on $\ln L$ (Eq. ) and estimates $\gamma_\infty$ by fitting the prefactor. Finite-size effects make the set of possible wave numbers $q$ discrete, of course; in addition, one must note that Eq. is believed to hold in the long-wavelength limit only, while at shorter wavelengths (corresponding to large $q$) systematic deviations are expected (sometimes a wave-vector-dependent interfacial tension $\gamma(q)$ is discussed [@26; @38]). However, this problem is beyond our focus here.
Numerical Results for Finite-Size Effects on Interfacial Tensions {#sec: NumericalResults}
=================================================================
Two-Dimensional Ising Model {#sec: TwoDimensionalIsingModel}
---------------------------
As a starting point of the discussion, we use data for $L \times L$ systems with PBC obtained with the help of Eq. , including both the previous results by Berg et al. [@45] and results obtained by us for additional choices of $L$, and compare them to the results from the ensemble switch method for the PBC case. The traditional use of such data is to plot the estimates for $\gamma_{L}$ linearly versus $1/L$ and to attempt an extrapolation towards $1/L \rightarrow 0$ (Fig. \[fig: Ising2d\_Scaling\_Ratio1\]). Indeed such an extrapolation seems to be compatible with the exact result (from Eq. [@105]), highlighted by a horizontal straight line, but one can also clearly recognize the problems of the approach: (i) even for relatively large $L$, such as $L=50$, the relative deviation is still of the order of 10%. (ii) Over the whole range of $1/L$, there is a distinct curvature of the data visible, so it is unclear whether or not the asymptotic regime of the extrapolation has actually been reached. In cases of real interest, of course, the exact answer is not known beforehand, and it is also very difficult (and may need orders of magnitude more computational resources) to obtain data of the same statistical quality as shown in Fig. \[fig: Ising2d\_Scaling\_Ratio1\]. Thus, in general it will be very helpful to understand the origin of the finite-size effects, and - if possible - to combine different variants of the method where the finite-size effects differ, but the resulting estimate for $\gamma_\infty$ must be the same.
In order to identify the sources of the various finite-size effects in the problem, it is useful to choose $L_z$ different from $L$ and vary $L_z$ at fixed $L$: Executing this with the ensemble switch method for the three different choices APBC(gc), APBC(c) and PBC(c), we see from Eq. that we must get a result of the form $$\label{eq33}
\gamma_{L, L_z}=\operatorname{const}- x_\perp \frac{\ln L_z}{L},$$ where all the terms depending only on $L$ (and $\gamma_\infty$) have been combined in the constant on the right-hand side of this equation, and the prefactor $x_{\perp}$ of the $(1/L)\ln L_z$ term is 1/2, 3/4 or 1 for the three choices APBC(c), PBC(c) and APBC(gc), respectively (cf. Table \[tab: ScalingConstants\]). Figure \[fig: Ising2d\_ScalingZ\_All\] verifies this behavior, focusing on two temperatures, namely ${k_\text{B}}T/J=1.2$ with $L=10$, and ${k_\text{B}}T/J=1.6$ with $L=10, 20$ and $30$. The straight lines have precisely these theoretical values for $x_{\perp}$ and fit the simulated data rather perfectly. We recall that in the case APBC(gc), where we have a single mobile interface, we test the simple translational entropy of the interface ($x_\perp=1$), while in the case APBC(c) we just test the “domain breathing” contribution to the interface ($x_\perp=1/2$). In the PBC(c) case, two interfaces are present, and both these mechanisms contribute once, yielding $x_\perp=(1 + 1/2)/2=3/4$ per interface. Fig. \[fig: Ising2d\_ScalingZ\_PBCcan\] verifies that the latter exponent indeed is found at all temperatures and all $L$.
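Extracting $x_\perp$ from data at fixed $L$ amounts to a linear regression of $\gamma_{L,L_z}$ against $\ln(L_z)/L$, as in Eq. . A minimal sketch with synthetic data; the function name and the numbers are illustrative only (here the PBC(c) value $x_\perp=3/4$ is built in by construction).

```python
import numpy as np

def fit_x_perp(L, Lz_values, gamma_values):
    """Least-squares fit of Eq. (33), gamma = const - x_perp * ln(Lz)/L,
    at fixed L; returns the estimate of x_perp."""
    x = np.log(Lz_values) / L
    slope, _ = np.polyfit(x, gamma_values, 1)
    return -slope

# Synthetic data mimicking the PBC(c) case (x_perp = 3/4):
L = 10
Lz = np.array([20.0, 40.0, 80.0, 160.0])
gamma = 0.660 - 0.75 * np.log(Lz) / L
x_perp_est = fit_x_perp(L, Lz, gamma)
```

In practice the same regression, applied to the measured $\gamma_{L,L_z}$, gives the slopes that are compared with the theoretical values in the figure.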
Of course, varying $L_z$ at fixed finite $L$ does not yield the desired information on $\gamma_\infty$; thus both $L$ and $L_z$ need to be varied, and the limit where both $L$ and $L_z$ tend to infinity needs to be considered. As a first step, to test that the quoted results for $x_\parallel$ (Table \[tab: ScalingConstants\]) are compatible with the simulation results as well, we have fitted $\gamma_{L, L_z}$ to Eq. , using the theoretical values for $x_\parallel$, $x_\perp$ and $\gamma_\infty$, so that a single fit parameter remains, namely the coefficient $C$ of the $C/L$ term in Eq. . Fig. \[fig: Ising2d\_ScalingX\_T2-0\_L\] shows that indeed an excellent fit of the data results, giving further credence to our assertion that the finite-size effects are under control. However, in the general case $\gamma_\infty$ is of course not known beforehand, but rather should be an output of the computation. Then a very natural strategy is to subtract the theoretical contributions $[x_\parallel \ln (L) - x_\perp \ln (L_z)]/L$ from $\gamma_{L, L_z}$, so that Eq. reduces to (in $d=2$) $$\label{eq34}
\tilde{\gamma} \equiv \gamma_{L, L_z} + \frac{x_\perp \ln L_z-x_\parallel \ln L}{L}=\gamma_\infty + \frac{C}{L}$$ and estimate both constants $\gamma_\infty$ and $C$ from a fit of Eq. to the data. The results of this procedure are shown in Fig. \[fig: Ising2d\_ScalingX\]. It is seen that the theoretical values $\gamma_\infty (T=1.2)=1.284$, $\gamma_\infty (T=1.6) =0.660$ and $\gamma_\infty (T=2.0)=0.228$ are almost perfectly reproduced! We also note that the constant $C$, which is expected to depend on temperature, the boundary conditions, and the type of ensemble, since not the same fluctuations are probed, takes in each case roughly the same value for both choices of $L_z$. In the asymptotic limit, this parameter $C$ should not depend on $L_z$ at all; the fact that this is not strictly true indicates that presumably there is some residual effect of higher-order corrections that were neglected in our analysis. When we try to improve the estimation of this parameter $C$ by imposing the theoretical value of $\gamma_\infty$ in the analysis, the differences between the two estimates for $C$ are still slightly affected by statistical errors. Nevertheless, we judge the quality of the straight-line fits in Fig. \[fig: Ising2d\_ScalingX\] as rather gratifying. In particular, the coincidence of the estimates for $\gamma_\infty$ for the 6 cases shown at every temperature demonstrates that the flexibility of the ensemble switch method, which can be applied with different boundary conditions (and/or ensembles), is most valuable for ensuring that the desired accuracy really has been reached.
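The reduction and extrapolation of Eq. can be sketched in a few lines. In the demo below the exponents $x_\parallel$, $x_\perp$ and the synthetic data are placeholders, not the tabulated values; only the structure of the fit is meant to be illustrative.

```python
import numpy as np

def extrapolate_gamma_inf(L_values, Lz, gamma_LLz, x_par, x_perp):
    """Reduce the data according to Eq. (34) and fit
    gamma_tilde = gamma_inf + C/L; returns (gamma_inf, C)."""
    L = np.asarray(L_values, dtype=float)
    gamma_tilde = gamma_LLz + (x_perp * np.log(Lz) - x_par * np.log(L)) / L
    C, gamma_inf = np.polyfit(1.0 / L, gamma_tilde, 1)
    return gamma_inf, C

# Synthetic data of exactly the form of Eq. (34); exponents are placeholders.
x_par, x_perp = 0.5, 0.75
L = np.array([10.0, 20.0, 30.0, 40.0])
Lz, g_inf_true, C_true = 60.0, 0.660, 1.5
gamma_LLz = g_inf_true + C_true / L \
    - (x_perp * np.log(Lz) - x_par * np.log(L)) / L
g_inf, C = extrapolate_gamma_inf(L, Lz, gamma_LLz, x_par, x_perp)
```

Applied to the measured $\gamma_{L,L_z}$, the intercept of the straight-line fit in $1/L$ is the estimate of $\gamma_\infty$ and the slope is the constant $C$.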
From the fits in Figs. \[fig: Ising2d\_ScalingX\_T1-2\_sublogs\_1dL\], \[fig: Ising2d\_ScalingX\_T1-6\_sublogs\_1dL\] and \[fig: Ising2d\_ScalingX\_T2-0\_sublogs\_1dL\], we see that the constant $C$ is of order unity but temperature-dependent, and it is of interest, of course, to ask where this temperature dependence comes from. The easiest case to discuss is the case of APBC(gc), where we have argued that the singular size effects solely reflect the translational entropy contribution, Eq. . The capillary wave effects are already included if for the “counting” of states where the interface can be placed (Fig. \[fig: TranslationalEntropySketch\]), the length $L_z$ is measured in units of $w_L$. Of course, an additional regular contribution $c/L$ with some coefficient $c$ can also occur; this is already seen from Eqs. , , which in $d=2$ can be written as $\xi_\parallel = A w_L \exp(\gamma_\infty L)$, where $A$ is another constant, and putting (in the spirit of Eq. ) $\xi_\parallel = L_{z,0}$, where $\gamma_{L,L_z} = \gamma_\infty - \frac{1}{L} \ln(L_z/w_L) + c/L$ vanishes, we conclude $c=\ln A$. However, another contribution to this regular term comes from the prefactor in the relation $w_L\propto L^{1/2}$ in Eq. . In the $d=2$ Ising model it is known exactly [@100] that $w_L^2/L = (2 \sinh(\gamma_\infty))^{-1} \equiv l_0$ (recall that lengths are measured in units of the lattice spacing $a$). Using Eq. to evaluate this term for the three temperatures ${k_\text{B}}T/J=1.2, 1.6$ and $2.0$ considered in Fig. \[fig: Ising2d\_ScalingX\], we find that the remaining constant $c$, as defined above, is almost temperature independent (namely 1.94, 1.98 and 1.99, respectively, for the three mentioned temperatures). So the increase of the parameter $C$ with temperature in Fig. \[fig: Ising2d\_ScalingX\] simply reflects the increase of the length $l_0$ (which also is measured in units of the lattice spacing and hence dimensionless) with temperature, since $C=(\ln (l_0) + c)/2$.
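The quoted amplitude $l_0 = w_L^2/L = (2\sinh \gamma_\infty)^{-1}$ and its growth with temperature can be checked directly; a small sketch using the $\gamma_\infty$ values quoted above for the three temperatures.

```python
import math

def l0(gamma_inf):
    """Exact d=2 Ising amplitude of the squared interfacial width,
    w_L^2 / L = 1 / (2 sinh(gamma_inf)), in units of the lattice spacing."""
    return 1.0 / (2.0 * math.sinh(gamma_inf))

# gamma_inf at k_B T / J = 1.2, 1.6 and 2.0, as quoted in the text:
values = {1.2: l0(1.284), 1.6: l0(0.660), 2.0: l0(0.228)}
```

Since $\gamma_\infty$ decreases with temperature, $l_0$ increases monotonically, which is the trend that the parameter $C=(\ln l_0 + c)/2$ inherits.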
Three-Dimensional Ising Model {#sec: ThreeDimensionalIsingModel}
-----------------------------
Since the computational effort in $d=3$ is substantially larger, we restrict attention here to a thorough study of a single temperature, ${k_\text{B}}T/J=3.0$, where the correlation length in the bulk is still very small (recall that the critical temperature is about ${k_\text{B}}T_c/J \approx 4.51$ [@81]) but which is sufficiently distant from the roughening transition temperature ${k_\text{B}}T_R / J \approx 2.45$ [@114], so that the anisotropy effects on the interfacial free energy of flat interfaces are already small [@44; @47; @115].
Again, we begin by demonstrating that the effects shown to be important in the $d=2$ case, such as the translational entropy of the interface and the “domain breathing” fluctuations, have a significant impact in three dimensions, too. Fig. \[fig: Ising3d\_Scaling\_ScalingZ\] is the counterpart of Fig. \[fig: Ising2d\_ScalingZ\], demonstrating the presence of a correction $-x_\perp (1/L^2) \ln (L_z)$, due to the translational entropy of the interface(s) and domain breathing, when $L_z$ is varied at fixed $L$. Fig. \[fig: Ising3d\_Scaling\_ScalingX\] is the counterpart of Fig. \[fig: Ising2d\_ScalingX\_T2-0\_L\], where we fit the data to the full Eq. when $L$ is varied for several choices of $L_z$, using the known value [@48] $\gamma_\infty=0.434$ and the theoretical values of $x_\perp$, $x_\parallel$ from Table \[tab: ScalingConstants\], so that a single parameter (the prefactor of the $1/L^2$ term in Eq. ) is fitted. As in $d=2$, the quality of the fit is excellent. Thus, in order to estimate $\gamma_\infty$, we proceed in analogy with Eq. , reducing the data with the known theoretical corrections (using Eq. and Table \[tab: ScalingConstants\]) $$\label{eq35}
\widetilde{\gamma} \equiv \gamma_{L,L_z} + \frac{x_\perp \ln L_z - x_\parallel \ln L}{L^2} = \gamma_\infty + \frac{C_1}{L} +\frac{C_2}{L^2}.$$ Here we have made an important phenomenological modification that is not suggested by our theoretical considerations of Sec. \[sec: PhenomenologicalTheory\]. A term of order $1/L^2$ is strictly required by the theory, because the arguments of the logarithms in Eq. must have the form $\ln (L_z/ l')$, $\ln (L/l'')$ with some lengths $l'$, $l''$ to make the arguments dimensionless, and so the unspecified constant in the last term on the right-hand side of Eq. must contain a factor $x_\perp \ln l' - x_\parallel \ln l''$. We have written this theoretically expected term in the form $C_2 / L^2$, where $C_2$ is some effective parameter. In addition, however, we have allowed for a term $C_1/L$, where $C_1$ is another (hypothetical) effective parameter. Fig. \[fig: Ising3d\_ScalingSublogs\_Param2\] shows the result of such an analysis: we see that the parameter $C_1$, if it exists at all, is very small (of order 10$^{-2}$ lattice spacings), while the parameter $C_2$ is of order unity (and almost independent of $L_z$; the weak variation of this parameter with $L_z$ is surely due to residual statistical errors and possible higher-order corrections which were disregarded from the start). The value of $\gamma_\infty$ estimated from such a fit is in excellent agreement with the value known from a completely different method [@47]. Thus, it is tempting to require that the parameter $C_1$, which was introduced phenomenologically in Eq. , actually must be zero. Fig. \[fig: Ising3d\_ScalingSublogs\_Param1\] shows that the data are fully compatible with this assumption: the random spread in the estimates for $\gamma_\infty$ and $C_2$ is now distinctly smaller than before, and no evidence for a systematic error is detected.
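The comparison between the two- and one-parameter fits of Eq. is a small linear least-squares problem. The sketch below uses synthetic data of the theoretically expected form (no $1/L$ term); the function name, the data, and the constraint flag are illustrative.

```python
import numpy as np

def fit_corrections(L_values, gamma_tilde, force_C1_zero=False):
    """Linear least-squares fit of gamma_tilde = gamma_inf + C1/L + C2/L^2
    (cf. Eq. (35)); optionally the hypothetical C1/L term is suppressed."""
    L = np.asarray(L_values, dtype=float)
    if force_C1_zero:
        A = np.column_stack([np.ones_like(L), 1.0 / L**2])
        coef, *_ = np.linalg.lstsq(A, gamma_tilde, rcond=None)
        return coef[0], 0.0, coef[1]
    A = np.column_stack([np.ones_like(L), 1.0 / L, 1.0 / L**2])
    coef, *_ = np.linalg.lstsq(A, gamma_tilde, rcond=None)
    return coef[0], coef[1], coef[2]

# Synthetic reduced data with no 1/L term, using gamma_inf = 0.434
# as quoted for the d=3 Ising model at k_B T / J = 3.0:
L = np.array([10.0, 15.0, 20.0, 30.0, 40.0])
gamma_tilde = 0.434 + 1.2 / L**2
g3, c1, c2 = fit_corrections(L, gamma_tilde)
g2, c1z, c2z = fit_corrections(L, gamma_tilde, force_C1_zero=True)
```

When the data really lack a $1/L$ contribution, the unconstrained fit returns $C_1 \approx 0$, and constraining $C_1=0$ reduces the scatter of the remaining parameters, exactly the behavior described above.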
We also emphasize that for $L=10$ the deviation of $\tilde{\gamma}$ from $\gamma_\infty$ is still about 3%, for $L=20$ it is almost 1%, and so it is clear that finite-size extrapolations are needed for a very precise estimate.
In fact, the non-existence of a term $C_1/L$ in Eq. is desirable in view of a completely different argument. Consider the situation where in the directions parallel to the interface we do not use PBC but rather free boundaries. Then we expect that the interfacial tension must contain a correction of order $2 \gamma_\text{line}/L$, where $\gamma_\text{line}$ is the line tension [@16; @116; @117] of the contact line of the interface at such a boundary. This geometry in fact has been suggested (and used) to obtain estimates for the line tension [@118; @119]. Such an approach would not make sense if it were spoiled by “intrinsic” finite-size effects of the same order (see also [@120]).
In view of this conclusion that the parameter $C_1$ for the $d=3$ Ising model does not exist, the reader may wonder why we present this discussion in such detail. However, as we shall see in the next section, the situation may be more subtle: previous work on LJ fluids and LJ mixtures [@110] in fact assumed that the leading corrections are of order $1/L$.
The Lennard-Jones Fluid {#sec: LennardJonesFluid}
-----------------------
We now study the interfacial tension of a generic off-lattice system, namely the (truncated and shifted) Lennard-Jones fluid of point particles with a pairwise interaction potential $U(r)$ as defined in Eq. . It is known that this model has a vapor-liquid phase separation for temperatures $T$ below the critical temperature ${k_\text{B}}T_c/\varepsilon=0.999$ [@57]. Here we shall only analyze data at the temperature ${k_\text{B}}T/\varepsilon = 0.78$. For this temperature Eq. was already used previously [@110] to estimate $\gamma_\infty= 0.375(1)$ (choosing units $\varepsilon=1$ and $\sigma=1$, as mentioned in Sec. \[sec: ModelsAndSimulationMethods\]).
For the off-lattice LJ fluid, an analogue of the APBC is not known. Therefore, we restrict attention to the PBC(c) case. We apply here only the ensemble switch method, using standard local displacements as the elementary Monte Carlo move for the particles [@80; @81].
We proceed as in the last subsection, testing first the variation of $\gamma_{L, L_z}$ with $L_z$, for several choices of the cross-sectional area $A=L^2$ (Fig. \[fig: LJSCT0-78\_ScalingZ\]). Indeed the predicted logarithmic variation (again due to the translational entropy of the interface and the domain breathing effect) is verified. But we have to make a caveat: due to the use of a local algorithm for moving particles (unlike the Ising model, where in the conserved case spins at arbitrary distances from each other were interchanged), the relaxation of the particle configurations is very slow. Basically, in order to observe the logarithmic contributions quantitatively correctly, the simulation runs must be long enough that the interface in Fig. \[fig: TranslationalEntropySketch\] can explore the available volume. If the runs are too short, and the liquid domain diffuses only over a length $L_\text{diff} \ll L_z$, we expect that the contribution to the entropy that is “measured” by such a simulation is only $-L^{-2} \ln (L_\text{diff})$ rather than $-L^{-2} \ln (L_z)$. Since diffusive displacements increase only with the square root of time, a simulation time $\tau_\text{sim} \approx L^2_z/D$ would be needed to observe the correct entropic effect on the interfacial tension, where $D$ is the effective domain diffusion constant. Since this diffusion constant, with which the liquid domain can move in the simulation box, is expected to be very small for our local Monte Carlo algorithm, for large $L_z$ the simulation time will not suffice to sample the full equilibrium result, and we rather observe a result which is independent of $L_z$ but depends on the simulation time $\tau_\text{sim}$ via the relation $\tau_\text{sim} \approx L^2_\text{diff}/D$. So we see the correct logarithmic variation only for $L_z < L_\text{diff}$ in Fig. \[fig: LJSCT0-78\_ScalingZ\], while for $L_z > L_\text{diff}$ there is no longer a systematic decrease of $\gamma_{L, L_z}$ with $L_z$ (data from too short runs are shown by circles); rather, the data fluctuate randomly around a value dictated by the choice of $\tau_\text{sim}$.
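The equilibration criterion described above boils down to comparing the diffusion length with $L_z$; a minimal sketch, with all inputs (diffusion constant, run length) hypothetical.

```python
def samples_full_box(D, tau_sim, Lz):
    """Crude criterion for whether a run of length tau_sim lets the slab
    diffuse across the box: the diffusion length sqrt(D * tau_sim)
    must reach L_z. D and tau_sim are hypothetical inputs."""
    L_diff = (D * tau_sim) ** 0.5
    return L_diff >= Lz
```

For a run that fails this check, the measured entropic correction saturates at $-L^{-2}\ln(L_\text{diff})$ instead of growing logarithmically with $L_z$.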
In view of this problem, it is in fact desirable to use grandcanonical particle insertion and deletion moves for the Lennard-Jones fluid as well, as we did in the Ising model. A simulation in the canonical ensemble then is realized by trial moves where one simultaneously attempts to delete a randomly chosen particle somewhere in the box and to insert a particle at a randomly selected position. The trial move is accepted and executed only if both parts of the move together pass the Metropolis test. It is clear that such nonlocal displacements of particles fulfill detailed balance and have a reasonably high acceptance probability at the temperatures where grandcanonical ensemble simulations of the considered model are still feasible. For the LJ fluid studied here, this is the case for ${k_\text{B}}T/\varepsilon=0.78$.
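The combined deletion+insertion move can be sketched as follows. The `energy_of` callback and all names are hypothetical; a production code for the LJ fluid would evaluate the actual pair energies (excluding the moved particle's self-interaction) with cell lists for efficiency.

```python
import numpy as np

rng = np.random.default_rng(0)

def nonlocal_move(positions, box, energy_of, beta):
    """One canonical non-local trial move: pick a particle at random,
    propose teleporting it to a uniformly random position in the box,
    and accept the combined deletion+insertion step with the Metropolis
    criterion on the total energy change."""
    i = rng.integers(len(positions))
    old = positions[i].copy()
    new = rng.random(positions.shape[1]) * box
    # energy_of(pos, i, positions): interaction energy of a particle
    # at pos with all particles except i (a sketch, not production code)
    dE = energy_of(new, i, positions) - energy_of(old, i, positions)
    if dE <= 0.0 or rng.random() < np.exp(-beta * dE):
        positions[i] = new
        return True
    return False

# Demo with a hypothetical ideal-gas callback (dE = 0, every move accepted);
# a real LJ callback would sum truncated-and-shifted pair energies.
box, beta = 10.0, 1.0
positions = rng.random((20, 3)) * box
ideal_gas = lambda pos, i, positions: 0.0
accepted = sum(nonlocal_move(positions, box, ideal_gas, beta)
               for _ in range(100))
```

Because deletion and reinsertion are proposed symmetrically, accepting on the total energy change preserves detailed balance while displacing particles over arbitrary distances in a single step.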
Figure \[fig: LJSCT0-78\_ScalingXsublogsforgotten\] then plots data for $\gamma_{L,L_z}$ at two fixed choices of $L_z$ versus $L^{-2}$, comparing results obtained using only local moves (which we believe are insufficiently equilibrated) with results based on the nonlocal moves. One can see two features:
- The data based on the local moves are systematically off, but they are not visibly irregular, and so without the availability of the better data based on the nonlocal algorithm, it would not be obvious that the data based on local moves are unreliable.
- Fitting either set of data in the traditional way, i.e., assuming a variation $\gamma_{L,L_z} = \gamma_\infty + C'_1/L + C'_2/L^2$, both fit parameters $C'_1$ and $C'_2$ clearly are nonzero, as is visually obvious from the curvature of this plot. The constant $C'_1$ is larger for the unreliable data. Omitting data for smaller values of $L$, one can get away with the simpler variation $\gamma_{L,L_z} = \gamma_\infty + C'_1/L$, as done in the literature [@110]; but we now know that such a fit is meaningless, since a parameter $C'_1$ should not occur, and hence the resulting estimate for $\gamma_\infty$ would be inaccurate.
Of course, a naive data analysis as shown in Fig. \[fig: LJSCT0-78\_ScalingXsublogsforgotten\] ignores all the knowledge on the logarithmic corrections derived in the present paper. In fact, if we use this knowledge, subtracting the logarithmic correction via Eq. and fitting only the reduced surface tension $\widetilde{\gamma}$, as we did for the $d=3$ Ising model, the picture becomes much clearer (Fig. \[fig: LJSCT0-78\_ScalingX\]). The reliable nonlocal data again yield very small values for $C_1$, hence giving evidence that $C_1=0$, and if we require $C_1=0$ from the outset, a very good fit with $\gamma_\infty \approx 0.3745\pm0.0005$ is in fact obtained (Fig. \[fig: LJSCT0-78\_ScalingX\_Param2\]). The less reliable data based on the local algorithm are in fact compatible with this conclusion, if we omit the data for $L_z=26.94$ for the two largest choices of $L$, which fall systematically below the straight lines in Fig. \[fig: LJSCT0-78\_ScalingX\_Param2\]. In Fig. \[fig: LJSCT0-78\_ScalingX\_Param3\], where $C_1$ was not forced to be zero, the unreliable data from the local algorithm would yield a systematically too small value for $\gamma_\infty$; but it is clear that this is an artifact of the combined effect of unreliable data and an inappropriate fitting formula (allowing for a nonzero $C_1$).
We have given this detailed discussion to show that in cases of practical interest, the knowledge of the logarithmic corrections indeed is very valuable to extract reliable estimates for $\gamma_\infty$; but high quality well equilibrated “raw data” for $\gamma_{L,L_z}$ are an indispensable input in the analysis.
As a final example, we present a re-analysis of the data for the symmetrical binary (AB) Lennard-Jones mixture at ${k_\text{B}}T/\varepsilon=1.0$ presented in [@110]. The original data (resulting from semi-grandcanonical exchange moves between the particles) were extrapolated against $1/L$, yielding $\gamma_\infty\approx 0.722$. Using again Eq. , we see that these data are likewise compatible with the absence of a term $C_1/L$ (Fig. \[fig: SymmLJ\_DataFromDas\]), and the final estimate $\gamma_\infty \approx 0.717$ differs only slightly from the original one.
Summary & Outlook {#sec: Conclusion}
=================
In this paper we have discussed the estimation of interfacial free energies associated with planar interfaces between coexisting phases in thermal equilibrium, emphasizing the need to carefully address the finite-size effects when one employs a computer simulation approach. We have focused on the use of a simulation geometry where the linear dimension $(L)$ of the simulation box in the direction(s) parallel to the interface differs from the linear dimension perpendicular to the interface $(L_z)$. Using periodic boundary conditions in all (two or three) space directions, the situation that is normally considered (Fig. \[fig: BoundaryConditionsPBC\]) is a “slab geometry”, where (for a fluid system) a domain of the liquid phase is separated by two planar interfaces (that are connected in themselves via the periodic boundary conditions (PBC) in the direction(s) parallel to the interface) from the vapor phase (the two vapor regions on the left and on the right of the liquid slab are connected by the periodic boundary condition in $z$-direction). This choice of geometry also applies to other systems (fluid binary mixtures, Ising magnets, etc.). For systems exhibiting a strict symmetry between both coexisting phases (Ising model, symmetrical binary Lennard-Jones mixture, etc.), a simpler choice with a single interface is also useful, where the boundary condition in the $z$-direction is antiperiodic (APBC) rather than periodic.
While for the situation with the PBC in $z$-direction we consider here only the canonical ensemble (conserved density of the fluid, or conserved relative concentration of the binary mixture, or conserved magnetization in the Ising magnet, respectively), for the systems with APBC it is instructive to study both the case of the canonical (c) ensemble and the case of the grandcanonical (gc) ensemble (where the respective order parameter, i.e., density, concentration, or magnetization, respectively, is not conserved, and the variable that is thermodynamically conjugate to this order parameter is fixed at the value that is appropriate for bulk two-phase coexistence). While in this APBC(c) case the position of the interface on average is fixed (by the chosen value of the order parameter), for the APBC(gc) case it is not, and the statistical fluctuation associated with this degree of freedom needs to be carefully considered. As discussed in Sec. \[sec: SystemGeometryAndBoundaryConditions\], this translational degree of freedom of the interface gives rise to an entropic correction to the interfacial tension. Likewise, in the PBC case the liquid slab can be translated in the system as a whole, and this also shows up as a logarithmic correction.
But additional corrections arise as a consequence of the coupling between fluctuations of the bulk order parameter in the coexisting domains and the interface location (created by the constraint that there cannot be a net fluctuation of the total order parameter in the canonical ensemble, so the individual fluctuations of the order parameter densities in both domains must be compensated by a suitable interface displacement). This so-called “domain breathing” effect causes an entropic correction for both the PBC and APBC(c) cases. We have given detailed evidence for these effects both for the two-dimensional $(d=2)$ and the three-dimensional $(d=3)$ Ising model. Note that for the $d=2$ Ising model capillary-wave type fluctuations of the interface strongly affect these interfacial entropy corrections as well, since the root mean squared interfacial width $\sqrt{\langle w_L^2 \rangle}$ scales like $L^{1/2}$ (Eq. ), and via the normalization of the translational entropy this gives rise to an additional $\ln (L)/(2L)$ correction to the interfacial tension.
All the methods that we discuss here rely on the consideration of the free energy difference between one of the systems discussed above and a system with the same linear dimensions but PBC throughout, so that no interfaces occur. Hence, it is necessary neither to locate the interface in the system nor to characterize its microscopic structure. This free energy difference can either be found from sampling the order parameter distribution function (Fig. \[fig: ProbDistributions\]) across the two-phase coexistence region (a standard approach used for more than three decades) or from a new variant of the “ensemble switch” method (Fig. \[fig: EnsembleSwitchMethodSketch\]), described here. In this method, two bulk systems of size $L^{d-1} \times L_z/2$, with PBC, containing the two coexisting phases, are connected in phase space via a continuous path to a system of size $L^{d-1} \times L_z$, where now the two phases coexist in one box, being separated by two interfaces.
We stress that these techniques by no means are the only methods from which interfacial tensions can be found: it is also possible to study the $L^{d-1} \times L_z$ system in the grandcanonical ensemble and analyze the correlation function along the $z$-direction very precisely. Most of the time the system will reside in one of the pure phases, but the rare fluctuations where the system explores slab configurations give rise to a nontrivial behavior of the correlation function, from which the interfacial tension can be extracted [@50; @120]. This method is beyond our scope here, as is the possibility of extracting the interfacial stiffness from an analysis of the capillary wave spectrum or from the size dependence of the interfacial broadening. In both these methods the error estimation is a very subtle problem. Alternative algorithms due to Mon [@42] and Caselle et al. [@50; @60; @61], which are particularly valuable for studying the interfacial tension near the bulk critical point, are also not considered here.
However, also for the methods described here the assessment of errors is rather difficult. Referring to Fig. \[fig: TranslationalEntropy\], it is clear that the translational entropy of the interface is only sampled correctly if the simulation has lasted long enough that the slowly diffusing interface has in fact explored the full extension $L_z$ of the system. We have seen in the last section that, in particular for off-lattice models of fluids (such as the Lennard-Jones system), this is difficult to achieve. In analytical theories [@88; @89], this problem is avoided by putting the system into a potential that localizes the interface. The price to be paid is that a correlation length $\xi_z$ is created, which characterizes the extent of interfacial motions around its average position in $z$-direction [@88; @89]. While the theory from the outset is based on the concept of an effective interfacial Hamiltonian, it is desirable to avoid this concept in a simulation context. Of course, using the PBC(c) method where a liquid slab occurs, we can “localize” the whole slab, e.g. by using a weak harmonic potential centered around the center of mass position of the liquid slab. But one needs to carefully check that this potential does not affect other properties, apart from eliminating the translational motion of the slab as a whole.
Such ideas probably are indispensable, when one considers the extension of the method to liquid-solid interfaces, where it is simply too time-consuming to sample the translational motion of the crystalline slab. As a caveat we note, however, that we do not see an obvious recipe to suppress the “domain breathing” mechanism. Of course, if one uses a model based on the effective interface Hamiltonian concept, this mechanism has been disregarded from the outset; but the step linking explicitly atomistic Hamiltonians to effective interface Hamiltonians is problematic as well.
An extension that would also be very interesting to consider already for the Ising systems is the consideration of interfaces that are inclined relative to the simple (100) or (001) lattice planes: this extension would allow one to study the anisotropy of the interface tension, which is well understood in $d=2$ [@123] but not explicitly known in $d=3$, apart from special cases [@44; @63]. Such inclined interfaces naturally arise in the context of heterogeneous nucleation at walls [@118; @119], for instance. Another aspect of interest is the finite-size effects on the interface tension of spherical droplets. We plan to report on such extensions in the future.
[999]{} J. Frenkel, [*Kinetic Theory of Liquids*]{} (Dover, New York, 1955)
A.C. Zettlemoyer, [*Nucleation*]{} (Dekker, New York, 1969)
F.F. Abraham, [*Homogeneous Nucleation Theory*]{} (Academic Press, New York, 1974)
K. Binder and D. Stauffer, Adv. Phys. [**25**]{}, 343 (1976)
K. Binder, Rep. Progr. Phys. [**50**]{}, 783 (1987)
P. Debenedetti, [*Metastable Liquids*]{} (Princeton Univ. Press, Princeton, 1997)
---
author:
- 'Yosuke <span style="font-variant:small-caps;">Imamura</span>[^1]'
title: |
Large $N$ vector quantum mechanics\
and bubbling supertube solutions
---
Introduction
============
Recently, many BPS solutions in various supergravity theories have been constructed for the purpose of obtaining new examples of AdS/CFT correspondence. Lin, Lunin and Maldacena[@LLM] constructed a large class of smooth solutions to type IIB supergravity and M-theory, which are called bubbling solutions. Each of them is characterized by a two-dimensional plane consisting of black and white regions, which is called a ‘droplet.’ In the type IIB case, these solutions are dual to a free fermion system, which describes the dynamics of the BPS sector of N=4 Yang-Mills theory in ${\bf S}^3$[@LLM; @Corley:2001zk; @toy; @Caldarelli:2004ig; @buchel; @suryanarayana; @Caldarelli:2004mz; @bbfh; @sjt; @mandal; @mini; @tt; @qg; @mr; @Silva:2005fa]. We can identify the droplet with the phase space structure of the fermion system. The M-theory bubbling solutions are constructed in Refs. and , and it is shown that they are related to BPS sectors of several different gauge theories. Bubbling solutions in M-theory are also studied in Refs. . Different droplets with the same asymptotic forms give different classical supergravity solutions with the same boundary conditions. The existence of many smooth solutions sharing the same asymptotic behavior sheds new light on the black hole information problem[@mathur]. Some generalizations of bubbling solutions are given in Refs. .
Another major advance in the study of black holes is represented by the theoretical discovery of black rings[@EmpRea]. They are classical solutions in five-dimensional gravity with horizons of topology ${\bf S}^2\times{\bf S}^1$. These solutions are important, because they constitute counterexamples to the conjecture of black hole uniqueness. Black rings were soon generalized to supersymmetric ones[@EEMR; @EEMS2; @GauGut]. More general $1/2$ BPS solutions in ${\cal N}=1$ five-dimensional supergravity, which include supersymmetric black rings as special cases, were subsequently constructed in Refs. and .
Based on these results, a new kind of bubbling solution, which resolves the singularity of the black ring solutions, is proposed in Refs. and . In Ref. the solutions are called “bubbling supertube solutions”. Bubbling supertube solutions are the subject of this paper. Before explaining them, we need to understand the structure of the solutions constructed in Refs. and .
Let us consider $5$-dimensional ${\cal N}=1$ supergravity with $n_v$ $\U(1)$ vector multiplets. The metric of the general $1/2$ BPS solution has the form $$ds_5^2=-\frac{1}{Z^2}(dt+k)^2+Zds_4^2,$$ where $ds_4^2$ is the metric of a four-dimensional hyper Kähler base manifold. When the base manifold is of Gibbons-Hawking (GH) type, the solution can be explicitly represented by $2n_v+4$ harmonic functions[@BKW0504]. In this paper, we consider only the $n_v=2$ case with the specific Chern-Simons coefficient $C_{IJK}=|\epsilon_{IJK}|$. This theory is obtained by ${\bf T}^6$ compactification and an appropriate truncation of M-theory. The GH metric of the base manifold is determined by one of these harmonic functions, which is referred to as $V$ in Refs. and and in this paper, and is given by $$ds_4^2=\frac{1}{V}(d\psi+A)^2+Vd\vec y^2.
\label{GHmetric}$$ The differential $A$ is the magnetic dual of the function $V$ satisfying $dA=*dV$, where $*$ is the Hodge dual in the flat $3$-dimensional space parameterized by $\vec y=(y_1,y_2,y_3)$. For concreteness, we choose the function $V$ for an $n$-center solution as $$V=V_0+\sum_{i=1}^n\frac{N_i}{4\pi|\vec y-\vec y_i|}.
\label{Vintro}$$ When we deal with a metric in the form (\[GHmetric\]), we ordinarily assume that all the coefficients $N_i/4\pi$ and the constant part $V_0$ are non-negative in order to guarantee the positive definiteness of the hyper Kähler metric (\[GHmetric\]). In Refs. and , however, it is shown that this restriction is in fact not necessary for the solution in Refs. and . We may choose any harmonic function $V$ of the form (\[Vintro\]) as long as the coefficients of the poles are appropriately quantized.
Using such generalized GH base spaces, smooth solutions that resolve the singularities of supertube solutions are constructed in Refs. and . In this paper, we call them ‘bubbling supertube solutions’, following Ref. . In these solutions, each supertube singularity in the original solutions is replaced by a pair of GH centers. Because of the NUT-charge conservation law, both positive and negative NUT charges are needed to construct resolved solutions, and for this reason, the harmonic function $V$ inevitably takes positive and negative values depending on the coordinates. Despite the superficial singularity on the submanifold $V=0$, actually there are no singularities in either the metric or the gauge fields if other harmonic functions are chosen appropriately. Instead, the submanifold $V=0$ turns out to be ergospheres[@Berglund:2005vb] on which the world line of a stationary point particle is light-like. (We admit orbifold singularities at centers, because they are not singular in string theory and are harmless.)
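To illustrate the sign-changing harmonic function, the following Python sketch (with hypothetical NUT charges and center positions, not taken from any particular solution) evaluates $V$ of the form (\[Vintro\]) for a pair of GH centers with opposite charges and locates the $V=0$ crossing on the axis between them by bisection:

```python
import numpy as np

def V(y, centers, charges, V0=0.0):
    """Gibbons-Hawking harmonic function V = V0 + sum_i N_i / (4 pi |y - y_i|)."""
    y = np.asarray(y, dtype=float)
    return V0 + sum(N / (4.0 * np.pi * np.linalg.norm(y - np.asarray(c)))
                    for c, N in zip(centers, charges))

# Two GH centers with opposite NUT charges (hypothetical example values):
centers = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
charges = [2, -1]

def v_on_axis(x):
    # V restricted to the axis through both centers
    return V((x, 0.0, 0.0), centers, charges)

# Bisection for the V = 0 crossing: V > 0 near the positive pole, V < 0 near
# the negative one, so V must change sign in between.
a, b = 0.1, 0.9
for _ in range(80):
    m = 0.5 * (a + b)
    if v_on_axis(a) * v_on_axis(m) <= 0:
        b = m
    else:
        a = m
x_ergo = 0.5 * (a + b)
```

For the charges $(2,-1)$ and unit separation chosen here, the crossing solves $2/x=1/(1-x)$, i.e. $x=2/3$; in the full solutions this $V=0$ locus is the ergosphere discussed above.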
By treating the coordinate $\psi$ in (\[GHmetric\]) as the $11$-th coordinate, these solutions can be regarded as classical solutions in ${\cal N}=2$ four-dimensional supergravity, which is the ${\bf T}^6$ compactification of type IIA string theory with a certain truncation. From this point of view, BPS particles in uncompactified four-dimensional spacetime can be regarded as D-branes wrapped on different holomorphic cycles in the ${\bf T}^6$. Such four-dimensional solutions have been constructed by Denef et al.[@Denef; @DenefGR; @DenefB] independently of the five-dimensional solutions. The relation between four-dimensional solutions and five-dimensional solutions is discussed in Ref. .
The distinguishing property of the bubbling supertube solutions (or the corresponding four-dimensional solutions) is that the particles[^2] in a system interact with one another through a non-trivial potential and form bound states. In a static bound state, the positions of the particles are restricted by the so-called “bubble equation”[@BW0505; @Berglund:2005vb]. Because the particles in this system are D-branes wrapped on internal cycles, it is natural to conjecture that the bound states can be studied by computing the potential energy with boundary states, using the relation $V\sim\int dt\langle B|e^{-tL_0}|B\rangle$. This, however, is not the case, because this computation takes account of only the linear part of the gravitational interaction and gives just a Newtonian potential. There is a well-known theorem (Earnshaw’s theorem) which states that particles interacting through a Newtonian potential cannot form static stable bound states. This implies that the non-linear nature of gravity plays an important role in the formation of bound states.
The purpose of this paper is to seek a quantum mechanics that describes the dynamics of these particles. Unfortunately, we have not succeeded in constructing such a quantum mechanics for an entire system of bound particles. In this paper, we focus on one of the particles in the system and construct a quantum mechanics describing this particle. We treat the particle that we chose as a probe and the other particles as the background. Furthermore, we assume that the probe particle carries only D-particle charge for simplicity.
If a D-particle is placed at a generic point in a bubbling supertube solution, it is set in motion by gravitational and RR forces. There are, however, loci on which a D-particle can remain at rest. These stability loci are in fact the ergospheres mentioned above[@Berglund:2005vb]. The existence of such stability loci enables us to consider the theory on a probe D-particle in these backgrounds. As mentioned above, the non-linear nature of gravity is essential for the existence of the D-particle stability loci. From the viewpoint of quantum mechanics, this implies that the $1$-loop effect, which corresponds to the string cylinder amplitude and the exchange of free gravitons, is not sufficient to explain the stability of the D-particle. Actually, we show that the two-loop quantum correction plays an important role in the emergence of stable vacua.
The rest of this paper is organized as follows.
In the next section, we study the action of a D-particle in bubbling supertube solutions. We first restrict our attention to a special class of solutions in which the function $V$ has only one positive pole and the other seven harmonic functions are constant. Although these solutions cannot be regarded as a resolved version of any singular solution, because of the absence of negative poles in $V$, they have the distinguishing feature of bubbling supertube solutions that the function $V$ takes both positive and negative values if the constant part is negative.
This special solution is actually the supergravity description of coincident D6-branes in a constant $B$-field background. By quantizing open strings, we obtain supersymmetric $\U(1)$ gauge theory with $N$ chiral multiplets, where $N$ denotes the number of D6-branes. When $N$ is large, the quantum mechanics becomes large $N$ vector quantum mechanics. In §\[vqm.sec\] we find nice agreement between the D-particle action and the effective action of this quantum mechanics in a certain decoupling limit. We also find that the angular momentum of fluxes induced by the probe is reproduced as a quantum correction to the $\SU(2)_R$ current.
In §\[gen.sec\] we investigate the generalization to multicenter solutions which include both positive and negative GH centers. We propose a quantum mechanics which reproduces the D-particle action as the effective action. Finally, we conclude in §\[conc.sec\].
We use the following conventions in this paper.
- $2\pi\alpha'^{1/2}=1$ is chosen to be our unit of length. With this convention, the string tension is $2\pi$, and the mass and the charge of a D-particle are $2\pi/g_{\rm str}$ and $2\pi$, respectively.
- We normalize gauge fields in such a way that fluxes obtained by integration over closed surfaces are quantized as integers.
- The period of the ${\bf S}^1$ coordinate $\psi$ on the GH base space (\[GHmetric\]) is $1$. In this case, NUT charges $N_i$ in (\[Vintro\]) must be integers.
[**Note Added:**]{}
After submitting this paper to the arXiv, I was informed of Ref. , in which a quantum mechanical description for BPS black hole bound states in ${\cal N}=2$ supergravity is studied. In particular, the coincidence of the quantum moduli-space of the quantum mechanical system and the stability loci for particles in supergravity classical solutions is demonstrated there for sets of charged particles that are more general than those we consider in this paper. In this paper, we show that not only the stability loci but also the D-particle potential in a specific class of solutions is reproduced as the effective potential of the quantum mechanics in a certain decoupling limit.
D-particle probe in bubbling supertube solutions {#sg.sec}
================================================
$1/2$ BPS solutions
-------------------
The most general $1/2$ BPS solutions in $5$-dimensional ${\cal N}=1$ supergravity with GH base spaces are constructed in Ref. . They are described with eight harmonic functions, $V$, $K^I$, $L_I$, $M$, associated with the NUT charge, three M5 charges, three M2 charges, and the KK momentum, respectively. It is convenient for our purposes to rewrite the solutions in terms of type IIA string language. The $11$-dimensional metric $ds_{11}^2$ and the $3$-form potential $A_3$ are related to the string metric $ds_{10}^2$, the dilaton $\phi$, the NS $2$-form $B_2$, the RR $1$-form $C_1$ and the RR $3$-form $C_3$ as $$ds_{11}^2=e^{-(2/3)\phi}ds_{10}^2+e^{(4/3)\phi}(d\psi+C_1)^2,\quad
A_3=C_3+B_2\wedge(d\psi+C_1).
\label{general11iia}$$ According to these relations, the solutions in Ref. can be rewritten as $$\begin{aligned}
ds_{10}^2&=&-\frac{1}{\sqrt{Q}}(dt+\omega)^2+\sqrt{Q}d\vec y^2
+\sum_{I=1}^3\frac{\sqrt{Q}}{Z_IV}(dz_{2I-1}^2+dz_{2I}^2),\\
C_1&=&A-\frac{V^2\mu}{Q}(dt+\omega),\\
C_3&=&
\sum_{I=1}^3
\left[\frac{V}{Q}\left(\mu K^I-\frac{Z^3}{Z_I}\right)(dt+\omega)+\xi^I\right]
\wedge dz_{2I-1}\wedge dz_{2I},\\
B_2&=&\sum_{I=1}^3
\left(\frac{K^I}{V}-\frac{\mu}{Z_I}\right)dz_{2I-1}\wedge dz_{2I},
\label{b2harmo}\\
e^\phi&=&\frac{Q^{3/4}}{(ZV)^{3/2}},\end{aligned}$$ where $z_k$ denotes coordinates in ${\bf T}^6$. The quantities $Z_I$, $\mu$ and $Q$ are rational functions of the harmonic functions defined by $$\begin{aligned}
Z_I&=&L_I+\frac{1}{2}C_{IJK}\frac{K^JK^K}{V},\\
\mu&=&M+\frac{K^IL_I}{2V}+\frac{K^1K^2K^3}{V^2},\\
Q&=&Z^3V-\mu^2V^2,\end{aligned}$$ and $Z$ is the geometric average $Z=(Z_1Z_2Z_3)^{1/3}$. The differentials $\omega$ and $\xi^I$ are obtained by solving certain linear differential equations[@BKW0504; @BW0505; @Berglund:2005vb].
For simplicity, let us first consider solutions in which the harmonic function $V$ has only one positive pole and the other harmonic functions are constant. We study general solutions in §\[gen.sec\]. The constant $M$ must be zero for the solution to be regular. To fix the other constants, $K^I$ and $L_I$, we impose the boundary conditions $$\lim_{r\rightarrow\infty}
Q=1,\quad
\lim_{r\rightarrow\infty}
VZ_I=g_{\rm str}^{-2/3},$$ where $r\equiv |\vec y|$. These two imply that the four-dimensional part of the metric becomes flat Minkowski, $-dt^2+d\vec y^2$, and $e^\phi$ goes to $g_{\rm str}$ at infinity. The following choice of the harmonic functions satisfies these conditions: $$V=\frac{v_0}{g_{\rm str}}+\frac{N}{4\pi r},
\label{eq15}$$ $$K^I=g_{\rm str}^{-1/3}k^I,\quad
L_I=\frac{g_{\rm str}^{1/3}}{v_0}\left(1-\frac{1}{2}C_{IJK}k^Jk^K\right),\quad
M=0,
\label{hamo}$$ where $v_0$ and $k^I$ are parameters satisfying $$v_0^2=1-\frac{1}{4}(k_1+k_2+k_3-k_1k_2k_3)^2.
\label{v0kkk}$$ With this choice of the functions, the asymptotic form of the metric is $$ds_{10}^2(r\rightarrow\infty)=-dt^2+d\vec y^2+g_{\rm str}^{2/3}dz_i^2.$$ We assume that the size of the ${\bf T}^6$, which is determined by the periods of the coordinates $z_k$, is much larger than the string scale, in order to make the wrapped D6-branes, which we treat as a static background, sufficiently heavy.
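As a consistency sketch (not part of the original derivation), one can check numerically that the harmonic functions (\[eq15\]) and (\[hamo\]) satisfy the boundary conditions $Q\rightarrow 1$ and $VZ_I\rightarrow g_{\rm str}^{-2/3}$ at large $r$. The parameter values below are hypothetical, with $v_0$ fixed by the constraint (\[v0kkk\]):

```python
import numpy as np

g_str, N = 0.5, 8.0                                  # hypothetical g_str and NUT charge
k = np.array([0.3, 0.2, 0.1])                        # hypothetical k^I
v0 = np.sqrt(1.0 - 0.25 * (k.sum() - k.prod())**2)   # constraint (v0kkk)

def Q_and_VZ(r):
    V = v0 / g_str + N / (4.0 * np.pi * r)           # (eq15)
    K = g_str**(-1.0 / 3.0) * k                      # (hamo), M = 0
    cc = np.array([k[1] * k[2], k[2] * k[0], k[0] * k[1]])  # (1/2) C_{IJK} k^J k^K
    L = g_str**(1.0 / 3.0) / v0 * (1.0 - cc)
    CKK = np.array([K[1] * K[2], K[2] * K[0], K[0] * K[1]])
    Z = L + CKK / V                                  # Z_I = L_I + (1/2) C_{IJK} K^J K^K / V
    mu = (K @ L) / (2.0 * V) + K.prod() / V**2       # mu with M = 0
    Q = Z.prod() * V - mu**2 * V**2                  # Q = Z^3 V - mu^2 V^2
    return Q, V * Z

Q_inf, VZ_inf = Q_and_VZ(1.0e8)                      # probe the r -> infinity behavior
```

At $r=10^8$ both boundary conditions hold to better than one part in $10^5$.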
In fact, the harmonic functions (\[eq15\]) and (\[hamo\]) give the supergravity description of $N$ coincident D6-branes in a constant $B$-field. The asymptotic value of the $B$-field is $$B_2(r\rightarrow\infty)=-\sum_{I=1}^3 b_Ie^{2I-1}\wedge e^{2I},
\label{badef}$$ where $e^k=g_{\rm str}^{1/3}dz_k$ is the vielbein in the ${\bf T}^6$, and the three parameters $b_I$ are related to $v_0$ and $k_I$ as $$b_I=\frac{1}{2v_0}(k_1+k_2+k_3-k_1k_2k_3-2k_I).\label{bbbkkk}$$ In order to make the definition of the parameter $b_I$ unambiguous, we have to specify the gauge choice for $B_2$. One way to do this is to specify the gauge field $F_2$ on the D6-branes. In the five-dimensional solutions we can define the gauge field on the D6-branes by the coupling with an M2-brane wrapped on a non-compact $2$-cycle, and we can show that $F_2=0$ for the classical solution given by (\[eq15\]) and (\[hamo\]).
The relations (\[v0kkk\]) and (\[bbbkkk\]) are solved with respect to $k_I$ and $v_0$ to give the solution $$v_0=\sin\xi,\quad
k_I=\frac{\cos(\beta_I+\xi)}{\cos\beta_I},
\label{v0kisol}$$ where the angles $\beta_I$ and $\xi$ are defined by $$\beta_I=\tan^{-1}b_I,\quad
\xi=\frac{\pi}{2}-\beta_1-\beta_2-\beta_3.$$ For definiteness, we assume $0\leq\beta_I\leq\pi/2$.
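The solution (\[v0kisol\]) can be verified numerically; the following sketch (with randomly chosen angles $\beta_I$, purely for illustration) checks that it satisfies both the constraint (\[v0kkk\]) and the relation (\[bbbkkk\]):

```python
import numpy as np

rng = np.random.default_rng(0)
beta = rng.uniform(0.05, 0.45, size=3)   # random angles beta_I, kept away from xi = 0
b = np.tan(beta)                          # beta_I = arctan(b_I)

xi = np.pi / 2 - beta.sum()               # xi = pi/2 - beta_1 - beta_2 - beta_3
v0 = np.sin(xi)                           # solution (v0kisol)
k = np.cos(beta + xi) / np.cos(beta)

s = k.sum() - k.prod()                    # k_1 + k_2 + k_3 - k_1 k_2 k_3
# (v0kkk):  v0^2 = 1 - (1/4) s^2     (bbbkkk):  b_I = (s - 2 k_I) / (2 v0)
```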
D-particle in the background
----------------------------
Let us expand the D-particle effective action in bubbling supertube solutions as $$L_{\rm D0}=L_0+L_1+L_2+\cdots,\label{ld0}$$ where the subscripts represent the power of the velocity $\dot y^i$ in each term. By substituting this solution into the DBI and CS actions, we obtain $$\begin{aligned}
-V_{\rm D0}=L_0&=&-2\pi\left(\frac{\sqrt{-g_{tt}}}{e^\phi}+C_t\right)
=-\frac{2\pi V^2}{\sqrt{V^3Z^3}+V^2\mu}
,\label{l0}\\
L_1&=&-2\pi C_i \dot y^i=-2\pi A_i\dot y^i,\label{lf}\\
L_2&=&2\pi\frac{g_{ij}}{2e^\phi\sqrt{-g_{tt}}}\dot y^i\dot y^j
=\pi(VZ)^{3/2}|\dot y^m|^2.\label{l2}\end{aligned}$$
Let us consider the potential term $V_{\rm D0}\equiv -L_0$ first. Because $VZ$ and $V^2\mu$ are positive and regular for regular solutions, the minima of this potential are given by $V=0$. From the viewpoint of M-theory, the condition $V=0$ gives ergospheres, which are defined as submanifolds on which the world-line of a stationary point particle is light-like. This can be easily confirmed by considering the eleven-dimensional line element for a stationary particle, $$ds_{11}^2
=-e^{-(2/3)\phi}\frac{dt^2}{\sqrt{Q}}+e^{(4/3)\phi}(C_t dt)^2
=-\frac{1}{Z^2}dt^2
=-\frac{V^2}{(ZV)^2}dt^2.$$ Because $(VZ)^2$ is positive definite and finite, ergospheres are given by $V=0$.
For simplicity, let us restrict our attention to single-center solutions described by the harmonic functions (\[eq15\]) and (\[hamo\]). The potential $V_{\rm D0}$ for these single-center solutions depends on the parameters $g_{\rm str}$, $N$ and $\beta_I$, and the radial coordinate $r$. It is invariant under the replacement $(r,N)\rightarrow(\alpha r,\alpha N)$. Under another replacement $(g_{\rm str},N)\rightarrow(\alpha g_{\rm str},\alpha^{-1}N)$, the potential is rescaled as $V_{\rm D0}\rightarrow\alpha^{-1}V_{\rm D0}$. These two facts partially determine the functional form of the potential as $$V_{\rm D0}=\frac{1}{g_{\rm str}}f\left(\frac{Ng_{\rm str}}{r},\beta_I\right).$$ The potential $V_{\rm D0}$ depends on the three angles $\beta_I$ through the four parameters $v_0$ and $k_I$, which are related to each other as in (\[v0kkk\]). If this constraint were absent and $v_0$ and $k_I$ were four independent parameters, the potential would be rescaled as $V_{\rm D0}\rightarrow\alpha V_{\rm D0}$ through the replacement $(v_0,g_{\rm str})\rightarrow(\alpha v_0,\alpha g_{\rm str})$. This implies that the potential can be written in the form $$V_{\rm D0}=\frac{v_0^2}{g_{\rm str}}g(\rho,k_I(v_0,\beta_{I'})),\quad
\rho=\frac{v_0 r}{g_{\rm str}N}.
\label{frhok}$$ Instead of the three angles $\beta_I$ ($I=1,2,3$), we choose $v_0=\sin\xi$ and $\beta_{I'}$ ($I'=1,2$) as the three independent variables. In the next section, we compare this potential with the effective potential of a certain supersymmetric quantum mechanics, identifying the parameters $g_{\rm str}$, $N$ and $v_0$ with a coupling constant, the number of chiral multiplets, and an FI parameter, respectively. However, there are no quantities that correspond to the $\beta_{I'}$. We decouple these unwanted parameters by taking the small $\xi$ limit as follows. (It may be possible to introduce extra parameters corresponding to $\beta_{I'}$ in our quantum mechanics. However, we do not discuss this possibility here.) Let us expand the function $g$ in (\[frhok\]) with respect to $v_0$ as $$g(\rho,k_I(v_0,\beta_{I'}))
=\sum_{n=0}^\infty g_n(\rho,\beta_{I'})v_0^n.\label{gexp}$$ Because the derivative of $k_I$ with respect to $\beta_I$ produces an extra factor of $v_0$, $$\frac{\partial k_I}{\partial\beta_I}=-\frac{v_0}{\cos^2\beta_I},$$ the $\beta_{I'}$ dependent terms in $g$ have at least one factor of $v_0$, and the leading term, $g_0$, on the right-hand side of (\[gexp\]) is independent of $\beta_{I'}$. In this paper, we focus only on the leading term, $g_0$, in the $v_0$ expansion. In other words, we take the following small $\xi$ limit in order to decouple the unwanted parameters $\beta_{I'}$: $$v_0=\sin\xi\rightarrow 0,\quad\mbox{with}\quad
\rho=\frac{v_0r}{g_{\rm str}N}\quad\mbox{fixed}.
\label{smallxi}$$ In this limit, both the terms $\sqrt{V^3Z^3}$ and $V^2\mu$ in the denominator of the right-hand side of (\[l0\]) approach $1/g_{\rm str}$, and the leading term of the D-particle potential in the $\xi$ expansion is $$V_{\rm D0}(r)
=\frac{2\pi g_{\rm str}}{2}V^2
=\frac{2\pi}{2g_{\rm str}}\left(\xi+\frac{Ng_{\rm str}}{4\pi r}\right)^2.
\label{d0potentialc2}$$
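A small numerical sketch (with hypothetical parameter values) confirms that the leading potential (\[d0potentialc2\]) inherits the two scaling properties noted above: invariance under $(r,N)\rightarrow(\alpha r,\alpha N)$, and the rescaling $V_{\rm D0}\rightarrow\alpha^{-1}V_{\rm D0}$ under $(g_{\rm str},N)\rightarrow(\alpha g_{\rm str},\alpha^{-1}N)$ at fixed $\xi$:

```python
import numpy as np

def V_D0(r, N, g_str, xi):
    # leading small-xi D-particle potential (d0potentialc2)
    return (2.0 * np.pi / (2.0 * g_str)) * (xi + N * g_str / (4.0 * np.pi * r))**2

r, N, g, xi, alpha = 1.3, 7.0, 0.4, 0.05, 2.5   # hypothetical values
v_ref = V_D0(r, N, g, xi)
v_rN = V_D0(alpha * r, alpha * N, g, xi)        # (r, N) -> (alpha r, alpha N)
v_gN = V_D0(r, N / alpha, alpha * g, xi)        # (g, N) -> (alpha g, N / alpha)
```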
Let us take the same limit for the velocity dependent terms $L_1$ and $L_2$. The term $L_1$ is linear in the velocity, and it represents the Lorentz force due to the background RR $1$-form potential. $A_i$ in (\[lf\]) is the vector potential for a monopole in the $\vec y$ space. The monopole is located at $\vec y=0$, and its charge is $N$. This term is, in a sense, topological, and its form does not change in the limit (\[smallxi\]). The coefficient $(VZ)^{3/2}$ of the kinetic term $L_2$ in (\[l2\]) becomes the constant $1/g_{\rm str}$ in the small $\xi$ limit.
Summing $L_0$, $L_1$ and $L_2$, we obtain the D-particle effective Lagrangian in the small $\xi$ limit, $$L_{\rm D0}=\frac{2\pi}{2g_{\rm str}}|\dot{\vec y}|^2-2\pi A_i\dot y^i-\frac{2\pi g_{\rm str}}{2}V^2,
\label{multieffdac}$$ up to ${\cal O}(\dot y^3)$.
Large $N$ vector quantum mechanics {#vqm.sec}
==================================
Lagrangian
----------
To obtain the theory on a D-particle probe, we treat a classical solution as a D-brane system consisting of background D6-branes and a probe D-particle in a constant $B$-field, and quantize open strings in it. This D0-D6 system becomes BPS when $\xi=0$[@Ohta:1997fr; @CIMM; @Mihailescu:2000dn; @witten; @Fujii:2001wp].
Let us assume that $N$ D6-branes are located at the center, $\vec y=0$ in the transverse space. One $\U(1)$ vector multiplet and three neutral chiral multiplets arise in D0-D0 string modes. The vector multiplet ${\cal V}$ consists of a (non-dynamical) gauge field, $A_t$, three scalar fields, $\vec a=(a_1,a_2,a_3)$, and a two-component fermion, $\chi$. We ignore the neutral chiral multiplets, because they decouple in the $\U(1)$ case. As the lowest modes in D0-D6 strings, $N$ charged chiral multiplets $\Phi_\alpha$ ($\alpha=1,\ldots,N$) arise. Let $\phi_\alpha$ and $\psi_\alpha$ denote the complex bosons and two-component fermions in $\Phi_\alpha$. These fields carry the same $\U(1)$ charges, $+1$. Their masses, obtained through the quantization of open strings, are[@CIMM; @witten; @Fujii:2001wp] $$m_f^2=(2\pi r)^2,\quad
m_b^2=2\pi\xi+(2\pi r)^2,
\label{mass06}$$ where $r$ is the distance between the D-particle and the D6-branes. These masses become equal on the supersymmetric locus $\xi=0$ in the parameter space.
The tree-level Lagrangian for these multiplets, which is obtained through the dimensional reduction of the four-dimensional ${\cal N}=1$ Lagrangian, is $$\begin{aligned}
L_{\rm tree}
&=&\frac{1}{g_{\rm qm}^2}
\left[\left(\int d^2\theta W^2+\mbox{c.c.}\right)+\int d^4\theta \zeta{\cal V}
+\sum_{\alpha=1}^N\int d^4\theta \Phi_\alpha^\ast e^{\cal V}\Phi_\alpha\right]
\nonumber\\
&=&\frac{1}{g_{\rm qm}^2}
\Big[\frac{1}{2}(\partial_t\vec a)^2
+\chi^\dagger\partial_t\chi
-\frac{1}{2}D^2
\nonumber\\&&\quad
+\sum_{\alpha=1}^N\Big(
|D_t\phi_\alpha|^2
-\vec a^2|\phi_\alpha|^2
+\psi_\alpha^\dagger D_t\psi_\alpha
\nonumber\\&&\quad\quad\quad
+\psi_\alpha^\dagger\vec\sigma\cdot\vec a\psi_\alpha
+\phi_\alpha^\dagger\chi\psi_\alpha
+\phi_\alpha\chi^\dagger\psi_\alpha^\dagger\Big)\Big],
\label{treepot}\end{aligned}$$ where $\vec\sigma=(\sigma_x,\sigma_y,\sigma_z)$ are the Pauli matrices, and $D$ is the auxiliary field in the vector multiplet. The equation of motion for the auxiliary field has already been solved in the right-most expression in (\[treepot\]), and $D$ in (\[treepot\]) is given by $$D=\zeta+\sum_{\alpha=1}^{N} |\phi_\alpha|^2.
\label{auxD}$$ The Lagrangian (\[treepot\]) is invariant with respect to an $\SU(2)_R$ transforming $\vec a$, $\chi$, and $\psi_\alpha$ as $\bf 3$, $\bf 2$, and $\bf 2$, respectively. There is no superpotential because all the chiral multiplets carry the same charge. The classical masses of the bosons $\phi_\alpha$ and fermions $\psi_\alpha$ read off from the Lagrangian are $$m_\psi^2=|\vec a|^2,\quad
m_\phi^2=|\vec a|^2+\zeta.
\label{classicalmass}$$ The relation between the coupling constants $g_{\rm qm}$ and $g_{\rm str}$ is determined by comparing the DBI action of the D-particle and the kinetic term in (\[treepot\]). We also obtain other relations from comparison of (\[mass06\]) and (\[classicalmass\]). Specifically, we have $$g_{\rm qm}^2=2\pi g_{\rm str},\quad
\vec a=2\pi\vec y,\quad
\zeta=2\pi\xi,
\label{paramcorr}$$ where $\vec y$ is the position of the probe D-particle.
The classical supersymmetric vacuum conditions are $$\vec a|\phi_\alpha|=0,\quad
D=0.
\label{classicalvacua}$$ When $\zeta<0$, the chiral multiplets acquire non-vanishing vacuum expectation values satisfying $\sum_{\alpha=1}^N|\phi_\alpha|^2=|\zeta|$, and we have $\vec a=0$. Because this condition defines a sphere in the space of the $\phi_\alpha$ and the overall phase is a gauge degree of freedom, the moduli space is ${\bf CP}^{N-1}$. When $\zeta=0$, all the $\phi_\alpha$ must be zero, and the moduli space is ${\bf R}^3$ parameterized by $\vec a$. When $\zeta>0$, there is no supersymmetric vacuum.
The quantum mechanics represented by the Lagrangian in (\[treepot\]) was first suggested in Ref. in connection with the D0-D6 system, and the relation between its classical vacuum structure and the behavior of the D0-D6 system was also discussed in Ref. . In what follows, we see that quantum corrections in our quantum mechanics reproduce the action of a D-particle in the supergravity solutions.
One-loop and two-loop corrections
---------------------------------
In order to demonstrate the validity of the treatment of a D-particle as a probe, we assume that the number $N$ of D6-branes is much larger than $1$, the number of the probe D-particle, and study only the leading term of the $1/N$ expansion. More precisely, we take the following large $N$ limit: $$N\rightarrow\infty,\quad
\mbox{with}\quad
\lambda\equiv Ng_{\rm qm}^2\quad\mbox{fixed}.
\label{largeN}$$ As is easily checked with the Feynman rules obtained from the Lagrangian (\[treepot\]), no diagrams containing $\vec a$ or $\chi$ as internal lines appear in the leading correction of the $1/N$ expansion. This implies that these fields behave like classical external fields in the large $N$ limit. We define the effective action as a functional of the classical external fields $\vec a$ and $\chi$. The fermion $\chi$ is set to zero in our analysis.
Corresponding to (\[smallxi\]) on the supergravity side, we take the small $\zeta$ limit $$\frac{\zeta}{|\vec a|^2}\rightarrow 0,\quad\mbox{with}\quad
\frac{\lambda}{\zeta|\vec a|}\quad\mbox{fixed},
\label{smzeta}$$ in addition to the large $N$ limit given in (\[largeN\]). As we show below, the effective potential is two-loop exact in this small $\zeta$ limit. It is known that in the context of the M(atrix) theory, the leading and sub-leading potentials of the D0-D6 system can be reproduced as one-loop[@pierre; @BISY; @Lif; @KVK; @DM] and two-loop[@branco; @dhar] corrections, respectively. In this section, we confirm that the quantum mechanics proposed above also reproduces the D-particle effective action given in §\[sg.sec\].
Let us decompose the effective potential $V_{\rm eff}(\vec a)$ into three parts $V_{\rm tree}$, $V_f$ and $V_b$. $V_{\rm tree}$ is the tree-level potential: $$V_{\rm tree}(\vec a)=\frac{1}{2g_{\rm qm}^2}\zeta^2.
\label{Vtree}$$ We set $\phi_\alpha=0$, because $\phi_\alpha$ cannot acquire a non-vanishing vacuum expectation value, as guaranteed by Coleman’s theorem. $V_f$ represents the quantum corrections involving the fermions $\psi_\alpha$. At leading order in the $1/N$ expansion of the effective action, the only diagram involving the fermions $\psi_\alpha$ is the one-loop diagram, which gives the contribution $$V_f(\vec a)
=-N\int\frac{dk}{2\pi}\log(k^2+m_\psi^2)
=-N\left(m_\psi+\frac{2}{\pi}\Lambda(\log\Lambda-1)\right),$$ where $\Lambda$ is a momentum cutoff. The rest of the quantum corrections, which are collectively denoted by $V_b$, consist of the contributions of the scalar fields $\phi_\alpha$. Although there are an infinite number of loop diagrams of the scalar fields $\phi_\alpha$, the large $N$ assumption makes them quite easy to compute. The loop momentum integrals in any multi-loop diagram factorize and are easily carried out. For example, the only two-loop diagram is the “8”-shaped diagram, which is essentially the square of a one-loop diagram. The one-loop and two-loop contributions to $V_b$ are given by $$\begin{aligned}
V_{b(1-loop)}(\vec a)
&=&N\int\frac{dk}{2\pi}\log(k^2+m_\phi^2)=N\left(m_\phi+\frac{2}{\pi}\Lambda(\log\Lambda-1)\right),\\
V_{b(2-loop)}(\vec a)
&=&\frac{g_{\rm qm}^2}{2}
\left(N\int\frac{dk}{2\pi}\frac{1}{k^2+m_\phi^2}\right)^2
=\frac{g_{\rm qm}^2N^2}{8m_\phi^2}.\end{aligned}$$ Because of the supersymmetry, the divergent terms in $V_{b(1-loop)}$ and $V_f$ cancel. Summing the contributions up to the two-loop order, we obtain $$V_{\rm eff}(\vec a)=V_{\rm tree}(\vec a)+V_f(\vec a)+V_b(\vec a)
=\frac{N}{2\lambda}\left(\zeta
+\frac{\lambda}{2|\vec a|}\right)^2.
\label{vlarger}$$
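To make the small-$\zeta$ structure of (\[vlarger\]) explicit (our expansion, not part of the original text), note that after the divergent pieces of $V_f$ and $V_{b(1-loop)}$ cancel, the finite parts combine as

```latex
V_{\rm tree}=\frac{N\zeta^2}{2\lambda},\qquad
V_f+V_{b(1\mbox{-}loop)}=N(m_\phi-m_\psi)\simeq\frac{N\zeta}{2|\vec a|},\qquad
V_{b(2\mbox{-}loop)}=\frac{g_{\rm qm}^2N^2}{8m_\phi^2}\simeq\frac{N\lambda}{8|\vec a|^2},
```

whose sum is precisely the expansion of the square in (\[vlarger\]).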
In the small $\zeta$ limit, the two-loop effective potential obtained above is exact, and there is no higher-loop contribution to $V_{\rm eff}$. This is shown as follows. Because $V_f$ is one-loop exact in the large $N$ limit, any potential higher-loop contributions are multi-loop diagrams of the scalar fields $\phi_\alpha$. These depend on $\zeta$ and $|\vec a|$ only through the bare scalar mass $m_\phi=\sqrt{|\vec a|^2+\zeta}$. From dimensional analysis, it is found that the $L$-loop contribution is $\lambda^L/m_\phi^{3L}$, up to a numerical constant. This is expanded with respect to $\zeta$ as $$V_{b(L-loop)}
=\sum_{P=0}^\infty\frac{c_{L,P}}{g_{\rm qm}^2}
\left(\frac{\lambda}{\zeta|\vec a|}\right)^L\left(\frac{\zeta}{|\vec a|^2}\right)^{P+L},$$ where $c_{L,P}$ denotes the numerical coefficients obtained in the loop calculation. From this expression, it is apparent that any diagrams with more than $3$ loops give terms of higher order in $\zeta/|\vec a|^2$, which should be ignored in the small $\zeta$ limit. Thus, the effective potential (\[vlarger\]) is exact in the large $N$, small $\zeta$ limit.
If we rewrite the effective potential (\[vlarger\]) in terms of variables on the supergravity side according to the parameter correspondence (\[paramcorr\]), we find it coincides with the D-particle potential (\[d0potentialc2\]).
It is useful to note that in the large $N$, small $\zeta$ limit, the relation $$V_{\rm eff}=\frac{1}{2g_{\rm qm}^2}\langle D\rangle^2
\label{DV2}$$ holds, where $\langle D\rangle$ is the vacuum expectation value of the auxiliary field (\[auxD\]), which is one-loop exact in the large $N$, small $\zeta$ limit. The following relation, somewhat simpler than (\[DV2\]), also holds: $$V=\frac{\langle D\rangle}{g_{\rm qm}^2}.
\label{VandD}$$
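As a check of (\[DV2\]) (our evaluation of the scalar tadpole, not part of the original text), the one-loop expectation value of (\[auxD\]) around $\phi_\alpha=0$ is

```latex
\langle D\rangle=\zeta+\sum_{\alpha=1}^N\langle|\phi_\alpha|^2\rangle
 =\zeta+\frac{\lambda}{2m_\phi}\simeq\zeta+\frac{\lambda}{2|\vec a|},\qquad
\frac{1}{2g_{\rm qm}^2}\langle D\rangle^2
 =\frac{N}{2\lambda}\Bigl(\zeta+\frac{\lambda}{2|\vec a|}\Bigr)^2=V_{\rm eff},
```

in agreement with (\[vlarger\]).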
To this point, we have focused on the effective potential. To complete the comparison between the D-particle action (\[ld0\]) and the effective action of our quantum mechanics, let us check the coincidence of the velocity dependent terms $L_2$ and $L_1$.
The fermion one-loop correction to the two-point function takes the form $\Gamma^{(2)}=\delta a_i(p)M_{ij}\delta a_j(-p)$ with $$\begin{aligned}
M_{ij}&=&-N\int\frac{dk}{2\pi}
\tr\left(
\sigma_i\frac{i}{k+p/2+i\vec a\cdot\vec\sigma}
\sigma_j\frac{i}{k-p/2+i\vec a\cdot\vec\sigma}
\right)
\nonumber\\
&=&-\frac{\partial^2 V_f}{\partial a_i\partial a_j}
+N\frac{\epsilon_{ijk}a_k}{2m_f^3}p
-N\frac{a^2\delta_{ij}-a_ia_j}{4m_f^5}p^2
+{\cal O}(p^3).
\label{mij}\end{aligned}$$ The first term here is the second derivative of $-V_f(\vec a)$ and is independent of $p$. The second term in (\[mij\]) is linear in $p$. This term implies the existence of the parity-violating term in the effective action given by $$\Gamma
=\int\frac{N}{2m_f^3}\epsilon_{ijk}a_i\delta a_j\partial_t\delta a_k dt
=2\pi N\int d\vec a\cdot\vec A,$$ where $\vec A$ is the monopole potential in the $\vec a$ space, which is given by $$\vec A\sim \frac{1}{4\pi |\vec a|^3}\vec a\times\delta \vec a,$$ in the vicinity of $\vec a$. This reproduces the Lorentz force term $L_1$. The appearance of this term in the one-loop correction is also found in Ref. in the context of M(atrix) theory.
The third term in (\[mij\]) yields a wave function renormalization of order $\lambda/|\vec a|^3$. The loop diagrams of the scalar fields $\phi_\alpha$ also yield wave function renormalization of the same order of magnitude. These corrections vanish in the small $\zeta$ limit (\[smzeta\]), and the kinetic term of $\vec a$ is not corrected in this limit. This is consistent with the D-particle kinetic term with a constant coefficient appearing in (\[multieffdac\]).
We have thus confirmed that the quantum mechanics represented by (\[treepot\]) reproduces the D-particle action (\[multieffdac\]) as the effective action. Before ending this section, we give one more example of the correspondence between a classical quantity in the supergravity and a loop correction in our quantum mechanics. Let us consider the $\SU(2)_R$ symmetry, which rotates the $\vec a$ space. It transforms the fermions $\chi$ and $\psi_\alpha$ as doublets, and the current is $$\vec j=\frac{1}{g_{\rm qm}^2}(
\vec a\times\partial_t\vec a
+\chi^\dagger\vec\sigma\chi
+\psi_\alpha^\dagger\vec\sigma\psi_\alpha
).
\label{jvec}$$ Because we treat $\vec a$ and $\chi$ as background classical fields describing the classical motion of the D-particle, the first two terms just give the orbital angular momentum of the D-particle. In addition, the third term has a non-vanishing vacuum expectation value, due to the fermion one-loop correction: $$\langle\vec j\rangle
=N\int\frac{dk}{2\pi}\tr\left(\sigma_z\frac{i}{k+i\vec a\cdot\vec\sigma}\right)
=N\frac{\vec a}{|\vec a|}.
\label{su2vev}$$ This should also be identified with the angular momentum of the probe D-particle. It is well known that systems consisting of mutually non-local charges can have non-vanishing angular momenta, due to the non-vanishing Poynting vector. In our case, it can be evaluated, for example, by considering the asymptotic behavior of the differential $\omega$ perturbed by the probe D-particle. We obtain $$\vec J=N\frac{\vec y}{r},$$ and this coincides with (\[su2vev\]).
Generalization to multicenter solutions {#gen.sec}
=======================================
To this point, we have only treated single center solutions and the quantum mechanics with $N$ identical chiral multiplets. As demonstrated below, however, it is possible to generalize this quantum mechanics so that its effective action reproduces the D-particle effective action in arbitrary bubbling solutions.
In the previous section, we introduced the parameters $\beta_I$ as those determining the background $B$-field. There, the lower-dimensional brane charge dissolved in D6-branes is induced by the Chern-Simons term in the D6-brane action. This, however, cannot be done in general cases, in which both D6-branes and anti-D6-branes with different lower-dimensional brane charges exist. In such cases, it is more convenient to regard the parameters $\beta_I$ as those determining the gauge fields on the D6-branes, rather than the background. For this reason, we perform a $B$-field gauge transformation so that the asymptotic value of the $B$-field vanishes. For the bubbling solutions, $B$-field transformations amount to the following transformations of the harmonic functions[@BKW0504; @BW0505]: $$\begin{aligned}
K^I&\rightarrow& K^I+c^IV,\nonumber\\
L_I&\rightarrow& L_I-C_{IJK}c^JK^K-\frac{1}{2}C_{IJK}c^Jc^KV,\nonumber\\
M&\rightarrow& M-\frac{1}{2}c^IL_I+\frac{1}{12}C_{IJK}(Vc^Ic^Jc^K+3c^Ic^JK^K).\end{aligned}$$ To make the asymptotic value of $B_2$ vanish, we should carry out this transformation with parameters $c^I=g_{\rm str}^{2/3}b_I$. Doing so, we obtain $$\begin{aligned}
V&=&\frac{1}{g_{\rm str}}\left(\sin\xi+\frac{Ng_{\rm str}}{4\pi r}\right),\nonumber\\
K^I&=&\frac{1}{g_{\rm str}^{1/3}}\left(\cos\xi+b_I\frac{Ng_{\rm str}}{4\pi r}\right),\nonumber\\
L_I&=&g^{1/3}_{\rm str}\left(\sin\xi-\frac{1}{2}C_{IJK}b_Jb_K\frac{Ng_{\rm str}}{4\pi r}\right),\nonumber\\
M&=&-\frac{g_{\rm str}}{2}\left(\cos\xi
-b_1b_2b_3\frac{Ng_{\rm str}}{4\pi r}\right).\end{aligned}$$ In this gauge, the constant parts of the harmonic functions depend only on $\xi$, while the terms proportional to $1/r$ contain $b_I$ separately. Here, let us treat the parameters $\xi$ and $\beta_I$ as independent. Then we regard $\beta_I$ as parameters for the D6-branes and $\xi$ as the parameter for the background. The relation (\[v0kisol\]) between $\xi$ and $\beta_I$ is obtained as the bubble equation[@BW0505; @Berglund:2005vb] for single-center solutions.
Because the parameters $\beta_I$ are contained only in the pole terms, the solution can easily be generalized to $n$-center solutions by superposing single-center solutions as follows: $$\begin{aligned}
V&=&\frac{1}{g_{\rm str}}
\left(\sin\xi+\sum_{i=1}^n\frac{N_ig_{\rm str}}{4\pi|\vec y-\vec y_i|}\right),\nonumber\\
K^I&=&\frac{1}{g_{\rm str}^{1/3}}
\left(\cos\xi+\sum_{i=1}^n b^i_I\frac{N_ig_{\rm str}}{4\pi|\vec y-\vec y_i|}\right),\nonumber\\
L_I&=&g^{1/3}_{\rm str}
\left(\sin\xi-\frac{1}{2}\sum_{i=1}^n C_{IJK}b_J^ib_K^i\frac{N_ig_{\rm str}}{4\pi|\vec y-\vec y_i|}\right),\nonumber\\
M&=&-\frac{g_{\rm str}}{2}\left(\cos\xi
-\sum_{i=1}^n b_1^ib_2^ib_3^i\frac{N_ig_{\rm str}}{4\pi|\vec y-\vec y_i|}\right).
\label{vklmmulti}\end{aligned}$$ We assign different $\beta_I^i$ to each pole labeled by $i=1,\ldots,n$. Substituting these harmonic functions into the regularity condition $\mu(\vec y_i)=0$, we obtain the bubble equation $$\frac{\sin(\xi_i-\xi)}{\cos\beta^i_1\cos\beta^i_2\cos\beta^i_3}
+\sum_{j\neq i}(b^i_1-b^j_1)(b^i_2-b^j_2)(b^i_3-b^j_3)\frac{N_jg_{\rm str}}{4\pi|\vec y_i-\vec y_j|}=0,
\label{bubbleeq}$$ where we define $\xi_i$ for each GH center by $$\xi_i=\frac{\pi}{2}-\beta_1^i+\beta_2^i+\beta_3^i.$$ It is worth noting that the vacuum condition $V=0$ is in fact a special case of the bubble equation. Indeed, if we use the index $i=0$ for the probe D-particle, we obtain $V=0$ as the bubble equation in the limit $\beta_I^0\rightarrow\pi/2$.
We assume that all $\xi_i$ are of the same order as $\xi$, and take the following small $\xi$ limit: $$\xi\rightarrow 0,\quad\mbox{with}\quad
\frac{\xi_i}{\xi},\quad
\frac{\xi\vec y_i}{N_ig_{\rm str}}\quad\mbox{fixed}.
\label{smallxis}$$ In this small $\xi$ limit, the D-particle effective action is given by (\[multieffdac\]), with $V$ and $A$ replaced by the harmonic function in (\[vklmmulti\]) and its magnetic dual, respectively. This effective Lagrangian does not depend on $\xi_i$. In the small $\xi$ limit (\[smallxis\]), we can always tune these irrelevant parameters $\xi_i$ so that the bubble equation (\[bubbleeq\]) holds.
Each GH center labeled by the index $i$ in (\[vklmmulti\]) carries a NUT charge $N_i$. This charge represents the number of corresponding (anti-)D6-branes. Because each elementary (anti-)D6-brane gives one chiral multiplet with charge $+1$ ($-1$), it is more convenient to label them one by one when we investigate dual quantum mechanics. We use the index $\alpha$ for this labeling and write the harmonic function $V$ as $$V=\frac{\xi}{g_{\rm str}}+\sum_{\alpha=1}^N\frac{e_\alpha}{4\pi|\vec y-\vec y_\alpha|},
\quad
N=\sum_{i=1}^n|N_i|,
\label{posnegv}$$ where $\vec y_\alpha$ and $e_\alpha=\pm1$ are the position and charge of each (anti-)D6-brane. We propose the quantum mechanics consisting of a $\U(1)$ vector multiplet ${\cal V}=(A_t,\vec a,\chi)$ and $N$ charged chiral multiplets $\Phi_\alpha=(\phi_\alpha,\psi_\alpha)$ with masses $$\vec m_\alpha=2\pi \vec y_\alpha
\label{masspole}$$ and charges $e_\alpha$ as the theory probing the bubbling solution. The Lagrangian for this quantum mechanics is $$\begin{aligned}
L_{\rm tree}
&=&\frac{1}{g_{\rm qm}^2}
\left[\left(\int d^2\theta W^2+\mbox{c.c.}\right)+\int d^4\theta \zeta{\cal V}
+\sum_{\alpha=1}^N\int d^4\theta \Phi_\alpha^\ast e^{e_\alpha({\cal V}-\theta^\dagger\vec a\cdot\vec\sigma\theta)}\Phi_\alpha\right]
\nonumber\\
&=&\frac{1}{g_{\rm qm}^2}\Big[
\frac{1}{2}(\partial_t\vec a)^2
+\chi^\dagger\partial_t\chi
-\frac{1}{2}D^2
\nonumber\\&&
+\sum_{\alpha=1}^N\Big(
|D_t\phi_\alpha|^2
-|\vec a-\vec m_\alpha|^2|\phi_\alpha|^2
+\psi_\alpha^\dagger D_t\psi_\alpha
\nonumber\\&&
+e_\alpha\psi_\alpha^\dagger\vec\sigma\cdot|\vec a-\vec m_\alpha|\psi_\alpha
+e_\alpha(\phi_\alpha^\dagger\chi\psi_\alpha
+\phi_\alpha\chi^\dagger\psi_\alpha^\dagger)
\Big)\Big],
\label{treepot2}\end{aligned}$$ where the auxiliary field $D$ is given by $$D=\zeta+\sum_{\alpha=1}^N e_\alpha|\phi_\alpha|^2.$$ This Lagrangian is obtained through dimensional reduction from a four-dimensional ${\cal N}=1$ gauge theory. Although a non-vanishing superpotential is not forbidden by gauge invariance, we simply set $W=0$ here. It may be interesting to seek what on the supergravity side corresponds to turning on a superpotential in our quantum mechanics.
The computation of the effective action proceeds in parallel to that in the previous section. In the small $\zeta$ limit (\[smzeta\]), the wave function renormalization vanishes. The Lorentz force term in (\[multieffdac\]) is reproduced by the fermion one-loop diagram. In this case, each fermion with mass $\vec m_\alpha$ gives a monopole at $\vec a=\vec m_\alpha$ in the $\vec a$-space, and summing the contributions from all the fermions reproduces the Lorentz force term.
In order to show that the potential term in (\[multieffdac\]) is correctly reproduced, we can use the relation (\[VandD\]) instead of computing the effective potential itself. The expectation value $\langle D\rangle$, which is one-loop exact in the small $\zeta$ limit, is given by $$\langle D\rangle
=\zeta+\frac{g_{\rm qm}^2}{2}\sum_{\alpha=1}^{N}
\frac{e_\alpha}{|\vec a-\vec m_\alpha|}.
\label{multiconsis}$$ This is identical to the harmonic function $V$ in (\[posnegv\]) through the parameter correspondence (\[paramcorr\]), and thus, by (\[VandD\]), the D-particle potential is correctly reproduced.
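Explicitly (our substitution, not part of the original text), inserting the parameter correspondence (\[paramcorr\]) and the masses (\[masspole\]) into (\[multiconsis\]) gives

```latex
\frac{\langle D\rangle}{g_{\rm qm}^2}
 =\frac{\zeta}{g_{\rm qm}^2}
  +\frac{1}{2}\sum_{\alpha=1}^N\frac{e_\alpha}{|\vec a-\vec m_\alpha|}
 =\frac{\xi}{g_{\rm str}}
  +\sum_{\alpha=1}^N\frac{e_\alpha}{4\pi|\vec y-\vec y_\alpha|}=V,
```

which is the harmonic function (\[posnegv\]).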
The angular momentum of a D-particle in a general bubbling solution is obtained as the vacuum expectation value of the $\SU(2)_R$ current. In general, the background geometry itself may have non-vanishing angular momentum. It should be noted that the $\SU(2)_R$ current gives only the contribution of the probe, which includes both the angular momentum due to the classical motion of the probe and that due to the Poynting vector induced by its charge.
Conclusions {#conc.sec}
===========
We proposed a supersymmetric large $N$ vector quantum mechanics as the theory describing a probe D-particle in bubbling supertube solutions.
A bubbling supertube solution can be regarded as a system of D6 and anti-D6 branes carrying lower-dimensional brane charges. This solution is parameterized by two parameters, $g_{\rm str}$ and $\xi$, for the asymptotic behavior of fields and six parameters, $\beta_I^i$ and $\vec y_i$, for every (anti-)D6-brane. We computed the D-particle effective action in the background and showed that in the small $\xi$ limit (\[smallxis\]), the action does not depend on $\beta_I^i$. We can always tune these irrelevant parameters so that the bubble equation holds for any given positions $\vec y_i$ of the branes.
The quantum mechanics we propose consists of one $\U(1)$ vector multiplet and $N$ charged chiral multiplets. Each chiral multiplet corresponds to one (anti-)D6-brane, and its $\U(1)$ charge is $+1$ for a D6-brane and $-1$ for an anti-D6-brane. The parameters $g_{\rm str}$, $\xi$, and $\vec y_\alpha$ of the bubbling supertube solution are mapped to the coupling constant, the FI-parameter, and the bare masses of the chiral multiplets. We showed that the D-particle effective action up to quadratic order in the velocity is correctly reproduced as the effective action of this quantum mechanics. Stability loci for the probe D-particle, which are equivalent to ergospheres in the five-dimensional solution, correspond to the quantum moduli space of this quantum mechanics.
We also showed that the angular momentum of the probe D-particle is correctly reproduced in our quantum mechanics as the one-loop expectation value of the $\SU(2)_R$ charge.
Acknowledgements {#acknowledgements .unnumbered}
================
I would like to thank Y. Tachikawa for valuable discussions. This work is supported in part by a Grant-in-Aid for the Encouragement of Young Scientists (\#15740140) from the Japan Ministry of Education, Culture, Sports, Science and Technology.
[99]{} H. Lin, O. Lunin, J. Maldacena, JHEP [**0410**]{} (2004) 025. S. Corley, A. Jevicki and S. Ramgoolam, Adv. Theor. Math. Phys. [**5**]{} (2002) 809. D. Berenstein, JHEP [**0407**]{} (2004) 018. M. M. Caldarelli and P. J. Silva, JHEP [**0408**]{} (2004) 029. A. Buchel, [*“Coarse-graining 1/2 BPS geometries of type IIB supergravity”*]{}, [hep-th/0409271]{}. N. V. Suryanarayana, [*“Half-BPS Giants, Free Fermions and Microstates of Superstars”*]{}, [hep-th/0411145]{}. M. M. Caldarelli, D. Klemm and P. J. Silva, Class. Quant. Grav. [**22**]{} (2005) 3461. V. Balasubramanian, D. Berenstein, B. Feng, M. Huang, JHEP [**0503**]{} (2005) 006. M. M. Sheikh-Jabbari, M. Torabian, JHEP [**0504**]{} (2005) 001. G. Mandal, JHEP [**0508**]{} (2005) 052. L. Grant, L. Maoz, J. Marsano, K. Papadodimas, V. S. Rychkov, JHEP [**0508**]{} (2005) 025. Y. Takayama, A. Tsuchiya, JHEP [**0510**]{} (2005) 004. D. Berenstein, [*“Large N BPS states and emergent quantum gravity”*]{}, [hep-th/0507203]{}. L. Maoz, V. S. Rychkov, JHEP [**0508**]{} (2005) 096. P. J. Silva, JHEP [**0511**]{} (2005) 012. H. Lin, J. Maldacena, [*“Fivebranes from gauge theory”*]{}, [hep-th/0509235]{}. D. Bak, S. Siwach, H.-U. Yee, Phys.Rev. [**D72**]{} (2005) 086010. M. Spalinski, [*“Some half-BPS solutions of M-theory”*]{}, [hep-th/0506247]{}. M. A. Ganjali, [*“On Toda Equation and Half BPS Supergravity Solution in M-Theory”*]{}, [hep-th/0511145]{}. S. D. Mathur, Fortsch.Phys. [**53**]{} (2005) 793. J. T. Liu, D. Vaman, W. Y. Wen, [*“Bubbling 1/4 BPS solutions in type IIB and supergravity reductions on $S^n \times S^n$”*]{}, [hep-th/0412043]{}. D. Martelli, J. F. Morales, JHEP [**0502**]{} (2005) 048. Z.-W. Chong, H. Lu, C.N. Pope, Phys.Lett. [**B614**]{} (2005) 96. J. T. Liu, D. Vaman, [*“Bubbling 1/2 BPS solutions of minimal six-dimensional supergravity”*]{}, [hep-th/0412242]{}. M. Boni, P. J. Silva, JHEP [**0510**]{} (2005) 070. R. Emparan, H. S. Reall, Phys. Rev. Lett. [**88**]{} (2002) 101101. H. Elvang, R. 
Emparan, D. Mateos, H. S. Reall, Phys. Rev. Lett. [**93**]{} (2004) 211302. H. Elvang, R. Emparan, D. Mateos, H. S. Reall, Phys.Rev. [**D71**]{} (2005) 024033. J. P. Gauntlett, J. B. Gutowski, Phys.Rev. [**D71**]{} (2005) 045002. J. P. Gauntlett, J. B. Gutowski, C. M. Hull, S. Pakis, H. S. Reall, Class.Quant.Grav. [**20**]{} (2003) 4587. I. Bena, N. P. Warner, [*“One Ring to Rule Them All ... and in the Darkness Bind Them?”*]{}, [hep-th/0408106]{}. I. Bena, N. P. Warner, [*“Bubbling Supertubes and Foaming Black Holes”*]{}, [hep-th/0505166]{}. P. Berglund, E. G. Gimon and T. S. Levi, [*“Supergravity microstates for BPS black holes and black rings”*]{}, [hep-th/0505167]{}. I. Bena, P. Kraus, N. P. Warner, Phys.Rev. [**D72**]{} (2005) 084019. F. Denef, JHEP [**0008**]{} (2000) 050. F. Denef, B. Greene, M. Raugas, JHEP [**0105**]{} (2001) 012. B. Bates, F. Denef, [*“Exact solutions for supersymmetric stationary black hole composites”*]{}, [hep-th/0304094]{}. F. Denef, JHEP [**0210**]{} (2002) 023. K. Behrndt, G. L. Cardoso and S. Mahapatra, Nucl. Phys. [**B732**]{} (2006) 200. N. Ohta and P. K. Townsend, Phys. Lett. B [**418**]{} (1998) 77. B. Chen, H. Itoyama, T. Matsuo, K. Murakami, Nucl.Phys. [**B576**]{} (2000) 177. M. Mihailescu, I. Y. Park and T. A. Tran, Phys. Rev. D [**64**]{} (2001) 046006 E. Witten, JHEP [**0204**]{} (2002) 012. A. Fujii, Y. Imaizumi and N. Ohta, Nucl. Phys. B [**615**]{} (2001) 61. J. M. Pierre, Phys.Rev. [**D56**]{} (1997) 6710. A. Brandhuber, N. Itzhaki, J. Sonnenschein, S. Yankielowicz, Phys.Lett. [**B423**]{} (1998) 238. G. Lifschytz, Nucl.Phys. [**B520**]{} (1998) 105. E. Keski-Vakkuri, P. Kraus, Nucl.Phys. [**B510**]{} (1998) 199. A. Dhar, G. Mandal, Nucl.Phys. [**B531**]{} (1998) 256. J. Branco, Class.Quant.Grav. [**15**]{} (1998) 3739. A. Dhar, Nucl.Phys. [**B551**]{} (1999) 155. M. Billo’, P. Di Vecchia, M. Frau, A. Lerda, R. Russo, S. Sciuto, Mod.Phys.Lett. [**A13**]{} (1998) 2977.
[^1]: E-mail: imamura@hep-th.phys.s.u-tokyo.ac.jp
[^2]: Here we use the term “particles” in the four-dimensional sense. From the viewpoint of five-dimensional spacetime, it means two different kinds of objects: CG centers and rings.
[**Linear-in-Delta Lower Bounds in the [LOCAL]{} Model**]{}
**Mika Göös**\
Department of Computer Science, University of Toronto, Canada\
**Juho Hirvonen**\
Helsinki Institute for Information Technology HIIT,\
Department of Computer Science, University of Helsinki, Finland\
**Jukka Suomela**\
Helsinki Institute for Information Technology HIIT,\
Department of Computer Science, University of Helsinki, Finland\
**Abstract.** By prior work, there is a distributed algorithm that finds a maximal fractional matching (maximal edge packing) in $O(\Delta)$ rounds, where $\Delta$ is the maximum degree of the graph. We show that this is optimal: there is no distributed algorithm that finds a maximal fractional matching in $o(\Delta)$ rounds.
Our work gives the first linear-in-$\Delta$ lower bound for a natural graph problem in the standard model of distributed computing—prior lower bounds for a wide range of graph problems have been at best logarithmic in $\Delta$.
Introduction
============
This work settles the distributed time complexity of the maximal fractional matching problem as a function of $\Delta$, the maximum degree of the input graph.
By prior work [@astrand10vc-sc], there is a distributed algorithm that finds a maximal fractional matching (also known as a maximal edge packing) in $O(\Delta)$ communication rounds, independently of the number of nodes. In this work, we show that this is optimal: there is no distributed algorithm that finds a maximal fractional matching in $o(\Delta)$ rounds.
This is the first linear-in-$\Delta$ lower bound for a natural graph problem in the standard $\LOCAL$ model of distributed computing. It is also a step towards understanding the complexity of the non-fractional analogue, the maximal matching problem, which is a basic symmetry breaking primitive in the field of distributed graph algorithms. For many related primitives, the prior lower bounds in the $\LOCAL$ model have been at best logarithmic in $\Delta$.
Matchings
---------
Simple randomised distributed algorithms that find a maximal matching in time $O(\log n)$ have been known since the 1980s [@alon86fast; @israeli86matching; @luby86simple]. Currently, the fastest algorithms for computing a maximal matching are as follows:
- [**Dense graphs.**]{} There is a recent $O(\log\Delta + \log^4\log n)$-time randomised algorithm due to Barenboim et al. [@barenboim12locality]. The fastest known deterministic algorithm runs in time $O(\log^4 n)$ and is due to Ha[ń]{}[ć]{}kowiak et al. [@hanckowiak01distributed].
- [**Sparse graphs.**]{} There is an $O(\Delta + \log^*n)$-time deterministic algorithm due to Panconesi and Rizzi [@panconesi01some]. Here $\log^* n$ is the iterated logarithm of $n$, a very slowly growing function.
Our focus is on the sparse case. It is a long-standing open problem to either improve on the $O(\Delta+\log^*n)$-time algorithm of Panconesi and Rizzi, or prove it optimal by finding a matching lower bound. In fact, Linial’s [@linial92locality] seminal work already implies that $O(\Delta) + o(\log^* n)$ rounds is not sufficient. This leaves us with the following possibility (see Barenboim and Elkin [@barenboim13distributed Open Problem 10.6]):
Can maximal matchings be computed in time $o(\Delta)+O(\log^*n)$?
We conjecture that there are no such algorithms. The lower bound presented in this work builds towards proving conjectures of this form.
Fractional matchings
--------------------
While a matching associates a weight $0$ or $1$ with each edge of a graph, with $1$ indicating that the edge is in a matching, a fractional matching ([[FM]{}]{}) associates a weight between $0$ and $1$ with each edge. In both cases, the total weight of the edges incident to any given node has to be at most $1$.
Formally, let $G = (V,E)$ be a simple undirected graph and let $y\colon E \to [0,1]$ associate weights to the edges of $G$. Define, for each $v\in V$, $$y[v] := \sum_{e \in E: v \in e} y(e).$$ The function $y$ is called a *fractional matching*, or an [[FM]{}]{} for short, if $y[v] \le 1$ for each node $v$. A node $v$ is *saturated* if $y[v]=1$.
There are two interesting varieties of fractional matchings.
- [**Maximum weight.**]{} An [[FM]{}]{} $y$ is of *maximum weight*, if its total weight $\sum_{e\in E} y(e)$ is the maximum over all fractional matchings on $G$.
- [**Maximality.**]{} An [[FM]{}]{} $y$ is *maximal*, if each edge $e$ has at least one saturated endpoint $v \in e$.
See below for examples of (a) a maximum-weight [[FM]{}]{}, and (b) a maximal [[FM]{}]{}; the saturated nodes are highlighted.

#### Distributed complexity.
The distributed complexity of computing maximum-weight [[FM]{}]{}s is completely understood. It is easy to see that computing an exact solution requires time $\Omega(n)$ already on odd-length path graphs. If one settles for an approximate solution, then [[FM]{}]{}s whose total weight is at least a $(1-\epsilon)$-fraction of the maximum can be computed in time $O(\epsilon^{-1}\log\Delta)$ by the well-known results of Kuhn et al. [@kuhn04what; @kuhn06price; @kuhn10local]. This is optimal: Kuhn et al. also show that any constant-factor approximation of maximum-weight [[FM]{}]{}s requires time $\Omega(\log\Delta)$.
By contrast, the complexity of computing maximal [[FM]{}]{}s has not been understood. A maximal [[FM]{}]{} is a $1/2$-approximation of a maximum-weight [[FM]{}]{}, so the results of Kuhn et al. imply that finding a maximal [[FM]{}]{} requires time $\Omega(\log \Delta)$, but this lower bound is exponentially smaller than the $O(\Delta)$ upper bound [@astrand10vc-sc].
Contributions
-------------
We prove that the $O(\Delta)$-time algorithm [@astrand10vc-sc] for maximal fractional matchings is optimal:
\[thm:main\] There is no $\LOCAL$ algorithm that finds a maximal [[FM]{}]{} in $o(\Delta)$ rounds.
To our knowledge, this is the first linear-in-$\Delta$ lower bound in the $\LOCAL$ model for a classical graph problem. Indeed, prior lower bounds have typically fallen in one of the following categories:
- they are logarithmic in $\Delta$ [@kuhn04what; @kuhn06price; @kuhn10local],
- they analyse the complexity as a function of $n$ for a fixed $\Delta$ [@czygrinow08fast; @floreen11max-min-lp; @goos12local-approximation; @goos12bipartite-vc; @lenzen08leveraging; @linial92locality; @naor95what],
- they only hold in a model that is strictly weaker than $\LOCAL$ [@hirvonen12maximal-matching; @kuhn06complexity].
We hope that our methods can eventually be extended to analyse algorithms (e.g., for maximal matching) whose running times depend mildly on $n$.
The LOCAL model {#ssec:localmodel}
---------------
Our result holds in the standard $\LOCAL$ model of distributed computing [@linial92locality; @peleg00distributed]. For now, we only recall the basic setting; see Section \[sec:tools\] for precise definitions.
In the $\LOCAL$ model an input graph $G = (V,E)$ defines both the problem instance and the structure of the communication network. Each node $v \in V$ is a computer and each edge $\{u,v\} \in E$ is a communication link through which nodes $u$ and $v$ can exchange messages. Initially, each node is equipped with a *unique identifier* and, if we study randomised algorithms, a source of randomness. In each *communication round*, each node in parallel (1) sends a message to each neighbour, (2) receives a message from each neighbour, and (3) updates its local state. Eventually, all nodes have to stop and announce their local outputs—in our case the local output of a node $v \in V$ is an encoding of the weight $y(e)$ for each edge $e$ incident to $v$. The *running time* $t$ of the algorithm is the number of communication rounds until all nodes have stopped. We call an algorithm *strictly local*, or simply *local*, if $t=t(\Delta)$ is only a function of $\Delta$, i.e., independent of $n$.
The $\LOCAL$ model is the strongest model commonly in use—in particular, the size of each message and the amount of local computation in each communication round are unbounded—and this makes *lower bounds* in this model very widely applicable.
Overview
========
The maximal [[FM]{}]{} problem is an example of a *locally checkable* problem: there is a local algorithm that can check whether a proposed function $y$ is a feasible solution.
It is known that randomness does not help a local algorithm in solving a locally checkable problem [@naor95what]: if there is a $t(\Delta)$-time worst-case randomised algorithm, then there is a $t(\Delta)$-time deterministic algorithm (see Appendix \[app:randomness\]). Thus, we need only prove our lower bound for deterministic algorithms.
Deterministic models {#ssec:models}
--------------------
Our lower bound builds on a long line of prior research. During the course of the proof, we will visit each of the following deterministic models (see Figure \[fig:models\]), whose formal definitions are given in Section \[sec:tools\].
- *Deterministic $\LOCAL$.* Each node has a unique identifier [@peleg00distributed; @linial92locality]. This is the standard model in the field of deterministic distributed algorithms.
- *Order-invariance.* The output of an algorithm is not allowed to change if we relabel the nodes while preserving the relative order of the labels [@naor95what]. Equivalently, the algorithm can only compare the identifiers, not access their numerical value.
- *Port numbering and orientation.* For each node, there is an ordering on the incident edges, and all edges carry an orientation [@mayer95local].
- *Edge colouring.* A proper edge colouring with $O(\Delta)$ colours is given [@hirvonen12maximal-matching].
The models are listed here roughly in the order of decreasing strength. For example, the $\ID$ model is strictly stronger than $\OI$, which is strictly stronger than $\PO$. However, the $\EC$ model is not directly comparable: there are problems that are trivial to solve in $\ID$, $\OI$, and $\PO$ but impossible to solve in $\EC$ with any deterministic algorithm (example: graph colouring in $1$-regular graphs); there are also problems that can be solved with a local algorithm in $\EC$ but they do not admit a local algorithm in $\ID$, $\OI$, or $\PO$ (example: maximal matching).
![Deterministic models that are discussed in this work.[]{data-label="fig:models"}](figs.pdf)
Proof outline
-------------
In short, our proof is an application of techniques that were introduced in two of our earlier works [@hirvonen12maximal-matching; @goos12local-approximation]. Accordingly, our proof is in two steps.
#### A weak lower bound.
In our prior work [@hirvonen12maximal-matching] we showed that *maximal matchings* cannot be computed in time $o(\Delta)$ in the weak $\EC$ model. The lower-bound construction there is a regular graph, and as such, tells us very little about the fractional matching problem, since maximal fractional matchings are trivial to compute in regular graphs.
Nevertheless, we use a similar *unfold-and-mix* argument on what will be called *loopy $\EC$-graphs* to prove the following intermediate result in Section \[sec:lb-in-ec\]:
\[step:one\] The maximal [[FM]{}]{} problem cannot be solved in time $o(\Delta)$ on loopy $\EC$-graphs.
The proof heavily exploits the limited symmetry breaking capabilities of the $\EC$ model. To continue, we need to argue that similar limitations exist in the $\ID$ model.
#### Strengthening the lower bound.
To extend the lower bound to the $\ID$ model, we give a series of local simulation results $$\EC \leadsto \PO \leadsto \OI \leadsto \ID,$$ which state that a local algorithm for the maximal fractional matching problem in one model can be simulated fast in the model preceding it. That is, even though the models $\EC$, $\PO$, $\OI$, and $\ID$ are generally very different, we show that the models are roughly equally powerful for computing a maximal fractional matching.
This part of the argument applies ideas from another prior work [@goos12local-approximation]. There, we showed that, for a large class of optimisation problems, a run-time preserving simulation $\PO\leadsto\ID$ exists. Unfortunately, the maximal fractional matching problem is not included in the scope of this result (fractional matchings are not *simple* in the sense of [@goos12local-approximation]), so we may not apply this result directly in a black-box fashion. In addition, this general result does not hold for the $\EC$ model.
Nevertheless, we spend Section \[sec:simulations\] extending the methods of [@goos12local-approximation] and show that they can be tailored to the case of fractional matchings:
\[step:two\] If the maximal [[FM]{}]{} problem can be solved in time $t(\Delta)$ on $\ID$-graphs, then it can be solved in time $t(\Theta(\Delta))$ on loopy $\EC$-graphs.
In combination with Step \[step:one\], this proves Theorem \[thm:main\].
Tools of the Trade {#sec:tools}
==================
Before we dive into the lower-bound proof, we recall the definitions of the four models mentioned in Section \[ssec:models\], and describe the standard tools that are used in their analysis.
Locality
--------
Distributed algorithms are typically described in terms of networked state machines: the nodes of a network exchange messages for $t$ synchronous communication rounds after which they produce their local outputs (cf. Section \[ssec:localmodel\]).
Instead, for the purposes of our lower-bound analysis, we view an algorithm ${\ensuremath{\mathcal{A}}}$ simply as a function that associates to each pair $(G,v)$ an output ${\ensuremath{\mathcal{A}}}(G,v)$ in a way that respects *locality*. That is, an algorithm ${\ensuremath{\mathcal{A}}}$ is said to have run-time $t$, if the output ${\ensuremath{\mathcal{A}}}(G,v)$ depends only on the information that is available in the radius-$t$ neighbourhood around $v$. More formally, define $$\tau_t(G,v) \subseteq (G,v)$$ as consisting of the nodes and edges of $G$ that are within distance $t$ from $v$—the distance of an edge $\{u,w\}$ from $v$ is $\min\{\operatorname{dist}(v,u),\operatorname{dist}(v,w)\}+1$. A $t$-time algorithm ${\ensuremath{\mathcal{A}}}$ is then a mapping that satisfies $$\label{eq:locality}
{\ensuremath{\mathcal{A}}}(G,v) = {\ensuremath{\mathcal{A}}}(\tau_t(G,v)).$$
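For illustration, the neighbourhood $\tau_t(G,v)$ can be computed by a breadth-first search; the sketch below (ours, for simple graphs given as adjacency lists) uses the stated convention for edge distances, under which a loop at $v$ lies at distance $1$ from $v$:

```python
from collections import deque

def tau(adj, v, t):
    """Radius-t view around v: nodes at distance <= t, together with edges
    {u, w} at distance min(dist(v,u), dist(v,w)) + 1 <= t."""
    dist = {v: 0}
    queue = deque([v])
    while queue:                          # plain BFS for node distances
        u = queue.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                queue.append(w)
    big = 10**9                           # unreachable nodes
    nodes = {u for u, d in dist.items() if d <= t}
    edges = {frozenset((u, w)) for u in adj for w in adj[u]
             if min(dist.get(u, big), dist.get(w, big)) + 1 <= t}
    return nodes, edges

# Path 1-2-3-4, viewed from node 1 with radius 1.
path = {1: [2], 2: [1, 3], 3: [2, 4], 4: [3]}
nodes, edges = tau(path, 1, 1)
```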
The information contained in $\tau_t(G,v)$ depends on which of the models $\EC$, $\PO$, $\OI$, and $\ID$ we are studying. For each model we define an associated graph class.
Identifier-based networks
-------------------------
#### $\ID$-graphs.
An *$\ID$-graph* is simply a graph $G$ whose nodes are assigned unique identifiers; namely, $V(G)\subseteq {\ensuremath{\mathbb{N}}}$. Any mapping ${\ensuremath{\mathcal{A}}}$ satisfying (\[eq:locality\]) is a $t$-time $\ID$-algorithm.
#### $\OI$-graphs.
An *$\OI$-graph* is an ordered graph $(G,\preceq)$ where $\preceq$ is a linear order on $V(G)$. An $\OI$-algorithm ${\ensuremath{\mathcal{A}}}$ operates on $\OI$-graphs in such a way that if $(G,\preceq,v)$ and $(G',\preceq',v')$ are isomorphic (as ordered structures), then ${\ensuremath{\mathcal{A}}}(G,\preceq,v)={\ensuremath{\mathcal{A}}}(G',\preceq',v')$.
Every $\ID$-graph $G$ is naturally an $\OI$-graph $(G,\leq)$ under the usual order $\leq$ on ${\ensuremath{\mathbb{N}}}$. In the converse direction, we often convert an $\OI$-graph $(G,\preceq)$ into an $\ID$-graph by specifying an $\ID$-assignment $\varphi\colon V(G)\to{\ensuremath{\mathbb{N}}}$ that *respects* $\preceq$ in the sense that $v\preceq u$ implies $\varphi(v)\leq \varphi(u)$. The resulting $\ID$-graph is denoted $\varphi(G)$.
Anonymous networks {#ssec:anonymous}
------------------
On anonymous networks the nodes do not have identifiers. The only symmetry breaking information is now provided in an *edge colouring* of a suitable type. This means that whenever there is an isomorphism between $(G,v)$ and $(G',v')$ that preserves edge colours, we will have ${\ensuremath{\mathcal{A}}}(G,v)={\ensuremath{\mathcal{A}}}(G',v')$.
#### $\EC$-graphs.
An *$\EC$-graph* carries a proper edge colouring $E(G)\to \{1,\ldots,k\}$, where $k=O(\Delta)$. That is, if two edges are adjacent, they have distinct colours.
#### $\PO$-graphs.
A *$\PO$-graph* is a directed graph whose edges are coloured in the following way: if $(u,v)$ and $(u,w)$ are outgoing edges incident to $u$, then they have distinct colours; and if $(v,u)$ and $(w,u)$ are incoming edges incident to $u$, then they have distinct colours. Thus, we may have $(v,u)$ and $(u,w)$ coloured the same.
We find it convenient to treat $\PO$-graphs as edge-coloured digraphs, even if this view is nonstandard. Usually, $\PO$-graphs are defined as digraphs with a *port numbering*, i.e., each node is given an ordering of its neighbours. This is equivalent to our definition: A port numbering gives rise to an edge colouring where an edge $(u,v)$ is coloured with $(i,j)$ if $v$ is the $i$-th neighbour of $u$ and $u$ is the $j$-th neighbour of $v$ (see Figure \[fig:podef\]a). Conversely, we can derive a port numbering from an edge colouring—first take all outgoing edges ordered by the edge colours, and then take all incoming edges ordered by the edge colours (Figure \[fig:podef\]b).
![Two equivalent definitions of $\PO$-graphs: ($\PO_1$) a node of degree $d$ can refer to incident edges with labels $1,2,\dotsc,d$; ($\PO_2$) edges are coloured so that incoming edges have distinct colours and outgoing edges have distinct colours.[]{data-label="fig:podef"}](figs.pdf)
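The translation from port numbers to edge colours described above is a purely local computation; a small sketch (illustrative code of ours, not from the original):

```python
def ports_to_colours(ports, edges):
    """ports[v] is v's ordered neighbour list (the port numbering);
    edges is a set of directed edges (u, v). Edge (u, v) gets colour
    (i, j) where v is u's i-th neighbour and u is v's j-th neighbour."""
    return {(u, v): (ports[u].index(v) + 1, ports[v].index(u) + 1)
            for (u, v) in edges}

# Directed triangle 1 -> 2 -> 3 -> 1 with some port numbering.
colour = ports_to_colours({1: [2, 3], 2: [3, 1], 3: [1, 2]},
                          {(1, 2), (2, 3), (3, 1)})
# Outgoing edges at a node get distinct colours (distinct first components),
# and incoming edges get distinct colours (distinct second components).
```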
We are not done with defining $\EC$ and $\PO$ algorithms. We still need to restrict their power by requiring that their outputs are *invariant under graph lifts*, as defined next.
Lifts
-----
A graph $H$ is said to be a *lift* of another graph $G$ if there exists an onto graph homomorphism $\alpha\colon V(H)\to V(G)$ that is a *covering map*, i.e., $\alpha$ preserves node degrees, $\deg_H(v) = \deg_G(\alpha(v))$. Our discussion of lifts always takes place in either $\EC$ or $\PO$; in this context we require that a covering map preserves edge colours.

The defining characteristic of anonymous models is that the output of an algorithm is invariant under taking lifts. That is, if $\alpha\colon V(H)\to V(G)$ is a covering map, then $$\label{eq:lift}
{\ensuremath{\mathcal{A}}}(H,v) = {\ensuremath{\mathcal{A}}}(G,\alpha(v)),\qquad\text{for each}\ v\in V(H).$$ Since an isomorphism between $H$ and $G$ is a special case of a covering map, the condition (\[eq:lift\]) generalises the discussion in Section \[ssec:anonymous\]. We will be exploiting this limitation extensively in analysing the models $\EC$ and $\PO$.
Graphs are partially ordered by the *lift* relation. For any connected graph $G$, there are two graphs $U_G$ and $F_G$ of special interest that are related to $G$ via lifts.
#### Universal cover $U_G$.
The *universal cover* $U_G$ of $G$ is an unfolded tree-like version of $G$. More precisely, $U_G$ is the unique tree that is a lift of $G$. Thus, if $G$ is a tree, $U_G = G$; if $G$ has cycles, $U_G$ is infinite. In passing from $G$ to $U_G$ we lose all the cycle structure that is present in $G$. The universal cover is often used to model the information that a distributed algorithm—even with unlimited running time—is able to collect on an anonymous network [@angluin80local].
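A depth-limited portion of $U_G$, which is all that a $t$-time algorithm can see, can be generated from non-backtracking walks; the following sketch (ours, for simple uncoloured graphs) identifies each node of $U_G$ with the walk leading to it:

```python
def universal_cover(adj, root, depth):
    """Depth-limited universal cover of a simple graph: nodes are the
    non-backtracking walks from root of length <= depth, and each walk
    is adjacent to its own prefix and its one-step extensions."""
    walks, frontier = [(root,)], [(root,)]
    for _ in range(depth):
        nxt = [w + (u,) for w in frontier for u in adj[w[-1]]
               if len(w) < 2 or u != w[-2]]   # forbid immediate backtracking
        walks += nxt
        frontier = nxt
    edges = {frozenset((w, w[:-1])) for w in walks if len(w) >= 2}
    return walks, edges

# Unfolding a triangle to depth 2: the cycle disappears, leaving a
# path on 5 nodes (a finite piece of the infinite universal cover).
walks, edges = universal_cover({0: [1, 2], 1: [0, 2], 2: [0, 1]}, 0, 2)
```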

#### Factor graph $F_G$.
The *factor graph* $F_G$ of $G$ is the smallest graph $F$ such that $G$ is a lift of $F$; see Figure \[fig:factorgraph\]. In general, $F_G$ is a multigraph with loops and parallel edges. It is the most concise representation of all the global symmetry breaking information available in $G$. For example, in the extreme case when $G$ is vertex-transitive, $F_G$ consists of just one node and some loops.
![Factor graphs and loops. We follow the convention that undirected loops in $\EC$-graphs count as a single incident edge, while directed loops in $\PO$-graphs count as two incident edges: an incoming edge and an outgoing edge. In this example, both $u$ and its preimage $u'$ are nodes of degree $2$; they are incident to one edge of colour $1$ and one edge of colour $2$. Both $v$ and its preimage $v'$ are nodes of degree $3$; they are incident to two outgoing edges of colours $1$ and $2$, and one incoming edge of colour $1$.[]{data-label="fig:factorgraph"}](figs.pdf)
Even though we want our input graphs always to be simple, we may still analyse $\EC$ and $\PO$-algorithms ${\ensuremath{\mathcal{A}}}$ on multigraphs $F$ with the understanding that the output ${\ensuremath{\mathcal{A}}}(F,v)$ is interpreted according to (\[eq:lift\]). That is, to determine ${\ensuremath{\mathcal{A}}}(F,v)$, do the following:
1. Lift $F$ to a simple graph $G$ (e.g., take $G=U_F$) via some $\alpha\colon V(G)\to V(F)$.
2. Execute ${\ensuremath{\mathcal{A}}}$ on $(G,u)$ for some $u\in\alpha^{-1}(v)$.
3. Interpret the output of $u$ as an output of $v$.
In what follows we refer to multigraphs simply as graphs.
Loops {#sec:loops}
-----
In $\EC$-graphs, a single loop on a node contributes $+1$ to its degree, whereas in $\PO$-graphs, a single (directed) loop contributes $+2$ to the degree, once for the tail and once for the head. This is reflected in the way we draw loops—see Figure \[fig:factorgraph\].
The loop count on a node $v\in V(G)$ measures the inability of $v$ to break local symmetries. Indeed, if $v$ has $\ell$ loops, then in any simple lift $H$ of $G$ each node $u\in V(H)$ that is mapped to $v$ by the covering map will have $\ell$ distinct neighbours $w_1,\ldots,w_\ell$ that, too, get mapped to $v$. Thus, an anonymous algorithm is forced to output the same on $u$ as on each of $w_1,\ldots,w_\ell$.
We consider loops as an important resource.
An edge-coloured graph $G$ is called *$k$-loopy* if each node in $F_G$ has at least $k$ loops. A graph is simply *loopy* if it is $1$-loopy.
When computing maximal fractional matchings on a loopy graph $G$, an anonymous algorithm must saturate all the nodes. For suppose not. If $v\in V(G)$ is a node that does not get saturated, the loopiness of $G$ implies that $v$ has a neighbour $u$ (possibly $u=v$ via a loop) that produces the same output as $v$. But now neither endpoint of $\{u,v\}$ is saturated, which contradicts maximality; see Figure \[fig:ec-saturate\]. We record this observation.
![$\EC$-graph $G$ is loopy. Assume that an $\EC$-algorithm ${\ensuremath{\mathcal{A}}}$ produces an output in which node $v$ is unsaturated. Then we can construct a simple $\EC$-graph $H$ that is a lift of $G$ via $\alpha\colon V(H) \to V(G)$ such that $\alpha(v_1) = \alpha(v_2) = v$ and $\{v_1,v_2\} \in E(H)$. If we apply ${\ensuremath{\mathcal{A}}}$ to $H$, both $v_1$ and $v_2$ are unsaturated; hence ${\ensuremath{\mathcal{A}}}$ fails to produce a maximal [[FM]{}]{}.[]{data-label="fig:ec-saturate"}](figs.pdf)
\[lem:ec-saturate\] Any $\EC$-algorithm for the maximal [[FM]{}]{} problem computes a fully saturated [[FM]{}]{} on a loopy $\EC$-graph.
Lower Bound in EC {#sec:lb-in-ec}
=================
In this section we carry out Step \[step:one\] of our lower-bound plan. To do this, we extend the previous lower-bound result [@hirvonen12maximal-matching] to the case of maximal fractional matchings.
Strategy
--------
Let ${\ensuremath{\mathcal{A}}}$ be any $\EC$-algorithm computing a maximal fractional matching. We construct inductively a sequence of $\EC$-graph pairs $$(G_i,H_i),\quad i=0,1,\ldots,\Delta-2,$$ that witness ${\ensuremath{\mathcal{A}}}$ having run-time greater than $i$. Each of the graphs $G_i$ and $H_i$ will have maximum degree at most $\Delta$, so for $i=\Delta-2$, we will have the desired lower bound. More precisely, we show that there are nodes $g_i\in V(G_i)$ and $h_i\in V(H_i)$ satisfying the following property:
1. The $i$-neighbourhoods $\tau_i(G_i,g_i)$ and $\tau_i(H_i,h_i)$ are isomorphic—yet, $${\ensuremath{\mathcal{A}}}(G_i,g_i) \neq {\ensuremath{\mathcal{A}}}(H_i,h_i).$$ Moreover, there is a loop of some colour $c_i$ adjacent to both $g_i$ and $h_i$ such that the outputs disagree on its weight.
We will also make use of the following additional properties in the construction:
2. The graphs $G_i$ and $H_i$ are $(\Delta-1-i)$-loopy—consequently, ${\ensuremath{\mathcal{A}}}$ will saturate all their nodes by Lemma \[lem:ec-saturate\].
3. When the loops are ignored, both $G_i$ and $H_i$ are trees.
Base case ($i = 0$)
-----------------
Let $G_0$ consist of a single node $v$ that has $\Delta$ differently coloured loops. When ${\ensuremath{\mathcal{A}}}$ is run on $G_0$, it saturates $v$ by assigning at least one loop $e$ a non-zero weight; see Figure \[fig:ecbasecase\]. Letting $H_0 := G_0-e$, it is now easy to check that the pair $(G_0,H_0)$ satisfies (P1–P3) for $g_0=h_0=v$. For example, $\tau_0(G_0,v)\cong\tau_0(H_0,v)$ holds only because we consider the loops to be at distance $1$ from $v$.
![Base case. By removing a loop $e$ with a non-zero weight, we force the algorithm to change the weight of at least one edge that is present in both $G_0$ and $H_0$.[]{data-label="fig:ecbasecase"}](figs.pdf)
Inductive step
--------------
Suppose $(G_i,H_i)$ is a pair satisfying (P1–P3). For convenience, we write $G$, $H$, $g$, $h$, and $c$ in place of $G_i$, $H_i$, $g_i$, $h_i$, and $c_i$. Also, we let $e\in E(G)$ and $f\in E(H)$ be the colour-$c$ loops adjacent to $g$ and $h$ to which ${\ensuremath{\mathcal{A}}}$ assigns different weights.
To construct the pair $(G_{i+1},H_{i+1})$, we unfold and mix; see Figure \[fig:ecunfoldmix\].
#### Unfolding.
First, we unfold the loop $e$ in $G$ to obtain a 2-lift ${G\mspace{-1mu}G}$ of $G$. That is, ${G\mspace{-1mu}G}$ consists of two disjoint copies of $G-e$ and a new edge of colour $c$ (which we still call $e$) that connects the two copies of $g$ in ${G\mspace{-1mu}G}$. For notational purposes, we fix some identification $V(G)\subseteq V({G\mspace{-1mu}G})$ so that we can easily talk about one of the copies. Similarly, we construct a 2-lift ${H\mspace{-3mu}H}$ of $H$ by unfolding the loop $f$.
Recall that ${\ensuremath{\mathcal{A}}}$ cannot tell apart $G$ from ${G\mspace{-1mu}G}$, or $H$ from ${H\mspace{-3mu}H}$. In particular, ${\ensuremath{\mathcal{A}}}$ continues to assign unequal weights to $e$ and $f$ in these lifts.
![Unfold and mix. The weights of $e$ and $f$ differ; hence the weight of $\{g,h\}$ is different from the weight of $e$ or $f$.[]{data-label="fig:ecunfoldmix"}](figs.pdf)
#### Mixing.
Next, we mix together the graphs ${G\mspace{-1mu}G}$ and ${H\mspace{-3mu}H}$ to obtain a graph ${G\mspace{-2mu}H}$ defined as follows: ${G\mspace{-2mu}H}$ contains a copy of $G-e$, a copy of $H-f$, and a new colour-$c$ edge that connects the nodes $g$ and $h$. For notational purposes, we let $V({G\mspace{-2mu}H}) := V(G)\cup V(H)$, where we tacitly assume that $V(G)\cap V(H) = \varnothing$.
#### Analysis.
Consider the weight that ${\ensuremath{\mathcal{A}}}$ assigns to the colour-$c$ edge $\{g,h\}$ in ${G\mspace{-2mu}H}$. Since ${\ensuremath{\mathcal{A}}}$ gives the edges $e$ and $f$ different weights in ${G\mspace{-1mu}G}$ and ${H\mspace{-3mu}H}$, the weight of $\{g,h\}$ must differ from the weight of $e$ or from the weight of $f$ (or both). We assume the former (the latter case is analogous), and argue that the pair $$(G_{i+1},H_{i+1}) := ({G\mspace{-1mu}G},{G\mspace{-2mu}H})$$ satisfies the properties (P1–P3). It is easy to check that (P2) and (P3) are satisfied by the construction; it remains to find the nodes $g_{i+1}\in V({G\mspace{-1mu}G})$ and $h_{i+1}\in V({G\mspace{-2mu}H})$ that satisfy (P1).
To this end, we exploit the following property of fractional matchings:
Let $y$ and $y'$ be fractional matchings that saturate a node $v$. If $y$ and $y'$ disagree on some edge incident to $v$, there must be another edge incident to $v$ where $y$ and $y'$ disagree.
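The principle holds because the incident weights of a saturated node sum to exactly $1$ in both matchings, so a single changed weight cannot preserve the sum. A small numeric illustration (ours, not from the original):

```python
def disagreements_at(v, incident, y1, y2, eps=1e-9):
    """Incident edges of v on which the two matchings put different weights."""
    return {e for e in incident[v] if abs(y1[e] - y2[e]) > eps}

# Path a-b-c-d; both weightings saturate the inner nodes b and c.
incident = {'a': ['ab'], 'b': ['ab', 'bc'], 'c': ['bc', 'cd'], 'd': ['cd']}
y1 = {'ab': 0.5, 'bc': 0.5, 'cd': 0.5}
y2 = {'ab': 0.3, 'bc': 0.7, 'cd': 0.3}
# b is saturated in both, so the disagreement on 'ab' forces one on 'bc';
# at the saturated node c it then propagates further, to 'cd'.
```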
Our idea is to apply this principle in a fully saturated graph, where the disagreements propagate until they are resolved at a loop; this is where we locate $g_{i+1}$ and $h_{i+1}$. See Figure \[fig:ecpropagation\] for an example.
We consider the following fully saturated fractional matchings on $G$: $$\begin{aligned}
y\ &=\ \text{the {{\small FM}\xspace}determined by ${\ensuremath{\mathcal{A}}}$'s output on the nodes $V(G)$ in ${G\mspace{-1mu}G}$}, \\[-3pt]
y'\ &=\ \text{the {{\small FM}\xspace}determined by ${\ensuremath{\mathcal{A}}}$'s output on the nodes $V(G)$ in ${G\mspace{-2mu}H}$}.\end{aligned}$$ Starting at the node $g\in V(G)$ we already know by assumption that $y$ and $y'$ disagree on the colour-$c$ edge incident to $g$. Thus, by the propagation principle, $y$ and $y'$ disagree on some other edge incident to $g$. If this edge is not a loop, it connects to a neighbour $g'\in V(G)$ of $g$ and the argument can be continued: because $y$ and $y'$ disagree on $\{g,g'\}$, there must be another edge incident to $g'$ where $y$ and $y'$ disagree, and so on. Since $G$ does not have any cycles (apart from the loops), this process has to terminate at some node $g^*\in V(G)$ such that $y$ and $y'$ disagree on a loop $e^*\neq e$ incident to $g^*$. Note that $e^*$ is a loop in both ${G\mspace{-1mu}G}$ and ${G\mspace{-2mu}H}$, too. Thus, we have found our candidate $g_{i+1} = h_{i+1} = g^*$.
![Propagation. The weights of $e$ and $\{g,h\}$ differ. We apply the propagation principle towards the common part $G$ that is shared by ${G\mspace{-1mu}G}$ and ${G\mspace{-2mu}H}$. The graphs are loopy and hence all nodes are saturated by ${\ensuremath{\mathcal{A}}}$; we will eventually find a loop $e^*$ that is present in both ${G\mspace{-1mu}G}$ and ${G\mspace{-2mu}H}$, with different weights.[]{data-label="fig:ecpropagation"}](figs.pdf)
To finish the proof, we need to show that $$\label{eq:neigh-cong}
\tau_{i+1}({G\mspace{-1mu}G},g^*) \cong \tau_{i+1}({G\mspace{-2mu}H},g^*).$$ The critical case is when $g^* = g$, as this node is the closest node of $V(G)$ to the topological differences between the graphs ${G\mspace{-1mu}G}$ and ${G\mspace{-2mu}H}$. Starting from $g$ and stepping along the colour-$c$ edge towards the differences, we arrive, in ${G\mspace{-1mu}G}$, at a node $\hat{g}$ that is a copy of $g\in V(G)$, and in ${G\mspace{-2mu}H}$, at the node $h$. But these nodes satisfy $$\tau_i({G\mspace{-1mu}G},\hat{g})\cong \tau_i({G\mspace{-2mu}H},h)$$ by our induction assumption. Using this, (\[eq:neigh-cong\]) follows.
Local Simulations {#sec:simulations}
=================
Now that we have an $\Omega(\Delta)$ time lower bound in the $\EC$ model, our next goal is to extend this result to the $\ID$ model. In this section we implement Step \[step:two\] of our plan and give a series of local simulations $$\EC \leadsto \PO \leadsto \OI \leadsto \ID.$$ Here, each simulation preserves the running time of an algorithm up to a constant factor. In particular, together with Step \[step:one\], this will imply the $\Omega(\Delta)$ time lower bound in the $\ID$ model.
Simulation EC to PO {#sec:ec-simulates-po}
-------------------
We start with the easiest simulation. Suppose there is a $t$-time $\PO$-algorithm for the maximal fractional matching problem on graphs of maximum degree $\Delta$; we describe a $t$-time $\EC$-algorithm for graphs of maximum degree $\Delta/2$.
The local simulation is simple; see Figure \[fig:ecpo\]. On input an $\EC$-graph $G$, we interpret each edge $\{u,v\}$ of colour $c$ as two directed edges $(u,v)$ and $(v,u)$, both of colour $c$; this interpretation makes $G$ into a $\PO$-graph $G_\leftrightarrows$. We can now locally simulate the $\PO$-algorithm on $G_\leftrightarrows$ to obtain an [[FM]{}]{} $y$ as output. Finally, we transform $y$ back to an [[FM]{}]{} of $G$: the edge $\{u,v\}$ is assigned weight $y(u,v) + y(v,u)$.
![$\EC\leadsto\PO$. Mapping an $\EC$-graph $G$ into a $\PO$-graph $G_\leftrightarrows$, and mapping the output of a $\PO$-algorithm back to the original graph.[]{data-label="fig:ecpo"}](figs.pdf)
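The simulation and the fold-back of the weights can be sketched as follows (illustrative code of ours; note that each undirected edge contributes two directed edges, which is why the degree bound halves):

```python
def ec_via_po(ec_edges, po_algorithm):
    """ec_edges: {(u, v): colour}, one key per undirected edge {u, v}.
    Replace each edge by two directed copies of the same colour, run the
    PO-algorithm, and fold the output: y{u,v} = y(u,v) + y(v,u)."""
    directed = {}
    for (u, v), c in ec_edges.items():
        directed[(u, v)] = c
        directed[(v, u)] = c
    y = po_algorithm(directed)
    return {(u, v): y[(u, v)] + y[(v, u)] for (u, v) in ec_edges}

# Toy PO-"algorithm" on a single edge: weight 1/2 per direction; the
# folded weight saturates both endpoints of the undirected edge.
folded = ec_via_po({(1, 2): 1}, lambda d: {e: 0.5 for e in d})
```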
Tricky identifiers {#sec:tricky-ids}
------------------
When we are computing a maximal fractional matching $y\colon E(G)\to[0,1]$, we have, a priori, infinitely many choices for the weight $y(e)$ of an edge. For example, in a path on nodes $v_1$, $v_2$, and $v_3$, we can freely choose $y(\{v_1,v_2\})\in[0,1]$ provided we set $y(\{v_2,v_3\})=1-y(\{v_1,v_2\})$. In particular, an $\ID$-algorithm can output edge weights that depend on the node identifiers whose magnitude is not bounded.
Unbounded outputs are tricky from the perspective of proving lower bounds. The main result of the recent work [@goos12local-approximation] is a run-time preserving local simulation $\PO\leadsto \ID$, but the result only holds under the assumption that the solution can be encoded using finitely many values per node on graphs of maximum degree $\Delta$. This restriction has its source in an earlier local simulation $\OI\leadsto\ID$ due to Naor and Stockmeyer [@naor95what] that crucially uses Ramsey’s theorem. In fact, these two local simulation results fail if unbounded outputs are allowed; counterexamples include even natural graph problems [@hasemann12scheduling].
In conclusion, we need an ad hoc argument to establish that an $\ID$-algorithm cannot benefit from unique identifiers in case of the maximal fractional matching problem.
Simulation PO to OI {#sec:po-simulates-oi}
-------------------
Before we address the question of simulating $\ID$-algorithms, we first salvage one part of the result in [@goos12local-approximation]: there is a local simulation $\PO\leadsto\OI$ that applies to many locally checkable problems, regardless of the size of the output encoding. Even though this simulation works off-the-shelf in our present setting, we cannot use this result in a black-box fashion, as we need to access its inner workings later in the analysis. Thus, we proceed with a self-contained proof.
The following presentation is considerably simpler than that in [@goos12local-approximation], since we are only interested in a simulation that produces a *locally maximal* fractional matching, not in a simulation that also provides approximation guarantees on the *total weight*, as does the original result.
#### $\PO$-checkability.
Maximal fractional matchings are not only locally checkable, but also *$\PO$-checkable*: there is a local $\PO$-algorithm that can check whether a given $y$ is a maximal [[FM]{}]{}. An important consequence of $\PO$-checkability is that if $H$ is a lift of $G$, then any $\PO$-algorithm produces a feasible solution on $H$ if and only if it produces a feasible solution on $G$.
#### Order homogeneity.
The key to the simulation $\PO\leadsto\OI$ is a *canonical linear order* that can be computed for any tree-like $\PO$-neighbourhood. To define this ordering, let $d$ denote the maximum number of edge colours appearing in the input $\PO$-graphs that have maximum degree $\Delta$, and let $T$ denote the infinite $2d$-regular $d$-edge-coloured $\PO$-tree. We fix a homogeneous linear order for $T$:
\[lem:tree-order\] There is a linear order $\preceq$ on $V(T)$ such that all the ordered neighbourhoods $(T,\preceq,v)$, $v\in V(T)$, are pairwise isomorphic (i.e., up to any radius).
For a proof, see Appendix \[app:tree-order\].
#### Simulation.
Let ${\ensuremath{\mathcal{A}}}_\OI$ be any $t$-time $\OI$-algorithm solving a $\PO$-checkable problem; we describe a $t$-time $\PO$-algorithm ${\ensuremath{\mathcal{A}}}_\PO$ solving the same problem.
The algorithm ${\ensuremath{\mathcal{A}}}_\PO$ operates on a $\PO$-graph $G$ as follows; see Figure \[fig:po-simulates-oi\]. Given a $\PO$-neighbourhood $\tau:=\tau_t(U_G,v)$, we first embed $\tau$ in $T$: we choose an arbitrary node $u\in V(T)$, identify $v$ with $u$, and let the rest of the embedding $\tau\subseteq (T,u)$ be dictated uniquely by the edge colours. We then use the ordering $\preceq$ inherited from $T$ to order the nodes of $\tau$. By Lemma \[lem:tree-order\], the resulting structure $(\tau,\preceq)$ is independent of the choice of $u$, i.e., the isomorphism type of $(\tau,\preceq)$ is only a function of $\tau$. Finally, we simulate $$\label{eq:po-simulates-oi}
{\ensuremath{\mathcal{A}}}_\PO(\tau) := {\ensuremath{\mathcal{A}}}_\OI(\tau,\preceq).$$
![Given a $\PO$-graph $G$, algorithm ${\ensuremath{\mathcal{A}}}_\PO$ simulates the execution of ${\ensuremath{\mathcal{A}}}_\OI$ on $\OI$-graph $\tau$. The linear order on $V(\tau)$ is inherited from the regular tree $T$. As $T$ is homogeneous, the linear order does not depend on the choice of node $u$ in $T$.[]{data-label="fig:po-simulates-oi"}](figs.pdf)
To see that the output of ${\ensuremath{\mathcal{A}}}_\PO$ is feasible, we argue as follows. Embed the universal cover $U_G$ as a subgraph of $(T,\preceq)$ in a way that respects edge colours. Again, all possible embeddings are isomorphic; we call the inherited ordering $(U_G,\preceq)$ the *canonical ordering* of $U_G$. Our definition of ${\ensuremath{\mathcal{A}}}_\PO$ and the order homogeneity of $(T,\preceq)$ now imply that $${\ensuremath{\mathcal{A}}}_\PO(U_G,v) = {\ensuremath{\mathcal{A}}}_\OI(U_G,\preceq,v)\qquad \text{for all}\ v\in V(U_G).$$ Therefore, the output of ${\ensuremath{\mathcal{A}}}_\PO$ is feasible on $U_G$. Finally, by $\PO$-checkability, the output of ${\ensuremath{\mathcal{A}}}_\PO$ is feasible also on $G$, as desired.
Simulation OI to ID {#sec:oi-simulates-id}
-------------------
The reason why an $\ID$-algorithm ${\ensuremath{\mathcal{A}}}$ cannot benefit from unbounded identifiers is the propagation principle. We formalise this in two steps.
1. We use the Naor–Stockmeyer $\OI\leadsto\ID$ result to see that ${\ensuremath{\mathcal{A}}}$ can be forced to output fully saturated [[FM]{}]{}s on so-called *loopy* $\OI$-neighbourhoods.
2. We then observe that, on these neighbourhoods, ${\ensuremath{\mathcal{A}}}$ behaves like an $\OI$-algorithm: ${\ensuremath{\mathcal{A}}}$’s output cannot change if we relabel a node in an order-preserving fashion, because the changes in the output would have to propagate outside of ${\ensuremath{\mathcal{A}}}$’s run-time.
That is, our simulation $\OI\leadsto\ID$ will work only on certain types of neighbourhoods (in contrast to our previous simulations), but this will be sufficient for the purposes of the lower bound proof.
#### Step (i).
Let ${\ensuremath{\mathcal{A}}}$ be a $t$-time $\ID$-algorithm that computes a maximal fractional matching on graphs of maximum degree $\Delta$.
From ${\ensuremath{\mathcal{A}}}$ we can derive, by a straightforward simulation, a $t$-time *binary-valued* $\ID$-algorithm ${\ensuremath{\mathcal{A}}}^*$ that indicates whether ${\ensuremath{\mathcal{A}}}$ saturates a node. That is, ${\ensuremath{\mathcal{A}}}^*(G,v) := 1$ if ${\ensuremath{\mathcal{A}}}$ saturates $v$ in $G$, otherwise ${\ensuremath{\mathcal{A}}}^*(G,v):=0$. Such saturation indicators ${\ensuremath{\mathcal{A}}}^*$ were considered previously in [@astrand09vc2apx 4].
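The derivation of ${\ensuremath{\mathcal{A}}}^*$ is itself a purely local wrapper: a node outputs $1$ exactly when the weights that ${\ensuremath{\mathcal{A}}}$ places on its incident edges sum to $1$. A minimal sketch (ours, with illustrative names):

```python
def saturation_indicator(A, incident):
    """Wrap an algorithm A (returning {edge: weight}) into the binary-valued
    A*: node v gets 1 iff A saturates v, i.e. its incident weights sum to 1
    (up to floating-point tolerance)."""
    def A_star(graph):
        y = A(graph)
        return {v: int(abs(sum(y[e] for e in incident[v]) - 1.0) < 1e-9)
                for v in incident}
    return A_star

# Path a-b-c-d with a fixed output of A: only the inner nodes are saturated.
incident = {'a': ['ab'], 'b': ['ab', 'bc'], 'c': ['bc', 'cd'], 'd': ['cd']}
A_star = saturation_indicator(lambda g: {'ab': 0.3, 'bc': 0.7, 'cd': 0.3},
                              incident)
```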
Because (and *only* because) ${\ensuremath{\mathcal{A}}}^*$ outputs finitely many values, we can now apply the Ramsey technique of Naor and Stockmeyer [@naor95what Lemma 3.2]. To avoid notational clutter, we use a version of their result that follows from the application of the infinite Ramsey’s theorem (rather than the finite):
\[lem:ramsey\] There is an infinite set $I\subseteq{\ensuremath{\mathbb{N}}}$ such that ${\ensuremath{\mathcal{A}}}^*$ is an $\OI$-algorithm when restricted to graphs whose identifiers are in $I$.
We say that $\tau_t(U_G,\preceq,v)$ is a *loopy* $\OI$-neighbourhood if $G$ is a loopy $\PO$-graph and $(U_G,\preceq)$ is the canonically ordered universal cover of $G$. We also denote by $B_t(v)\subseteq V(U_G)$ the node set of $\tau_t(U_G,v)$.
Our saturation indicator ${\ensuremath{\mathcal{A}}}^*$ is useful in proving the following lemma, which encapsulates step (i) of our argument.
\[lem:oi-saturate\] Let $\tau:=\tau_t(U_G,\preceq,v)$ be loopy. If $\varphi\colon B_t(v)\to I$ is an $\ID$-assignment to the nodes of $\tau$ that respects $\preceq$, then ${\ensuremath{\mathcal{A}}}$ saturates $v$ under $\varphi$.
By loopiness of $G$, the node $v$ has a neighbour $u\in V(U_G)$ such that $\tau_t(U_G,v)\cong\tau_t(U_G,u)$ as $\PO$-neighbourhoods. By order homogeneity, $\tau_t(U_G,\preceq,v)\cong\tau_t(U_G,\preceq,u)$ as $\OI$-neighbourhoods. By Lemma \[lem:ramsey\], this forces ${\ensuremath{\mathcal{A}}}^*$ to output the same value on $v$ and $u$ under any $\ID$-assignment $\varphi'\colon B_t(v)\cup B_t(u) \to I$ that respects $\preceq$. But ${\ensuremath{\mathcal{A}}}^*$ cannot output $0$ on two adjacent nodes if ${\ensuremath{\mathcal{A}}}$ is to produce a maximal fractional matching. Hence, ${\ensuremath{\mathcal{A}}}^*$ outputs $1$ on $\varphi'(\tau)$. Finally, by order-invariance, ${\ensuremath{\mathcal{A}}}^*$ outputs $1$ on $\varphi(\tau)$, which proves the claim.
#### Step (ii).
Define $J$ as an infinite subset of $I$ obtained by picking every $(m+1)$-th identifier from $I$, where $m$ is the maximum number of nodes in a $(2t+1)$-neighbourhood of maximum degree $\Delta$. That is, for any two $j,j'\in J$, $j<j'$, there are at least $m$ distinct identifiers $i\in I$ with $j<i<j'$.
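The construction of $J$ can be illustrated on a finite prefix of $I$ (the set in the proof is infinite; `sparse_subset` is our name):

```python
def sparse_subset(ids, m):
    """Keep every (m+1)-th element of the sorted identifier list."""
    return sorted(ids)[::m + 1]

I = list(range(100))   # stand-in for an infinite set of identifiers
m = 4                  # illustrative bound on the (2t+1)-neighbourhood size
J = sparse_subset(I, m)

# between consecutive chosen identifiers there are at least m unused ones
for j, j_next in zip(J, J[1:]):
    assert len([i for i in I if j < i < j_next]) >= m
```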
The next lemma states that ${\ensuremath{\mathcal{A}}}$ behaves like an $\OI$-algorithm on loopy neighbourhoods that have identifiers from $J$.
\[lem:loopy-neigh\] Let $\tau:=\tau_t(U_G,\preceq,v)$ be loopy. If $\varphi_1,\varphi_2\colon B_t(v)\to J$ are any two $\ID$-assignments that respect $\preceq$, then ${\ensuremath{\mathcal{A}}}(\varphi_1(\tau)) = {\ensuremath{\mathcal{A}}}(\varphi_2(\tau))$.
We first consider the case where $\varphi_1$ and $\varphi_2$ disagree only on a single node $v^*\in B_t(v)$. Towards a contradiction suppose that $$\label{eq:contr-assumption}
{\ensuremath{\mathcal{A}}}(\varphi_1(\tau)) \neq {\ensuremath{\mathcal{A}}}(\varphi_2(\tau)).$$
We start with partial $\ID$-assignments for $U_G$ that are defined on the nodes $B_{2t+1}(v)$; this will suffice for running ${\ensuremath{\mathcal{A}}}$ on the nodes $B_{t+1}(v)$. Indeed, because $J\subseteq I$ is sufficiently sparse, we can extend $\varphi_1$ and $\varphi_2$ into assignments $\bar{\varphi}_1,\bar{\varphi}_2\colon B_{2t+1}(v)\to I$ such that
- $\bar{\varphi}_1$ and $\bar{\varphi}_2$ respect $\preceq$, and
- $\bar{\varphi}_1$ and $\bar{\varphi}_2$ still disagree only on the node $v^*$.
Let $y_i$, $i=1,2$, be the fractional matching defined on the edges incident to $B_{t+1}(v)$ that is determined by the output of ${\ensuremath{\mathcal{A}}}$ on the nodes $B_{t+1}(v)$ under the assignment $\bar{\varphi}_i$. By Lemma \[lem:oi-saturate\], all the nodes $B_{t+1}(v)$ are saturated in both $y_1$ and $y_2$.
Let $D\subseteq U_G$ be the subgraph consisting of the edges $e$ with $y_1(e)\neq y_2(e)$ and of the nodes that are incident to such edges; by (\[eq:contr-assumption\]), we have $v\in V(D)$. Now we can reinterpret the propagation principle from Section \[sec:lb-in-ec\]:
Each node $u\in B_{t+1}(v)\cap V(D)$ has $\deg_D(u)\geq 2$.
Using the fact that $D\subseteq U_G$ is a tree, we can start a simple walk at $v\in V(D)$, take the first step away from $v^*$, and finally arrive at a node $u\in B_{t+1}(v)\cap V(D)$ that has $\operatorname{dist}(u,v^*)\geq t+1$, i.e., the node $u$ does not see the difference between the assignments $\bar{\varphi}_1$ and $\bar{\varphi}_2$. But this is a contradiction: as the $t$-neighbourhoods $\bar{\varphi}_i(\tau_{t}(U_G,u))$, $i=1,2$, are the same, so are the weights that ${\ensuremath{\mathcal{A}}}$ outputs at $u$.
*General case.* If $\varphi_1,\varphi_2\colon B_t(v)\to J$ are any two assignments respecting $\preceq$, they can be related to one another by a series of assignments $$\varphi_1=\pi_1,\pi_2,\ldots,\pi_k=\varphi_2,$$ where any two consecutive assignments $\pi_{i}$ and $\pi_{i+1}$ both respect $\preceq$ and disagree on exactly one node. Thus, the claim follows from the analysis above.
Let ${\ensuremath{\mathcal{A}}}_\OI$ be any $t$-time $\OI$-algorithm that agrees with the order-invariant output of ${\ensuremath{\mathcal{A}}}$ on loopy $\OI$-neighbourhoods that have identifiers from $J$. We now obtain the final form of our $\OI\leadsto\ID$ simulation:
\[cor:oi-simulates-id\] If $G$ is a loopy $\PO$-graph, ${\ensuremath{\mathcal{A}}}_\OI$ produces a maximal fractional matching on the canonically ordered universal cover $(U_G,\preceq)$.
The claim follows by a standard argument [@naor95what Lemma 3.2] from two facts: $J$ is large enough; and maximal fractional matchings are locally checkable.
Conclusion {#sec:conclusion}
----------
To get the final lower bound of Theorem \[thm:main\] we reason backwards. Assume that ${\ensuremath{\mathcal{A}}}$ is a $t$-time $\ID$-algorithm that computes a maximal fractional matching on any graph of maximum degree $\Delta$.
- Corollary \[cor:oi-simulates-id\] in Section \[sec:oi-simulates-id\] gives us a $t$-time $\OI$-algorithm ${\ensuremath{\mathcal{A}}}_\OI$ that computes a maximal fractional matching on the canonically ordered universal cover $(U_G,\preceq)$ for any loopy $\PO$-graph $G$ of maximum degree $\Delta$.
- Simulation (\[eq:po-simulates-oi\]) in Section \[sec:po-simulates-oi\] queries the output of ${\ensuremath{\mathcal{A}}}_\OI$ only on $(U_G,\preceq)$. This gives us a $t$-time $\PO$-algorithm ${\ensuremath{\mathcal{A}}}_\PO$ that computes a maximal fractional matching on any loopy $\PO$-graph $G$ of maximum degree $\Delta$.
- The simple simulation in Section \[sec:ec-simulates-po\] gives us a $t$-time $\EC$-algorithm ${\ensuremath{\mathcal{A}}}_\EC$ that computes a maximal fractional matching on any loopy $\EC$-graph $G$ of maximum degree $\Delta/2$.
But now we can use the construction of Section \[sec:lb-in-ec\]: there is a loopy $\EC$-graph of maximum degree $\Delta/2$ where ${\ensuremath{\mathcal{A}}}_\EC$ runs for $\Omega(\Delta)$ rounds. Hence the running time of ${\ensuremath{\mathcal{A}}}$ is also $\Omega(\Delta)$.
Acknowledgements {#acknowledgements .unnumbered}
================
This work is supported in part by the Academy of Finland, Grants 132380 and 252018, and by the Research Funds of the University of Helsinki. The combinatorial proof in Appendix \[app:tree-order\] is joint work with Christoph Lenzen and Roger Wattenhofer.
Noga Alon, L[á]{}szl[ó]{} Babai, and Alon Itai. A fast and simple randomized parallel algorithm for the maximal independent set problem. *Journal of Algorithms*, 7(4):567–583, 1986. [doi: ]{}[10.1016/0196-6774(86)90019-2]{}.
Dana Angluin. Local and global properties in networks of processors. In *Proc. 12th Symposium on Theory of Computing (STOC 1980)*, pages 82–93, New York, 1980. ACM Press. [doi: ]{}[10.1145/800141.804655]{}.
Matti [Å]{}strand and Jukka Suomela. Fast distributed approximation algorithms for vertex cover and set cover in anonymous networks. In *Proc. 22nd Symposium on Parallelism in Algorithms and Architectures (SPAA 2010)*, pages 294–302, New York, 2010. ACM Press. [doi: ]{}[10.1145/1810479.1810533]{}.
Matti [Å]{}strand, Patrik Flor[é]{}en, Valentin Polishchuk, Joel Rybicki, Jukka Suomela, and Jara Uitto. A local 2-approximation algorithm for the vertex cover problem. In *Proc. 23rd Symposium on Distributed Computing (DISC 2009)*, volume 5805 of *LNCS*, pages 191–205, Berlin, 2009. Springer. [doi: ]{}[10.1007/978-3-642-04355-0\_21]{}.
Leonid Barenboim and Michael Elkin. *Distributed Graph Coloring*. March 2013. URL <http://www.cs.bgu.ac.il/~elkinm/BarenboimElkin-monograph.pdf>.
Leonid Barenboim, Michael Elkin, Seth Pettie, and Johannes Schneider. The locality of distributed symmetry breaking. In *Proc. 53rd Symposium on Foundations of Computer Science (FOCS 2012)*, pages 321–330, Los Alamitos, 2012. IEEE Computer Society Press. [doi: ]{}[10.1109/FOCS.2012.60]{}.
Andrzej Czygrinow, Micha[ł]{} Ha[ń]{}[ć]{}kowiak, and Wojciech Wawrzyniak. Fast distributed approximations in planar graphs. In *Proc. 22nd Symposium on Distributed Computing (DISC 2008)*, volume 5218 of *LNCS*, pages 78–92, Berlin, 2008. Springer. [doi: ]{}[10.1007/978-3-540-87779-0\_6]{}.
Patrik Flor[é]{}en, Marja Hassinen, Joel Kaasinen, Petteri Kaski, Topi Musto, and Jukka Suomela. Local approximability of max-min and min-max linear programs. *Theory of Computing Systems*, 49(4):672–697, 2011. [doi: ]{}[10.1007/s00224-010-9303-6]{}.
Mika G[ö]{}[ö]{}s and Jukka Suomela. No sublogarithmic-time approximation scheme for bipartite vertex cover. In *Proc. 26th Symposium on Distributed Computing (DISC 2012)*, volume 7611 of *LNCS*, pages 181–194, Berlin, 2012. Springer. [doi: ]{}[10.1007/978-3-642-33651-5\_13]{}.
Mika G[ö]{}[ö]{}s, Juho Hirvonen, and Jukka Suomela. Lower bounds for local approximation. In *Proc. 31st Symposium on Principles of Distributed Computing (PODC 2012)*, pages 175–184, New York, 2012. ACM Press. [doi: ]{}[10.1145/2332432.2332465]{}.
Micha[ł]{} Ha[ń]{}[ć]{}kowiak, Micha[ł]{} Karo[ń]{}ski, and Alessandro Panconesi. On the distributed complexity of computing maximal matchings. *SIAM Journal on Discrete Mathematics*, 15(1):41–57, 2001. [doi: ]{}[10.1137/S0895480100373121]{}.
Henning Hasemann, Juho Hirvonen, Joel Rybicki, and Jukka Suomela. Deterministic local algorithms, unique identifiers, and fractional graph colouring. In *Proc. 19th Colloquium on Structural Information and Communication Complexity (SIROCCO 2012)*, volume 7355 of *LNCS*, pages 48–60, Berlin, 2012. Springer. [doi: ]{}[10.1007/978-3-642-31104-8\_5]{}.
Juho Hirvonen and Jukka Suomela. Distributed maximal matching: greedy is optimal. In *Proc. 31st Symposium on Principles of Distributed Computing (PODC 2012)*, pages 165–174, New York, 2012. ACM Press. [doi: ]{}[10.1145/2332432.2332464]{}.
Amos Israeli and Alon Itai. A fast and simple randomized parallel algorithm for maximal matching. *Information Processing Letters*, 22(2):77–80, 1986. [doi: ]{}[10.1016/0020-0190(86)90144-4]{}.
Fabian Kuhn and Roger Wattenhofer. On the complexity of distributed graph coloring. In *Proc. 25th Symposium on Principles of Distributed Computing (PODC 2006)*, pages 7–15, New York, 2006. ACM Press. [doi: ]{}[10.1145/1146381.1146387]{}.
Fabian Kuhn, Thomas Moscibroda, and Roger Wattenhofer. What cannot be computed locally! In *Proc. 23rd Symposium on Principles of Distributed Computing (PODC 2004)*, pages 300–309, New York, 2004. ACM Press. [doi: ]{}[10.1145/1011767.1011811]{}.
Fabian Kuhn, Thomas Moscibroda, and Roger Wattenhofer. The price of being near-sighted. In *Proc. 17th Symposium on Discrete Algorithms (SODA 2006)*, pages 980–989, New York, 2006. ACM Press. [doi: ]{}[10.1145/1109557.1109666]{}.
Fabian Kuhn, Thomas Moscibroda, and Roger Wattenhofer. Local computation: Lower and upper bounds, 2010. Manuscript, arXiv:1011.5470 \[cs.DC\].
Christoph Lenzen and Roger Wattenhofer. Leveraging [L]{}inial’s locality limit. In *Proc. 22nd Symposium on Distributed Computing (DISC 2008)*, volume 5218 of *LNCS*, pages 394–407, Berlin, 2008. Springer. [doi: ]{}[10.1007/978-3-540-87779-0\_27]{}.
Nathan Linial. Locality in distributed graph algorithms. *SIAM Journal on Computing*, 21(1):193–201, 1992. [doi: ]{}[10.1137/0221015]{}.
Michael Luby. A simple parallel algorithm for the maximal independent set problem. *SIAM Journal on Computing*, 15(4):1036–1053, 1986. [doi: ]{}[10.1137/0215074]{}.
Alain Mayer, Moni Naor, and Larry Stockmeyer. Local computations on static and dynamic graphs. In *Proc. 3rd Israel Symposium on the Theory of Computing and Systems (ISTCS 1995)*, pages 268–278, Piscataway, 1995. IEEE. [doi: ]{}[10.1109/ISTCS.1995.377023]{}.
Moni Naor and Larry Stockmeyer. What can be computed locally? *SIAM Journal on Computing*, 24(6):1259–1277, 1995. [doi: ]{}[10.1137/S0097539793254571]{}.
B. H. Neumann. On ordered groups. *American Journal of Mathematics*, 71(1):1–18, 1949. [doi: ]{}[10.2307/2372087]{}.
Alessandro Panconesi and Romeo Rizzi. Some simple distributed algorithms for sparse networks. *Distributed Computing*, 14(2):97–100, 2001. [doi: ]{}[10.1007/PL00008932]{}.
David Peleg. *Distributed Computing: A Locality-Sensitive Approach*. SIAM Monographs on Discrete Mathematics and Applications. SIAM, Philadelphia, 2000.
Proof of Lemma \[lem:tree-order\] {#app:tree-order}
=================================
We give two proofs for Lemma \[lem:tree-order\], the second of which we have not seen in print.
Algebraic proof
---------------
The tree $T$ can be thought of as a Cayley graph of the free group on $d$ generators, and the free group admits a linear order that is invariant under the group acting on itself by multiplication; for details, see Neumann [@neumann49ordered] and the discussion in [@goos12local-approximation 5].
Combinatorial proof
-------------------
In $T$ there is a unique simple directed path $x{\!\rightsquigarrow\!}y$ between any two nodes $x,y\in V(T)$. We use $V(x{\!\rightsquigarrow\!}y)$ and $E(x{\!\rightsquigarrow\!}y)$ to denote the nodes and edges of the path. Also, we set ${V_{\textsf{in}}}(x{\!\rightsquigarrow\!}y) := V(x{\!\rightsquigarrow\!}y)\smallsetminus \{x,y\}$. We will assign to each path $x{\!\rightsquigarrow\!}y$ an integer value, denoted $\llbracket x{\!\rightsquigarrow\!}y\rrbracket$, which will determine the relative order of the endpoints.
By definition, in the $\PO$ model, we are given the following linear orders:
- Each node $v \in V(T)$ has a linear order $\prec_v$ on its incident edges.
- Each edge $e \in E(T)$ has a linear order $\prec_e$ on its incident nodes.
For notational convenience, we extend these relations a little: for $v \in {V_{\textsf{in}}}(x{\!\rightsquigarrow\!}y)$ we define $x\prec_v y \iff e\prec_v e'$, where $e$ is the last edge on the path $x{\!\rightsquigarrow\!}v$ and $e'$ is the first edge on the path $v{\!\rightsquigarrow\!}y$; similarly, for $e\in E(x{\!\rightsquigarrow\!}y)$, we define $x\prec_e y\iff x'\prec_e y'$, where $e=\{x',y'\}$ and $x'$ and $y'$ appear on the path $x{\!\rightsquigarrow\!}y$ in this order.
![In this example, $\llbracket u{\!\rightsquigarrow\!}v \rrbracket = +1$, $\llbracket v{\!\rightsquigarrow\!}u \rrbracket = -1$, and hence $u \prec v$.[]{data-label="fig:tree-order"}](figs.pdf)
For any statement $P$, we will use the following type of Iverson bracket notation: $$[P] := \begin{cases}
+1 & \text{if $P$ is true}, \\
-1 & \text{if $P$ is false}.
\end{cases}$$ We can now define $$\label{eq:path-value}
\llbracket x{\!\rightsquigarrow\!}y\rrbracket\ := \sum_{e\in E(x\rightsquigarrow y)} [x\prec_e y]\quad +\
\sum_{v\in {V_{\textsf{in}}}(x\rightsquigarrow y)} [x\prec_v y].$$ In particular, $\llbracket x{\!\rightsquigarrow\!}x\rrbracket = 0$. The linear order $\prec$ on $V(T)$ is now defined by setting $$x \prec y \iff \llbracket x{\!\rightsquigarrow\!}y \rrbracket > 0.$$ See Figure \[fig:tree-order\]. Next, we show that this is indeed a linear order.
#### Antisymmetry and totality.
Since $[x\prec_v y] = -[y\prec_v x]$ and $[x\prec_e y] = -[y\prec_e x]$, we have the property that $$\llbracket x{\!\rightsquigarrow\!}y\rrbracket = -\llbracket y{\!\rightsquigarrow\!}x\rrbracket.$$ Moreover, if $x\neq y$, the number of terms in the first sum of (\[eq:path-value\]) is odd iff the number of terms in the second sum is even; as each term is $\pm 1$, it follows that $\llbracket x{\!\rightsquigarrow\!}y\rrbracket$ is always odd; in particular, it is non-zero. These properties establish that either $x \prec y$ or $y \prec x$ (but never both).
#### Transitivity.
Let $x,y,z \in V(T)$ be three distinct nodes with $x \prec y$ and $y \prec z$; we need to show that $x\prec z$. Denote by $v \in V(T)$ the unique node in the intersection of the paths $x{\!\rightsquigarrow\!}z$, $z{\!\rightsquigarrow\!}y$, and $y{\!\rightsquigarrow\!}x$.
Viewing the path $x{\!\rightsquigarrow\!}z$ piecewise as $x{\!\rightsquigarrow\!}v {\!\rightsquigarrow\!}z$ we write $$\llbracket x{\!\rightsquigarrow\!}z\rrbracket
= \llbracket x{\!\rightsquigarrow\!}v\rrbracket
+ [x\prec_v z]
+ \llbracket v{\!\rightsquigarrow\!}z\rrbracket,$$ where it is understood that $[x\prec_v z] := 0$ in the degenerate cases where $v\in\{x,z\}$. Similar decompositions can be written for $z{\!\rightsquigarrow\!}y$ and $y{\!\rightsquigarrow\!}x$. Indeed, it is easily checked that $$\llbracket x{\!\rightsquigarrow\!}z\rrbracket
+ \llbracket z{\!\rightsquigarrow\!}y\rrbracket
+ \llbracket y{\!\rightsquigarrow\!}x\rrbracket
= [x\prec_v z]
+ [z\prec_v y]
+ [y\prec_v x].$$ By assumption, $\llbracket z{\!\rightsquigarrow\!}y\rrbracket, \llbracket y{\!\rightsquigarrow\!}x\rrbracket \leq -1$, so we get $$\llbracket x{\!\rightsquigarrow\!}z\rrbracket
\geq 2 + [x\prec_v z]
+ [z\prec_v y]
+ [y\prec_v x].$$ The only way the right hand side can be negative is if $$[x\prec_v z] = [z\prec_v y]= [y\prec_v x] = -1,$$ but this is equivalent to having $z\prec_v x \prec_v y \prec_v z$, which is impossible. Hence $\llbracket x{\!\rightsquigarrow\!}z\rrbracket\geq 0$; and since $x\neq z$, this value is odd and therefore positive, i.e., $x \prec z$.
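The order $\prec$ defined above is easy to verify computationally. The following sketch (our own illustration, not part of the paper) builds a random tree with random orders $\prec_v$ and $\prec_e$, evaluates $\llbracket x{\!\rightsquigarrow\!}y\rrbracket$ as in (\[eq:path-value\]), and checks antisymmetry, oddness, and transitivity:

```python
import itertools
import random

def random_tree(n, rng):
    """Random tree on nodes 0..n-1 as a list of sorted edge tuples."""
    return [tuple(sorted((rng.randrange(i), i))) for i in range(1, n)]

def path(adj, x, y):
    """Unique simple path x -> y in a tree, as a list of nodes."""
    parent, stack = {x: None}, [x]
    while stack:
        u = stack.pop()
        for w in adj[u]:
            if w not in parent:
                parent[w] = u
                stack.append(w)
    p = [y]
    while p[-1] != x:
        p.append(parent[p[-1]])
    return p[::-1]

def bracket(adj, prec_v, prec_e, x, y):
    """The value [[x ~> y]] from the definition in the text."""
    p, val = path(adj, x, y), 0
    for a, b in zip(p, p[1:]):                  # edges e on the path
        e = tuple(sorted((a, b)))
        val += 1 if prec_e[e][a] < prec_e[e][b] else -1      # [x <_e y]
    for i in range(1, len(p) - 1):              # internal nodes v
        v = p[i]
        e_in = tuple(sorted((p[i - 1], v)))     # last edge of x ~> v
        e_out = tuple(sorted((v, p[i + 1])))    # first edge of v ~> y
        val += 1 if prec_v[v][e_in] < prec_v[v][e_out] else -1  # [x <_v y]
    return val

rng, n = random.Random(0), 12
edges = random_tree(n, rng)
adj = {v: [] for v in range(n)}
for a, b in edges:
    adj[a].append(b); adj[b].append(a)
prec_v = {}                                     # random order <_v on incident edges
for v in range(n):
    inc = [tuple(sorted((v, w))) for w in adj[v]]
    rng.shuffle(inc)
    prec_v[v] = {e: i for i, e in enumerate(inc)}
prec_e = {}                                     # random order <_e on endpoints
for e in edges:
    ends = list(e); rng.shuffle(ends)
    prec_e[e] = {u: i for i, u in enumerate(ends)}

less = lambda x, y: bracket(adj, prec_v, prec_e, x, y) > 0
for x, y in itertools.permutations(range(n), 2):
    b = bracket(adj, prec_v, prec_e, x, y)
    assert b == -bracket(adj, prec_v, prec_e, y, x) and b % 2 == 1
for x, y, z in itertools.permutations(range(n), 3):
    if less(x, y) and less(y, z):
        assert less(x, z)                       # transitivity
```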
Derandomising Local Algorithms {#app:randomness}
==============================
As discussed in Section \[sec:tricky-ids\], unbounded outputs require special care. In this Appendix we note that even though Naor and Stockmeyer [@naor95what] assume bounded outputs, their result on derandomising local algorithms applies in our setting, too.
Let ${\ensuremath{\mathcal{A}}}$ be a randomised $t(\Delta)$-time algorithm that computes a maximal fractional matching on graphs of maximum degree $\Delta$ or possibly fails with some small probability. Given an assignment of random bit strings $\rho\colon V(G)\to\{0,1\}^*$ to the nodes of a graph $G$, denote by ${\ensuremath{\mathcal{A}}}^\rho$ the *deterministic* algorithm that computes as ${\ensuremath{\mathcal{A}}}$ does, but uses $\rho$ as its source of randomness.
The proof of Theorem 5.1 in [@naor95what] uses the following fact, whose proof we reproduce here for convenience.
For every $n$, there is an $n$-set $S_n\subseteq {\ensuremath{\mathbb{N}}}$ of identifiers and an assignment $\rho_n\colon S_n\to \{0,1\}^*$ such that ${\ensuremath{\mathcal{A}}}^{\rho_n}$ is correct on all graphs that have identifiers from $S_n$.
Denote by $k=k(n)$ the number of graphs $G$ with $V(G)\subseteq\{1,\ldots,n\}$. Let $X_1,\ldots,X_q\subseteq {\ensuremath{\mathbb{N}}}$ be any $q$ disjoint sets of size $n$. Suppose for the sake of contradiction that the claim is false for each $X_i$. That is, for every assignment $\rho\colon X_i\to\{0,1\}^*$ of random bits, ${\ensuremath{\mathcal{A}}}^\rho$ fails on at least one of the $k$ graphs $G$ with $V(G)\subseteq X_i$. By averaging, this implies that for each $i$ there is a particular graph $G_i$, $V(G_i)\subseteq X_i$, on which ${\ensuremath{\mathcal{A}}}$ fails with probability at least $1/k$. Consider the graph $G$ that is the disjoint union of the graphs $G_1,\ldots,G_q$. Since ${\ensuremath{\mathcal{A}}}$ fails independently on each of the components $G_i$, the failure probability on $G$ is at least $1-(1-1/k)^q$. But this probability can be made arbitrarily close to $1$ by choosing a large enough $q$, which contradicts the correctness of ${\ensuremath{\mathcal{A}}}$.
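The amplification step at the end of the proof only uses independence across components: with per-component failure probability $1/k$, the probability that at least one of $q$ components fails is $1-(1-1/k)^q$, which tends to $1$ as $q$ grows. A quick numerical illustration (the value of $k$ here is hypothetical):

```python
def overall_failure(k, q):
    """Failure probability of q independent components,
    each failing with probability 1/k."""
    return 1 - (1 - 1 / k) ** q

k = 1000                                    # illustrative value
assert abs(overall_failure(k, 1) - 1 / k) < 1e-12
assert overall_failure(k, 10 * k) > 0.99    # approaches 1 for large q
```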
The deterministic algorithms ${\ensuremath{\mathcal{A}}}^{\rho_n}$ allow us to again obtain a $t(\Delta)$-time $\OI$-algorithm, which establishes the $\Omega(\Delta)$ lower bound for ${\ensuremath{\mathcal{A}}}$. Only small modifications to Section \[sec:oi-simulates-id\] are needed:
- [**Step (i).**]{} Instead of the infinite set $I\subseteq{\ensuremath{\mathbb{N}}}$ as previously provided by Lemma \[lem:ramsey\], we can use the finite Ramsey’s theorem to find arbitrarily large sets $I_n\subseteq S_n$ (i.e., $|I_n|\to\infty$ as $n\to\infty$) with the property that ${\ensuremath{\mathcal{A}}}^{\rho_n}$ fully saturates the nodes of a loopy $\OI$-neighbourhood that has identifiers from $I_n$ (Lemma \[lem:oi-saturate\]).
- [**Step (ii).**]{} Then, passing again to sufficiently sparse subsets $J_n\subseteq I_n$, we can reprove Lemma \[lem:loopy-neigh\] and Corollary \[cor:oi-simulates-id\], which only require that $J$ is large enough.
This concludes the lower bound proof for randomised $\LOCAL$ algorithms.
---
abstract: 'In this note, we introduce the generalization of opers (superopers) for a certain class of superalgebras, which have pure odd simple root system. We study in detail $SPL_2$-superopers and in particular derive the corresponding Bethe ansatz equations, which describe the spectrum of $osp(2|1)$ Gaudin model.'
address: 'Department of Mathematics, Columbia University, 2990 Broadway, New York, NY 10027, USA; Max Planck Institute for Mathematics, Vivatsgasse 7, Bonn, 53111, Germany; IPME RAS, V.O. Bolshoj pr., 61, 199178, St. Petersburg. zeitlin@math.columbia.edu http://math.columbia.edu/$\sim$zeitlin http://www.ipme.ru/zam.html '
author:
- 'Anton M. Zeitlin'
title: Superopers on supercurves
---
Introduction
============
Opers are necessary ingredients in the study of the geometric Langlands correspondence (see e.g. [@FL]). They also play an important role in many aspects of mathematical physics: for example, opers are central to the theory of integrable systems, and recently they have even become a necessary component of modern quantum field theory approaches to knot theory (see e.g. [@W]).
Originally, opers were studied locally in the seminal paper of Drinfeld and Sokolov [@DS] as gauge equivalence classes of certain differential operators with values in a simple Lie algebra; these are the L-operators of the generalized Korteweg–de Vries (KdV) integrable models. Later, Beilinson and Drinfeld generalized this local object, making it coordinate independent [@BD]. Namely, a $G$-oper on a smooth curve $\Sigma$, where $G$ is a simple algebraic group of adjoint type with Lie algebra $\mathfrak{g}$, is a triple $(\mathcal{F},\mathcal{F}_B,\nabla)$, where $\mathcal{F}$ is a $G$-bundle over $\Sigma$, $\mathcal{F}_B$ is its reduction to a Borel subgroup $B$, and $\nabla$ is a flat connection which behaves in a certain way with respect to $\mathcal{F}_B$. For example, in the case of a $PGL_2$-oper, this condition simply means that the reduction $\mathcal{F}_B$ is nowhere preserved by the connection. Moreover, it follows from the results of Drinfeld and Sokolov that the space of $G$-opers is equivalent to a certain space of scalar pseudodifferential operators. In the $PGL_2$ case, the resulting space of scalar operators is just a family of Sturm–Liouville operators, and their transformation properties allow one to consider them on all of $\Sigma$ as projective connections.
A really interesting story begins when we allow opers to have regular singularities. It turns out that opers on the projective line can be described via the Bethe ansatz equations for the Gaudin model corresponding to the Langlands dual Lie algebra [@Fe], [@Fb]. An important object on the way to understanding this relation is the so-called Miura oper, which was introduced by E. Frenkel [@Fb]. A *Miura oper* is an oper with one extra constraint: the connection preserves another $B$-reduction of $\mathcal{F}$, which we call $\mathcal{F}'_B$. The space of Miura opers associated to a given $G$-oper with trivial monodromy is isomorphic to the flag manifold $G/B$. If the reduction ${\mathcal{F}'_B}$ corresponds to a point in the big cell of $G/B$, then such a Miura oper is called *generic*. It was shown by E. Frenkel that any Miura oper is generic on the punctured disc and that there is an isomorphism between the space of generic Miura opers on an open neighborhood and a certain space of connections on an $H$-bundle, where $H=B/[B,B]$ [@Fb]. The map from $H$-connections to $G$-opers is a generalization of the standard Miura transformation from the theory of KdV integrable models.
By means of the above relation with $H$-connections, it was proved for $PGL_2$-opers in [@Fe], and then generalized to higher rank in [@FFR], [@Fb], that the eigenvalues of the Gaudin model for the Langlands dual Lie algebra $\mathfrak{g}^L$ can be described by $G$-opers on $\mathbb{C}P^1$ with given regular singularities and trivial monodromy. Namely, the consistency conditions for the $H$-connections underlying such opers coincide with the Bethe ansatz equations for the Gaudin model.
In this article, we generalize some of the above notions and results to the level of superalgebras. Following some local considerations of [@inami], [@DG], [@kz], we define an analogue of an oper on a super Riemann surface for supergroups that admit a purely fermionic system of simple roots. We call such objects *superopers*, and in some sense they turn out to be “square roots” of standard opers. Unfortunately, for all other superalgebras the resulting formalism yields only locally defined objects (on a formal superdisc). We study in detail the simplest nontrivial example, the superoper associated with the group $SPL_2$ of superprojective transformations (see e.g. [@crane]), and explicitly establish the relation between the $osp(2|1)$ Gaudin model studied in [@kulish] and $SPL_2$-opers on the super Riemann sphere with given regular singularities.
In section 2 we explain the relation between superprojective structures on a super Riemann surface and the supersymmetric version of the Sturm–Liouville operator. We then relate it to a flat connection on an $SPL_2$-bundle, which gives us the first example of a superoper.
In section 3 we use this experience to generalize the notion of a superoper to higher rank simple supergroups. However, only the supergroups that admit a purely fermionic system of simple roots allow us to construct a globally defined object on a super Riemann surface. We define Miura superopers and superopers with regular singularities in section 4. There we study the consistency conditions for superopers on the superconformal sphere and derive the corresponding Bethe equations. We compare the results with the $osp(2|1)$ Gaudin model and find that the Bethe ansatz equations coincide with the “body” part of the consistency conditions for the corresponding $SPL_2$ Miura superopers.
Some remarks and open questions are given in section 5.\
[**Acknowledgments.**]{} I am very grateful to I. Penkov for useful discussions and to D. Leites for pointing out important references. I am indebted to E. Frenkel and E. Vishnyakova for comments on the manuscript.
Superprojective structures, super Sturm-Liouville operator and\
$osp(2|1)$ superoper
===============================================================
[**2.1. Super Riemann surfaces and superconformal transformations.**]{} Recall that a *supercurve* of dimension $(1|1)$ (see e.g. [@br]) over some Grassmann algebra $S$ is a pair $(X,\mathcal{O}_{X})$, where $X$ is a topological space and $\mathcal{O}_X$ is a sheaf of supercommutative $S$-algebras over $X$ such that $(X,\mathcal{O}^{\rm{red}}_{X})$ is an algebraic curve (where $\mathcal{O}^{\rm{red}}_{X}$ is obtained from $\mathcal{O}_X$ by quotienting out the nilpotents) and for some open sets $U_{\alpha}\subset X$ and some linearly independent elements $\{\theta_{\alpha}\}$ we have $\mathcal{O}_{U_{\alpha}}=\mathcal{O}^{\rm{red}}_{U_{\alpha}}\otimes S[\theta_{\alpha}]$. These open sets $U_{\alpha}$ serve as coordinate neighborhoods for supercurves, with coordinates $(z_{\alpha}, \theta_{\alpha})$. The coordinate transformations on the overlaps $U_{\alpha}\cap U_{\beta}$ are given by the formulas $z_{\alpha}=F_{\alpha\beta}(z_{\beta}, \theta_{\beta})$, $\theta_{\alpha}=\Psi_{\alpha\beta}(z_{\beta}, \theta_{\beta})$, where $F_{\alpha\beta}$ and $\Psi_{\alpha\beta}$ are even and odd functions, respectively. A super Riemann surface $\si$ over some Grassmann algebra $S$ (for more details see e.g. [@Wl]) is a supercurve of dimension $(1|1)$ over $S$ with one extra structure: there is a subbundle $\mathcal{D}$ of $T\Sigma$ of dimension $0|1$ such that for any nonzero section $D$ of $\mathcal{D}$ on an open subset $U$ of $\si$, $D^2$ is nowhere proportional to $D$, i.e. we have the exact sequence: $$\begin{aligned}
\label{exact}
0\to \mathcal{D}\to T\si\to \mathcal{D}^2\to 0.\end{aligned}$$ One can pick holomorphic local coordinates in such a way that the odd vector field $D$ has the form $f(z,\theta)D_{\theta}$, where $f(z,\theta)$ is a nonvanishing function and: $$\begin{aligned}
D_{\theta}=\partial_{\theta}+\theta\partial_z, \quad D_{\theta}^2=\partial_z.\end{aligned}$$ Such coordinates are called *superconformal*. The transformation between two superconformal coordinate systems $(z, \theta)$, $(z', \theta')$ is determined by the condition that $\mathcal{D}$ should be preserved, i.e.: $$\begin{aligned}
D_{\theta}=(D_{\theta}\theta') D_{\theta'},\end{aligned}$$ so that the constraint on the transformation coming from the local change of coordinates is $D_{\theta} z'-\theta'D_\theta \theta'=0$. An important nontrivial example of a super Riemann surface is the super Riemann sphere $SC^*$: there are two charts $(z, \theta)$, $(z', \theta')$ such that $$\begin{aligned}
z'=-\frac{1}{z},\quad \theta'=\frac{\theta}{z}.\end{aligned}$$ There is a group of superconformal transformations, usually denoted $SPL_2$, which acts transitively on $SC^*$ as follows: $$\begin{aligned}
\label{transff}
&&z\to \frac{az+b}{cz+d}+\theta\frac{\gamma z+\delta}{(cz+d)^2}, \nonumber\\
&&\theta\to \frac{\gamma z+\delta}{cz+d}+\theta\frac{1+\frac{1}{2}\delta\gamma}{cz+d}, \end{aligned}$$ where $a, b, c, d$ are even, $ad-bc=1$, and $\gamma, \delta$ are odd. The Lie algebra of this group is isomorphic to $osp(2|1)$.
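As a sanity check of the superconformal constraint $D_{\theta}z'-\theta'D_{\theta}\theta'=0$ for the transformation above, one can model a superfield $f_0(z)+\theta f_1(z)$ as a pair of ordinary functions. The sketch below is our own, restricted to the purely even part ($\gamma=\delta=0$), with $z$-derivatives taken by finite differences:

```python
# Superfield F(z, theta) = f0(z) + theta*f1(z), encoded as a pair of callables.
def d(f, h=1e-6):
    """Numerical d/dz by central differences."""
    return lambda z: (f(z + h) - f(z - h)) / (2 * h)

def D_theta(F):
    """D_theta = d/dtheta + theta*d/dz acting on F = f0 + theta*f1."""
    f0, f1 = F
    return (f1, d(f0))

def mul(F, G):
    """Product of superfields; uses theta**2 = 0.  (Naive about signs --
    sufficient here, where one factor always has even components.)"""
    f0, f1 = F
    g0, g1 = G
    return (lambda z: f0(z) * g0(z),
            lambda z: f0(z) * g1(z) + f1(z) * g0(z))

# Purely even part of the SPL_2 action (gamma = delta = 0), with ad - bc = 1:
a, b, c, dd = 2.0, 1.0, 1.0, 1.0
zp = (lambda z: (a * z + b) / (c * z + dd), lambda z: 0.0)   # z'
thp = (lambda z: 0.0, lambda z: 1.0 / (c * z + dd))          # theta'

lhs0, lhs1 = D_theta(zp)                     # D_theta z'
rhs0, rhs1 = mul(thp, D_theta(thp))          # theta' * D_theta theta'
for z in (0.5, 1.0, 2.0):
    assert abs(lhs0(z) - rhs0(z)) < 1e-9     # both identically zero
    assert abs(lhs1(z) - rhs1(z)) < 1e-6     # 1/(cz+d)^2 on both sides
```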
Let us introduce two more notions which we will use in what follows. From now on, we call the sections of $\mathcal{D}^n$ the *superconformal fields* of dimension $-n/2$. In particular, taking the dual of the exact sequence (\[exact\]), we find that the bundle of superconformal fields of dimension 1 (i.e. $\mathcal{D}^{-2}$) is a subbundle of $T^*\si$. In a superconformal coordinate system, a nonzero section of this bundle is generated by $\eta=dz-\theta d\theta$, which is orthogonal to $D_{\theta}$ under the standard pairing.
At last, we introduce one more notation. For any element $A$ of a free module over $S[\theta]$, where $\theta$ is a local odd coordinate, we denote by $\bar{A}$ the body of this element, i.e. $A$ stripped of its dependence on the odd variables.\
[**2.2. Superprojective structures and superprojective connections.**]{} Let us first define what a superprojective connection is. We consider the following differential operator, defined locally in coordinates $(z,\theta)$: $$\begin{aligned}
\label{loc}
D^3_{\theta}-\omega(z,\theta).\end{aligned}$$ The following proposition holds.\
[**Proposition 2.1.**]{} [@mathieu] [*Formula [(\[loc\])]{} defines the operator $L$, such that $$\begin{aligned}
L: \mathcal{D}^{-1}\to \mathcal{D}^{2}\end{aligned}$$ iff the transformation of $\omega$ on the overlap of two coordinate charts $(z,\theta)$, $(z', \theta')$ is given by the following expression: $$\begin{aligned}
\omega(z, \theta)=\omega(z',\theta')(D_{\theta}\theta')^3+\{\theta';z,\theta\} \end{aligned}$$ where $$\begin{aligned}
\{\theta';z,\theta\}=\frac{\partial^2_z\theta'}{D_{\theta}\theta'}-
2\frac{\partial_z\theta'\, D^3_{\theta}\theta'}{(D_{\theta}\theta')^2}\end{aligned}$$ is the supersymmetric generalization of the Schwarzian derivative.*]{}\
One can show that the only coordinate transformations for which the super Schwarzian derivative vanishes are the fractional linear transformations [(\[transff\])]{}.
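In the purely even direction this reduces to the classical statement that the ordinary Schwarzian derivative $S(f)=f'''/f'-\tfrac{3}{2}(f''/f')^2$ vanishes exactly on fractional linear transformations. A small numerical check of the vanishing direction (our own sketch, with finite-difference derivatives):

```python
def schwarzian(f, z, h=1e-3):
    """Ordinary Schwarzian derivative S(f) = f'''/f' - (3/2)*(f''/f')**2,
    with derivatives approximated by central finite differences."""
    f1 = (f(z + h) - f(z - h)) / (2 * h)
    f2 = (f(z + h) - 2 * f(z) + f(z - h)) / h**2
    f3 = (f(z + 2 * h) - 2 * f(z + h) + 2 * f(z - h) - f(z - 2 * h)) / (2 * h**3)
    return f3 / f1 - 1.5 * (f2 / f1) ** 2

mobius = lambda z: (2 * z + 1) / (z + 1)      # ad - bc = 1
for z in (0.5, 1.0, 3.0):
    assert abs(schwarzian(mobius, z)) < 1e-4  # vanishes on Moebius maps
assert abs(schwarzian(lambda z: z**3, 2.0) - (-1.0)) < 1e-3  # S(z^3) = -4/z^2
```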
Let us consider the covering of $\si$ by open subsets, so that the transition functions are given by [(\[transff\])]{}. Two such coverings are considered equivalent if their union has the same property of transition functions. The corresponding equivalence classes are called [*superprojective structures*]{}.
It appears that, just as in the purely even case, there is a bijection between superprojective connections and superprojective structures. For a given superprojective structure one can define a superprojective connection by assigning the operator $D_{\theta}^3$ in every coordinate chart. From Proposition 2.1 we find that the resulting object is defined globally on $\si$. On the other hand, given a superprojective connection on $\si$, one can consider the following linear problem: $$\begin{aligned}
(D^3_{\theta}-\omega(z,\theta))\psi(z, \theta)=0.\end{aligned}$$ From the results of [@arvis] we know that this equation has 3 independent solutions: two even, $x(z,\theta)$, $y(z,\theta)$, and one odd, $\xi(z,\theta)$. Defining $C=y/x$, $\alpha=\xi/x$, we find that $\omega(z,\theta)$ is expressed via the super Schwarzian derivative, i.e. $\omega(z,\theta)=\{\alpha; z,\theta\}$, and the consistency conditions on $C$ and $\alpha$ are such that $C$ can be represented in terms of $\alpha$ in the following way: $$\begin{aligned}
C=cA+\gamma A \alpha +\delta\alpha,\end{aligned}$$ where $A$ is a function such that $(z,\theta)\to (A,\alpha)$ is a superconformal transformation. Under a change of basis, $(A, \alpha)$ transforms via $SPL_2$ [(\[transff\])]{}, and hence $(A, \alpha)$ form natural coordinates for a superprojective structure on $\si$. Therefore we have the following proposition.\
[**Proposition 2.2**]{} [*There is a bijection between the set of superprojective structures and the set of superprojective connections on $\si$.*]{}\
[**2.3. Connections for vector bundles over super Riemann surfaces.**]{} Let us consider a vector bundle $V$ over the super Riemann surface with fiber $\mathbb{C}^{m|n}_S$. Let $\mathcal{E}^0(\si, V)$ be the space of sections of $V$ over $\si$ and let $\mathcal{E}^1(\si, V)$ be the space of 1-form valued sections. As usual, a connection is a differential operator $d_A: \mathcal{E}^0(\si, V)\to \mathcal{E}^1(\si, V)$ satisfying the Leibniz rule $$\begin{aligned}
d_A(fs)=df\otimes s+(-1)^{|f|}fd_As,\end{aligned}$$ where $f$ is a smooth even/odd function on $\si$ and $s\in \mathcal{E}^0(\si, V)$. Locally, in the chart $(z,\theta)$ the connection has the following form: $$\begin{aligned}
&&d_A=d+A=d+(\eta A_z+d\theta A_{\theta})+(\bar{\eta}A_{\b z}+d\b\theta A_{\b\theta})=\nonumber\\
&&({\partial}+\eta A_z+d\theta A_{\theta})+(\b {\partial}+\b \eta A_{\b z}+d\b \theta A_{\b \theta})=\nonumber\\
&&(\eta D^A_z+d\theta D_\theta^A)+(\b \eta D^A_{\b z}+d\b \theta D_{\b \theta}^A).\end{aligned}$$ Here we used the fact that $d={\partial}+\b {\partial}$ and ${\partial}=\eta{\partial}_z+d\theta D_{\theta}$. The expression for the curvature is: $$\begin{aligned}
&&F=d_A^2=\nonumber\\
&&d\theta d\theta F_{\theta\theta}+\eta d\theta F_{z\theta}+ d\b \theta d\b \theta F_{\b \theta\b \theta}+ \b\eta d\b \theta
F_{\b z\b \theta}+\nonumber\\
&&\eta\b \eta F_{z\b z}+ \eta d\b \theta F_{z\b \theta}+\b \eta d\theta F_{\b z\theta}+d\theta d\b \theta F_{\theta\b \theta},\end{aligned}$$ where $F_{\theta\theta}=- {D^A_{\theta}}^2+D^A_z$, $F_{z\theta}=[D^A_z, D^A_{\theta}]$, $F_{z\b z}=[D^A_z, D^A_{\b z}]$, $F_{z\b \theta}=
[D^A_z, D^A_{\b \theta}]$, $F_{\theta\b \theta}=-[D^A_{\theta}, D^A_{\b \theta}]$, etc.
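For instance, the first of these components can be written out in local terms. Writing $D^A_{\theta}=D_{\theta}+A_{\theta}$ and $D^A_z={\partial}_z+A_z$, using $D^2_{\theta}={\partial}_z$ and the oddness of $A_{\theta}$, a short computation (sketched here for convenience) gives, for any section $\psi$:

```latex
\begin{aligned}
(D^A_{\theta})^2\psi
 &= D_{\theta}\big(D_{\theta}\psi + A_{\theta}\psi\big)
    + A_{\theta}\big(D_{\theta}\psi + A_{\theta}\psi\big)\\
 &= {\partial}_z\psi + (D_{\theta}A_{\theta})\psi
    - A_{\theta}D_{\theta}\psi + A_{\theta}D_{\theta}\psi + A_{\theta}^2\psi
  = \big({\partial}_z + D_{\theta}A_{\theta} + A_{\theta}^2\big)\psi,\\
F_{\theta\theta}
 &= -\,(D^A_{\theta})^2 + D^A_z
  = A_z - D_{\theta}A_{\theta} - A_{\theta}^2 .
\end{aligned}
```

Thus the vanishing of $F_{\theta\theta}$ expresses $A_z$ through $A_{\theta}$ as $A_z=D_{\theta}A_{\theta}+A_{\theta}^2$, which is the local form of the statement that a partially flat connection is determined by $D^A_{\theta}$ alone.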
It turns out that if the connection $d_A$ satisfies the partial flatness conditions $F_{\theta\theta}=F_{z\theta}=F_{\b\theta\b \theta}=F_{\b z
\b \theta}=0$, then there is a superholomorphic structure on $V$ (i.e. the transition functions of the bundle can be made superholomorphic) [@RT]. We are interested in flat superholomorphic connections. In this case, since $F_{\theta\theta}=0$, the connection is locally fully determined by $D^A_{\theta}$. In other words, it is determined by the following odd differential operator, which from now on we will denote by $\nabla$ and call the [*long superderivative*]{}: $$\begin{aligned}
\label{fc}
\nabla=D_{\theta}+A_{\theta}(z, \theta), \end{aligned}$$ which gives a map $\mathcal{D}\to End\, V$, so that the transformation properties of $A_{\theta}$ are $A_{\theta}\to gA_{\theta}g^{-1}-D_{\theta}g\, g^{-1}$, where $g$ is a superholomorphic function providing a change of trivialization.\
[**2.4. $SPL_2$-opers.**]{} In this subsection, we give a description of the first nontrivial superoper. Suppose we have a superprojective structure on $\si$. Naturally we have the structure of a flat $SPL_2$-bundle $\mathcal{F}$ over $\si$, since on the overlaps there is a constant map to $SPL_2$. Let us study the corresponding flat connection on $\si$. Since $SPL_2$ is the group of superconformal automorphisms of $SC^*$, one can form an associated bundle $SC^*_{\mathcal{F}}=\mathcal{F}\times_{SPL_2} SC^*$. This bundle has a global section which is just given by the superprojective coordinate functions $(z, \theta)$ on $\si$. We note that it has nonvanishing (super)derivative at all points.
One can view $SC^*$ as a flag supermanifold. Namely, consider the group $SPL_2$ acting in $\mathbb{C}^{2|1}=span(e_1,\xi, e_2)$, where we put the odd vector in the middle. Then $e_1$ is stabilized by the Borel subgroup of upper triangular matrices. Therefore, one can identify $SC^*$ with $SPL_2/B$. Since we have a nonzero section of $SC^*_{\mathcal{F}}$, we have a $B$-subbundle $\mathcal{F}_B$ of the $G$-bundle, where $G$ stands for $SPL_2$. Hence, a superprojective structure gives the flat $SPL_2$-bundle $\mathcal{F}$ with a reduction $\mathcal{F}_B$. However, there is one more piece of data we can use: it is the condition that the (super)derivative of the section of $SC^*_{\mathcal{F}}$ is nowhere vanishing. It means that the flat connection on $\mathcal{F}$ does not preserve the $B$-reduction anywhere. Let us figure out what conditions this puts on the connection if we choose a trivialization of $\mathcal{F}$ induced from the $\mathcal{F}_B$ trivialization. As we discussed above, the connection is determined by the following odd differential operator: $$\begin{aligned}
\nabla=D_\theta+
\begin{pmatrix}
\alpha(z,\theta) & b(z,\theta) & \beta(z,\theta) \\
- a(z,\theta) & 0 & b(z,\theta) \\
\gamma(z,\theta) & a(z,\theta) & -\alpha(z,\theta)
\end{pmatrix},\end{aligned}$$ so that the matrix is in the defining representation of the Lie superalgebra of $SPL_2$, namely $osp(1|2)$. This operator and its square describe the even and odd directions of the tangent vector to $SC^*$. Since both of them are required to be nonvanishing, and identifying the tangent space with $osp(1|2)/\mathfrak{b}$ (where $\mathfrak{b}$ is the Borel subalgebra), we obtain that $a$ is nonvanishing. It is possible to make $\gamma=0$ by redefining $\nabla$, adding $\mu(\nabla)^2$ with an appropriate odd function $\mu$, which just corresponds to the choice of superconformal coordinates on $SC^*$. We call such a triple $(\mathcal{F}, \mathcal{F}_B, \nabla)$ a [*superoper*]{}. We notice that by taking the square of the odd operator $\nabla$, reducing the resulting even operator from $\si$ to the underlying curve $\si^0$ and getting rid of all the odd variables, we obtain the oper connection for a $PGL_2$-bundle. Thus superopers can be thought of as “square roots” of opers.
Using $B$-valued gauge transformations one can bring $\nabla$ to the canonical form: $$\begin{aligned}
\nabla=D_\theta+
\begin{pmatrix}
0 & 0 & \omega(z,\theta) \\
-1 & 0 & 0 \\
0 & 1 & 0
\end{pmatrix}.\end{aligned}$$ Therefore, on a superdisc with coordinates $(z,\theta)$, the space of $SPL_2$-superopers can be identified with the space of differential operators $D^3_{\theta}-\omega(z,\theta)$. We will see in the next section that the coordinate transformations of $\omega$ are the same as in Proposition 2.1.
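To make the “square root” remark concrete, one can square the canonical operator. Writing $\nabla=D_{\theta}+A$ with $A$ the matrix above and $\omega=\omega_1(z)+\theta\omega_0(z)$ (a sketch; we use the convention that supermatrices multiply entrywise as ordinary matrices), the oddness of $A$ gives $\nabla^2={\partial}_z+D_{\theta}A+A^2$, i.e.

```latex
\begin{aligned}
\nabla^2={\partial}_z+
\begin{pmatrix}
0 & \omega & D_{\theta}\omega \\
0 & 0 & -\omega \\
-1 & 0 & 0
\end{pmatrix}.
\end{aligned}
```

Setting the odd variables to zero removes $\omega$ together with the middle row and column, while $\overline{D_{\theta}\omega}=\overline{\omega_0}$, so on the even subspace spanned by $(e_1,e_2)$ one is left with ${\partial}_z+\left(\begin{smallmatrix}0 & \overline{\omega_0}\\ -1 & 0\end{smallmatrix}\right)$, a canonical form of a $PGL_2$-oper connection, consistent with the reduction to the underlying curve described above.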
Therefore we see that there is a full analogy with the bosonic case, where the space of $PGL_2$-opers was identified with the set of projective connections or, equivalently, with the set of projective structures.
Let us summarize the results of this section in the following theorem.\
[**Theorem 2.3.**]{} [*There are bijections between the following three sets on a super Riemann surface $\si$:\
i) Superprojective structures\
ii) Superprojective connections\
iii) $SPL_2$-opers.*]{}\
Superopers for higher rank superalgebras
========================================
[**3.1. The definition of superopers.**]{} In this section we generalize the results of the previous section to higher rank. Suppose $G$ is a simple algebraic supergroup [@berezin] of adjoint type over the Grassmann algebra $S$, $B$ is its Borel subgroup, $N=[B, B]$, so that for the corresponding Lie superalgebras we have ${\mathfrak n}\subset{\mathfrak b}\subset{\mathfrak g}$. Note that $\mathfrak{g}=S\otimes \mathfrak{g}^{\rm{red}}$, where $\mathfrak{g}^{\rm{red}}$ is a simple Lie superalgebra over $\mathbb{C}$. As usual, $H=B/N$ with the Lie algebra $\mathfrak{h}$ and there is a decomposition: $\mathfrak{g}=\mathfrak{n}_-\oplus\mathfrak{h}\oplus\mathfrak{n}_+$. The corresponding generators of simple roots will be denoted as usual: $e_1, \dots, e_l$; $f_1, \dots, f_l$. We are interested in the superalgebras which have a purely fermionic system of simple roots, namely $\mathfrak{psl}(n|n)$, $\mathfrak{sl}(n+1|n)$, $\mathfrak{sl}(n|n+1)$, $\mathfrak{osp}(2n\pm 1|2n)$, $\mathfrak{osp}(2n|2n)$, $\mathfrak {osp}(2n+ 2|2n)$ with $n\ge 0$ and $D(2,1;\alpha)$ with $\alpha\neq 0, \pm 1$. Moreover, a necessary ingredient for our construction is the presence of an embedding of the superprincipal $osp(1|2)$ subalgebra [@FRSp], [@delduc], namely that for $\chi_{-1}=\sum_i f_i$ and $\check \rho=\sum_i\check{\omega}_i$, where $\check {\omega}_i$ are the fundamental coweights, there is a $\chi_{1}$ that makes the triple $(\chi_1, \chi_{-1}, {\check{\rho}})$ an $osp(1|2)$ superalgebra. Almost all series of superalgebras from the list above allow such an embedding; however, $\mathfrak {psl}(n|n)$ does not, and we do not consider this series in this article.
As in the standard bosonic case we define an open orbit ${\bf O}\subset[\mathfrak{n}, \mathfrak{n}]^{\perp}/\mathfrak{b}$ consisting of vectors, stabilized by $N$ and such that all the negative root components of these vectors with respect to the adjoint action of $H$ are non-zero.
Let us consider a principal $G$-bundle $\mathcal{F}$ over $X$, which can be a super Riemann surface $\si$, a formal superdisc $SD_x={\rm Spec}\, S[\theta][[z]]$, or a punctured superdisc $SD^{\times}_x={\rm Spec}\, S[\theta]((z))$ (see e.g. [@leites] or [@kapranov] for the definitions of the spectra of supercommutative rings), and its reduction $\mathcal{F}_B$ to the Borel subgroup $B$. We assume that it has a flat connection determined by a long superderivative $\nabla$ (see [(\[fc\])]{}). According to the example considered in Section 2, we do not want $\nabla$ to preserve $\mathcal{F}_B$. However, in the higher rank case this is not enough, so we have to specify extra conditions. Namely, suppose $\nabla'$ is another long superderivative which preserves $\mathcal{F}_B$. Then we require that the difference $\nabla'-\nabla$ has the structure of a superconformal field of dimension $1/2$ with values in the associated bundle $\mathfrak{g}_{\mathcal{F}_B}$. We can project it onto $(\mathfrak{g}/\mathfrak{b})_{\mathcal{F}_B}\otimes \mathcal{D}^{-1}$. Let us denote the resulting $(\mathfrak{g}/\mathfrak{b})_{\mathcal{F}_B}$-valued superconformal field as $\nabla/\mathcal{F}_B$. Now we are ready to define the superoper, which is a natural generalization of the oper.
A $G-superoper$ on $X$ is the triple $(\mathcal{F}, \mathcal{F}_B, \mathcal{\nabla})$, where $\mathcal{F}$ is a principle $G$-bundle, $\mathcal{F}_B$ is its $B$-reduction and $\nabla$ is a long superderivative on $\mathcal{F}$, such that $\nabla/\mathcal{F}_B$ takes values in ${\bf O}_{\mathcal{F}_B}$.
Locally this means that in the coordinates $(z, \theta)$ and with respect to the trivialization of $\mathcal{F}_B$, the structure of the long superderivative is: $$\begin{aligned}
\label{sops}
D_{\theta}+\sum^l_{i=1}a_i(z, \theta)f_i+\mu(z,\theta),\end{aligned}$$ where each $a_i(z, \theta)$ is an even nonzero function (meaning that these functions have nonzero body and are invertible) and $\mu(z, \theta)$ is an odd $\mathfrak{b}$-valued function. Therefore, locally on the open subset $U$ where we chose coordinates $(z, \theta)$, the space of $G$-superopers on $U$, which will be denoted as ${s{\rm Op}_{G}}(U)$, can be characterized as the space of all odd operators of type ${(\ref{sops})}$ modulo gauge transformations from the group $B(R)$, where $R$ are either analytic or algebraic functions on $U$.\
[**3.2. Coordinate transformations and other properties.**]{} Let us notice that one can use the $H$-action to make the operator [(\[sops\])]{} look as follows: $$\begin{aligned}
\label{sopst}
D_{\theta}+\sum^l_{i=1}f_i+\mu(z,\theta),\end{aligned}$$ where $\mu\in \mathfrak{b}(R)$. Therefore the space $s{\rm Op}_G(U)$ can be considered as the quotient of the space of operators of the form [(\[sopst\])]{} (denoted as $\widetilde{\rm{sOp}}_G(U)$) by the action of $N(R)$. As in the pure bosonic case, $\check{\rho}$ gives a principal gradation (for those classes of superalgebras we consider), i.e. we have a direct sum decomposition $\mathfrak{b}=\oplus_{i\ge 0}\mathfrak{b}_i$. Moreover, let us remind that we denoted $\chi_{-1}=\sum^l_{i=1} f_i$ and there exists a unique element $\chi_{1}$ of degree $1$ in $\mathfrak{b}$, such that $\chi_{\pm 1},
\check{\rho}$ generate an $osp(1|2)$ superalgebra. Let $\tilde{\chi}_k$ $(k=1,\dots, l)$, which can be either odd or even and with $\tilde{\chi}_2=\chi_1^2$, be a basis of the space of $ad(\chi_1)$-invariants. We note that the decompositions of $\mathfrak{g}$ with respect to the adjoint action of such an $osp(1|2)$ triple were studied in [@FRSp]. Based on that, we have the following Lemma, which is proved in a similar way as in [@DS] (see also Lemma 4.2.2 of [@FL]).\
[**Lemma 3.1.**]{} [*The gauge action of $N(R)$ on $\widetilde{s{\rm Op}}_G(U)$ is free and each gauge equivalence class contains a unique operator of the form [(\[sopst\])]{} with $$\begin{aligned}
\mu(z, \theta)=\sum^l_{i=1}g_i(z, \theta)\tilde{\chi}_i,\end{aligned}$$ where $g_i$ has parity opposite to that of $\tilde{\chi}_i$.*]{}\
Now let us discuss the transformation properties of the operators in $\widetilde{s{\rm Op}}_G(U)$. Assume we have a superconformal coordinate change $(z, \theta)=(f(w,\xi), \alpha(w, \xi))$. Then, according to the transformation of the long derivative, we have $$\begin{aligned}
&&\nabla=\\
&&D_{\xi}+(D_{\xi}\alpha)(w,\xi)\chi_{-1}+(D_{\xi}\alpha)(w, \xi)\,\mu(f(w,\xi),\alpha(w, \xi)).\nonumber\end{aligned}$$ Considering the 1-parameter subgroup ${\mathbb{C}_S^\times}^{1|1}\to H$ which corresponds to $\check{\rho}$, and applying the adjoint transformation with $\check{\rho}(D_{\xi}\alpha)$, we obtain: $$\begin{aligned}
&&D_{\xi}+\\
&&\chi_{-1}+(D_{\xi}\alpha)(w, \xi)Ad_{\check{\rho}(D_{\xi}\alpha)}\cdot\mu(f(w,\xi), \alpha(w, \xi))-
\frac{{\partial}_w\alpha(w,\xi)}{D_{\xi}\alpha(w, \xi)}\check{\rho}.\nonumber\end{aligned}$$ This gives us the gluing formula for superopers on any super Riemann surface $\si$.
Consider the $H$-bundle $\mathcal{D}^{-\check\rho}$ on $\si$, which is determined by the property that the associated line bundle $\mathcal{D}^{-{\check{\rho}}}\times_H \mathbb{C}_\lambda$ is $\mathcal{D}^{-\langle{\check{\rho}}, \lambda\rangle}$, where $\lambda$ belongs to the lattice of characters and $\mathbb{C}_{\lambda}$ is the corresponding 1-dimensional representation.
The coordinate transformation formulas for superoper connection immediately lead to another characterization of this bundle via $\mathcal{F}_B$-reduction. The following statement is the supersymmetric version of Lemma 4.2.1 of [@FL].\
[**Lemma 3.2.**]{} [*The H-bundle $\mathcal{F}_H=\mathcal{F}_B\times_B H=\mathcal{F}_B/N$ is isomorphic to $\mathcal{D}^{-{\check{\rho}}}$.*]{}\
Now one can derive from Lemma 3.1 the transformation properties of the canonical representatives of superopers, which will provide the transformation formulas for $g_1,\dots, g_l$. In order to do that, one needs to apply to the operator [(\[sopst\])]{} the gauge transformation of the form $$\begin{aligned}
\label{tranfun}
\exp\Big({\kappa\chi_1-\frac{1}{2}(D_{\xi}\kappa)[\chi_1, \chi_1]}\Big)\check{\rho}(D_{\xi}\alpha),\end{aligned}$$ where $\kappa=\frac{{\partial}_w\alpha(w,\xi)}{D_{\xi}\alpha(w,\xi)}$. Then we have that $$\begin{aligned}
\label{transf}
&&\tilde g_1(w, \xi)= g_1(w, \xi)(D_\xi\alpha)^2,\nonumber\\
&&\tilde g_2(w, \xi)=g_2(w,\xi)(D_\xi\alpha)^3+\{\alpha; w,\xi\},\nonumber\\
&&\tilde g_j(w, \xi)=g_j(w,\xi)(D_\xi\alpha)^{d_j+1}, \quad j>2.\end{aligned}$$ Therefore [(\[tranfun\])]{} gives the transition functions for the bundles $\mathcal{F}_B$ and $\mathcal{F}$.\
[**Remark.**]{} Note that the $g_1$-term is absent in the $osp(1|2)$ case; however, it often appears in higher rank. The first example is $sl(2|1)\cong osp(2|2)$.\
The formulas [(\[transf\])]{} give the following description of the space of superopers: $$\begin{aligned}
s{\rm{Op}}_G(\si)\cong sProj(\si)\times\bigoplus^l_{j=1, j\neq 2}\Gamma(\si, \mathcal{D}^{-d_j-1}),\end{aligned}$$ where $sProj(\si)$ stands for the space of superprojective connections on $\si$.
In the previous section we indicated that in the $osp(1|2)$ case one can introduce an oper related to a superoper by considering $\nabla^2$; stripping it of the $\theta$- and $S$-dependence, we obtain that the resulting $\overline{\nabla^2}$ has all the needed properties of an $sl(2)$-oper on the curve $X$ which is the base manifold of $\si$.
A similar construction is possible in the higher rank case. Let $^0G$ be the reductive group which is the base manifold of $G$. Due to the structure of the coordinate transformations we derived above, we find that $\overline{\nabla^2}$ indeed defines an oper on $X$. We refer to this object as the $^0G$-[*oper associated with the $G$-superoper*]{}, and denote it by the triple $({^0\mathcal{F}}, \overline{\nabla^2}, {^0\mathcal{F}_B})$, where $^0{\mathcal{F}}$, $^0{\mathcal{F}}_B$ denote the appropriate pure even reductions of the principal bundles.
Superopers with regular singularities, Miura superopers and Bethe ansatz equations
==================================================================================
[**4.1. Superopers with regular singularities**]{}. Consider a point $x$ on the super Riemann surface $\si$ and the formal superdisc $SD_x$ around that point with coordinates $(z,\theta)$. We define a $G$-superoper with regular singularity on $SD_x$ as an operator of the form $$\begin{aligned}
\label{sinso}
D_{\theta}+\sum a_i(z, \theta)f_i+\Big(\mu_1(z)+\frac{\theta}{z}\mu_0(z)\Big),\end{aligned}$$ modulo the $B(\mathcal{K}_x)$-transformations ($\mathcal{K}_x=S[\theta]((z))$), where the $a_i(z, \theta)\in \mathcal{O}_x$ are nowhere vanishing and invertible, and $\mu_i(z)\in \mathfrak{b}(\mathcal{K}_x)$ $(i=0,1)$ are such that their bodies satisfy $\overline{\mu_i}
\in \mathfrak{b}^{\rm{red}}(\mathcal{O}^{\rm{red}}_x)$. As before, one can eliminate the $a_i$-dependence via $H$-transformations; therefore we can talk about the $N(\mathcal{K}_x)$ equivalence class of operators of the type [(\[sinso\])]{} with $a_i=1$. Let us denote by $sOp_G^{RS}(SD_x)$ the space of superopers with regular singularity. Clearly, we have the embedding: $sOp^{RS}_G(SD_x)\subset sOp_G(SD^\times_x)$.
The $^0G$-oper corresponding to the $G$-superoper [(\[sinso\])]{} is an oper with regular singularity. It has the following form: $$\begin{aligned}
{\partial}+\chi^2_{-1}+[\chi_{-1},\overline{\mu_1}]+ (\overline{\mu_1})^2+\frac{1}{z}(\overline{\mu_0}),\end{aligned}$$ which can be transformed to the standard form via the gauge transformation by means of $\frac{\check\rho}{2}(z)$: $$\begin{aligned}
{\partial}_z+\frac{1}{z}\Big(\chi^2_{-1}-\frac{\check\rho}{2}+Ad_{\frac{\check{\rho}}{2}(z)}\cdot\bar{\mu}_0\Big)+v(z),\end{aligned}$$ where $v(z)$ is regular.
Denoting by $-\check\lambda$ the projection of $\mu_0$ on $\mathfrak{h}$, we find that the residue of this differential operator is equal to $\chi_{-1}^2-\check\lambda-\frac{\check\rho}{2}$; however, since this is an oper, only the corresponding class in $\mathfrak{h}/W$ is well defined, and we denote it as $(-\check\lambda-\frac{\check\rho}{2})_W$, i.e. this oper belongs to ${\rm Op}^{RS}_G(D_x)_{\check\lambda}$, see e.g. [@Fe].
Let us refer to the space of superopers with regular singularity such that $\overline{\mu_0}(0)=\check\lambda$ as $s{\rm Op}_G^{RS}(SD_x)_{\check\lambda}$.
If we consider a representation $V$ of $G$, one can talk about the system of differential equations $\nabla \phi_V(z, \theta)=0$ and its monodromy, as in the pure even case.
Let $\check{\lambda}$ be a dominant integral coweight and let us introduce the following class of operators: $$\begin{aligned}
\label{lsop}
\nabla=D_{\theta}+\Big(\sum a_i(z, \theta)f_i+\mu(z, \theta)\Big),\end{aligned}$$ where $a_i=z^{\langle \alpha_i, \check{\lambda}\rangle}(r_i(\theta)+z(\dots))$, so that the body of $r_i$ is nonzero, and $\mu(z, \theta)\in \mathfrak{b}(\mathcal{O}_x)$. We denote the quotient of the space of such operators by the action of $B(\mathcal{O}_x)$ as ${s{\rm Op}_{G}}(SD_x)_{\check\lambda}$.
The following Lemma is an analogue of Lemma 2.4. of [@Fe].\
[**Lemma 4.1.**]{} [*There is an injective map $i: {s{\rm Op}_{G}}(SD_x)_{\check\lambda}\to s{\rm Op}_G(SD^{\times}_x)$, so that ${\rm Im}\, i \subset {s{\rm Op}_{G}}^{RS}(SD_x)_{\check\lambda}$. The image of $i$ is a subset of the set of those elements of ${s{\rm Op}_{G}}^{RS}(SD_x)_{\check\lambda}$ for which the resulting oper has trivial monodromy around $x$.*]{}\
[**Remark.**]{} Notice that the $^0G$-opers associated with the superopers from $s{\rm Op}_G(SD_x)_{\check\lambda}$ belong to ${\rm Op}_G(D_x)_{\check\lambda}$. However, here $\check\lambda$ is an integral dominant coweight for the Lie superalgebra. If we consider $\check\lambda$ to be an integral dominant coweight for the underlying Lie algebra, the monodromy of the corresponding superoper would not necessarily be trivial: the expression will include half-integer powers of $z$, and the monodromy will correspond to the reflection $\theta\to -\theta$.\
[**4.2. Miura superopers.**]{} A Miura superoper is defined in complete analogy with the pure even case. Namely, a [*Miura $G$-superoper*]{} is a quadruple $(\mathcal{F}, \nabla, \mathcal{F}_B, \mathcal{F}'_B)$, where the triple $(\mathcal{F}, \nabla, \mathcal{F}_B)$ is a $G$-superoper and ${\mathcal{F}}'_B$ is another $B$-reduction preserved by $\nabla$. Let us denote the space of such superopers as ${s{\rm MOp}_G}(\si)$.
Such $B$-reductions of $\mathcal{F}$ are completely determined by the $B$-reduction of the fiber $\mathcal{F}_x$ at any point $x$ on $\si$, and the set of all such reductions is given by $(G/B)_{\mathcal{F}_x}=\mathcal{F}_x\times_G G/B=(G/B)_{\mathcal{F}'_x}$. Then, if a superoper $\xi$ has regular singularity and trivial monodromy, there is an isomorphism between the space of Miura superopers for such $\xi$ and $(G/B)_{\mathcal{F}'_x}$.
The structure of the flag manifold $G/B$ is usually quite complicated [@penkov], [@manin]; however, we just need the structure determined by its “body”, i.e. $^0G/^0B$. For the pure even flag variety $^0G/^0B$, we have the standard Schubert cell decomposition, where the cells $^0S_w={}^0B w_0 w\, {}^0B$ are labeled by the Weyl group elements $w\in W$ and $w_0$ is the longest element of the Weyl group (from now on, when we say Weyl group, we mean only the Weyl group corresponding to the pure even Weyl reflections of the $^0G$ root system).
Let us denote by $S_w$ the preimage of $^0S_w$ under the projection $P:G/B\to {}^0G/{}^0B$. We assume that the preimage of the big cell $^0Bw_0{}^0B$ allows the factorization $Bw_0B$. The $B$-reduction $\mathcal{F}'_B$ defines a point in $G/B$. We say that the $B$-reductions $\mathcal{F}_{B,x}$ and $\mathcal{F}'_{B,x}$ are in relative position $w$ if $\mathcal{F}_{B,x}$ belongs to $\mathcal{F'}_x\times_B S_w$. When $w=1$, we say that ${\mathcal{F}}_{B,x}$, ${\mathcal{F}}'_{B,x}$ are in generic position. A Miura superoper is called generic at a given point $x\in \si$ if the $B$-reductions ${\mathcal{F}}_{B,x}$, ${\mathcal{F}}'_{B,x}$ are in generic position. Notice that if a Miura superoper is generic at $x$, it is generic in a neighborhood of $x$. We denote the space of generic Miura superopers on $U$ as ${s{\rm MOp}_G}(U)_{gen}$. It is clear that the reduction of a Miura superoper to $(^0\mathcal{F}, \overline{\nabla^2}, ^0\mathcal{F}_B, ^0\mathcal{F}'_B)$ gives a Miura oper.
Therefore the following Proposition holds, which follows directly from the reduction to the pure even case, although one can also go along the lines of the proof of Lemma 2.6. and Lemma 2.7 of [@Fe].\
[**Proposition 4.2.**]{} [*i) The restriction of a Miura superoper to the punctured disc is generic.\
ii) For a generic Miura superoper $(\mathcal{F}, \nabla, \mathcal{F}_B, \mathcal{F}'_B)$ the $H$-bundle $\mathcal{F}'_H$ is isomorphic to $w_0^*(\mathcal{F}_H)$.*]{}\
As in the even case, we can define an $H$-connection associated to a Miura superoper on $\mathcal{F}_H\cong \mathcal{D}^{-\check\rho}$, which is determined by $\tilde\nabla=D_{\theta}+u(z,\theta)$, where $u$ is an $\mathfrak{h}$-valued function. Under the change of coordinates $(z, \theta)=(f(w,\xi),
\alpha(w,\xi))$, the long superderivative transforms as follows: $$\begin{aligned}
D_{\xi}+D_{\xi}\alpha \cdot u(f(w, \xi), \alpha(w,\xi))-
\frac{{\partial}_w\alpha(w,\xi)}{D_{\xi}\alpha(w,\xi)}\check{\rho}.\end{aligned}$$ Let us denote the resulting morphism from ${s{\rm MOp}_G}(U)_{gen}$ to the space $Conn_U$ of the flat $H$-connections on $U$ described above by $\mathbf{a}$. Now suppose we are given a long superderivative $\tilde\nabla$ on the $H$-bundle $\mathcal{D}^{-\check\rho}$; one can construct a generic Miura superoper as follows. Let us set $\mathcal{F}=\mathcal{D}^{-\check \rho}\times_H G$, ${\mathcal{F}}_B=\mathcal{D}^{-\check \rho}\times_H B$. Then, defining ${\mathcal{F}}_B'$ as $\mathcal{D}^{-\check \rho}\times_H w_0 B$ and the long superderivative on $\mathcal{F}$ as $\nabla=\chi_{-1}+\tilde\nabla$, we see that the constructed quadruple $({\mathcal{F}}, \nabla, {\mathcal{F}}_B, {\mathcal{F}}_B')$ is a generic Miura superoper.
Therefore, we obtained the following statement which is analogue of Proposition 2.8 of [@Fe].\
[**Proposition 4.3.**]{} [*The morphism ${\bf a}: {s{\rm MOp}_G}(U)_{gen}\to Conn_U$ is an isomorphism of algebraic supervarieties.*]{}\
Similarly, one can define the space of Miura $G$-superopers of coweight $\check\lambda$ on $SD_x$ via the same definition applied to ${s{\rm Op}_{G}}(SD_x)_{\check\lambda}$. Again, we have the isomorphism ${s{\rm MOp}_G}(SD_x)_{\check\lambda}\cong{s{\rm Op}_{G}}(SD_x)_{\check\lambda}\times (G/B)_{\mathcal{F}_x'}$. We define relative positions as in the case of standard Miura superopers ($\check\lambda=0$) and let ${s{\rm MOp}_G}(SD_x)_{\check\lambda, gen}$ denote the variety of generic Miura superopers of coweight $\check\lambda$.
Finally, there is an analogue of Proposition 4.3 in this case. Let $Conn^{RS}_{SD_x, \check\lambda}$ denote the set of long derivatives on the $H$-bundle $\mathcal{D}^{-\check\rho}$ with regular singularity and residue $-\check\lambda$, namely the long derivatives of the form: $$\begin{aligned}
\tilde \nabla=D_{\theta}+\frac{\theta}{z}\check\lambda+u(z,\theta),\end{aligned}$$ where $u(z,\theta)\in \mathfrak{h}[[z,\theta]]$. Then, as before, one can construct a connection $\nabla=\tilde\nabla+\chi_{-1}$, and making the gauge transformation with $\check\lambda(z)$ we obtain a connection from ${s{\rm Op}_{G}}(SD_x)_{\check\lambda}$. Therefore, there is an isomorphism between $Conn^{RS}_{SD_x, \check\lambda}$ and ${s{\rm MOp}_G}(SD_x)_{gen,\check\lambda}$.\
[**4.3. Miura superopers with regular singularities on $SC^*$.**]{} First, let us consider a Miura superoper of coweight $\check\lambda$ on the disc $SD_x$. Assume it is not generic, but ${\mathcal{F}}'_{B,x}$ has relative position $w$ with ${\mathcal{F}}_{B,x}$ at $x$. Let us denote the space of all such Miura superopers by ${s{\rm MOp}_G}{(SD_x)}_{\check\lambda,w}$.
From the previous subsection we know that each such Miura superoper corresponds to some $H$-connection on $\mathcal{D}^{-\check\rho}$ over $SD_x^{\times}$. Using the results from the pure even case, one can show that the corresponding $H$-connection has the form $$\begin{aligned}
\label{nu}
D_{\theta}+\frac{\theta}{z}\check\nu+f(z,\theta),\end{aligned}$$ where $\check\nu-\frac{\check\rho}{2}=w(-\check\lambda-\frac{\check\rho}{2})$, $w$ defines the relative position at $x$, and $f(z,\theta)$ is such that the body of its superderivative is regular in $z$, i.e. $\overline{D_{\theta}f(z,\theta)}\in \mathfrak{h}^{\rm red}[[z]]$. Let us denote the space of such connections by $Conn^{RS}_{SD_x, \check\lambda, w}$.
Therefore, we can construct a map ${\bf b}^{RS}_{\check\lambda,w}: Conn^{RS}_{SD_x, \check\lambda, w}\to {s{\rm Op}_{G}}^{RS}(SD_x)$ similarly to the previous subsection, by constructing the triple $({\mathcal{F}}, \nabla, {\mathcal{F}}_B)$ via the identifications ${\mathcal{F}}=\mathcal{D}^{-{\check{\rho}}}\times_H G$, ${\mathcal{F}}_B=\mathcal{D}^{-{\check{\rho}}}\times_H B$ and $\nabla={\tilde}\nabla+\chi_{-1}$, where ${\tilde}\nabla\in Conn^{RS}_{SD_x, \check\lambda, w}$. We denote by $Conn^{reg}_{SD_x,\check\lambda,w}$ the preimage of ${s{\rm Op}_{G}}{(SD_x)}_{\check\lambda,w}$ under this morphism; therefore we have the map ${\bf b}_{\check\lambda,w}:
Conn^{reg}_{SD_x, \check{\lambda}, w}\to {s{\rm MOp}_G}^{RS}(SD_x)_{\check{\lambda}}$, so that in the quadruple $({\mathcal{F}}, \nabla, {\mathcal{F}}_B, {\mathcal{F}}'_B)$ the first three terms are as above and ${\mathcal{F}}'_B=\mathcal{D}^{-{\check{\rho}}}\times_H w_0B$. If we denote by ${s{\rm MOp}_G}^{RS}(SD_x)_{\check{\lambda}, w}$ those Miura superopers of coweight $\check{\lambda}$ which have relative position $w$ at $x$, then the following Proposition is true, based on the results from the pure even case (see Proposition 2.9 of [@Fe]).
[**Proposition 4.4.**]{} [*For each $w\in W$, ${\bf b}_{{\check{\lambda}}, w}$ is an isomorphism of the supervarieties $Conn^{reg}_{SD_x,\check\lambda,w}$ and ${s{\rm MOp}_G}{(SD_x)}_{\check\lambda,w}$.*]{}\
Let us now consider the case of $\check\lambda=0$ and assume that the relative position is given by $s_{2\alpha_i}$, where $\alpha_i$ is a simple black root. In local coordinates, the corresponding $H$-connection will be given by the differential operator: $$\begin{aligned}
\label{simple}
\tilde \nabla=D_{\theta}+\frac{\theta}{2z}\check{\alpha}_i+u(z,\theta),\end{aligned}$$ where $u(z,\theta) \in \mathfrak{h}[\theta]((z))$ with $u(z,\theta)=u_1(z)+\theta u_0(z)$ and $\overline{u_0}(z)\in \mathfrak{h}^{\rm red}[[z]]$. Then, applying the gauge transformation $$\begin{aligned}
\exp\Big(-\frac{\theta}{2z}e_{i}+\frac{1}{4z}e^2_{i}\Big)\end{aligned}$$ to the Miura superoper $\tilde \nabla+\chi_{-1}$, we obtain that the resulting element of ${s{\rm Op}_{G}}{(SD_x)}_{\check\lambda,s_{2\alpha_i}}$ gives an element of ${\rm Op}_G(D_x)_{\check\lambda,s_{2\alpha_i}}$ if $\langle\check\alpha_{i}, \overline{u_0}(0)\rangle=0$. If we consider the associated bundle corresponding to the 3-dimensional representation of the $osp(1|2)$ triple $\{e_i, f_i, \check\alpha_i\}$, then, writing out all the solutions explicitly, we find that this condition is also necessary. Namely, the following Proposition holds.\
[**Proposition 4.5.**]{} [*A superoper corresponding to the $H$-connection given by [(\[simple\])]{} belongs to ${\rm Op}_G{(D_x)}_{\check\lambda,s_{2\alpha_i}}$ if and only if $\langle\check\alpha_{i}, \overline{u_0}(0)\rangle=0$.*]{}\
Now we are ready to study superopers with regular singularities over the super Riemann surface $SC^*$. Let us consider points $\mathcal{Z}_1=(z_1,\theta_1),\dots,\mathcal{Z}_N=(z_N, \theta_N)$ on $SC^*$. Also, let $\check\lambda_1,\dots, \check\lambda_N,\check\lambda_{\infty}$ be a set of dominant coweights of $\mathfrak{g}$. Let us consider the $H$-connections on $SC^*$ with regular singularities at the points $\mathcal{Z}_1, \dots, \mathcal{Z}_N, (\infty,0)$ and at a finite number of other points ${\mathcal{W}}_1=(w_1, \xi_1), \dots, {\mathcal{W}}_m=(w_m,\xi_m)$, such that the residues of the corresponding even $H$-connection at $z_i$, $w_j$, $\infty$ are equal to $-y_i({\check{\lambda}}_i+\frac{{\check{\rho}}}{2})+\frac{{\check{\rho}}}{2}$, $-y'_j(\frac{{\check{\rho}}}{2})+\frac{{\check{\rho}}}{2}$, $-y_{\infty}({\check{\lambda}}_{\infty}+\frac{{\check{\rho}}}{2})+\frac{{\check{\rho}}}{2}$, where $y_i, y_j', y_{\infty}\in W$. In other words, we are considering the $H$-connections determined by the differential operator of the following type: $$\begin{aligned}
&&D_{\theta}-\Big(\sum^N_{i=1}\frac{\theta-\theta_i}{z-z_i+\theta\theta_i} \big(y_i({{\check{\lambda}}}_i+\frac{{\check{\rho}}}{2})-\frac{{\check{\rho}}}{2}\big)+\nonumber\\
&&\sum^m_{j=1}\frac{\theta-\xi_j}{z-w_j+\theta\xi_j} \big(y'_j(\frac{{\check{\rho}}}{2})-\frac{{\check{\rho}}}{2}\big)\Big) +\rm{nilp} \end{aligned}$$ on $SC^*{\backslash}\infty$, where [nilp]{} stands for elements $f(z,\theta)$ from $\mathfrak{h}[\theta]((z))$ such that $\overline{f(z,\theta)}=
\overline{D_{\theta}f(z,\theta)}=0$. Let us study its behaviour at infinity. Any connection $D_{\theta}+\alpha(z,\theta)$ on $\mathcal{D}^{-{\check{\rho}}}$ has the following expansion with respect to the coordinates $(u,\eta)=(\frac{-1}{z}, \frac{\theta}{z})$: $$\begin{aligned}
D_{\eta}+u^{-1}\alpha(-u^{-1}, -\eta u^{-1})+u^{-1}{\eta}{\check{\rho}}.\end{aligned}$$ Therefore, considering $\frac{\eta}{u}$-coefficient in the expansion, we obtain the following constraint: $$\begin{aligned}
\label{constraint}
&&\sum^{N}_{i=1}(y_i({\check{\lambda}}_i+\frac{{\check{\rho}}}{2})-\frac{{\check{\rho}}}{2})+
\sum^{m}_{i=1}(y'_i(\frac{{\check{\rho}}}{2})-\frac{{\check{\rho}}}{2})=\nonumber\\
&&y'_{\infty}(-w_0({\check{\lambda}}_{\infty})+\frac{\check{\rho}}{2})-\frac{\check{\rho}}{2},\end{aligned}$$ where $y'_{\infty}w_0=y_{\infty}$. This expression is expected from the consideration of the pure even case [@Fe].
Let us denote the set of $H$-connections considered above by $Conn(SC^*)^{RS}_{({\mathcal{Z}}_i), (\infty,0); \check\lambda_i, \check{\lambda}_{\infty}}$.
Now one can associate to any such connection a $G$-oper on $SC^*$ with regular singularities at the points $({\mathcal{Z}}_i)$, $({\mathcal{W}}_j)$, $(\infty,0)$ by setting, in the familiar way, $\mathcal{F}=\mathcal{D}^{-{\check{\rho}}}\times_H G$, $\mathcal{F}_B=\mathcal{D}^{-{\check{\rho}}}\times_H B$.
Let us denote the set of superopers with regular singularities at ${\mathcal{Z}}_1\dots {\mathcal{Z}}_N, (\infty,0)$, whose restriction to the formal superdisc at any point ${\mathcal{Z}}_i$ or $(\infty,0)$ belongs to the space ${s{\rm Op}_{G}}(SD_{{\mathcal{Z}}_i})_{\check \lambda}$ or ${s{\rm Op}_{G}}(SD_{(\infty,0)})_{{\check{\lambda}}_{\infty}}$, by ${s{\rm Op}_{G}}(SC^*)_{({\mathcal{Z}}_i), (\infty,0); \check\lambda_i, \check{\lambda}_{\infty}}$.
Then let $Conn(SC^*)_{({\mathcal{Z}}_i), (\infty,0); \check\lambda_i, \check{\lambda}_{\infty}}\subset Conn(SC^*)^{RS}_{({\mathcal{Z}}_i), (\infty,0); \check\lambda_i, \check{\lambda}_{\infty}}$ be those $H$-connections with regular singularities, which are associated to\
${s{\rm Op}_{G}}(SC^*)_{({\mathcal{Z}}_i), (\infty,0); \check\lambda_i, \check{\lambda}_{\infty}}$ under the above correspondence. Therefore we have the map $$Conn(SC^*)_{({\mathcal{Z}}_i), (\infty,0); \check\lambda_i, \check{\lambda}_{\infty}}\to
{s{\rm Op}_{G}}(SC^*)_{({\mathcal{Z}}_i), (\infty,0); \check\lambda_i, \check{\lambda}_{\infty}}.$$ We can construct a Miura superoper associated with the image of this map, namely ${\mathcal{F}}'_B=\mathcal{D}^{-\check{\rho}}\times_H w_0B$. Therefore, this map can be lifted to $$\begin{aligned}
&&{\bf b}_{({\mathcal{Z}}_i), (\infty,0); \check\lambda_i, \check{\lambda}_{\infty}}:\nonumber\\
&&Conn(SC^*)_{({\mathcal{Z}}_i), (\infty,0); \check\lambda_i, \check{\lambda}_{\infty}}\to
{s{\rm MOp}_G}(SC^*)_{({\mathcal{Z}}_i), (\infty,0); \check\lambda_i, \check{\lambda}_{\infty}}.\end{aligned}$$ Similarly to the pure even case, one can argue that this map is an isomorphism. Notice that for a given superoper $\tau\in {s{\rm Op}_{G}}(SC^*)_{({\mathcal{Z}}_i), (\infty,0); \check\lambda_i, \check{\lambda}_{\infty}}$ (because of the absence of nontrivial monodromy), the space ${s{\rm MOp}_G}(SC^*)_{\tau}$ of the corresponding Miura superopers is isomorphic to $G/B$.
Similarly to the argument in the pure even case, we obtain the following theorem, which is an analogue of Theorem 3.1 of [@Fe].\
[**Theorem 4.6.**]{}[*The set of all connections $Conn(SC^*)_{({\mathcal{Z}}_i), (\infty,0); \check\lambda_i, \check{\lambda}_{\infty}}$, which correspond to a given superoper $\tau\in {s{\rm Op}_{G}}(SC^*)_{({\mathcal{Z}}_i), (\infty,0); \check\lambda_i, \check{\lambda}_{\infty}}$, is isomorphic to the set of points of the flag variety $G/B$.*]{}\
[**4.4. $SPL_2$-superopers and super Bethe ansatz equations.**]{} In this section we return to the simplest nontrivial example of the superoper, related to the supergroup $SPL_2$. In the previous section we saw that for a fixed superoper $\tau$ one can trivialize $\mathcal{F}$ by using the fiber at $(\infty, 0)$. Therefore we have a trivialization of the $G/B$-bundle and the map $\phi_{\tau}:SC^*\to G/B$, such that $(\infty,0)$ maps into the point orbit of $G/B$. Also, in the case $G=SPL_2$, $G/B\cong SC^*$.
Similarly to the pure even case, let us call the superoper $\tau$ [*non-degenerate*]{} if i) $\phi_{\tau}(\mathcal{Z}_i)$ is in generic position with $B$ for any $i=1,\dots, N$, and ii) the relative position of $\phi_{\tau}(x)$ and $B$ is either generic or corresponds to a reflection for all $x\in SC^*\backslash {(\infty,0)}$. Since $PGL_2$-opers are non-degenerate for a generic choice of $z_i$, and those are the opers corresponding to $SPL_2$-superopers, any $\tau\in s{\rm Op}_{SPL(2)}(SC^*)_{({\mathcal{Z}}_i), (\infty,0); \check\lambda_i, \check{\lambda}_{\infty}}$ is non-degenerate for a generic choice of ${\mathcal{Z}}_i$. Also, let us consider the unique Miura superoper structure for $\tau$ such that ${\mathcal{F}}_{B, (\infty,0)}$ and ${\mathcal{F}}'_{B, (\infty,0)}$ coincide, i.e. correspond to the point orbit in $G/B$.
The corresponding $H$-connections will have the following form: $$\begin{aligned}
\tilde{\nabla}=D_{\theta}-\sum^N_{i=1}\frac{\theta-\theta_i}{z-z_i-\theta\theta_i} {{\check{\lambda}}}_i+
\sum^m_{j=1}\frac{\theta-\xi_j}{z-w_j-\theta\xi_j} \frac{\check{\alpha}}{2}+\rm{nilp}, \end{aligned}$$ where $\check{\lambda}_i=l_i \check\omega$, so that $l_i\in\mathbb{Z}_+$. Imposing the constraint from Proposition 4.5, we obtain that the following equations should hold for the corresponding oper to be monodromy free: $$\begin{aligned}
\label{gbae}
\sum^N_{i=1}\frac{2l_i}{w_j-z_i}-
\sum^m_{s=1,\, s\neq j}
\frac{2}{w_j-w_s}=0.\end{aligned}$$ Also, let us recall that the coweights ${\check{\lambda}}_i$ should also satisfy [(\[constraint\])]{}, which in our case simplifies to: $$\begin{aligned}
\sum^N_{i=1}l_i-m=l_{\infty}.\end{aligned}$$ Note that the corresponding $PGL_2$-oper coweights, i.e. $2l_i$, are even: superopers associated with the odd weights will have a monodromy corresponding to a reflection in the $\theta$ variable, as explained above. The equations [(\[gbae\])]{} are exactly the Bethe ansatz equations for the $osp(2|1)$ Gaudin model studied in [@kulish].\
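As a quick illustration, the equations [(\[gbae\])]{} (with the self-interaction term $s=j$ excluded, as usual) can be solved numerically for small configurations. The following sketch is a hypothetical minimal example of ours, not taken from [@kulish]: $N=2$ marked points $z_{1,2}=\mp 1$ with $l_1=l_2=1$ and a single Bethe root ($m=1$, so that $l_\infty=\sum_i l_i - m = 1$); by symmetry the root must sit midway between the marked points.

```python
# Numerical solution of the Bethe ansatz equations
#   sum_i 2 l_i / (w_j - z_i) - sum_{s != j} 2 / (w_j - w_s) = 0
# for a hypothetical minimal configuration: N = 2 marked points,
# l_1 = l_2 = 1, and a single Bethe root (m = 1).
import numpy as np
from scipy.optimize import fsolve

def bethe_residuals(w, z, l):
    """Left-hand sides of the Bethe equations for the roots w."""
    res = []
    for j, wj in enumerate(w):
        val = sum(2.0 * li / (wj - zi) for zi, li in zip(z, l))
        val -= sum(2.0 / (wj - ws) for s, ws in enumerate(w) if s != j)
        res.append(val)
    return res

z = [-1.0, 1.0]   # marked points z_i
l = [1, 1]        # coweight labels l_i
roots = fsolve(bethe_residuals, [0.3], args=(z, l))
# symmetry forces the single root to w = 0
```

For larger $N$ and $m$ the roots are generically complex, and the same residual function can be fed to a complex root finder instead.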
Some remarks
============
In this article, we studied superopers for superalgebras with a purely odd simple root system. However, one can define a similar object for other types of superalgebras, although in that case it can only be defined locally (i.e. on a superdisc). The analogue of the expression [(\[sops\])]{} will be: $$\begin{aligned}
\nabla=D_{z,\theta}+\sum_{e}a_e(z, \theta)f_e+\sum_{o} \theta a_o(z, \theta)f_o+
\mu(z,\theta),\end{aligned}$$ where the summations run over the even and odd roots respectively, and $a_{e,o}(z,\theta)$ are even functions of $z, \theta$ with nonzero body. The resulting connection cannot be defined globally on the super Riemann surface; however, the operator $\nabla^2\big|_{\theta=0}$ can give rise to a connection for a $G$-bundle over the smooth curve underlying the super Riemann surface, while $\overline{\nabla^2}$ will give an oper for the underlying semisimple supergroup. This construction gives a generalization of opers in the case of any simple superalgebra.
In this paper we briefly considered an important relation between the spectrum of the Gaudin model and superopers on $SC^*$, which could in fact give an example of the geometric Langlands correspondence in the case of superalgebras. For $SPL_2$-superopers and the Gaudin model for $osp(2|1)$ the spectrum was in fact determined by the underlying $PGL_2$-oper. Unfortunately, Gaudin models have not yet been studied for other superalgebras, so it is not clear whether such a relation holds for higher-rank superalgebras.
We will address these and other important questions in the forthcoming publications.
[10]{}
J.F. Arvis, [*Classical dynamics of supersymmetric Liouville theory*]{}, Nucl. Phys. [**B**]{} 212 (1983) 151-172.
A. Beilinson, V. Drinfeld, [*Opers*]{}, arXiv: math.AG/0501398.
F.A. Berezin, [*Introduction to Superanalysis*]{}, Springer (1987).
M.J. Bergvelt, J.M. Rabin, [*Supercurves, their Jacobians, and super KP equations*]{}, Duke Math. Journal 98(1) (1999).
L. Crane, J.M. Rabin, [*Super Riemann Surfaces: Uniformization and Teichmueller Theory*]{}, Comm. Math. Phys. 113 (1988) 601-623.
F. Delduc, E. Ragoucy, P. Sorba, [*Super-Toda Theories and W-algebras from Superspace Wess-Zumino-Witten Models*]{}, Commun. Math. Phys. [**146**]{} (1992) 403-426.
F. Delduc, A. Gallot, [*Supersymmetric Drinfeld-Sokolov reduction*]{}, arXiv: solv-int/9802013.
V. Drinfeld, V. Sokolov, [*Lie Algebras and KdV type equations*]{}, J. Sov. Math. [**30**]{} (1985) 1975-2036.
B. Feigin, E. Frenkel, N. Reshetikhin, [*Gaudin model, Bethe Ansatz and critical level*]{}, Comm. Math. Phys. [**166**]{} (1994) 27-62.
L. Frappat, E. Ragoucy, P. Sorba, [*W-algebras and superalgebras from constrained WZW models: a group theoretical classification*]{}, arXiv: hep-th/9207102.
E. Frenkel, [*Affine Algebras, Langlands Duality and Bethe Ansatz*]{}, in: Proceedings of the International Congress of Mathematical Physics, Paris, 1994, International Press, 606-642.
E. Frenkel, [*Langlands correspondence for loop groups*]{}, CUP, 2007.
E. Frenkel, [*Opers on the Projective Line, Flag Manifolds and Bethe Ansatz*]{}, arXiv: math/0308269.
T. Inami, H. Kanno, [*Lie Superalgebraic Approach to Super Toda Lattice and Generalized Super KdV Equations*]{}, Commun. Math. Phys. [**136**]{} (1991) 519-542.
M. Kapranov, E. Vasserot, [*Supersymmetry and the formal loop space*]{}, arXiv:1005.4466.
P.P. Kulish, N. Manojlovic, [*Bethe vectors of the $osp(1|2)$ Gaudin model*]{}, Lett. Math. Phys. 55 (2001) 77-95.
P.P. Kulish, A.M. Zeitlin, [*Group Theoretical Structure and Inverse Scattering Method for super-KdV Equation*]{}, J. Math. Sci. [**125**]{} (2005) 203-214.
D.A. Leites, [*Theory of supermanifolds*]{}, KF Akad. Nauk SSSR, Petrozavodsk (1983).
Yu.I. Manin, A.A. Voronov, [*Supercellular partitions of flag superspaces*]{}, Itogi Nauki i Tekhniki. Ser. Sovrem. Probl. Mat. Nov. Dostizh., 32, VINITI, Moscow, 1988, 27-70.
I.B. Penkov, [*Borel-Weil-Bott theory for classical Lie supergroups*]{}, Itogi Nauki i Tekhniki. Ser. Sovrem. Probl. Mat. Nov. Dostizh., 32, VINITI, Moscow, 1988, 71-124.
P. Mathieu, [*Super Miura transformations, Super Schwarzian derivatives and Super Hill Operators*]{}, in: Integrable and Superintegrable Systems, World Scientific (1991) 352-388.
M. Rakowski, G. Thompson, [*Connections on Vector Bundles over Super Riemann Surfaces*]{}, Phys. Lett. [**B**]{} 220 (1989) 557-561.
S.-J. Cheng, W. Wang, [*Dualities and Representations of Lie Superalgebras*]{}, Graduate Studies in Mathematics 144, AMS, 2012.
E. Witten, [*Khovanov Homology and Gauge Theory*]{}, arXiv: 1108.3103.
E. Witten, [*Notes on Super Riemann Surfaces and their Moduli*]{}, arXiv:1209.2459.
---
abstract: 'We study the reduced fidelity between local states of lattice systems exhibiting topological order. By exploiting mappings to spin models with classical order, we are able to analytically extract the scaling behavior of the reduced fidelity at the corresponding quantum phase transitions out of the topologically ordered phases. Our results suggest that the reduced fidelity, albeit being a local measure, generically serves as an accurate marker of a topological quantum phase transition.'
author:
- Erik Eriksson
- Henrik Johannesson
title: Reduced fidelity in topological quantum phase transitions
---
[*Introduction $-$*]{} Electron correlations in condensed matter systems sometimes produce topologically ordered phases where effects from local perturbations are exponentially suppressed [@WenReview]. The most prominent examples are the fifty or so observed fractional quantum Hall phases, with the topological order manifested in gapless edge states and excitations with fractional statistics. The fact that the ground state degeneracy in phases with non-Abelian statistics cannot be lifted by local perturbations lies at the heart of current proposals for topological quantum computation [@Nayak].
The insensitivity to local perturbations invalidates the use of a local order parameter to identify a quantum phase transition out of a topologically ordered phase. Attempts to build a theory of topological quantum phase transitions (TQPTs) $-$ replacing the Ginzburg-Landau symmetry-breaking paradigm $-$ have instead borrowed concepts from quantum information theory, in particular those of [*entanglement entropy*]{} [@PreskillWen] and [*fidelity*]{} [@hamma], neither of which requires the construction of an order parameter.
Fidelity measures the similarity between two quantum states, and, for pure states, is defined as the modulus of their overlap. Since the ground state changes rapidly at a quantum phase transition, one expects that the fidelity between two ground states that differ by a small change in the driving parameter should exhibit a sharp drop. This expectation has been confirmed in a number of case studies [@Gu], including several TQPTs [@hamma; @abasto; @fidelTQPT].
Suppose that one replaces the two ground states in a fidelity analysis by two states that also differ slightly in the driving parameter, but which describe only a local region of the system of interest. The proper concept that encodes the similarity between such mixed states is that of the [*reduced fidelity*]{}, which is the maximum pure state overlap between purifications of the mixed states [@uhlmann]. It has proven useful in the analysis of a number of ordinary symmetry-breaking quantum phase transitions [@fidelQPT]. But since the reduced fidelity is a local property of the system, similarly to that of a local order parameter, one may think that it would be less sensitive to a TQPT, which involves a global rearrangement of nonlocal quantum correlations [@WenReview]. However, this intuition turns out to be wrong. As we show in this paper, several TQPTs are accurately signaled by a singularity in the second-order derivative of the reduced fidelity. Moreover, the singularity can be even stronger than for the (pure state) global fidelity. The fact that a TQPT gets imprinted in a local quantity may at first seem surprising, but, as we shall see, parallels and extends results from earlier studies [@castelnovochamon; @trebst].
[*Fidelity and fidelity susceptibility $-$*]{} The fidelity $F(\beta, \beta')$ between two states described by the density matrices $\hat{\rho}(\beta)$ and $\hat{\rho}(\beta')$ is defined as [@uhlmann] $$\label{rf}
F(\beta,\beta') = \textrm{Tr}
\sqrt{\sqrt{\hat{\rho}(\beta)}\hat{\rho}(\beta') \sqrt{\hat{\rho}(\beta)}}.$$ When a system is in a pure state, $\hat{\rho}(\beta) =
|\Psi(\beta)\rangle \langle\Psi(\beta)|$, $F(\beta, \beta')$ becomes just the state overlap $|\langle \Psi(\beta')|\Psi(\beta)\rangle|$. When the states under consideration describe a subsystem, they will generally be mixed states, and we call the fidelity between such states *reduced fidelity*. In the limit where $\beta$ and $\beta'=\beta + \delta\beta$ are very close, it is useful to define the [*fidelity susceptibility*]{} [@you] $$\label{rfs}
\chi_F = \displaystyle \lim_{\delta\beta \to 0} \frac{-2 \ln F}{\delta\beta^2},$$ consistent with the pure state expansion $F \!\approx \!1 - \chi_F \delta\beta^2 / 2$.
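For pure states the limit in Eq. (\[rfs\]) can be checked directly by finite differences. The following minimal sketch (the two-level state is a hypothetical toy example of ours, not one of the models studied below) illustrates the definition:

```python
import numpy as np

def fidelity(psi1, psi2):
    # pure-state fidelity: modulus of the overlap
    return abs(np.vdot(psi1, psi2))

def chi_F(state, beta, dbeta=1e-4):
    # finite-difference version of chi_F = lim -2 ln F / dbeta^2
    F = fidelity(state(beta), state(beta + dbeta))
    return -2.0 * np.log(F) / dbeta**2

# toy state |Psi(beta)> = (cos beta, sin beta):
# F = |cos(dbeta)|, so chi_F -> 1 for any beta
state = lambda b: np.array([np.cos(b), np.sin(b)])
```

For this state $F=|\cos\delta\beta|$, so $\chi_F=1$ independently of $\beta$, consistent with the expansion $F \approx 1 - \chi_F \delta\beta^2/2$.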
[*The Castelnovo-Chamon Model $-$*]{} The first model we consider was introduced by Castelnovo and Chamon [@castelnovochamon], and is a deformation of the Kitaev toric code model [@kitaevtoric]. The Hamiltonian for $N$ spin-1/2 particles on the bonds of a square lattice with periodic boundary conditions is $$\label{cchamiltonian}
H=-\lambda_0 \displaystyle \sum_p B_p -\lambda_1 \sum_s A_s + \lambda_1 \sum_s
e^{-\beta\sum_{i\in s}\hat{\sigma}^{z}_i},$$ where $A_s=\prod_{i\in s} \hat{\sigma}^{x}_i$ and $B_p=\prod_{i\in p}
\hat{\sigma}^{z}_i$ are the star and plaquette operators of the original Kitaev toric code model. The star operator $A_s$ acts on the spins around the vertex $s$, and the plaquette operator $B_p$ acts on the spins on the boundary of the plaquette $p$. For $\lambda_{0,1}>0$ the ground state in the topological sector containing the fully magnetized state $|0\rangle$ is given by [@castelnovochamon] $$\label{ccgs}
|GS(\beta)\rangle = \displaystyle \sum_{g \in G} \frac{e^{\beta\sum_i
\sigma^z_i(g)/2}}{\sqrt{Z(\beta)}}g|0\rangle ,$$ with $Z(\beta) = \sum_{g \in G} e^{\beta\sum_i \sigma^z_i(g)}$, where $G$ is the Abelian group generated by the star operators $A_s$, and $\sigma^z_i(g)$ is the $z$ component of the spin at site $i$ in the state $g|0\rangle$. When $\beta=0$ the state in (\[ccgs\]) reduces to the topologically ordered ground state of the toric code model [@kitaevtoric]. When $\beta \to \infty$ the ground state (\[ccgs\]) becomes the magnetically ordered state $|0\rangle$. At $\beta_c= (1/2)\ln(\sqrt{2}+1)$ there is a second-order TQPT where the topological entanglement entropy $S_{topo}$ goes from $S_{topo}=1$ for $\beta < \beta_c$ to $S_{topo}=0$ for $\beta >
\beta_c$ [@castelnovochamon]. The global fidelity susceptibility $\chi_F$ close to $\beta_c$ was obtained in Ref. [@abasto], and found to diverge as $$\label{globalF}
\chi_F \sim \ln|\beta_c/\beta - 1|.$$
We here calculate the single-site reduced fidelity between the ground states of a single spin at two different parameter values $\beta$ and $\beta'$. To construct the density matrix $\hat{\rho}_i$ for the spin at site $i$ we use the expansion $$\label{rdm}
\hat{\rho}_i = \frac{1}{2} \displaystyle \sum_{\mu=0}^3 \langle \hat{\sigma}_i^{\mu}
\rangle \hat{\sigma}_i^{\mu},$$ with $\hat{\sigma}_i^{0} \equiv \openone_i$, and with the expectation values taken with respect to the ground state in (\[ccgs\]). There is a one-to-two mapping between the configurations $\{g\}=G$ and the configurations $\{\theta\} \equiv \Theta$ of the classical 2D Ising model $H =
-J\sum_{<s,s'>}\theta_s\theta_{s'}$ with $\theta_s = -1 \ (+1)$ when the corresponding star operator $A_s$ is (is not) acting on the site $s$ [@castelnovochamon]. Thus $\sigma_i^z=\theta_s \theta_{s'}$, where $i$ is the bond between the neighboring vertices $\langle s,s' \rangle$, see Fig. \[fig:cclattice\]. This gives $\langle GS(\beta)| \hat{\sigma}_i^z
|GS(\beta)\rangle = (1/Z(\beta))\sum_{\theta \in \Theta} \theta_s \theta_{s'}
e^{\beta\sum_{\langle s'', s''' \rangle} \theta_{s''}\theta_{s'''}} =
E(\beta)/N$, where $\beta$ is identified as the reduced nearest-neighbor coupling $J/T\!=\!\beta$ of the Ising model with energy $E(\beta)$. The two expectation values $\langle GS(\beta)| \hat{\sigma}_i^x|GS(\beta) \rangle$ and $\langle GS(\beta)| \hat{\sigma}_i^y|GS(\beta) \rangle$ are both zero, since $\langle 0 | g \hat{\sigma}_i^x g' |0\rangle = 0$, $\forall g,g' \in G$, and similarly for $\hat{\sigma}_i^y$.
![(Color online.) Mapping between the Castelnovo-Chamon model and the 2D Ising model. The spins of the former reside on the lattice bonds (filled black circles), and the spins of the latter on the vertices. Left: $\sigma_i^z=\theta_s \theta_{s'}$, where $i$ is the bond between the neighboring vertices $\langle s,s' \rangle$. Middle and right: For $i$ and $j$ nearest (next-nearest) neighbors, the mapping gives $\langle \hat{\sigma}_i^{z}
\hat{\sigma}_j^{z} \rangle = \langle \theta_s \theta_{s'} \theta_{s''}
\theta_{s}\rangle = \langle \theta_{s'} \theta_{s''}\rangle$, where $\langle
s',s'' \rangle$ are next-nearest (third-nearest) neighbors.[]{data-label="fig:cclattice"}](Fig_1.eps){width="35.00000%"}
It follows that $\hat{\rho}_i = (1/2)\mbox{diag}\left(1+E(\beta)/N, 1-E(\beta)/N\right)$ in the $\hat{\sigma}_i^z$ eigenbasis. Since the density matrices at different parameter values $\beta$ and $\beta'$ commute, the reduced fidelity (\[rf\]) is $$\begin{aligned}
\label{fideig}
F(\beta,\beta') =
\textrm{Tr} \sqrt{\hat{\rho}_i(\beta) \hat{\rho}_i(\beta')} =
\sum_i \sqrt{\lambda_i \lambda'_i},\end{aligned}$$ where $\{\lambda_i\}$ ($\{\lambda'_i\}$) are the eigenvalues of $\hat{\rho}_i(\beta)$ ($\hat{\rho}_i(\beta')$). The energy $E(\beta)$ of the 2D Ising model in the thermodynamic limit $N\to \infty$ is given by $E(\beta)/N=-\coth(2\beta)\left[1+(2/\pi)(2\tanh^2(2\beta)-1)K(\kappa)\right]/2$, where $K(\kappa) = \int_0^{\pi/2} d\theta (1-\kappa^2 \sin^2\theta)^{-1/2}$ and $\kappa = 2\sinh (2\beta)\, / \cosh^2 (2\beta)$ [@onsager]. This gives us the plot of the single-site fidelity shown in Fig. \[fig:ccfig\], where we see that the TQPT at $\beta_c=(1/2)\ln(\sqrt{2}+1) \approx 0.44$ is marked by a sudden drop in the fidelity.\
![(Color online.) Single-site fidelity (a), single-site fidelity susceptibility (b), two-site fidelity (c) and two-site fidelity susceptibility (d) of the Castelnovo-Chamon model calculated with a parameter difference $\delta\beta
= 0.001$ and with $N \to \infty$. The reduced fidelity susceptibilities will diverge according to Eq. (\[logdivbeta\]) when $\delta\beta
\to 0$. In (c) and (d) we plot for both nearest (NN) and next-nearest (NNN) neighbors.[]{data-label="fig:ccfig"}](ccfig.eps){width="40.00000%"}
The single-site fidelity susceptibility $\chi_F$ is $$\label{rfsder}
\chi_F = \sum_i \frac{(\partial_{\beta} \lambda_i)^2}{4\lambda_i},$$ for commuting density matrices [@xiong]. Here $\partial_{\beta} \lambda_{1,2}\!=\!\pm
(2N)^{-1}\partial_{\beta}E(\beta)\!=\!\pm (2N\beta^2)^{-1} C(\beta)$, with $C(\beta)$ the specific heat of the 2D Ising model. Thus $\chi_F$ diverges as $$\label{logdivbeta}
\chi_F \sim \ln^2 |\beta_c / \beta -1|,$$ at $\beta_c$, faster than the global fidelity susceptibility in (\[globalF\]). In Fig. \[fig:ccfig\] we plot the single-site fidelity susceptibility using Eq. (\[rfs\]), but with finite $\delta\beta
= 0.001$.
The two-site fidelity can be obtained in a similar way. We expand the reduced density matrix $\hat{\rho}_{ij}$ as $$\label{rdm2}
\hat{\rho}_{ij} = \frac{1}{4} \displaystyle \sum_{\mu,\nu=0}^3 \langle
\hat{\sigma}_i^{\mu} \hat{\sigma}_j^{\nu}\rangle \hat{\sigma}_i^{\mu}
\hat{\sigma}_j^{\nu}.$$ The only non-zero expectation values in (\[rdm2\]) are $\langle
\hat{\sigma}_i^{0}\hat{\sigma}_j^{0}\rangle=1$, $\langle
\hat{\sigma}_i^{z}\hat{\sigma}_j^{0}\rangle= \langle \hat{\sigma}_i^{z}\rangle$, $\langle \hat{\sigma}_i^{0}\hat{\sigma}_j^{z}\rangle= \langle
\hat{\sigma}_j^{z}\rangle$ and $\langle
\hat{\sigma}_i^{z}\hat{\sigma}_j^{z}\rangle$. Translational invariance implies that $\langle \hat{\sigma}_j^{z}\rangle = \langle
\hat{\sigma}_i^{z}\rangle$, so that $$\label{rdm2cc}
\hat{\rho}_{ij} = \frac{1}{4} ( 1 + \langle \hat{\sigma}_i^{z}\rangle (
\hat{\sigma}_i^{z} + \hat{\sigma}_j^{z})
+ \langle \hat{\sigma}_i^{z}\hat{\sigma}_j^{z}\rangle \hat{\sigma}_i^{z}
\hat{\sigma}_j^{z} ).$$ The eigenvalues are seen to be $\lambda_{1,2} = (1/4)(1\pm 2\langle \hat{\sigma}_i^{z}\rangle + \langle
\hat{\sigma}_i^{z} \hat{\sigma}_j^{z} \rangle)$ and $\lambda_{3,4} = (1/4)(1- \langle \hat{\sigma}_i^{z} \hat{\sigma}_j^{z} \rangle)$. Now the fidelity can be calculated using Eq. (\[fideig\]). Here we focus on the cases where $i$ and $j$ are nearest and next-nearest neighbors. Then the mapping to the 2D Ising model gives that $\langle \hat{\sigma}_i^{z}\rangle = \langle
\theta_s \theta_{s'}\rangle$ and $\langle \hat{\sigma}_j^{z}\rangle = \langle
\theta_{s''} \theta_{s'''}\rangle$, where $i$ ($j$) is the bond between the neighboring vertices $\langle s,s' \rangle$ ($\langle s'',s''' \rangle$). When $i$ and $j$ are nearest (next-nearest) neighbors, we get $\langle \hat{\sigma}_i^{z}
\hat{\sigma}_j^{z} \rangle = \langle \theta_s \theta_{s'} \theta_{s''}
\theta_{s}\rangle = \langle \theta_{s'} \theta_{s''}\rangle$, where $\langle s',s''
\rangle$ are next-nearest (third-nearest) neighbors on the square lattice (cf. Fig. \[fig:cclattice\]). As before, $\langle \hat{\sigma}_i^{z}\rangle =
E(\beta)/N$. We obtain $\langle \theta_{s'}
\theta_{s''}\rangle$ from the equivalence between the 2D Ising model and the quantum 1D XY model $$H_{XY} = - \displaystyle \sum_n (
\alpha_{+}\hat{\sigma}_n^{x}\hat{\sigma}_{n+1}^{x} +
\alpha_{-}\hat{\sigma}_n^{y}\hat{\sigma}_{n+1}^{y} +
h\hat{\sigma}_{n}^{z}),$$ where $\alpha_{\pm} = (1\pm\gamma)/2$. This has been shown to give $$\langle \theta_{0,0}\theta_{n,n} \rangle = \langle
\hat{\sigma}_0^{x}\hat{\sigma}_n^{x} \rangle_{XY} |_{\gamma=1,h=(\sinh
2\beta)^{-2}}$$ for Ising spins on the same diagonal, and $$\begin{gathered}
\langle \theta_{n,m}\theta_{n,m'} \rangle = \cosh^2 (\beta^*)\langle
\hat{\sigma}_m^{x}\hat{\sigma}_{m'}^{x} \rangle_{XY}|_{\gamma =
\gamma_{\beta},h=h_{\beta}} \\
- \sinh^2 (\beta^*)\langle \hat{\sigma}_m^{y}\hat{\sigma}_{m'}^{y}
\rangle_{XY}|_{\gamma = \gamma_{\beta},h=h_{\beta}} \end{gathered}$$ for Ising spins on the same row (or, by symmetry, column), where $\tanh \beta^* =
e^{-2\beta}$, $\gamma_{\beta} = (\cosh 2\beta^*)^{-1}$ and $h_{\beta}=
(1-\gamma_{\beta}^2)^{1/2} / \tanh 2\beta$ [@suzuki]. Known results for the 1D XY model give [@barouchmccoy] $$\begin{aligned}
(1-\gamma^2)^{1/2} / \tanh 2\beta$ [@suzuki]. Known results for the 1D XY model give [@barouchmccoy] $$\begin{aligned}
\label{corrxy}
\langle \hat{\sigma}_m^{x}\hat{\sigma}_{m+r}^{x} \rangle_{XY} &=& \left|
\begin{array}{cccc}
G_{-1} & G_{-2} & \ldots & G_{-r} \\
G_0 & G_{-1} & \ldots & G_{-r+1}\\
\vdots & \vdots & \ddots & \vdots \\
G_{r-2} & G_{r-3} & \ldots & G_{-1}
\end{array} \right|, \\
\langle \hat{\sigma}_m^{y}\hat{\sigma}_{m+r}^{y} \rangle_{XY} &=& \left|
\begin{array}{cccc}
G_{1} & G_{0} & \ldots & G_{-r+2} \\
G_2 & G_{1} & \ldots & G_{-r+3}\\
\vdots & \vdots & \ddots & \vdots \\
G_{r} & G_{r-1} & \ldots & G_{1}
\end{array} \right|,\end{aligned}$$ where $ G_{r'} =(1/ \pi ) \int_0^{\pi} d\phi \, (h-\cos \phi) \cos
(\phi r') / \Lambda_{\phi}(h)
+ (\gamma / \pi) \int_0^{\pi} d\phi \, \sin \phi \sin (\phi
r') / \Lambda_{\phi}(h)$ and $\Lambda_{\phi}(h) = ((\gamma \sin \phi)^2 + (h-\cos \phi)^2)^{1/2}$. These relations allow us to plot the two-site fidelity, and also the two-site fidelity susceptibility using Eq. (\[rfs\]), see Fig. \[fig:ccfig\]. Note that the two-site functions are only slightly different depending on whether the two sites are nearest neighbors or next-nearest neighbors. It follows from Eq. (\[rfsder\]) that also the two-site $\chi_F$ has a stronger divergence at criticality than the global fidelity susceptibility.\
It is interesting to note the slight asymmetry of the reduced fidelities around the critical point, seen in Fig. \[fig:ccfig\], indicating a somewhat smaller response to changes in the driving parameter in the topological phase.
[*The transverse Wen-plaquette model $-$*]{} We now turn to the transverse Wen-plaquette model, obtained from the ordinary Wen-plaquette model [@wenplaquette] for spin-1/2 particles on the vertices of a square lattice by adding a magnetic field $h$ [@transversewen], $$\label{twp}
H= g\sum_i \hat{F}_i + h \sum_i \hat{\sigma}_i^x ,$$ where $\hat{F}_i = \hat{\sigma}_i^x \hat{\sigma}_{i+\hat{x}}^y
\hat{\sigma}_{i+\hat{x}+\hat{y}}^x \hat{\sigma}_{i+\hat{y}}^y $ and $g<0$. The boundary conditions are periodic. At $h=0$ the ground state is the topologically ordered ground state of the Wen-plaquette model [@wenplaquette] and in the limit $h \to \infty$ the ground state is magnetically ordered. Since $\hat{F}_i$, $\hat{\sigma}_j^x$ have the same commutation relations as $\hat{\tau}_{i+\hat{x}/2+\hat{y}/2}^z$, $\hat{\tau}_{j-\hat{x}/2+\hat{y}/2}^x
\hat{\tau}_{j+\hat{x}/2-\hat{y}/2}^x$ (where the Pauli matrices $\hat{\tau}$ act on spin-1/2 particles at the centers of the plaquettes), the Hamiltonian (\[twp\]) can be mapped onto independent quantum Ising chains, $$\label{twpi}
H= -h \sum_a \sum_i \left( g_I \hat{\tau}_{a,i+\frac{1}{2}}^z +
\hat{\tau}_{a,i-\frac{1}{2}}^x \hat{\tau}_{a,i+\frac{1}{2}}^x \right),$$ with $g_I = g/h$, and where $\hat{\tau}_{i+\frac{1}{2}}^z$ and $ \hat{\tau}_{i-\frac{1}{2}}^x \hat{\tau}_{i+\frac{1}{2}}^x$ are the images of $\hat{\sigma}_i^x \hat{\sigma}_{i+\hat{x}}^y \hat{\sigma}_{i+\hat{x}+\hat{y}}^x
\hat{\sigma}_{i+\hat{y}}^y$ and $\hat{\sigma}_i^x$ respectively [@transversewen]. The index $a$ denotes the diagonal chains over the plaquette-centered sites, and $i$ is the site index on each diagonal chain. Known results for criticality in the quantum Ising chain imply that the transverse Wen-plaquette model has a TQPT at $g/h=1$ [@transversewen].
We now calculate the reduced fidelity. The mapping onto the quantum Ising chains immediately gives that $\langle
\hat{\sigma}_{i}^x \rangle = \langle \hat{\tau}_{i-\frac{1}{2}}^x
\hat{\tau}_{i+\frac{1}{2}}^x \rangle$. In the $\hat{\sigma}_{i}^x$ basis, the Hamiltonian (\[twp\]) only flips spins in pairs, therefore we get $\langle
\hat{\sigma}_{i}^y \rangle = 0$ and $\langle \hat{\sigma}_{i}^z \rangle = 0$. The single-site reduced density matrix (\[rdm\]) is therefore given by $\hat{\rho}_i =
(1/2)( 1 + \langle \hat{\tau}_{i-\frac{1}{2}}^x \hat{\tau}_{i+\frac{1}{2}}^x
\rangle \hat{\sigma}_{i}^x )$, which is diagonal in the $\hat{\sigma}_{i}^x$ basis, with eigenvalues $\lambda_{1,2} = (1/2)(1 \pm \langle \hat{\tau}_{i-\frac{1}{2}}^x
\hat{\tau}_{i+\frac{1}{2}}^x \rangle )$. The single-site fidelity is thus given by Eq. (\[fideig\]), and $\langle \hat{\tau}_{i-\frac{1}{2}}^x
\hat{\tau}_{i+\frac{1}{2}}^x \rangle$ is calculated using Eq. (\[corrxy\]) with $\gamma=1$ and $h = g_I$. The result reveals that the TQPT is accompanied by a sudden drop in the single-site fidelity. Now, $\partial_{g_I} \lambda_{1,2} = \pm
\frac{1}{2} \partial_{g_I} \langle \hat{\tau}_{i-\frac{1}{2}}^x
\hat{\tau}_{i+\frac{1}{2}}^x \rangle$, which diverges logarithmically at the critical point $g_I = 1$. Therefore Eq. (\[rfsder\]) implies that at $h/g=1$, $\chi_F$ diverges as $$\label{logdiv}
\chi_F \sim \ln^2 |g/h -1|,$$ as in Eq. (\[logdivbeta\]) for the Castelnovo-Chamon model.
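The correlator $\langle\hat{\tau}^x_{i-1/2}\hat{\tau}^x_{i+1/2}\rangle$ entering $\hat{\rho}_i$ can also be cross-checked by exact diagonalization of a small quantum Ising chain of the form (\[twpi\]). The sketch below uses a hypothetical chain length $n=8$ chosen only for speed (so finite-size effects are visible near $g_I=1$), with the overall factor $h$ set to 1, which does not affect the ground state:

```python
import numpy as np

sx = np.array([[0., 1.], [1., 0.]])
sz = np.array([[1., 0.], [0., -1.]])

def embed(op1, site, n):
    # place a single-site operator at `site` in an n-spin chain
    out = np.array([[1.0]])
    for k in range(n):
        out = np.kron(out, op1 if k == site else np.eye(2))
    return out

def tau_xx_corr(g_I, n=8):
    # H = -sum_i (g_I tau^z_i + tau^x_i tau^x_{i+1}), periodic boundary
    dim = 2**n
    H = np.zeros((dim, dim))
    for i in range(n):
        H -= g_I * embed(sz, i, n)
        H -= embed(sx, i, n) @ embed(sx, (i + 1) % n, n)
    _, vecs = np.linalg.eigh(H)
    gs = vecs[:, 0]                      # ground state (lowest eigenvalue)
    return gs @ (embed(sx, 0, n) @ embed(sx, 1, n)) @ gs
```

The resulting $\hat{\rho}_i = (1/2)(1 + \langle\hat{\tau}^x\hat{\tau}^x\rangle\,\hat{\sigma}^x_i)$ then has eigenvalues strictly between 0 and 1 for $g_I>0$, with the correlator decreasing monotonically through the transition region.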
We can also calculate the two-site fidelity for two nearest neighbor spins at sites $i,j$. All non-trivial expectation values in the expansion (\[rdm2\]) of the reduced density matrix, except $\langle
\hat{\sigma}_i^{x}\rangle$, $ \langle \hat{\sigma}_j^{x}\rangle$ and $\langle
\hat{\sigma}_i^{x}\hat{\sigma}_j^{x}\rangle$, will be zero, since only these operators can be constructed from those in the Hamiltonian (\[twp\]). The mapping onto the quantum Ising chains gives $\langle \hat{\sigma}_{i}^x \hat{\sigma}_{j}^x \rangle \!=\!
\langle \hat{\tau}_{i-\frac{1}{2}}^x \hat{\tau}_{i+\frac{1}{2}}^x
\hat{\tau}_{j-\frac{1}{2}}^x \hat{\tau}_{j+\frac{1}{2}}^x\rangle \!=\! (\langle
\hat{\tau}_{i-\frac{1}{2}}^x \hat{\tau}_{i+\frac{1}{2}}^x \rangle )^2$. Thus the two-site density matrix is given by $\hat{\rho}_{ij} = (1/4) ( 1 + \langle
\hat{\tau}_{i-\frac{1}{2}}^x \hat{\tau}_{i+\frac{1}{2}}^x \rangle
(\hat{\sigma}_{i}^x + \hat{\sigma}_{j}^x ) + (\langle \hat{\tau}_{i-\frac{1}{2}}^x
\hat{\tau}_{i+\frac{1}{2}}^x \rangle)^2 \hat{\sigma}_{i}^x \hat{\sigma}_{j}^x )$, which is diagonal in the $\hat{\sigma}_{i}^x\hat{\sigma}_{j}^x$ eigenbasis. The eigenvalues are $\lambda_{1,2}=(1/4)(1 \pm \langle \hat{\tau}_{i-\frac{1}{2}}^x
\hat{\tau}_{i+\frac{1}{2}}^x \rangle)^2$ and $\lambda_{3,4}=(1/4)(1 - (\langle
\hat{\tau}_{i-\frac{1}{2}}^x \hat{\tau}_{i+\frac{1}{2}}^x \rangle)^2)$. Taking derivatives of the eigenvalues $\lambda_{1,2,3,4}$ and inserting them into Eq. (\[rfsder\]) shows that also the two-site fidelity susceptibility diverges as $\chi_F \sim \ln^2 |g/h
-1|$ at $h/g = 1$. Contrary to the case of the Castelnovo-Chamon model, $\chi_F$ for one and two spins now diverges slower than the global fidelity susceptibility, which shows the $\chi_F \sim |g/h - 1|^{-1}$ divergence of the quantum Ising chain [@chen].
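A quick sanity check of the two-site spectrum: building $\hat{\rho}_{ij}$ explicitly as a $4\times 4$ matrix for an arbitrary value of $t \equiv \langle \hat{\tau}_{i-1/2}^x \hat{\tau}_{i+1/2}^x \rangle$ (the value $t=0.6$ below is arbitrary) reproduces the eigenvalues $\lambda_{1,2}=(1\pm t)^2/4$ and $\lambda_{3,4}=(1-t^2)/4$ quoted above:

```python
import numpy as np

sx = np.array([[0., 1.], [1., 0.]])
I2 = np.eye(2)

def rho_two_site(t):
    # rho_ij = (1/4)(1 + t (sx x 1 + 1 x sx) + t^2 sx x sx)
    return 0.25 * (np.kron(I2, I2)
                   + t * (np.kron(sx, I2) + np.kron(I2, sx))
                   + t**2 * np.kron(sx, sx))

t = 0.6
spectrum = np.sort(np.linalg.eigvalsh(rho_two_site(t)))
predicted = np.sort([(1 + t)**2/4, (1 - t)**2/4,
                     (1 - t**2)/4, (1 - t**2)/4])
```

Since $\hat{\rho}_{ij}$ factorizes as $(1+t\hat{\sigma}_i^x)(1+t\hat{\sigma}_j^x)/4$, the spectrum is just the product of the single-site eigenvalues, which is what the closed-form expressions state.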
[*The Kitaev toric code model in a magnetic field $-$*]{} Adding a magnetic field $h$ to the Kitaev toric code model [@kitaevtoric] gives the Hamiltonian [@trebst] $$\label{toricmhamiltonian}
H=-\lambda_0 \displaystyle \sum_p B_p -\lambda_1 \sum_s A_s - h \sum_i
\hat{\sigma}^{x}_i,$$ where the operators $B_p$ and $A_s$ are the same as in Eq. (\[cchamiltonian\]). In the limit $\lambda_1 \gg \lambda_0 , h$, the ground state $|GS\rangle$ will obey $A_s|GS\rangle = |GS\rangle$, $\forall s$. Then there is a mapping to spin-1/2 operators $\hat{\tau}$ acting on spins at the centers of the plaquettes, according to $B_p \mapsto \hat{\tau}^x_p$, $\hat{\sigma}_i^x \mapsto \hat{\tau}^z_p
\hat{\tau}^z_q$. Here $i$ is the site shared by the two adjacent plaquettes $\langle
p,q\rangle$. This maps the Hamiltonian (\[toricmhamiltonian\]) onto [@trebst] $$\label{2dqimhamiltonian}
H=-\lambda_0 \sum_p \hat{\tau}^x_p - h \sum_{\langle p,q \rangle} \hat{\tau}^z_p
\hat{\tau}^z_q,$$ which is the 2D transverse-field Ising model with transverse field $\lambda_0 / h =
h'$. Now, the mapping tells us that $\langle\hat{\sigma}_i^x\rangle = \langle \hat{\tau}^z_p \hat{\tau}^z_q \rangle$, and the symmetries of the Hamiltonian (\[toricmhamiltonian\]) imply $\langle\hat{\sigma}_i^y\rangle = 0$ and $\langle\hat{\sigma}_i^z\rangle = 0$. The single-site reduced density matrix is therefore given by $\hat{\rho}_i = (1/2)
( 1 + \langle \hat{\tau}_{p}^z \hat{\tau}_{q}^z \rangle \hat{\sigma}_{i}^x )$, which has the same form as in the transverse Wen-plaquette model. Since numerical results have shown a kink in $\langle \hat{\tau}_{p}^z \hat{\tau}_{q}^z \rangle$ at the phase transition at $h'_c \approx 3$ [@trebst], it follows that the single-site fidelity will have a drop at this point. Further, the divergence of $\partial_{h'}\langle \hat{\tau}_{p}^z \hat{\tau}_{q}^z \rangle$ at the critical point implies a divergence of the single-site fidelity susceptibility at $h'_c$. Thus, the scenario that emerges is similar to those for the models above.
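Because $\hat{\rho}_i$ is diagonal in the $\hat{\sigma}^x$ eigenbasis, the single-site fidelity between ground states at nearby fields reduces to a classical overlap of the eigenvalue distributions. The following sketch (the values of $m = \langle \hat{\tau}_{p}^z \hat{\tau}_{q}^z \rangle$ are invented for illustration, not taken from the numerical data of [@trebst]) shows how a kink in $m$ translates into a drop of the fidelity.

```python
import numpy as np

def single_site_fidelity(m1, m2):
    """Uhlmann fidelity between two states rho(m) = (1/2)(1 + m sigma^x).
    Both are diagonal in the sigma^x eigenbasis with eigenvalues (1 +/- m)/2,
    so the fidelity reduces to F = sum_k sqrt(p_k q_k)."""
    p = np.array([(1 + m1) / 2, (1 - m1) / 2])
    q = np.array([(1 + m2) / 2, (1 - m2) / 2])
    return float(np.sum(np.sqrt(p * q)))

# Illustrative correlator values on either side of the critical point h'_c:
print(single_site_fidelity(0.8, 0.8))  # identical states: F = 1
print(single_site_fidelity(0.8, 0.6))  # a kink in m gives F < 1
```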
To summarize, we have analyzed the reduced fidelity at several lattice system TQPTs and found that it serves as an accurate marker of the transitions. In the case of the Castelnovo-Chamon model [@castelnovochamon], the divergence of the reduced fidelity susceptibility at criticality can explicitly be shown to be even stronger than that of the global fidelity [@abasto]. Our analytical results rely on exact mappings of the TQPTs onto ordinary symmetry-breaking phase transitions. Other lattice models exhibiting TQPTs have also been shown to be dual to spin- [@Feng] or vertex [@Zhou] models with classical order, suggesting that our line of approach may be applicable also in these cases, and that the property that a reduced fidelity can detect a TQPT may in fact be generic. While counterintuitive, considering that the reduced fidelity is a [*local probe*]{} of the topologically ordered phase, related results have been reported in previous studies. Specifically, in Refs. [@castelnovochamon] and [@trebst], the authors found that the local magnetization in the Castelnovo-Chamon model and the Kitaev toric code model in a magnetic field, while being continuous and non-vanishing across the transition out of topological order, has a singularity in its first derivative. The fact that local quantities can spot a TQPT is conceptually satisfying, as any physical observable is local in nature. Interesting open questions are here how the concept of reduced fidelity can be applied to TQPTs in more realistic systems, such as the fractional quantum Hall liquids, and how reduced fidelity susceptibility singularities depend on different topological and classical orders involved in the transitions.
We acknowledge the Kavli Institute for Theoretical Physics at UCSB for hospitality during the completion of this work. This research was supported in part by the National Science Foundation under Grant No. PHY05-51164, and by the Swedish Research Council under Grant No. VR-2005-3942.
[99]{}
X.-G. Wen, [*Quantum Field Theory of Many-Body Systems*]{} (Oxford University Press, Oxford, 2004).
C. Nayak [*et al.*]{}, Rev. Mod. Phys. [**80**]{}, 1083 (2008).
A. Hamma [*et al.*]{}, Phys. Rev. A [**71**]{}, 022315 (2005); A. Kitaev and J. Preskill, Phys. Rev. Lett. [**96**]{}, 110404 (2006); M. Levin and X.-G. Wen, Phys. Rev. Lett. [**96**]{}, 110405 (2006).
A. Hamma [*et al.*]{}, Phys. Rev. B [**77**]{}, 155111 (2008).
For a review, see S.-J. Gu, e-print arXiv:0811.3127.
D. F. Abasto, A. Hamma, and P. Zanardi, Phys. Rev. A [**78**]{}, 010301 (2008).
S. Yang [*et al.*]{}, Phys. Rev. A [**78**]{}, 012304 (2008); J.-H. Zhao and H.-Q. Zhou, e-print arXiv:0803.0814; S. Garnerone [*et al.*]{}, Phys. Rev. A [**79**]{}, 032302 (2009).
A. Uhlmann, Rep. Math. Phys. [**9**]{}, 273 (1976); R. Jozsa, J. Mod. Opt. [**41**]{}, 2315 (1994).
N. Paunković [*et al.*]{}, Phys. Rev. A [**77**]{}, 052302 (2008); J. Ma [*et al.*]{}, Phys. Rev. E [**78**]{}, 051126 (2008); H.-M. Kwok, C.-S. Ho and S.-J. Gu, Phys. Rev. A [**78**]{}, 062302 (2008); J. Ma [*et al.*]{}, e-print arXiv:0808.1816.
C. Castelnovo and C. Chamon, Phys. Rev. B [**77**]{}, 054433 (2008).
S. Trebst [*et al.*]{}, Phys. Rev. Lett. [**98**]{}, 070602 (2007).
W.-L. You, Y.-W. Li, and S.-J. Gu, Phys. Rev. E [**76**]{}, 022101 (2007).
A. Y. Kitaev, Ann. Phys. (N.Y.) [**303**]{}, 2 (2003).
L. Onsager, Phys. Rev. [**65**]{}, 117 (1944).
H.-N. Xiong [*et al.*]{}, e-print arXiv:0808.1817.
M. Suzuki, Phys. Lett. A [**34**]{}, 94 (1971); B. M. McCoy, in [*Statistical Mechanics and Field Theory*]{}, edited by V. V. Bazhanov and C. J. Burden (World Scientific, Singapore 1995), pp. 26-128, e-print arXiv:hep-th/9403084.
E. Barouch and B. M. McCoy, Phys. Rev. A [**3**]{}, 786 (1971).
X.-G. Wen, Phys. Rev. Lett. [**90**]{}, 016803 (2003).
J. Yu, S.-P. Kou, and X.-G. Wen, EPL [**84**]{}, 17004 (2008).
S. Chen [*et al.*]{}, Phys. Rev. A [**77**]{}, 032111 (2008).
X.-Y. Feng, G.-M. Zhang, and T. Xiang, Phys. Rev. Lett. [**98**]{}, 087204 (2007); H. D. Chen and J. Hu, Phys. Rev. B [**76**]{}, 193101 (2007).
H.-Q. Zhou, R. Orus, and G. Vidal, Phys. Rev. Lett. [**100**]{}, 080601 (2008).
---
abstract: 'The far-ultraviolet (UV) counts and the deep optical spectroscopic surveys have revealed an unexpected number of very blue galaxies (vBG). Using constraints from the UV and optical, we apply the galaxy evolution model PEGASE (Fioc & Rocca-Volmerange 1997, hereafter FRV) to describe this population with a cycling star formation. When added to normally evolving galaxy populations, vBG are able to reproduce UV number counts and color distributions as well as deep optical redshift distributions fairly well. Good agreement is also obtained with optical counts (including the Hubble Deep Field). The number of vBG is only a small fraction of the number of normal galaxies, even at faintest magnitudes. In our modelling, the latter explain the bulk of the excess of faint blue galaxies in an open Universe. The problem of the blue excess remains in a flat Universe without cosmological constant.'
author:
- |
Michel Fioc$^1$[^1] and Brigitte Rocca-Volmerange$^{1,2}$\
$^1$Institut d’Astrophysique de Paris, CNRS, 98 bis Bd. Arago, F-75014 Paris, France\
$^2$Institut d’Astrophysique Spatiale, Bât. 121, Université Paris XI, F-91405 Orsay, France
title: 'Bursting dwarf galaxies from the far-UV and deep surveys'
---
galaxies: evolution – galaxies: starburst – galaxies: luminosity function, mass function – ultraviolet: galaxies – cosmology: miscellaneous
Introduction
============
The apparent excess of the number of galaxies at faint magnitudes in the blue relative to predictions of non-evolving models, even in the most favourable case of an open Universe, is a longstanding problem of cosmology. Various scenarios have been proposed to solve this problem in a flat Universe, such as a strong number density evolution of galaxies via merging (Rocca-Volmerange & Guiderdoni 1990; Broadhurst, Ellis & Glazebrook 1992) or a cosmological constant (Fukugita et al. 1990). In the framework of more conservative pure luminosity evolution models in an open Universe, two solutions were advocated. Either these blue galaxies are intensely star-forming galaxies at high redshift, or counts are dominated by a population of intrinsically faint blue nearby galaxies. Looking for the optimal luminosity functions (LF) fitting most observational constraints, Gronwall & Koo (1995) have introduced in particular [*non-evolving populations*]{} of faint [*very*]{} blue galaxies (see also Pozzetti, Bruzual & Zamorani (1996)), contributing significantly to faint counts. Such blue colors require, however, that [*individual*]{} galaxies have recently been bursting and are thus rapidly evolving. With a modelling of the spectral evolution of these galaxies that also takes post-burst phases into consideration, Bouwens & Silk (1996) concluded that the LF adopted by Gronwall & Koo (1995) leads to a strong excess of nearby galaxies in the redshift distribution and that vBG may thus not be the main explanation of the blue excess.
On the basis of considerable observational progress in collecting deep survey data, it is timely to address the question of the nature of the blue excess anew, with the help of our new model PEGASE (FRV). In this paper, we propose a star formation scenario and a LF respecting the observational constraints on vBG. Far-UV and optical counts are well matched by the classical Hubble Sequence populations extended with this bursting population. The importance of vBG relative to normal galaxies and the physical origin of bursts are finally discussed in the conclusion.
Observational evidences of very blue galaxies
=============================================
In contrast to the so-called ‘normal’ galaxies of the Hubble Sequence, supposed to form at high redshift with definite star formation timescales, bursting galaxies are rapidly evolving without clear timescales. Specifically, in the red post-burst phases, they might be indistinguishable from normal slowly evolving galaxies. The bluest phases during the burst should, however, allow one to recognize them and to constrain their evolution and their number.
The existence of galaxies much bluer than normal and classified as starbursts has been recently noticed at optical wavelengths by Heyl et al. (1997). At fainter magnitudes ($B=22.5-24$), the deep survey of Cowie et al. (1996) has revealed two populations of blue ($B-I<1.6$) galaxies (Figs. \[cowie\] and \[nz\]). Normal star forming galaxies, as predicted by standard models, are observed at high redshift ($z>0.7$), but another, clearly distinct population of blue galaxies is identified at $0<z<0.3$, some of which are very blue.
The best constraint on the weight of these vBG comes from the far-UV (2000 Å) bright counts observed with the balloon experiment FOCA2000 (Armand & Milliard 1994). By using a standard LF, the authors obtain a strong deficit of predicted galaxies in UV counts over the whole magnitude range ($UV=14-18$) and argue in favour of a LF biased towards later-type galaxies. With the star formation scenarios and the LF of Marzke et al. (1994) fitting optical and near-infrared bright counts (FRV), we confirm that this UV deficit reaches a factor of 2 (Fig. \[UV\]). Moreover, the $UV-B$ color distributions show a clear lack of blue galaxies, notably of those with $UV-B<-1.5$ (Fig. \[UV\]). A 10 Gyr old galaxy that formed stars at a constant rate would, however, only have $UV-B\sim-1.2$. Although a low metallicity may lead to bluer colors, such a galaxy would still be too red, and a population of bursting galaxies is clearly needed to explain UV counts and the Cowie et al. (1996) data.
Modeling very blue galaxies
===========================
Star formation scenario
-----------------------
Very blue colors are possible only in very young galaxies or in galaxies currently undergoing enhanced star formation. Two kinds of models are thus possible and have been advanced by Bouwens & Silk (1996) to maintain such a population over a wide range of redshifts. In the first one, new blue galaxies are continually formed and leave red fading remnants, whereas in the second one, star formation occurs recurrently. We adopt the latter scenario and will discuss in the conclusion the reasons for this choice. For the sake of simplicity, we assume that all vBG form stars periodically. In each period, a burst phase with a constant star formation rate (SFR) $\tau_{\rm b}$ and the same initial mass function as in FRV is followed by a quiescent phase without star formation. A good agreement with observational constraints is obtained with 100 Myr long burst phases occurring every Gyr.
Luminosity function
-------------------
Because bursting galaxies rapidly redden and fade during inter-burst phases, we may not assign a single LF by absolute magnitude, independently of color. We therefore prefer to adopt for vBG a Schechter function parametrized by $\tau_{\rm b}$.
The lack of vBG at $z\ga0.4$ in the Cowie et al. (1996) redshift distribution is particularly constraining for the LF. It may be interpreted in two ways. Either vBG formed only at low redshifts ($z<0.3$), or the lack is due to the exponential cut-off in the Schechter LF. Physical arguments for such low formation redshifts are weak. Scenarios invoking a large population of blue dwarf galaxies, as proposed by Babul & Rees (1992), generally predict a higher redshift of formation ($z\sim 1$). Adopting the latter solution, we get $M^{\ast}_{\rm b_j}\sim-17$ ($H_0=100\,{\rm km.s^{-1}.Mpc^{-1}}$) for galaxies with $B-I<1.6$ and may constrain the other parameters of the LF. As noticed by Bouwens & Silk (1996), a steep LF extending to very faint magnitudes leads to a large local ($z<0.1$) excess in the redshift distribution. A steep slope ($\alpha<-1.8$) is, however, only necessary to reconcile predicted number counts with observations in a flat Universe. In an open Universe, a shallower slope is possible. In the following, we adopt $\alpha=-1.3$ for vBG. The normalization is taken in agreement with UV counts and the Cowie et al. (1996) redshift distribution.
Galaxy type $M^{\ast}_{\rm b_j}/\tau^{\ast}_{\rm b}$ $\alpha$ $\phi^{\ast}$
------------- ------------------------------------------ ---------- -----------------
E -20.02 -1. $1.91\,10^{-3}$
S0 -20.02 -1. $1.91\,10^{-3}$
Sa -19.62 -1. $2.18\,10^{-3}$
Sb -19.62 -1. $2.18\,10^{-3}$
Sbc -19.62 -1. $2.18\,10^{-3}$
Sc -18.86 -1. $4.82\,10^{-3}$
Sdm -18.86 -1. $9.65\,10^{-3}$
vBG $3.95\,10^5$ -1.3 $6.63\,10^{-2}$
: Luminosity function parameters ($H_0=100\,{\rm km.s^{-1}.Mpc^{-1}}$). For vBG, we give the SFR during the burst phase $\tau_{\rm b}^{\ast}$ at the LF knee, in $M_{\odot}.{\rm Myr}^{-1}$.[]{data-label="FL"}
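For reference, the Schechter LF adopted above can be evaluated directly in absolute magnitudes. The following minimal sketch (our own notation, not code from PEGASE/FRV) uses the vBG parameters from the table; the evaluation magnitudes are illustrative.

```python
import math

def schechter_mag(M, M_star, alpha, phi_star):
    """Schechter luminosity function per unit absolute magnitude:
    phi(M) = 0.4 ln(10) phi* x^(alpha + 1) exp(-x),  x = 10^{0.4 (M* - M)}."""
    x = 10.0 ** (0.4 * (M_star - M))
    return 0.4 * math.log(10.0) * phi_star * x ** (alpha + 1) * math.exp(-x)

# vBG parameters from the table: alpha = -1.3, M*_bj ~ -17,
# phi* = 6.63e-2 (H0 = 100 km/s/Mpc)
faint = schechter_mag(-15.0, -17.0, -1.3, 6.63e-2)
bright = schechter_mag(-20.0, -17.0, -1.3, 6.63e-2)
print(bright < faint)  # exponential cut-off suppresses bright vBG
```

The exponential cut-off beyond the faint $M^{\ast}$ is what removes vBG from the high-redshift tail, as discussed in the text.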
Galaxy counts
=============
Galaxy counts and the amplitude of the projected correlation function by color in an open Universe ($\Omega_0=0.1$, $\lambda_0=0$, $H_0=65\,{\rm km.s^{-1}.Mpc^{-1}}$), obtained with our modelling of vBG and the standard scenarios[^2] discussed in FRV, are presented in Fig. \[UV\] to \[Aw\]. For ‘normal’ types, we use the $z=0$ SSWML LF of Heyl et al. (1997), after deduction of the contribution of vBG. Characteristics of the LF finally adopted are given in table \[FL\]. Though faint in the blue, vBG play an essential role in UV bright counts thanks to their blue $UV-B$ colors and give a much better agreement on Fig. \[UV\], both in number counts and color distributions.
Their contributions to counts at longer wavelengths are, however, much smaller. They represent less than 10 per cent of the total number of galaxies at $B=22.5-24$ in the Cowie et al. (1996) redshift survey and may thus not be the main explanation of the excess of faint blue galaxies observed over the model without evolution. High redshift, intrinsically bright galaxies forming stars at a higher rate in the past are the main reason, as clearly appears from the $z>1$ tail of normal blue galaxies. In an open Universe, these galaxies reproduce the faint $B$ and even $U$ counts, assuming a normalization of the LF fitting the bright counts of Gardner (1996) as discussed in FRV.
The agreement with the Hubble Deep Field (HDF, Williams et al. 1996) in the blue is notably satisfying. Though a small deficit may be observed in the $F300W$ band (3000 Å), the $F300W-F450W$ (3000Å–4500Å) color distribution is well reproduced (Fig. \[HDF\]). The fraction of vBG at these faint magnitudes is still small; they are therefore not the main reason for the agreement with HDF data.
From this previous analysis, it is clear that vBG are difficult to constrain in the visible from broad statistics like number counts and even color distributions. The angular correlation function might be promising since it is more directly related to the redshift distribution. In a $B_{\rm J}=20-23.5$ sample, Landy, Szalay & Koo (1996) recently obtained an unexpected increase of the amplitude $A_w$ of the angular correlation function with galaxy colors $U-R_{\rm F}<-0.5$, and suggested that this might be due to a population of vBG located at $z<0.4$. We compute $A_w$ from our redshift distributions, assuming the classical power law for the local spatial correlation function and no evolution of the intrinsic clustering in proper coordinates. A slope $\gamma=1.8$ and a single correlation length $r_0=5.4h^{-1}\,{\rm Mpc}$ (see Peebles (1993)) are adopted for all types. The increase of $A_w$ in the blue naturally arises from our computations (Fig. \[Aw\]) and is due to vBG. The interval of magnitude, the faint $M^{\ast}$ and the color criterion conspire to select galaxies in a small range of redshift. In spite of the simplicity of our computation of $A_w$, the trend we obtain is very satisfying. Modelling improved by extra physics or type effects might better fit the $A_w$-color relation, but at the price of an increased number of parameters.
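The computation of $A_w$ outlined above can be illustrated with a schematic Limber-style estimate. The Gaussian redshift distributions and the Euclidean low-$z$ distance below are simplifying assumptions of ours, not the actual model distributions, but they reproduce the qualitative trend: a population confined to a narrow, nearby redshift shell has a larger angular-correlation amplitude.

```python
import numpy as np
from math import gamma as Gamma, sqrt, pi

def trapz(f, x):
    # simple trapezoidal rule (avoids NumPy-version differences)
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x)))

def limber_amplitude(z, n_z, r0=5.4, g=1.8, H0=65.0, c=3.0e5):
    """Schematic Limber estimate of A_w in w(theta) = A_w theta^(1-g),
    assuming xi(r) = (r/r0)^(-g), no clustering evolution and a
    low-redshift Euclidean distance x ~ (c/H0) z (theta in radians)."""
    x = (c / H0) * z                       # distance in Mpc
    dxdz = (c / H0) * np.ones_like(z)
    H_g = sqrt(pi) * Gamma((g - 1) / 2) / Gamma(g / 2)
    num = trapz(n_z**2 * x**(1 - g) / dxdz, z)
    return H_g * r0**g * num / trapz(n_z, z) ** 2

z = np.linspace(0.01, 1.5, 400)
narrow = np.exp(-0.5 * ((z - 0.2) / 0.05) ** 2)   # vBG-like, confined to low z
broad = np.exp(-0.5 * ((z - 0.7) / 0.30) ** 2)    # normal galaxies
print(limber_amplitude(z, narrow) > limber_amplitude(z, broad))  # True
```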
Conclusion
==========
We modelled the vBG appearing notably in UV counts with cycling star formation. Our modelling agrees well with the constraints brought by the 2000Å bright counts (Armand & Milliard 1994), the redshift survey of Cowie et al. (1996) and the angular correlation function of Landy et al. (1996). The cycling star formation provides very blue colors in a more physical way than by assuming a population of unevolving galaxies. The continual formation of new bursting galaxies might lead to similar predictions in the UV-optical, but would produce a high number of very faint red remnants. Future deep near-infrared surveys should discriminate between these scenarios. The hypothesis of cycling star forming galaxies has, however, some theoretical support. The feedback of supernovae on the interstellar medium may lead to oscillations of the SFR (Wiklind 1987; Firmani & Tutukov 1993; Li & Ikeuchi 1988). Since the probability of propagation of star formation increases with galaxy mass (Coziol 1996), according to the stochastic self-propagating star formation theory (Gerola, Seiden & Schulman 1980), this behaviour should be more frequent in small galaxies. A more regular SFR might be attained in more massive ones. The nature of vBG is poorly constrained, but we tentatively identify them from their typical luminosity and ${\rm H}\alpha$ equivalent width ($\sim 200$ Å) with H[ii]{} galaxies (Coziol 1996).
Very blue galaxies, as modelled in this paper, are only a small fraction of the number of galaxies predicted at faint magnitudes in the visible and are not the main reason for the excess of blue galaxies, although they may cause some confusion in the interpretation of the faint surveys. In an open Universe, the population of normal high redshift star forming galaxies, even with a nearly flat LF, reproduces fairly well the counts down to the faintest magnitudes observed by the Hubble Space Telescope. As is now well established, this population is, however, unable to explain the excess of faint blue galaxies in a flat Universe. Strongly increasing the number of vBG (for example, with a steeper slope of the LF) may not be the solution, since it would lead to an excess of galaxies at very low redshift which is not observed. This result depends, however, on the hypotheses of pure luminosity evolution and null cosmological constant. A flat Universe might still be possible if other evolutionary scenarios are favoured by new observations in the far-infrared and submillimeter.
Armand C., Milliard B., 1994, A&A 282, 1\
Arnouts S., de Lapparent V., Mathez G., Mazure A., Mellier Y., Bertin E., Kruszewski A., 1996, A&AS (in press)\
Babul A., Rees M. J., 1992, MNRAS 255, 346\
Bertin E., Dennefeld M., 1997, A&A 317, 43\
Bouwens R. J., Silk J., 1996, ApJ 471, L19\
Broadhurst T. J., Ellis R. S., Glazebrook K., 1992, Nature 355, 55\
Coziol R., 1996, A&A 309, 345\
Cowie L. L., Songaila A., Hu E. M., Cohen J. G., 1996, AJ 112, 839\
Fioc M., Rocca-Volmerange B., 1997 (accepted)\
Firmani C., Tutukov A. V., 1994, A&A 288, 713\
Fukugita M., Yamashita K., Takahara F., Yoshii Y., 1990, ApJ 361, L1\
Gardner J. P., Sharples R. M., Carrasco B. E., Frenk C. S., 1996, MNRAS 282, L1\
Gerola H., Seiden P. E., Schulman L. S., 1980, ApJ 242, 517\
Gronwall C., Koo D. C., 1995, ApJ 440, L1\
Guhathakurta P., Tyson J. A., Majewski S. R., 1990, in Kron R. G., ed, Astronomical Society of the Pacific, San Francisco, Evolution of the Universe of Galaxies, p. 304\
Heyl J., Colless M., Ellis R. S., Broadhurst T., 1997, MNRAS 285, 613\
Hogg D. W., Pahre M. A., McCarthy J. K., Cohen J. G., Blandford R., Smail I., Soifer B. T., 1997 (astro-ph/9702241)\
Jones L. R., Fong R., Shanks T., Ellis R. S., Peterson B. A., 1991, MNRAS 249, 481\
Koo D. C., 1986, ApJ 311, 651\
Landy S. D., Szalay A. S., Koo D. C., 1996, ApJ 460, 94\
Li F., Ikeuchi S., 1989, PASJ 41, 221\
Lilly S. J., Cowie L. L., Gardner J. P., 1991, ApJ 369, 79\
Maddox S. J., Sutherland W. J., Efstathiou G., Loveday J., Peterson B. A., 1990, MNRAS 241, 1p\
Majewski S. R., 1989, in Frenk C. S. et al., eds, Proc. NATO, The Epoch of Galaxy Formation, Dordrecht, Kluwer, p. 86\
Marzke R. O., Geller M. J., Huchra J. P., Corwin H. G., 1994, AJ 108, 437\
Metcalfe N., Shanks T., Fong R., Roche N., 1995a, MNRAS 273, 357\
Metcalfe N., Fong R., Shanks T., 1995b, MNRAS 274, 769\
Peebles P. J. E., 1993, Principles of Physical Cosmology, Princeton University Press, Princeton\
Pozzetti L., Bruzual A. G., Zamorani G., 1996, MNRAS 281, 953\
Rocca-Volmerange B., Guiderdoni B., 1990, MNRAS 247, 166\
Tyson J. A., 1988, AJ 96, 1\
Wiklind T., 1987, in Thuan T. X., Montmerle T., Tran Thanh Van J., eds, Starbursts and Galaxy Evolution, Frontières, Gif-sur-Yvette, p. 495\
Williams R. E. et al., 1996, AJ 112, 1335
[^1]: E-mail: fioc@iap.fr
[^2]: A constant SFR and $z_{\rm for}=2$ are assumed for Sd-Im galaxies.
---
bibliography:
- 'biblio.bib'
---
[**The Accounting Network: how financial institutions react to systemic crisis**]{}\
Michelangelo Puliga$^{1,2,\ast}$, Andrea Flori$^{1}$, Giuseppe Pappalardo$^{1}$, Alessandro Chessa$^{1,2}$, Fabio Pammolli$^{1}$\
**[1]{}** IMT, School for Advanced Studies, Lucca, Italy\
**[2]{}** Linkalab, Complex Systems Computational Laboratory, 09129 Cagliari, Italy\
**[\*]{}**
Abstract {#abstract .unnumbered}
========
The role of Network Theory in the study of the financial crisis has been widely recognized in recent years. It has been shown how the network topology and the dynamics running on top of it can trigger the outbreak of large systemic crises. Following this methodological perspective, we introduce here the Accounting Network, i.e. the network we can extract through vector-similarity techniques from companies’ financial statements. We build the Accounting Network on a large database of worldwide banks in the period 2001-2013, covering the onset of the global financial crisis of mid-2007. After a careful data cleaning, we apply a quality check in the construction of the network, introducing a parameter (the Quality Ratio) capable of trading off the size of the sample (coverage) and the representativeness of the financial statements (accuracy). We compute several basic network statistics and check, with the Louvain community detection algorithm, for emerging communities of banks. Remarkably enough, sensible regional aggregations show up, with the Japanese and the US clusters dominating the community structure, although the presence of a geographically mixed community points to a gradual convergence of banks towards similar supranational practices. Finally, a Principal Component Analysis procedure reveals the main economic components that influence communities’ heterogeneity. Even using the most basic vector-similarity hypotheses on the composition of the financial statements, the signature of the financial crisis clearly arises across the years around 2008. We finally discuss how the Accounting Networks can be improved to reflect the best practices in financial statement analysis.
Introduction {#introduction .unnumbered}
============
Network Theory has been used to establish how contagion, through a variety of channels (mutual exposures, social networks of board members, moral hazard from permissive regulations, financial instruments like swaps and derivatives, etc.), triggered the outbreak of the 2007-08 crisis. Scholars suggest that financial systems may positively affect economic development and its stability ([@Beck09; @Beck11; @Lev05]), although they may also represent a source of distress which leads to bank failures and currency crises, or to a greater contraction of those sectors that depend more on external finance during banking crises ([@arr; @RR]). As a response to the recent financial turmoil, the banking sector has been affected by a substantial reorganization ([@BISANNUAL14]). For instance, as highlighted by the European Central Bank for the Euro area, *the main findings reflect the efforts by banks to rationalize banking businesses, pressure to cut costs, and the deleveraging process that the banking sector has been undergoing since the start of the financial crisis in 2008* ([@ECB]). This implies that market pressure and regulatory amendments induce banks to reduce their levels of debt, through cost containment and stricter capital requirements. In addition, a gradual improvement in bank capital positions aims to enhance the capacity of the system to absorb shocks arising from financial and economic distress. This limits the risk of spillover effects from the financial sector to the real economy and puts the financial system in a better condition to reap the benefits of economic recovery. In particular, as the financial boom turned to a bust, banks’ stability deteriorated abruptly and the economy entered a *balance sheet recession*, which depressed spending levels through a reduction in consumption by households and investments by firms.
Therefore, although at an uneven pace across regulations, the need to strengthen fundamentals has influenced the banking sector, and differences in banks’ portfolio allocations, financial performances, and capitalizations might be interpreted as the combined results of policy decisions and sectoral responses to changes in the regulatory framework (see e.g. [@ALLEN], [@DIARANJ]).\
This paper relates to the literature on banking development and performance evaluation during the recent crisis (see e.g. [@ASHIN], [@BERBOW],[@BRUNN]). We consider a large data set of worldwide banks retrieved from *Bloomberg*, focusing on financial statements spanning from 2001 to 2013. We introduce a network based on similarities between banks’ financial statement compositions (hereinafter *Accounting Network*). Due to data limitations, the reference sample is restricted to banks for which a continuous and stable set of variables is available for the entire period. The introduction of a methodology (*Quality Ratios*) to measure banks’ data coverage aims to prevent missing values for some variables, or the lack of annual financial statements for some banks, from distorting the overall picture. We then exploit the maximum amount of available information from the financial statements without further reducing the set of variables through an arbitrary selection of financial statement fields. This choice aims to avoid any selection bias. Moreover, each bank’s financial statement measures are normalized by its total assets (a proxy for size) to prevent the emergence of “size effects”, as the sizes of institutions span several orders of magnitude.\
The introduction of Accounting Networks establishes a bridge between the external perspective arising from market data and the internal one based on banking activity indicators. We study how Accounting Networks can be exploited to provide a description of the banking system during the crisis. This part sheds light on whether banks under different regulatory frameworks and diversification degrees have reacted to the crisis by strengthening their business peculiarities or by converging towards similar practices ([@BelStu],[@DeHui]). We rely on the assumption that market data alone, although highly representative of investors’ perception of the banking sector, might be misleading during periods of distressed market conditions. This, in turn, calls for a broader exploitation of the information on banking activities, thus pointing to a more comprehensive investigation which also takes into account the internal perspective arising from financial statement data. In addition, the use of accounting data allows a partition of the business activities banks are involved in, thereby providing an approximation of the state of the system with respect to several potential channels through which financial distress might propagate. This is appealing also for regulators, since authorities are interested in a wide set of economic indicators in order to limit the systemic relevance of financial institutions, and they introduce specific requirements and constraints which directly affect financial statement measures. For these reasons, we believe that enriching the debate on financial stability by means of the Accounting Networks might give new clues about the resilience of the banking system.\
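The construction just described can be sketched in a few lines; the figures below are random placeholders standing in for the *Bloomberg* fields (which we cannot reproduce here), and the Louvain step is only indicated, not implemented.

```python
import numpy as np

rng = np.random.default_rng(42)
n_banks, n_items = 6, 12
# Toy financial-statement items (invented values, one row per bank)
statements = rng.uniform(1.0, 100.0, size=(n_banks, n_items))
total_assets = rng.uniform(1e3, 1e6, size=(n_banks, 1))  # size proxy
X = statements / total_assets        # remove size effects

# Cosine-similarity matrix between size-normalized compositions.
# Note: cosine similarity is invariant under per-row rescaling, so the
# total-assets normalization mainly matters for distance-based measures.
U = X / np.linalg.norm(X, axis=1, keepdims=True)
similarity = U @ U.T
np.fill_diagonal(similarity, 0.0)    # drop self-loops

# Keep the strongest links before feeding the weighted graph to a
# community-detection algorithm such as Louvain modularity maximization.
adjacency = np.where(similarity >= np.median(similarity), similarity, 0.0)
print(adjacency.shape)
```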
Another important result is the possibility of obtaining a neutral partition of banks into “network communities” (i.e. clusters) through community detection algorithms like the *Louvain* modularity maximization. The results indicate that regional communities evolve in time and that the crisis has a clear role in weakening geographically determined structures. Furthermore, we focus on proxies for leverage, size and performance in order to understand whether these variables have played a key role among the set of economic measures usually applied to classify banks (see e.g. [@BLU],[@HUI]). Hence, we aim to answer the question whether the collapse of financial markets has weakened these relationships, thereby limiting the power of traditional indicators to identify clusters of homogeneous banks. Correlation diagrams applied to show how network variables are related to economic measures suggest a turning point in correspondence with the outbreak of the crisis, which influenced the role of proxies for leverage, size or performance in grouping similar banks. These preliminary results motivated the last section, where by means of Principal Component Analysis we investigate which economic features are more likely to characterise the heterogeneity of the communities before, during and after the collapse of 2007-08.\
The remaining part of the work discusses open issues and future lines of research, such as open questions on how to improve the construction of the Accounting Networks. In particular, the effectiveness of this approach can be enhanced by means of a careful variable selection based on the best financial practices applied in the evaluation of financial statement structures. In addition, a more accurate normalization of the variables and accounting for national regulations may increase the usefulness of the methodology. Furthermore, matrix filtering techniques and missing-data reconstruction for financial statement information can enhance the extraction of meaningful clusters. Finally, more advanced and focused tools could be conceived to analyse banks’ evolution towards similar business configurations or, alternatively, their divergent patterns as a response to changing market conditions.
Methods {#methods .unnumbered}
=======
Dataset preparation {#dataset-preparation .unnumbered}
-------------------
The dataset we analysed covers the set of banks provided by *Bloomberg* which were active (i.e. with traded instruments) at the end of the first quarter of 2014. Although quarterly information is available, we prefer to focus on annual balance sheets and income statements for accounting-standard reasons: different countries can have different obligations in terms of the provision of quarterly financial statements, which can lead to mismatches and poor variable coverage. Data cover the reference period from 2001 to the end of 2013.\
As regards financial statements data, we select a large set of variables among those available in *Bloomberg* and related to the current regulatory framework ([@BASEL]). We rely on the existing literature for the selection process, while maintaining a neutral approach, and we focus the analysis on proxies for banking business models (see e.g. [@ALTU],[@CAL]). In particular, balance sheet data provide a year-by-year picture of stock variables in terms of assets and liabilities for different instruments and maturities, while income statement data describe annual economic performance by partitioning profits and losses according to banking activities, ranging for instance from interest to fees. Since national regulations allow firms to fix a different end of the fiscal year, we extend the “end of year” definition and the associated financial statements to a window between three months before and three months after the end of the calendar year. Resolving overlapping issues in variable definitions, as well as choosing a base currency, constitutes the first step of the data pre-processing procedure: we first discard total and sub-total measures (as they are redundant), and we then choose US dollars as the base currency, thus facilitating comparisons between banks.\
Working with financial statements data often leads to limitations in data coverage and completeness. Therefore, the starting point of our analysis is the selection of a stable set of banks in terms of data availability during the sample period. In particular, banks might change the composition of their financial statements, or they might be excluded by the *Bloomberg* provider for several reasons, such as a new regulation or a change in the bank's economic activities. This, in turn, might cause *missing values* for some variables or a lack of financial statements for several banks in certain years. In order to limit the impact of these issues on our findings, we define a methodology to measure the coverage of available variables for each bank in the reference period. We refer to the *Quality Ratio (QR)* as the proportion of available and usable variables $V_{OK}$ over the maximum of all possible ones $V_{ALL}$ in the sample period: $QR=V_{OK}/V_{ALL}$. The tuning of this indicator, combined with two further filters on the frequency of financial reporting, provides a stable set of banks identified by their QR. The two additional criteria are: a minimum number of financial statements of ten out of thirteen possible fiscal years, and a maximum gap between two consecutive annual reports of seven hundred days. Having selected those banks that report their financial statements almost continuously, we study them according to their respective QR.\
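The selection filter described above can be sketched as follows; this is a minimal illustration on hypothetical data (the function name `keep_bank` and the example report dates are our own, not the authors' code):

```python
from datetime import date

def keep_bank(v_ok, v_all, report_dates, qr_min=0.5,
              min_reports=10, max_gap_days=700):
    """A bank is kept if QR = V_OK / V_ALL meets the threshold, it files
    at least 10 of the 13 possible annual reports, and no gap between
    consecutive reports exceeds 700 days."""
    qr = v_ok / v_all
    if qr < qr_min or len(report_dates) < min_reports:
        return False
    dates = sorted(report_dates)
    gaps = [(b - a).days for a, b in zip(dates, dates[1:])]
    return all(g <= max_gap_days for g in gaps)

# Hypothetical bank reporting every fiscal year-end from 2001 to 2013.
reports = [date(2001 + y, 12, 31) for y in range(13)]
print(keep_bank(80, 120, reports))   # QR ~ 0.67, annual gaps -> kept
print(keep_bank(30, 120, reports))   # QR = 0.25 -> discarded
```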
In practice, individual QRs, as computed empirically over the entire sample, lie in the range between 0.3 (low accuracy/coverage) and 0.8 (high accuracy/coverage). Interestingly, many measures computed on the sets of banks obtained by fixing the QR do not seem to be significantly affected by its choice (except, as expected, for high QRs, where the size of the sample reduces significantly). With greater values of the QR parameter fewer banks are available, since only a few of them have a large set of variables present in most of their financial statements. As the estimates are stable over a reasonable QR range, in this work we use the set arising from QR = 0.5, which, although arbitrary, represents a good compromise between the accuracy of the estimates and the size of the sample (see Figure \[figQR\]).\
Accounting networks {#accounting-networks .unnumbered}
-------------------
For every year a vector of financial statement variables is assigned to each bank and used to compute the cosine similarities between pairs of banks/nodes. The intuition here is that the most similar banks (according to their financial statements) should stay closer in the network and form a cluster. The cosine similarity is then transformed into a metric (the square root is used so that the triangle inequality holds). The definition is the following: we compute the cosine of the angle between each pair of vectors via the dot product, and we then apply the simple transformation $w_{i,j} = 1 - \sqrt{1 - C_{i,j}^2}$, where $w_{i,j} \in [0, 1]$ and $C_{i,j}$ is the cosine similarity between *i* and *j*. In network terms, $w_{i,j}$ is the link weight. This transformation (see [@DongenEnright] for an introduction to similarity measures and related metrics) maps cosine similarities defined in the interval \[-1,1\] to weights in the interval \[0,1\]. Under this transformation, the more similar (or anti-similar) two nodes are, the larger the weight, while a weight of 0 is assigned to a pair of nodes with totally dissimilar financial statements (in practice, in our networks cosine similarities range mainly between 0 and +1).\
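The similarity-to-weight transformation can be sketched as follows; a small illustration assuming the financial-statement vectors have already been prepared (the guard against tiny negative values under the square root is a numerical safeguard we add):

```python
import numpy as np

def edge_weight(u, v):
    """Cosine similarity of two financial-statement vectors, mapped to
    the weight w = 1 - sqrt(1 - C^2) in [0, 1]."""
    c = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    # max(0, .) guards against float rounding pushing 1 - c^2 below zero
    return 1.0 - np.sqrt(max(0.0, 1.0 - c * c))

a = np.array([1.0, 2.0, 3.0])
print(edge_weight(a, 2 * a))   # same direction -> weight close to 1
print(edge_weight(np.array([1.0, 0.0]),
                  np.array([0.0, 1.0])))   # orthogonal -> 0.0
```

Note that, as stated in the text, anti-similar vectors ($C_{i,j} \approx -1$) also receive a large weight, since the transformation depends on $C_{i,j}^2$.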
In addition, before computing the metric, we need to take care of the size distribution of banks, as it spans several orders of magnitude. To avoid a bias toward large institutions, for each bank we divide all variables in its vector by the respective total assets, in such a way that the attributes of the vector refer to economic and financial *ratios*. This operation ensures that clusters are formed by banks with similar business activities regardless of their size.\
An important methodological choice of our study is the “neutral” approach used for the selection of the variables within the financial statements. Apart from removing related and redundant measures (totals and subtotals), we used all the available information, applying the same weight to each variable in the vectors. This agnostic approach is in line with the goal of the paper, i.e. introducing the concept of Accounting Network, although we are aware that practitioners may attach a different importance to each variable of the financial statement. In our perspective, we expect the relevant information to emerge in a bottom-up process, as a spontaneous feature selection carried out by our methodology. Finally, we introduce a confidence level (95%) during link formation: using a Monte Carlo sampling test, if the cosine similarity is statistically significant at the 95% confidence level we retain the link, otherwise we discard it. As a result of this filtering procedure, we observe that the networks tend to be very dense and almost complete. Most of the information is carried by the weights of the links, and less by the simple topology (degrees and other structural features).
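The Monte Carlo sampling test is not spelled out in detail in the text; one plausible reading, sketched below entirely under our own assumptions, is a permutation test in which the observed cosine is retained only if it exceeds the 95% quantile of cosines obtained by shuffling one vector's entries:

```python
import numpy as np

rng = np.random.default_rng(0)

def cosine(u, v):
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def link_is_significant(u, v, n_draws=2000, level=0.95):
    """Retain the link only if the observed cosine exceeds the `level`
    quantile of cosines computed against random permutations of v."""
    null = [cosine(u, rng.permutation(v)) for _ in range(n_draws)]
    return cosine(u, v) > np.quantile(null, level)

# Two near-identical (hypothetical) ratio vectors: the link survives.
u = np.linspace(1.0, 2.0, 30)
print(link_is_significant(u, u + 0.01))
```

By the rearrangement inequality, the aligned dot product of two similarly ordered vectors dominates any permuted one, so the observed cosine here sits above essentially the whole null distribution.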
Community detection {#community-detection .unnumbered}
-------------------
A classical method to investigate the structure of a network is the search for communities, i.e. regions of the network with a larger *internal* link density. Intuitively, these regions are formed by clusters of nodes with higher degrees or, for weighted networks, with larger strengths. Several methods have been proposed to find network communities without imposing their number a priori, but letting them emerge from the network itself. Among others, we cite the optimization of the modularity, a measure of how much the link structure differs from a random network in which links are assigned with uniform probability and no internal communities are present (apart from fluctuations). For weighted networks, the modularity is defined by the following formula:\
$$Q_{w} = \frac{1}{2W} \cdot \sum_{ij} \left( w_{ij} - \frac{s_i s_j}{2W} \right) \delta (c_i,c_j)$$ where $s_i = \sum_{j} w_{ij}$ and $s_j = \sum_{i} w_{ij}$ are the strengths (sums of weights) of the nodes $i$ and $j$ respectively, $W=\frac{1}{2}\sum_{ij} w_{ij}$ is the total weight of the links (so that $\sum_{ij} w_{ij} = 2W$), and the function $\delta (c_i, c_j)$ is equal to 1 if $i$ and $j$ belong to the same community and 0 if they are members of different communities. The maximum modularity value is 1 (an ideal case in which the communities are isolated), and negative values are also possible. The value 0 corresponds to the trivial partition in which the whole graph forms a single community. A negative value means that there is no advantage in separating the nodes into those particular clusters, and hence no community structure whatsoever.\
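The weighted modularity of a given partition can be computed directly from this formula; a minimal sketch on a toy symmetric weight matrix (two disconnected equal-weight pairs, a textbook case for which $Q_w = 0.5$):

```python
import numpy as np

def weighted_modularity(W, labels):
    """Q_w = (1/2W) * sum_ij (w_ij - s_i s_j / 2W) * delta(c_i, c_j),
    for a symmetric weight matrix W and per-node community labels."""
    two_w = W.sum()                  # equals 2W for a symmetric matrix
    s = W.sum(axis=1)                # node strengths
    labels = np.asarray(labels)
    delta = labels[:, None] == labels[None, :]
    return ((W - np.outer(s, s) / two_w) * delta).sum() / two_w

# Two pairs of nodes linked internally, no links between the pairs.
pair = np.array([[0.0, 1.0], [1.0, 0.0]])
W = np.block([[pair, np.zeros((2, 2))], [np.zeros((2, 2)), pair]])
print(weighted_modularity(W, [0, 0, 1, 1]))   # -> 0.5
```

This is a direct evaluation of the formula, not the Louvain optimization itself, which searches over partitions to maximize this quantity.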
To study the presence of communities it is often necessary to prune the network, cutting links whose weight is below a certain threshold. In our case we intend to consider only the links formed by nodes whose financial statement vectors have a large similarity/weight $w$. The pruning procedure can be guided by tools related to the community detection methodology ([@Fortunato:2010]). In particular, working with the modularity optimization function ([@Newman:2004]) and the Louvain technique ([@Blondel:2008]), it is possible to look at the *significance* associated with the threshold (as in [@Traag2013]), where the modularity is introduced as a parameter to check for the best resolved community structure. We use this parameter to help find a reasonable range of pruning thresholds for the networks. A rule of thumb in this process is to avoid network fragmentation, i.e. to keep the graph connected while removing non-significant links. We performed extensive tests computing the quality/significance of the partitions (looking at the modularity parameter) for different pruning thresholds (i.e. removing the links having a low weight), determining a range of weight thresholds ($0.35< w_{i,j}<0.5$) that prunes the original networks to an optimal level. In this interval, communities are stable and the interpretation of each region can be seen as a result of the financial statement similarities across banks in different countries.
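The pruning step, together with the connectedness rule of thumb, can be sketched as follows (the toy weights are our own illustration):

```python
import numpy as np

def prune(W, threshold):
    """Zero out links whose weight falls below the pruning threshold."""
    P = np.where(W >= threshold, W, 0.0)
    np.fill_diagonal(P, 0.0)
    return P

def is_connected(W):
    """Depth-first search on the pruned matrix: the rule of thumb is to
    keep the graph in one piece while removing weak links."""
    n = len(W)
    seen, stack = {0}, [0]
    while stack:
        i = stack.pop()
        for j in np.nonzero(W[i])[0]:
            if j not in seen:
                seen.add(j)
                stack.append(j)
    return len(seen) == n

W = np.array([[0.0, 0.9, 0.4],
              [0.9, 0.0, 0.2],
              [0.4, 0.2, 0.0]])
print(is_connected(prune(W, 0.35)))   # weak link dropped, still connected
print(is_connected(prune(W, 0.5)))    # node 2 isolated -> fragmented
```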
Network measures vs. economic indicators {#network-measures-vs.-economic-indicators .unnumbered}
----------------------------------------
Comparisons between network measures and economic indicators are provided to describe the correlation between nodes' network topology and economic behaviour. We study these features by means of extensive linear correlation tests (Pearson correlation) on the overall set of banks for each year, and we verify the significance of the estimates by means of parametric tests. These estimates are based on the filtered networks, which are themselves based on the significance and quality of the community detection algorithm. This analysis shows how nodes' network properties (e.g. *Strength* or *Clustering Coefficient*) are associated with basic economic indicators (e.g. *Return on Assets*, *Total Assets* and *Total Debts to Total Assets*), thus showing whether nodes' topological properties are positively or negatively related to certain economic features and how these relationships weakened or strengthened during the crisis.\
The clustering coefficient measures the local tendency of nodes to form small regions of fully connected nodes; here it is the average of the local clustering coefficients (the actual number of triangles centered in each node over the total possible). Return on assets (ROA) is net income over total assets and measures bank performance. Total debts to total assets is an indicator of a bank's leverage, computed as the ratio between its debts and its size (measured by total assets).
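A minimal sketch of the economic indicators and the correlation estimate; the per-bank figures are hypothetical, and this plain Pearson estimator stands in for the full parametric tests mentioned above:

```python
import numpy as np

def pearson(x, y):
    """Sample Pearson correlation coefficient."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xc, yc = x - x.mean(), y - y.mean()
    return (xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc))

# Hypothetical per-bank figures (net income, total assets, total debts).
net_income   = np.array([2.0, 1.0, -0.5, 3.0])
total_assets = np.array([100.0, 80.0, 60.0, 150.0])
total_debts  = np.array([90.0, 70.0, 57.0, 120.0])

roa      = net_income / total_assets    # Return on Assets (performance)
leverage = total_debts / total_assets   # Total Debts to Total Assets
print(pearson(roa, leverage))
```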
Principal Component Analysis {#principal-component-analysis .unnumbered}
----------------------------
Once communities are identified, we attempt to describe which financial statement variables are more likely to characterise these clusters. In order to facilitate comparability, we focus on the most common indicators within the set of variables utilised to compute the cosine similarities (i.e. those appearing with the largest frequency in the entire dataset). In fact, including measures that are very poorly represented across different banks would have made the comparisons less effective, with potential biases related to e.g. different regulatory frameworks or geographical memberships. Hence, since we are interested in disentangling potential similarities/peculiarities across different communities, we prefer to rely on common and well-diffused measures of banking activities among those present in banks' financial statements. In addition, we enrich this set by means of indicators such as ratios (e.g. *Return on cap* and *Total debts to total assets*) and aggregated measures (e.g. *Total assets*). Community detection identifies four main clusters, whose constituents are more numerous and stable in time; for the sake of conciseness, the *Results* section will focus mainly on these communities. In particular, for each year we describe by means of Principal Component Analysis (PCA) which economic features contribute more (less) to the explained variability of communities' members.\
PCA is a multivariate technique that analyses observations described by several inter-correlated variables. PCA extracts the important information from the data and expresses it as a set of new orthogonal variables (principal components). In our exercise, since the measures present different ranges of dispersion (e.g. by construction some ratios are bounded), we rely on a scaled version of PCA; moreover, we consider only principal components with eigenvalues greater than $1$ (in almost all cases they correspond to the first $3$ components). Then, we compute the proportion of the variance of each original economic measure that can be explained by the selected principal components. This, in turn, leads to a ranking of the original economic measures in terms of their ability to describe a certain community's variability. In particular, since we are interested in how the onset of the financial crisis has affected the banking system, we split this analysis into three periods: from $2001$ to $2006$ (before the crisis), from $2007$ to $2009$ (the onset of the crisis), and from $2010$ to $2013$ (after the breakdown of the markets). For each period we characterise each community by the top and the bottom three measures, thus analysing how these ranks have evolved over time and across communities.
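The PCA step can be sketched with a plain eigendecomposition of the correlation matrix (scaled PCA); the synthetic data below, with two variables driven by a common factor, are our own illustration of the eigenvalue-greater-than-one rule and the per-variable explained-variance shares:

```python
import numpy as np

def explained_by_kept_components(X):
    """Scaled PCA: eigendecompose the correlation matrix, keep components
    with eigenvalue > 1, and return, per original variable, the share of
    its (unit) variance explained by the kept components."""
    Z = (X - X.mean(axis=0)) / X.std(axis=0)   # standardize each measure
    R = np.corrcoef(Z, rowvar=False)
    vals, vecs = np.linalg.eigh(R)
    keep = vals > 1.0
    # communality of variable j over kept components: sum_k lam_k * v_jk^2
    return (vecs[:, keep] ** 2) @ vals[keep]

rng = np.random.default_rng(1)
f = rng.normal(size=100)                       # common latent factor
X = np.column_stack([f + 0.1 * rng.normal(size=100),
                     f + 0.1 * rng.normal(size=100),
                     rng.normal(size=100)])    # an unrelated measure
share = explained_by_kept_components(X)
print(share.round(2))   # the two factor-driven variables rank on top
```

Ranking the entries of `share` reproduces, in miniature, the top/bottom contributor rankings used in the Results section.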
Results {#results .unnumbered}
=======
This section shows how Accounting Networks represent a complementary technique to traditional financial networks for the study of the banking system.\
While financial networks reflect the view from the market, related to e.g. the pairwise correlations of stock prices, Accounting Networks capture the effects of business decisions on financial statement measures and on the business models of different institutions. An “ideal” investigation of the financial system would also involve a detailed analysis of the money flows among companies, which determine the so-called “mutual exposures” (an important contagion channel). Unfortunately, such highly granular and detailed data are usually not available. However, financial statements provide an aggregated view of mutual exposures and obligations for different maturities and types of instrument. This is an important point in favour of Accounting Networks, as they report summarised information on e.g. phenomena occurring at different time scales and under different contractual terms, as opposed to financial networks, which rely only on homogeneous (daily or intraday) market data.\
Community Detection Results {#community-detection-results .unnumbered}
---------------------------
In this sub-section we focus our attention on the bottom-up clustering of the network resulting from the application of the community detection algorithm, and on the geographical structures arising when we label each bank with its country. We therefore describe whether banks belonging to different countries (as a proxy for different regulations and/or level playing fields) have shown a tendency to be part of separate or, alternatively, common clusters, and we verify, by analysing communities' evolution over time, whether the crisis influenced these configurations. In particular, our community detection analysis on Accounting Networks shows the following main results.\
It exhibits, persistently over time, a clear community of US banks and another one composed of Japanese banks, although for both regions there is also an additional, smaller second group that is quite persistent in time. By contrast, it is not possible to identify a single, unambiguous European community, since banks belonging to European countries tend either to form national or sub-regional communities or to be included in a vast and geographically heterogeneous cluster (hereinafter the *Mixed* community). In addition, Asian banks are fragmented into several sub-regions where, in particular, the Arab and the Indian-Pakistani groups emerge. Therefore, the detection of communities within Accounting Networks reveals the presence of two homogeneous clusters corresponding to US and Japanese banks, surrounded by a more diversified cloud of banks belonging to different countries; remarkably, European banks do not cluster together in a single community, and a certain level of separation, based also on national borders, persists over time. Hence, an interesting contribution of the paper points to the presence of a large and geographically heterogeneous community, which can be related to the fact that the globally established regulatory framework might indeed have accelerated the tendency of banking activities in different countries to converge into more uniform banking practices. This is shown for instance in Figure \[figCD\_PCA\], where we also observe that the breakdown of financial markets contributed to making the Mixed community more cohesive; furthermore, although still representing separate communities, both the US and JP clusters become topologically closer to the Mixed community after the breakdown of 2007-08, thus supporting the interpretation of a gradual convergence of different areas towards more similar patterns.
In addition, the application of community detection to Accounting Networks makes it possible to identify even small communities, such as those related to African or Scandinavian banks. This is a quite promising aspect of the methodology, since it ensures the detection of reliable local communities even though the approach taken so far is eminently agnostic.\
It is not simple to explain the reasons behind the emergence and evolution of these communities; however, some intuitions can be advanced based on the impact of globally recognized accounting standards ([@FASB]), the establishment of supranational supervisory and regulatory authorities, and the role of the harmonization process of banking practices implemented through e.g. the various Basel regulations ([@BASEL]). These contributions point to a common level playing field, which might have facilitated the emergence of a large and geographically heterogeneous community and its increasing topological proximity to both the US and JP clusters. However, the latter communities highlight the persistence of regional peculiarities. In Japan a deregulation process, known as the 'Japanese Big Bang', was formulated during the 1990s to transform the traditional bank-centered system into a market-centered financial system characterised by more transparent and liberalised financial markets ([@JP1]). In fact, peculiar features of the Japanese banking sector were the over-reliance on intermediated bank lending, the absence of a sufficiently developed corporate bond market, and a marginal role for non-bank financial institutions, whose main consequences were an abundance of non-performing loans, excess liquidity, scarce investments and low bank profitability (see e.g. [@BATTEN]). Although this program was intended to cover the period 1996-2001, its goals have not yet been achieved, and policy makers' continuing reform efforts to remove past practices by market participants confirm the slow convergence of the Japanese regulatory framework towards a capital-market-based financial system ([@JP2]). Thus, the presence of the JP community, which gradually tends towards the Mixed cluster, is in line with evidence from the Japanese financial sector reforms aimed at changing its reliance on indirect finance into a system of direct finance related to capital markets.
Furthermore, the presence of a US community that is quite stable over time, and which seems to be progressively attracted by the Mixed cluster, is remarkable. The US financial system presents peculiar features compared to other geographical areas: it is characterised by a relatively greater role of capital-market-based intermediation, a higher importance of the 'shadow banking system', and differences in accounting standards ([@ECB]). The impact of non-bank financial intermediation relates to the use of originate-to-distribute lending models, which determine the direct issuance of asset-backed securities and the transfer of loans to government-sponsored enterprises (GSEs, e.g. Fannie Mae and Freddie Mac). Financial innovation played a key role, and the increasing use of securitisation explains the low percentage of loans to households on banks' balance sheets ([@ECB]). In addition, the US 'shadow banking system' is highly dependent on the presence of finance companies, money market funds, hedge funds and investment funds, which influenced the growth of total assets in the US financial sector during the last decades ([@shin2012global],[@shadow]). The presence of a distinct community is probably also due to differences in accounting standards, which mainly involve the treatment of derivatives positions under the US Generally Accepted Accounting Principles (US GAAP) and the International Financial Reporting Standards (IFRSs). In particular, US GAAP allows reporting the net value of derivative positions with the same counterparty under a single master agreement, thus affecting the size representation of balance sheet items. However, in Figure \[figCD\_PCA\] we observe that the US community (similarly to the JP community) is gradually approaching the Mixed community, and the consequences of the breakdown of 2007-08 seem to have enhanced this behaviour.
Among the several possible reasons, it is worthwhile to consider the impact of the reform of the OTC derivatives market (embedded in the Dodd-Frank Act) and the new Basel III banking regulation, which may have facilitated similarities between US institutions and their peers in the Mixed cluster.
Relationships between Economic Indicators and Network Properties {#relationships-between-economic-indicators-and-network-properties .unnumbered}
----------------------------------------------------------------
In this section we provide a preliminary investigation of the relationships between banks' economic indicators and their network properties. In order to characterise banks, we consider three common proxies for their classification: *Return on Assets* (for *Performance*), *Total Assets* (for *Size*) and *Total Debts to Total Assets* (for *Leverage*). Comparisons are then presented against two basic network measures: the *Strength* and the *Clustering Coefficient*. For each year from 2001 to 2013, we provide some insights into these relationships by estimating, for the overall sample, the correlations between banks' economic indicators and network measures. As explained in the Methods section, in this exercise we consider the network filtered according to the quality/significance of the *Louvain* community detection algorithm, which helps us assess the significance of our results. Below, we show some examples to discuss how these relationships have evolved over time.\
In particular, we investigate whether, once the effects of the crisis had spread throughout the financial sector, the capacity of traditional economic indicators (e.g. leverage, size, performance) to group banks was undermined. For instance, the onset of the financial crisis clearly affects the relationship between *Total Debts to Total Assets* and network properties. Although the correlation between *Strength* and *Total Debts to Total Assets* remains negative during the entire sample period, the breakdown of financial markets seems to further enhance this effect in subsequent years (Figure \[figCorrelations\], left plot). This relationship thus suggests that, after the onset of the crisis, the use of leverage became on average more anti-correlated with the *Strength*: banks that are more dissimilar in terms of their financial statements (i.e. with lower values of *Strength*) are those that turned out to be less capitalised (i.e. with higher values of *Total Debts to Total Assets*). Furthermore, one might be interested in the role played by *Size*, a typical indicator utilised to classify banks. The correlation between *Strength* and *Total Assets* is almost flat and negative even after the collapse of 2007-08, but it shows an increasing trend in the recent period (Figure \[figCorrelations\], middle plot). Hence, it seems that after the outbreak of the crisis *Size* became less correlated with the similarity among banks, as the estimates pointing sharply to zero seem to suggest. We finally analyse the relationship between *Performance* and network properties (Figure \[figCorrelations\], right plot). In particular, in order to capture how the presence/absence of more connected groups of banks is related to economic results, we consider the *Clustering Coefficient* to determine the level of structure in the system.
Although poorly statistically significant in the early 2000s, correlations with *Return on Assets* exhibit a decreasing pattern before the onset of the crisis and then remain negative, although slightly erratic. The negative relationship between *Clustering Coefficient* and *Return on Assets* seems to suggest that the presence of well-connected areas in the network (nodes with higher clustering coefficients) does not foster economic performance.\
These basic examples suggest that an investigation of the relationships between economic indicators and network properties might not always be conclusive. Moreover, once we consider the entire set of banks, there may be cases where the estimates are poorly significant. Still, some remarkable effects arise from this investigation strategy, and preliminary results point to a turning point in the correlations around the outbreak of the financial crisis. In particular, the diagrams confirm that leverage is a useful indicator for differentiating banks: deviations towards lower capitalization are associated with increasing dissimilarity from the rest of the system, and the impact of the crisis suggests a reinforcement of this relationship. By contrast, it seems that size does not contribute much to the similarity between banks after the breakdown of 2007-08, while it played a greater role before and during the crisis. Finally, the relationship between performance and the structure of the system is less clear and prevents straightforward conclusions.\
The identification of economic features potentially able to characterise specific portions of the system is addressed in the next sub-section.
PCA results {#pca-results .unnumbered}
-----------
Community detection shows the presence of three large clusters (Mixed, US, and JP) and an additional, smaller community that is quite stable and persistent (mostly US+EU banks). In this section we provide a way to describe how these communities can be represented in terms of economic features (see Figure \[figCD\_PCA\]). Given the multi-dimensionality of the set of measures utilised to build the networks, we adopt a Principal Component Analysis approach to identify those measures which contribute more (less) to the explained variance within each community. For the sake of simplicity, we propose the ranking of the top (bottom) three measures for each community during the following intervals: pre-crisis ($2001-2006$), crisis ($2007-2009$), and post-crisis ($2010-2013$). In particular, for each year we compute the contribution of the original measures to the explained variance; we then average within each sub-period and determine the rankings based on the mean period values. Below, we name the community with a mixed geographical composition *C0*, while we refer to the communities with a prevalence of US, JP and European plus US banks as *C1*, *C2* and *C3*, respectively.\
This representation allows us to compare communities' features over time and across different groups. For instance, we observe that *Total Assets* and *Interest Income* are quite frequent among the top contributing measures, while *Total Debts to Total Assets* is recurrent among the measures in the bottom rankings. This is not surprising given banks' heterogeneity in terms of the distributions of size (*Total Assets*) and economic results (*Interest Income*), in contrast with the tight constraints on leverage (*Total Debts to Total Assets*) due to regulatory requirements. Focusing on the top rankings, we notice that *C0* and *C1* have fairly stable top contributors, while communities *C2* and *C3* are more affected by the wave of financial turmoil. Furthermore, the bottom rankings seem to be on average only slightly influenced by the choice of different sub-periods. In addition, the differences between the mean values of the set of top three and the set of bottom three contributors are quite stable over time, with only a few exceptions, while the middle part of the distribution of measures' contributions (not reported, available from the authors upon request) is in general quite sparse. For these reasons, we prefer to focus on the top and the bottom rankings to describe communities' features.\
One might be interested in how the outbreak of the financial crisis has influenced these rankings. The top composition of *C1* is unaffected by the 2007-08 financial breakdown, while *C0* is only partially modified by the onset of the crisis (*Interest Income* is replaced by *Net Interest Income*). Conversely, *C2* presents a quite different configuration during the crisis sub-period, when it exhibits a relevant role for expense measures (i.e. Non Interest Expenses and Operating Expenses). Similarly, income statement measures become more relevant among the top contributors also within the *C3* community. Interestingly, community *C0*, which is characterised by a mixed geographical composition, and the US community (*C1*) reach identical top contributors after the outbreak of 2007-08, while the JP community (*C2*), which shows the same top contributors as community *C0* in the first sub-period, seems to react differently during the crisis, although in the third sub-period it again shows top contributors similar to those of *C0* (and of *C1*). By contrast, community *C3* seems to present a peculiar pattern over time.\
Therefore, the crisis sub-period coincides with remarkable differences in the top contributors, while the recent sub-period points to a renewed tendency towards similar contributors for a wider set of banks (*C0* and *C1*, and partially *C2*). This seems to be in line with the above discussion of the community detection results, where we highlighted a gradual proximity between clusters over time. Hence, these results suggest that the heterogeneity within clusters is driven by similar economic measures after the crisis, although specific differences persist. This is the case, for instance, of loans, which are not present among the top contributors in the US community while they appear in the top ranking of both the Mixed and the JP community (as expected, according to the above discussion). We also notice that the crisis seems to bring an increasing importance of income statement measures in terms of their contribution to the explained variance within communities. The breakdown of financial markets affected banks' results, and this justifies the high level of heterogeneity expressed by income statement indicators. This can also be related to the impact of the crisis on financial statement measures and to the different ways in which banks update their balance sheet structures compared to the recognition of economic results in the income statement items. Similar comparisons can also involve the bottom three measures, but for conciseness we omit this part.
Discussion {#discussion .unnumbered}
==========
In this paper, we depict the banking system through banks’ financial statements. Our main contribution is the introduction of a methodology that exploits balance sheet and income statement data to construct Accounting Networks. We show some relationships between economic indicators and network properties, which may provide new useful insights for banking classification practices. Having captured some effects of the recent financial crisis with a simple framework is an encouraging sign for further extensions. We rely on “neutral” and “naive” techniques to build the Accounting Networks. In particular, among the approaches commonly applied to describe similarity concepts, we adopt one of the most basic methods, i.e. the cosine similarity. Future work can exploit more advanced methodologies. Moreover, our selection of the variables used to compute cosine similarities assumes that each component has the same importance. This is quite a naive hypothesis, which could be enriched by discriminating among measures based on the economic literature and/or practitioners’ practices. Finally, for accounting reasons we limit our study to annual financial statements, while a more detailed description of the system might easily involve the use of quarterly data. Despite these simplifying assumptions, our approach has the merit of introducing a novelty into the debate on banking networks, and we believe that future improvements in the directions outlined above will strengthen the Accounting Networks’ ability to describe the evolution of banking systems.
Acknowledgments {#acknowledgments .unnumbered}
===============
We gratefully acknowledge the financial support of the Italian project CRISISLAB and thank the Linkalab Laboratory for its open discussions and valuable suggestions.
Figure Captions {#figure-captions .unnumbered}
===============
![This picture shows the number of nodes and edges along the sample period for different QR values. For small values of the Quality Ratio parameter, the curves fall within a narrower range.[]{data-label="figQR"}](1_figQualityRatios.png)
![The upper panels show the community structure for the three periods. The impact of the 2007-08 financial downturn seems to be reflected more heavily after the crisis, with the emergence of many sub-region communities as a response to the deteriorated market conditions. The lower panel shows the most important financial statement components according to the PCA.[]{data-label="figCD_PCA"}](3_RadarNetPCA_mod_new.png)
![In these plots we present the correlations between banks’ Strength and Total Debts to Total Assets (Leverage) (left), Strength and Total Assets (Size) (middle), and Clustering Coefficient and Return on Assets (Performance) (right). The correlations are computed across the years 2001-13. The effect of the financial crisis around the outbreak of 2007-08 is clearly visible. Red points stand for non-significant estimates, while blue points refer to significant estimates.[]{data-label="figCorrelations"}](4_1_leverage__strength.png "fig:") ![In these plots we present the correlations between banks’ Strength and Total Debts to Total Assets (Leverage) (left), Strength and Total Assets (Size) (middle), and Clustering Coefficient and Return on Assets (Performance) (right). The correlations are computed across the years 2001-13. The effect of the financial crisis around the outbreak of 2007-08 is clearly visible. Red points stand for non-significant estimates, while blue points refer to significant estimates.[]{data-label="figCorrelations"}](4_2_tot_assets__strength.png "fig:") ![In these plots we present the correlations between banks’ Strength and Total Debts to Total Assets (Leverage) (left), Strength and Total Assets (Size) (middle), and Clustering Coefficient and Return on Assets (Performance) (right). The correlations are computed across the years 2001-13. The effect of the financial crisis around the outbreak of 2007-08 is clearly visible. Red points stand for non-significant estimates, while blue points refer to significant estimates.[]{data-label="figCorrelations"}](4_3_return_on_assets__clustering.png "fig:")
Tables {#tables .unnumbered}
======
---
abstract: 'The family of Vicsek fractals is one of the most important and frequently studied classes of regular fractals, and it is of considerable interest to understand the dynamical processes on this treelike fractal family. In this paper, we investigate discrete random walks on the Vicsek fractals, with the aim of obtaining the exact solutions for the global mean first-passage time (GMFPT), defined as the first-passage time (FPT) averaged over all pairs of nodes, for the whole family of fractals. Based on the known connections between FPTs, effective resistance, and the eigenvalues of the graph Laplacian, we determine implicitly the GMFPT of the Vicsek fractals, which is corroborated by numerical results. The obtained closed-form solution shows that the GMFPT grows approximately as a power-law function of the system size (number of nodes), with the exponent lying between 1 and 2. We then provide both upper and lower bounds for the GMFPT of general trees, and show that the leading behavior of the upper bound is the square of the system size while the dominating scaling of the lower bound varies linearly with the system size. We also show that the upper bound is achieved in linear chains and the lower bound is reached in star graphs. This study provides a comprehensive understanding of random walks on the Vicsek fractals and general treelike networks.'
author:
- 'Zhongzhi Zhang$^{1,2}$'
- 'Bin Wu$^{1,2}$'
- 'Hongjuan Zhang$^{2,3}$'
- 'Shuigeng Zhou$^{1,2}$'
- 'Jihong Guan$^{4}$'
- 'Zhigang Wang$^{5}$'
title: |
Determining global mean-first-passage time of random walks on Vicsek fractals\
using eigenvalues of Laplacian matrices
---
Introduction
============
Fractals are an important concept for characterizing the features of real systems, because they can model a broad range of objects in nature and society [@Ma82]. Over the past few decades, fractals have attracted considerable interest from the physics community [@HaBe87; @BeHa00]. Among numerous fractal classes, the so-called regular fractals form an important family. Examples include the Sierpinski gasket [@Si1915], the Koch snowflake [@Ko1906], the Vicsek fractals [@Vi83], and so on. These structures have received much attention [@Ma82; @HaBe87; @BeHa00] and continue to be an active object of research [@Fa03]. One of the main reasons for studying regular fractals is that one can obtain explicit closed-form solutions on a finite structure. Another justification is that various problems intractable on Euclidean lattices become solvable on regular fractals [@ScScGi97]. On the other hand, the exact solutions on regular fractals can provide useful insight different from that given by the approximate solutions for random fractals.
A central issue, still debated, is to understand how the underlying geometrical and structural features influence the various dynamics defined on complex systems, which has been considered an important problem in many interdisciplinary fields, e.g., network science [@Ne03; @BoLaMoChHw06; @DoGoMe08]. Amongst a plethora of fundamental dynamical processes, random walks are crucial to many branches of science and engineering and have attracted much interest [@HaBe87; @MeKl00; @MeKl04; @BuCa05]. A basic quantity relevant to random walks is the first-passage time (FPT) [@Re01], which is the expected time needed to hit a target node for the first time by a walker starting from a source node. It is a quantitative indicator of the transport efficiency, and it carries much information about random walks, since many other quantities can be expressed in terms of it. Thus, a growing number of studies have concentrated on this interesting quantity [@Mo69; @NoRi04; @CoBeMo05; @SoRebe05; @CoBeTeVoKl07; @BaCaPa08; @ZhZhZhYiGu09; @TeBeVo09].
In view of the significance of regular fractals and random-walk dynamics, many authors have devoted their efforts to studying random walks on regular fractals [@HaRo08], such as the Sierpinski gasket [@KaBa02PRE; @KaBa02IJBC], the $T-$fractal [@KaRe86; @KaRe89; @Ag08], the Vicsek fractals [@Vi84; @Vo09], as well as the hierarchical lattice fractals [@BeOs79; @ZhXiZhLiGu09]. The results of these investigations have unveiled many unusual and exotic phenomena of random walks on regular fractals. However, as far as the FPT is concerned, these studies only addressed the mean of the FPTs between part of the node pairs, e.g., between a given node and all other nodes [@KaBa02PRE; @KaBa02IJBC; @Ag08; @ZhXiZhLiGu09], while the scaling of the FPT averaged over all pairs of nodes, often called the global mean first-passage time (GMFPT), in regular fractals is still not well understood [@CoBeTeVoKl07], even though the GMFPT provides comprehensive information about random walks on fractals and other media.
In this paper, we study analytically discrete random walks on a class of treelike fractals—the Vicsek fractals—which are typical candidates for exact mathematical fractals and have received extensive interest [@WeGr85; @BlJuKoFe03; @WaLi92; @StFeBl05; @ZhZhChYiGu08]. We determine exactly the GMFPT between two nodes for the whole fractal family, which is verified by numerical results. The closed-form formula for the GMFPT is obtained iteratively by taking advantage of the specific construction of the Vicsek fractals. The obtained explicit expression indicates that for large systems the GMFPT increases algebraically with the size of the system. In the second part of this work, we provide rigorous upper and lower bounds for the GMFPT as a function of system size for general treelike media. We show that among all trees the linear chains have the largest GMFPT and the star graphs the smallest.
Brief introduction to the Vicsek fractals
=========================================
The so-called Vicsek fractals are constructed in an iterative way [@Vi83; @BlJuKoFe03]. Let $V_{f,g}$ ($f\geq 2$, $g \geq 1$) denote the Vicsek fractals after $g$ iterations (generations). The construction starts ($g=1$) from a star-like cluster consisting of $f+1$ nodes arranged in a cross-wise pattern, where $f$ peripheral nodes are connected to a central node. This corresponds to $V_{f,1}$. For $g\geq 2$, $V_{f,g}$ is obtained from $V_{f, g-1}$. To obtain $V_{f,2}$, we generate $f$ replicas of $V_{f,1}$ and arrange them around the periphery of the original $V_{f,1}$; then we connect each of the $f$ corner copies to the central structure by an additional link. These replication and connection steps are repeated infinitely, with the desired Vicsek fractals obtained in the limit $g \rightarrow \infty$, whose fractal dimension is $\ln (f+1)/\ln3$. In Fig. \[net\], we show schematically the structure of $V_{4,3}$. According to the construction algorithm, at each step the number of nodes in the system increases by a factor of $f+1$; thus, the total number of nodes (i.e., network order or system size) of $V_{f,g}$ is $N_{g}= (f+1)^{g}$. Since the whole family of Vicsek fractals has a treelike structure, the total number of links in $V_{f,g}$ is $E_{g}= N_{g}-1=(f+1)^{g}-1$.
![Illustration of the first several iterative processes of a particular Vicsek fractal $V_{4,3}$. The open circles denote the starting structure $V_{4,1}$.[]{data-label="net"}](Vicsek){width=".85\linewidth"}
GMFPT in the Vicsek fractals
============================
After introducing the Vicsek fractals $V_{f,g}$, we proceed to study, numerically and analytically, random walks performed on them, which is the primary topic of the present paper. The random-walk model we study is a simple one. Assuming the time to be discrete, at each time step the walker (or particle) jumps uniformly from its current location to one of its neighbors. The quantity of interest related to random walks is the GMFPT, i.e., the FPT from a source node to a target node, averaged over all pairs of source and target nodes.
The GMFPT can be obtained numerically, yet exactly, via the pseudoinverse [@BeGr03; @RaMi71] of the Laplacian matrix, $\textbf{L}_g$, of $V_{f,g}$. The entries $L_{ij}^{g}$ of $\textbf{L}_g$ are defined as follows: the off-diagonal element $L_{ij}^{g}=-1$ if nodes $i$ and $j$ are linked to each other, and $L_{ij}^{g}=0$ otherwise; the diagonal entry is $L_{ii}^{g}=d_i$ (the degree of node $i$). The pseudoinverse (denoted by $\textbf{L}_g^\dagger$) of $\textbf{L}_g$ is a variant of its inverse matrix and is defined as $$\label{Pinverse01}
\textbf{L}_g^\dagger=\left(\textbf{L}_g-\frac{\textbf{e}_g\textbf{e}_g^\top}{N_g}\right)^{-1}+\frac{\textbf{e}_g\textbf{e}_g^\top}{N_g}\,,$$ where $\textbf{e}_g$ is the $N_g$-dimensional “one" vector, i.e., $\textbf{e}_g=(1,1,\cdots,1)^\top$.
The FPT between any pair of nodes in $V_{f,g}$ can be expressed in terms of the elements, $L_{ij}^{\dagger,g}$, of $\textbf{L}_g^\dagger$. Let $T_{ij}(g)$ stand for the FPT for random walks in $V_{f,g}$, starting from node $i$ to node $j$. Then [@CaAb08] $$\label{Hitting01}
T_{ij}(g)=\sum_{n=1}^{N_g}\left(L_{in}^{\dagger,g}-L_{ij}^{\dagger,g}-L_{jn}^{\dagger,g}+L_{jj}^{\dagger,g}\right)L_{nn}^{g}\,,$$ where $L_{nn}^{g}$ is the $n$th diagonal entry of $\textbf{L}_g$. Thus, the sum, $T_{\rm sum}(g)$, for FPTs between all node pairs in $V_{f,g}$ reads as $$\label{Hitting02}
T_{\rm sum}(g)=\sum_{i\neq j}\sum_{j=1}^{N_g}T_{ij}(g)\,,$$ and the GMFPT, $\langle T \rangle_g$, is $$\label{Hitting03}
\langle T \rangle_g=\frac{T_{\rm
sum}(g)}{N_g(N_g-1)}=\frac{1}{N_g(N_g-1)}\sum_{i\neq
j}\sum_{j=1}^{N_g}T_{ij}(g)\,.$$
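As a small illustration, the computation of Eqs. (\[Pinverse01\])–(\[Hitting03\]) can be sketched in a few lines of Python (the function and variable names below are ours, not from the original formulation). We use $V_{4,1}$, i.e., the star with $f=4$ peripheral nodes, for which the GMFPT can be checked by hand to be $32/5$:

```python
import numpy as np

def gmfpt_pseudoinverse(L):
    """GMFPT via the pseudoinverse of the Laplacian, Eqs. (Pinverse01)-(Hitting03)."""
    N = L.shape[0]
    J = np.ones((N, N)) / N
    L_dag = np.linalg.inv(L - J) + J            # pseudoinverse of L
    d = np.diag(L)                              # node degrees sit on the diagonal of L
    T_sum = 0.0
    for i in range(N):
        for j in range(N):
            if i != j:
                # FPT from i to j, Eq. (Hitting01)
                T_sum += np.sum((L_dag[i, :] - L_dag[i, j]
                                 - L_dag[j, :] + L_dag[j, j]) * d)
    return T_sum / (N * (N - 1))

# V_{4,1}: node 0 is the central node, nodes 1..4 the peripheral ones.
f = 4
N = f + 1
A = np.zeros((N, N))
A[0, 1:] = A[1:, 0] = 1.0
L = np.diag(A.sum(axis=1)) - A
print(gmfpt_pseudoinverse(L))   # 6.4
```

The value $32/5=6.4$ agrees with the elementary hitting-time calculation on the star (leaf$\to$center takes $1$ step, center$\to$leaf takes $7$ steps on average, leaf$\to$leaf takes $8$).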
Using Eqs. (\[Hitting01\]) and (\[Hitting03\]), we can compute the GMFPT $\langle T \rangle_g$ of the Vicsek fractals directly (see Fig. \[Time01\]). From Fig. \[Time01\], we can see that $\langle T \rangle_g$ grows approximately exponentially in $g$. In other words, since $N_{g}= (f+1)^{g}$, $\langle T \rangle_g$ is a power-law function of the network order $N_g$, obeying the scaling $\langle T \rangle_g \sim (N_g)^{\theta}$. It should be mentioned that although the expression of Eq. (\[Hitting03\]) seems compact, it requires inverting a matrix of order $N_{g} \times N_{g}$ \[see Eq. (\[Pinverse01\])\], which makes heavy demands on time and computational resources for large networks. Thus, the GMFPT can be calculated directly from Eq. (\[Hitting03\]) only for the first few generations. Moreover, with the pseudoinverse method it is difficult, if not impossible, to obtain the leading behavior of the exponent $\theta$ characterizing the random walks. It is therefore of significant practical importance to seek a computationally cheaper method for computing the GMFPT. Fortunately, the particular construction of the Vicsek fractals and the connection [@ChRaRuSm89; @Te91] between effective resistance and the FPTs of random walks allow us to calculate the GMFPT and the exponent $\theta$ analytically, obtaining rigorous solutions.
![\[Time01\] (Color online) Global mean-first-passage time $\langle T \rangle_g$ as a function of the iteration $g$ on a semilogarithmic scale for different parameter $f$. The filled symbols are the numerical results obtained by direct calculation from Eqs. (\[Hitting01\]) and (\[Hitting03\]), while the empty symbols correspond to the exact values from Eq. (\[Hitting10\]), both of which are consistent with each other.](TrapTime.eps){width="0.85\linewidth"}
Below we will show how to avoid the computational complexity of inverting a matrix. To this end, we view $V_{f,g}$ as resistor networks [@DoSn84] by considering each edge to be a unit resistor. Let $R_{ij}(g)$ be the effective resistance between two nodes $i$ and $j$ in the electrical networks obtained from $V_{f,g}$. Then, according to the relation between FPTs and effective resistance [@ChRaRuSm89; @Te91], we have $$\label{Hitting04}
T_{ij}(g)+T_{ji}(g)=2\,E_g\,R_{ij}(g)\,.$$ Therefore, Eq. (\[Hitting02\]) can be rewritten as $$\label{Hitting06}
T_{\rm sum}(g)=E_{g}\,\sum_{i\neq
j}\sum_{j=1}^{N_g}R_{ij}(g)\,.$$ Using the previously obtained results [@GuMo96; @ZhKlLu96], the sum term on the right-hand side of Eq. (\[Hitting06\]) denoted by $R_{\rm sum}(g)$ can be recast as $$\label{Hitting08}
R_{\rm sum}(g)=\sum_{i\neq
j}\sum_{j=1}^{N_g}R_{ij}(g)=2\,N_g\,\sum_{i=2}^{N_g}\frac{1}{\lambda_i^{(g)}}\,,$$ where $\lambda_i^{(g)}$ ($i=2,\ldots, N_g$) are all the nonzero eigenvalues of Laplacian matrix, $\textbf{L}_g$, of the Vicsek fractals $V_{f,g}$. Then, we have $$\label{Hitting09}
\langle T \rangle_g=2\,\sum_{i=2}^{N_g}\frac{1}{\lambda_i^{(g)}}\,.$$ Having $\langle T \rangle_g$ in terms of the sum of the reciprocal of all nonzero Laplacian eigenvalues, the next step is to determine this sum.
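For a quick consistency check of Eq. (\[Hitting09\]), one can compute twice the sum of the reciprocal nonzero Laplacian eigenvalues on a small member of the family. The Python sketch below (our own helper names) does this for the star $V_{4,1}$, whose nonzero Laplacian spectrum is $\{1,1,1,5\}$, so that $\langle T \rangle_1 = 2(3+1/5)=6.4$, the same value obtained from the pseudoinverse route of Eqs. (\[Hitting01\])–(\[Hitting03\]):

```python
import numpy as np

def gmfpt_from_spectrum(L):
    """GMFPT as twice the sum of reciprocal nonzero Laplacian eigenvalues, Eq. (Hitting09)."""
    lam = np.sort(np.linalg.eigvalsh(L))
    return 2.0 * np.sum(1.0 / lam[1:])   # lam[0] = 0 is discarded

# Star V_{4,1}: node 0 central, nodes 1..4 peripheral.
f = 4
N = f + 1
A = np.zeros((N, N))
A[0, 1:] = A[1:, 0] = 1.0
L = np.diag(A.sum(axis=1)) - A
print(round(gmfpt_from_spectrum(L), 10))   # 6.4
```

Note that a full eigendecomposition still costs $O(N_g^3)$ time; the point of the remainder of this section is that the recursive spectral structure lets one bypass this cost entirely.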
The determination of all eigenvalues of $\textbf{L}_g$ can be resolved by using the real-space decimation method [@DoAlBeKa83; @Ra84]. Assuming that one has the eigenvalues $\lambda_i^{(g)}$ ($\lambda_i^{(g)}\neq 0$) at generation $g$, then the eigenvalues $\lambda_i^{(g+1)}$ of the next generation $g+1$ can be obtained through the relation [@JaWu92; @JaWu94; @BlFeJuKo04] $$\label{EigVal01}
\lambda_i^{(g+1)}(\lambda_i^{(g+1)}-3)(\lambda_i^{(g+1)}-f-1)=\lambda_i^{(g)}\,.$$ By solving Eq. (\[EigVal01\]), each eigenvalue $\lambda_i^{(g)}$ ($\lambda_i^{(g)}\neq 0$) at generation $g$ gives rise to three new and different ones at generation $g+1$, denoted by $\lambda_{i,1}^{(g+1)}$, $\lambda_{i,2}^{(g+1)}$, and $\lambda_{i,3}^{(g+1)}$, respectively. Moreover, the newly generated eigenvalues keep the degeneracy of their ancestors. Considering that all the nonzero eigenvalues of $V_{f,1}$ are $\lambda_i^{(1)}=1$ ($i=2,3,\ldots,f$) and $\lambda_{f+1}^{(1)}=f+1$, one can obtain all nonzero eigenvalues $\lambda_i^{(g)}$ of $\textbf{L}_g$ by iteratively solving Eq. (\[EigVal01\]) $g-1$ times.
It should be stressed that although we can provide $\lambda_i^{(g)}$ in a recursive way, it is difficult to write $\lambda_i^{(g)}$ as an explicit formula. However, in what follows we will show that the recursive solution for $\lambda_i^{(g)}$ allows us to obtain a closed-form expression for the sum of the reciprocals of all nonzero eigenvalues of $\textbf{L}_g$, denoted by $\Lambda_g$. By definition, $$\label{EigVal02}
\Lambda_g = \sum_{i=2}^{N_g} \frac{1}{\lambda_i^{(g)}}\,.$$ The main goal of the following text is to determine this sum explicitly.
Let $\Omega_g$ denote the set of all the $N_g$ eigenvalues of $\textbf{L}_g$, i.e., $\Omega_g=\{\lambda_1^{(g)}, \lambda_2^{(g)},\cdots,\lambda_{N_g}^{(g)}\}$, where the distinctness of the elements has been ignored. Notice that all these eigenvalues are either nondegenerate or degenerate [@BlFeJuKo04]. The set of the former is denoted by $\Omega_g^{(1)}$, and the set of the latter by $\Omega_g^{(2)}$. That is to say, $\Omega_g=\Omega_g^{(1)} \cup \Omega_g^{(2)}$. $\Omega_g^{(1)}$ includes $0$, $f+1$ and the other eigenvalues generated by the “seed" $\lambda_{f+1}^{(1)}=f+1$, while $\Omega_g^{(2)}$ includes $1$ and the other eigenvalues derived from $1$. Furthermore, the degeneracy of the degenerate eigenvalues depends on the generation at which they first appeared. At a given generation $j$, the degeneracy of eigenvalue $1$ is $\Delta_j=(f-2)(f+1)^{j-1}+1$, a degeneracy that their descendants keep. In what follows, for convenience we use $\Omega_g^{(1)}$ to denote the nondegenerate eigenvalues of $\textbf{L}_g$ other than $0$.
We now return to derive $\Lambda_g$, which can be evidently recast as $$\label{EigVal03}
\Lambda_g = \sum_{\lambda_i^{(g)} \in
\Omega_g^{(1)}}\frac{1}{\lambda_i^{(g)}}+\sum_{\lambda_i^{(g)} \in
\Omega_g^{(2)}}\frac{1}{\lambda_i^{(g)}}\,.$$ We denote the two sums on the right-hand side of Eq. (\[EigVal03\]) by $\Lambda_g^{(1)}$, and $\Lambda_g^{(2)}$, respectively. Below we will calculate the two quantities $\Lambda_g^{(1)}$ and $\Lambda_g^{(2)}$.
We first calculate $\Lambda_g^{(1)}$. At the initial generation $1$, there is only one nondegenerate eigenvalue $f+1$, which produces three different nondegenerate eigenvalues at generation $2$. We call these three eigenvalues the first-generation descendants of $f+1$; they in turn give rise to $3^2$ second-generation descendants of $f+1$ at the third generation. Thus, at the $i$th generation, $3^{i-1}$ $(i-1)$th-generation descendants of $f+1$ are produced. Since all eigenvalues (degenerate or nondegenerate) that appeared at one generation still appear in all subsequent generations [@JaWu92; @JaWu94; @BlFeJuKo04], we have $\Omega_{g-1}^{(1)} \subset \Omega_g^{(1)}$. Hence, as noted above, $\Omega_g^{(1)}$ consists of $f+1$ and all its offspring produced after generation 1.
Let $\Gamma_{i}^{(1)}$ be the sum of the reciprocals of all the $i$th-generation descendants of $f+1$ (with the convention that $f+1$ itself is its own $0$th-generation descendant). Then, $\Lambda_g^{(1)}$ can be rewritten in terms of $\Gamma_{i}^{(1)}$ as $$\label{EigVal04}
\Lambda_g^{(1)} = \sum_{i=0}^{g-1}\Gamma_{i}^{(1)}\,,$$ where $\Gamma_{0}^{(1)}=\Lambda_1^{(1)}=1/(f+1)$.
Note that for each nonzero eigenvalue (degenerate or nondegenerate) $\lambda_i^{(g)} \in \Omega_g$, Eq. (\[EigVal01\]) can be rewritten in an alternative way as $$\label{EigVal05}
(\lambda_i^{(g+1)})^{3}-(f+4)(\lambda_i^{(g+1)})^{2}+3(f+1)\lambda_i^{(g+1)}-\lambda_i^{(g)}=0\,.$$ According to Vieta’s formulas, the three roots (i.e., $\lambda_{i,1}^{(g+1)}$, $\lambda_{i,2}^{(g+1)}$, and $\lambda_{i,3}^{(g+1)}$) of Eq. (\[EigVal05\]) satisfy the following two relations: $\lambda_{i,1}^{(g+1)} \cdot
\lambda_{i,2}^{(g+1)}\cdot \lambda_{i,3}^{(g+1)}=\lambda_i^{(g)}$ and $\lambda_{i,1}^{(g+1)} \cdot
\lambda_{i,2}^{(g+1)}+\lambda_{i,1}^{(g+1)} \cdot
\lambda_{i,3}^{(g+1)}+\lambda_{i,2}^{(g+1)} \cdot
\lambda_{i,3}^{(g+1)}=3(f+1)$. Thus, $1/\lambda_{i,1}^{(g+1)}+1/\lambda_{i,2}^{(g+1)}+1/\lambda_{i,3}^{(g+1)}=3(f+1)/\lambda_i^{(g)}$. Based on the results obtained above, we have $$\label{EigVal06}
\Gamma_g^{(1)} =\sum_{\lambda_i^{(g)} \in \Omega_g^{(1)}\backslash
\Omega_{g-1}^{(1)}}\frac{3(f+1)}{\lambda_i^{(g)}}=3(f+1)\Gamma_{g-1}^{(1)}\,,$$ which together with the initial condition $\Gamma_{0}^{(1)}=1/(f+1)$ leads to $\Gamma_{g}^{(1)}=3^g(f+1)^{g-1}$. Inserting this result into Eq. (\[EigVal04\]), we get $$\label{EigVal07}
\Lambda_g^{(1)} =
\sum_{i=0}^{g-1}\left[3^i(f+1)^{i-1}\right]=\frac{1}{f+1}\frac{3^{g}(f+1)^{g}-1}{3f+2}\,.$$
After obtaining $\Lambda_g^{(1)}$, all that is left in order to find an expression for $\Lambda_g$ is to evaluate $\Lambda_g^{(2)}$. For each eigenvalue 1, applying an approach similar to that used above, we can compute the sum of the reciprocals of its $i$th-generation descendants, which we denote by $\Upsilon_{i}^{(2)}$. After some simple algebra, we obtain $\Upsilon_{i}^{(2)}=3^{i}(f+1)^{i}$ ($ 0 \leq i \leq g-1$), where $\Upsilon_{0}^{(2)}=1$ expresses the reciprocal of the “seed” eigenvalue $1$ itself. It has been shown [@JaWu92; @JaWu94; @BlFeJuKo04] that in the Vicsek fractals $V_{f,g}$ the degeneracy of eigenvalue $1$ is $\Delta_g=(f-2)(f+1)^{g-1}+1$, and the degeneracy of each of its $i$th-generation ($0 \leq i\leq g-1$) offspring is $\Delta_{g-i}=(f-2)(f+1)^{g-1-i}+1$. Then, the quantity $\Lambda_g^{(2)}$ is evaluated as follows: $$\begin{aligned}
\label{EigVal08}
\Lambda_g^{(2)} &=& \sum_{i=0}^{g-1}\left(\Delta_{g-i}\cdot
\Upsilon_{i}^{(2)}\right)\nonumber \\
&=&\frac{(f-2)(f+1)^{g-1}(3^{g}-1)}{2}+\frac{3^{g}(f+1)^{g}-1}{3f+2}\,.\end{aligned}$$
Plugging Eqs. (\[EigVal07\]) and (\[EigVal08\]) into Eq. (\[EigVal03\]) yields $$\label{EigVal09}
\Lambda_g =
\frac{(f-2)(f+1)^{g-1}(3^{g}-1)}{2}+\frac{f+2}{f+1}\frac{3^{g}(f+1)^{g}-1}{3f+2}\,.$$ Using the relation $\langle T \rangle_g=2\Lambda_g$, we have $$\label{Hitting10}
\langle T
\rangle_g=(f-2)(f+1)^{g-1}(3^{g}-1)+\frac{2(f+2)}{f+1}\frac{3^{g}(f+1)^{g}-1}{3f+2}\,.$$ We have confirmed this closed-form expression for $\langle T
\rangle_g$ against direct computation from Eqs. (\[Hitting01\]) and (\[Hitting03\]). For the whole range of $g$ and different values of $f$, the two completely agree with each other, which shows that the analytical formula provided by Eq. (\[Hitting10\]) is correct. Figure \[Time01\] shows the comparison between the numerical and predicted results, with the latter plotted using the full expression in Eq. (\[Hitting10\]).
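The bookkeeping above can also be replayed numerically. The following Python sketch (our own function names, written as an illustration of the derivation rather than as the paper's code) builds the two eigenvalue branches by repeatedly solving the cubic of Eq. (\[EigVal01\]) with `numpy.roots`, weights the descendants of the seed $1$ with the degeneracies $\Delta_{g-i}$, and checks the resulting $\langle T \rangle_g = 2\Lambda_g$ against the closed form of Eq. (\[Hitting10\]):

```python
import numpy as np

def reciprocal_sums(seed, f, g):
    """Sums of reciprocals of the 0th, ..., (g-1)th-generation offspring of
    `seed`, generated by repeatedly solving the cubic of Eq. (EigVal01)."""
    cohort = [seed]
    sums = []
    for _ in range(g):
        sums.append(sum(1.0 / lam for lam in cohort))
        cohort = [r.real
                  for lam in cohort
                  for r in np.roots([1.0, -(f + 4.0), 3.0 * (f + 1.0), -lam])]
    return sums

def gmfpt_recursive(f, g):
    # Branch 1: the seed f+1 and all its nondegenerate offspring.
    lam1 = sum(reciprocal_sums(f + 1.0, f, g))
    # Branch 2: the seed 1; its i-th generation offspring carry the
    # degeneracy Delta_{g-i} = (f-2)(f+1)^(g-1-i) + 1.
    ups = reciprocal_sums(1.0, f, g)
    lam2 = sum(((f - 2) * (f + 1) ** (g - 1 - i) + 1) * ups[i] for i in range(g))
    return 2.0 * (lam1 + lam2)

def gmfpt_closed(f, g):
    # Closed-form solution, Eq. (Hitting10).
    return ((f - 2) * (f + 1) ** (g - 1) * (3 ** g - 1)
            + 2.0 * (f + 2) / (f + 1) * (3 ** g * (f + 1) ** g - 1) / (3 * f + 2))

for f, g in [(2, 3), (3, 3), (4, 2)]:
    # the two columns agree to floating-point accuracy
    print(f, g, gmfpt_recursive(f, g), gmfpt_closed(f, g))
```

By Vieta's formulas, the reciprocals of the three roots of each cubic sum to $3(f+1)/\lambda$, so the numerical root finding reproduces the telescoping used in Eqs. (\[EigVal06\])–(\[EigVal08\]) exactly, up to rounding.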
We can also support the validity of Eq. (\[Hitting10\]) by another method. In fact, the correctness of Eq. (\[Hitting10\]) depends on all the nonzero Laplacian eigenvalues, the exactness of whose derivation can be established via the relation between the Laplacian eigenvalues and the number of spanning trees of a graph. It is well established that the number of spanning trees of a connected graph $G$ with order $N$, $N_{\rm{st}}(G)$, is related to all its nonzero Laplacian eigenvalues $\lambda_i$ (assuming $\lambda_1=0$ and $\lambda_i\neq 0$ for $i=2,\cdots, N$) through $N_{\rm{st}}(G)=\frac{1}{N}\prod_{i=2}^{N}\lambda_i$ [@TzWu00]. Since the Vicsek fractals $V_{f,g}$ are trees, the product of all nonzero Laplacian eigenvalues of $V_{f,g}$, denoted by $\Theta_g$, should equal $N_g$, which can be corroborated by the following argument. By definition, $\Theta_g=\Theta_g^{(1)}\cdot\Theta_g^{(2)}$, where $\Theta_g^{(i)}$ ($i=1,2$) is the product of the Laplacian eigenvalues in $\Omega_g^{(i)}$. Applying Vieta’s formulas, we easily obtain that the product of the $i$th-generation ($0 \leq i \leq g-1$) offspring of the “seed” eigenvalue $f+1$ is $f+1$, independently of $i$. Then $\Theta_g^{(1)}=(f+1)^g$. Similarly, we have $\Theta_g^{(2)}=1$. Hence, $\Theta_g=(f+1)^g=N_g$, which proves the correctness of the computation of the Laplacian eigenvalues of $V_{f,g}$.
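This product identity is easy to probe numerically. For instance, $V_{2,2}$ is just a linear chain on $3^2=9$ nodes, and since any tree has exactly one spanning tree, the product of its nonzero Laplacian eigenvalues must equal $N_2=9$. A minimal Python sketch (variable names are ours):

```python
import numpy as np

# V_{2,2} is a linear chain on N = 9 nodes; for any tree the number of
# spanning trees is 1, so the product of nonzero Laplacian eigenvalues is N.
N = 9
A = np.zeros((N, N))
for i in range(N - 1):
    A[i, i + 1] = A[i + 1, i] = 1.0
L = np.diag(A.sum(axis=1)) - A
lam = np.sort(np.linalg.eigvalsh(L))
print(round(np.prod(lam[1:]), 6))   # 9.0
```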
We proceed to show how to represent GMFPT, $\langle T\rangle_g$, as a function of the network order $N_g$, with the aim to obtain the relation between these two quantities. Recalling $N_{g}= (f+1)^{g}$, we have $3^g=(N_g)^{\ln 3/\ln (f+1)}$ that enables one to write $\langle T \rangle_g$ in the following form: $$\begin{aligned}
\label{Hitting11}
\langle T \rangle_g=&\quad&\frac{f-2}{f+1}N_{g}[(N_g)^{\ln 3/\ln
(f+1)}-1]\nonumber\\
&+&\frac{2(f+2)}{(f+1)(3f+2)}[(N_g)^{1+\ln 3/\ln (f+1)}-1]\,.\end{aligned}$$
Equation (\[Hitting11\]) unveils the explicit dependence relation of GMFPT on network order $N_g$ and parameter $f$. For large systems, i.e., $N_g\rightarrow \infty$, we have following expression for the dominating term of $\langle T \rangle_g$: $$\begin{aligned}
\label{Hitting12}
\langle T \rangle_g &\sim& \frac{f(3f-2)}{(f+1)(3f+2)}(N_g)^{1+\ln
3/\ln
(f+1)}\nonumber \\
&=&\frac{f(3f-2)}{(f+1)(3f+2)}(N_g)^{\theta}\nonumber \\
&=&\frac{f(3f-2)}{(f+1)(3f+2)}(N_g)^{2/\tilde{d}}\,,\end{aligned}$$ where $\tilde{d}=2\ln(f+1)/\ln(3f+3)$ is the spectral dimension of the Vicsek fractals [@BlFeJuKo04]. Thus, in the limit of large $g$, the GMFPT grows approximately as a power-law function of the network order $N_g$, with the exponent $\theta=1+\ln 3/\ln (f+1)$ being a decreasing function of $f$. It is easy to see that the exponent $\theta$ is larger than 1 but not greater than 2. In particular, when $f=2$, $\theta$ reduces to 2, which is the largest exponent reported thus far. In fact, 2 is the largest exponent for the GMFPT of random walks on treelike media; a rigorous proof will be given in the next section. In addition, it should be mentioned that the obtained superlinear dependence of the GMFPT on the network order contrasts with the scalings previously observed for other media, e.g., the linear scaling for the Apollonian networks [@HuXuWuWa06] and the pseudofractal scale-free web [@Bobe05], and a logarithmic correction to the linear dependence for small-world trees [@BaCaPa08; @TeBeVo09]. Figure \[Time02\] shows how the GMFPT scales with the network order for various values of the parameter $f$.
![\[Time02\] (Color online) Global mean first-passage time $\langle T \rangle_g$ versus the network order $N_g$ on a log-log scale. The filled symbols described the analytic results shown in Eq. (\[Hitting10\]). The solid lines represent the corresponding leading scaling given by Eq. (\[Hitting12\]).](MFPT.eps){width="0.85\linewidth"}
Bounds for GMFPT in trees
=========================
In Sect. III, we have shown that the GMFPT in the Vicsek fractals scales as a power-law function of the network order. Previous studies have shown that the GMFPT in other trees may depend on the network order $N$ with different scalings. For example, in the deterministic uniform recursive trees, the GMFPT varies with network order $N$ as $N\ln N$ [@ZhQiZhGaGu10]; in the $T-$fractal, the GMFPT grows as $N^{1+\ln 2/\ln 3}$ [@ZhLiZhWuGu09]. Thus, in different trees the GMFPT obeys different dependence relations on the network order. Some natural questions then arise: what are the upper and lower bounds for the GMFPT in general trees, and in which trees are these bounds reached?
As a matter of fact, the above questions are equivalent to finding the upper and lower bounds for the total effective resistance, $R_{\rm sum}$, defined analogously to Eq. (\[Hitting08\]). One can prove with ease, using various methods [@EnJaSn76; @Pl84; @Lolo03; @ZhZhWaSh07; @GhBoSa08], that for trees of order $N$ the upper and lower bounds for $R_{\rm sum}$ are $$\label{UppBoun}
R_{\rm sum}^{\rm Upp} = \frac{N(N-1)(N+1)}{3}\,$$ and $$\label{LowBoun}
R_{\rm sum}^{\rm Low} = 2(N-1)^{2}\,,$$ respectively.
The upper bound can only be reached when the tree is exactly a linear chain (a path), which has two nodes of degree 1 at the two ends of the chain and $N-2$ nodes of degree 2 in the middle [@ZhZhWaSh07]. Actually, this linear chain is one of the particular Vicsek fractals, namely the one corresponding to $f=2$, and the result provided by Eq. (\[UppBoun\]) is compatible with that of the Vicsek fractals with $f=2$. As for the lower bound, it can only be achieved when the tree is a star graph [@Pl84; @Lolo03; @GhBoSa08], consisting of one central node and $N-1$ leaf nodes. All the leaf nodes are linked to the central node, and there is no edge between the leaf nodes.
From Eqs. (\[UppBoun\]) and (\[LowBoun\]), we can easily obtain that the upper and lower bounds for GMFPT are $$\label{UppGMFPT}
\langle T \rangle^{\rm Upp} = \frac{(N-1)(N+1)}{3}\,$$ and $$\label{LowGMFPT}
\langle T \rangle^{\rm Low} =\frac{ 2(N-1)^{2}}{N}\,,$$ respectively. Thus, the GMFPT $\langle T \rangle$ for general trees satisfies the relation $\langle T \rangle^{\rm Low} \leq \langle T \rangle \leq \langle T \rangle^{\rm Upp}$. For large trees (i.e., $N \rightarrow \infty$), the leading scalings for $\langle T \rangle^{\rm Upp}$ and $\langle T \rangle^{\rm Low}$ vary with the network order $N$ as $\langle T \rangle^{\rm Upp} \sim N^2$ and $\langle T \rangle^{\rm Low} \sim N$, respectively, implying that the scaling of the GMFPT in any tree must lie between linear and quadratic in the network order. The upper bound is much larger than the lower bound, the reason for which lies in the underlying structures of the corresponding graphs: the linear chain is homogeneous, while the star graph is heterogeneous.
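Both extreme cases are easy to verify numerically via Eq. (\[Hitting09\]). In the Python sketch below (our own helper names), the chain attains the upper bound $(N-1)(N+1)/3$ of Eq. (\[UppGMFPT\]) and the star the lower bound $2(N-1)^2/N$ of Eq. (\[LowGMFPT\]):

```python
import numpy as np

def gmfpt_tree(L):
    # For a tree, <T> = 2 * sum of reciprocals of nonzero Laplacian eigenvalues.
    lam = np.sort(np.linalg.eigvalsh(L))
    return 2.0 * np.sum(1.0 / lam[1:])

def laplacian(A):
    return np.diag(A.sum(axis=1)) - A

N = 10
# Linear chain (path): nodes 0..N-1 connected in a line.
A_chain = np.zeros((N, N))
for i in range(N - 1):
    A_chain[i, i + 1] = A_chain[i + 1, i] = 1.0
# Star: node 0 connected to all N-1 leaves.
A_star = np.zeros((N, N))
A_star[0, 1:] = A_star[1:, 0] = 1.0

upper = (N - 1) * (N + 1) / 3.0          # = 33.0, attained by the chain
lower = 2.0 * (N - 1) ** 2 / N           # = 16.2, attained by the star
print(gmfpt_tree(laplacian(A_chain)), upper)
print(gmfpt_tree(laplacian(A_star)), lower)
```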
In the star graphs, the central node has a very large degree and thus plays a crucial role in holding the whole graph together. When the random-walk process is performed on a star graph, the walker tends to migrate toward the central node, through which it jumps to the target nodes. Therefore, the efficiency of random walks is very high in the star graphs; the linear scaling of the GMFPT with $N$ is the best one can achieve [@GhBoSa08]. Notice that the same scaling has previously been observed for complete graphs [@Bobe05]. In fact, the star graphs can be obtained by whittling down complete graphs of the same order, i.e., by judiciously removing edges from a complete graph until only one node retains $N-1$ connections, in such a way that a walker on the star graph can find the destination nodes as easily as on the complete graph.
On the contrary, in the linear chains all nodes are homogeneous. When the walker starts from a source node and searches for a target node far away, it must traverse all nodes between the starting point and the destination. This makes the traversal time much longer than in the star graphs.
Finally, we should stress that although the star graphs are extreme examples of heterogeneous media, they are very instructive for understanding the dynamics of random walks on other heterogeneous graphs, especially scale-free networks [@BaAl99], which are ubiquitous in real natural and social systems [@AlBa02; @DoMe02]. Previous studies have shown that random walks in scale-free networks are very efficient [@Bobe05; @ZhQiZhXiGu09; @ZhGuXiQiZh09; @ZhZhXiChLiGu09; @AgBu09; @ZhLiGoZhGuLi09; @ZhXiZhLiGu09], which can be heuristically explained as above. The large-degree nodes in scale-free networks play a role similar to that of the central node in the star graphs, making the GMFPT very small.
Conclusions
===========
We have studied discrete random walks on the family of Vicsek fractals, which includes the linear chains as a particular case. Using the connection between the FPTs and the Laplacian eigenvalues of general graphs, we have computed the GMFPT averaged over all pairs of nodes in the fractals and obtained a closed-form solution for it. The obtained formula shows that in the limit of infinite network order $N$, the GMFPT $ \langle T \rangle $ grows approximately as a power-law function of $N$: $\langle T \rangle
\sim N^{1+\ln 3/\ln (f+1)}$. We have also provided rigorous bounds on the network order dependence of the GMFPT in general treelike networks. We showed that the upper and lower bounds can be achieved in linear chains and star graphs, respectively. Our study sheds useful insights into the random-walk process occurring on treelike media.
Acknowledgments {#acknowledgments .unnumbered}
---------------
We would like to thank Yuan Lin for assistance. This work was supported by the National Natural Science Foundation of China under Grants No. 60704044, No. 60873040, and No. 60873070; the National Basic Research Program of China under Grant No. 2007CB310806; the Shanghai Leading Academic Discipline Project No. B114; and the Program for New Century Excellent Talents in University of China (Grant No. NCET-06-0376). B.W. also acknowledges support from Fudan’s Undergraduate Research Opportunities Program, and H.J.Z. acknowledges support from the Shanghai Key Laboratory of Intelligent Information Processing, China (Grant No. IIPL-09-017).
B. Mandelbrot, The Fractal Geometry of Nature (Freeman, San Francisco, 1982).
S. Havlin and D. ben-Avraham, Adv. Phys. [**36**]{}, 695 (1987).
D. ben-Avraham and S. Havlin, Diffusion and Reactions in Fractals and Disordered Media (Cambridge University Press, Cambridge, 2000).
W. Sierpinski, Compt. Rend. [**160**]{}, 302 (1915).
H. von Koch, Acta Math. [**30**]{}, 145 (1906).
T. Vicsek, J. Phys. A [**16**]{}, L647 (1983).
K. J. Falconer, Fractal Geometry: Mathematical Foundations and Applications (Wiley, Chichester, 2003).
W. A. Schwalm, M. K. Schwalm, and M. Giona, Phys. Rev. E [**55**]{}, 6741 (1997).
M. E. J. Newman, SIAM Rev. [**45**]{}, 167 (2003).
S. Boccaletti, V. Latora, Y. Moreno, M. Chavez, and D.-U. Hwanga, Phys. Rep. [**424**]{}, 175 (2006).
S. N. Dorogovtsev, A. V. Goltsev and J. F. F. Mendes, Rev. Mod. Phys. [**80**]{}, 1275 (2008).
R. Metzler and J. Klafter, Phys. Rep. [**339**]{}, 1 (2000).
R. Metzler and J. Klafter, J. Phys. A [**37**]{}, R161 (2004).
R. Burioni and D. Cassi, J. Phys. A [**38**]{}, R45 (2005).
S. Redner, *A Guide to First-Passage Processes* (Cambridge University Press, Cambridge, 2001).
E. W. Montroll, J. Math. Phys. [**10**]{}, 753 (1969).
J. D. Noh and H. Rieger, Phys. Rev. Lett. [**92**]{}, 118701 (2004).
S. Condamin, O. Bénichou, and M. Moreau, Phys. Rev. Lett. [**95**]{}, 260601 (2005).
V. Sood, S. Redner, and D. ben-Avraham, J. Phys. A [**38**]{}, 109 (2005).
S. Condamin, O. Bénichou, V. Tejedor, R. Voituriez, and J. Klafter, Nature (London) [**450**]{}, 77 (2007).
A. Baronchelli, M. Catanzaro, and R. Pastor-Satorras, Phys. Rev. E [**78**]{}, 011114 (2008).
Z. Z. Zhang, Y. C. Zhang, S. G. Zhou, M. Yin, and J. H. Guan, J. Math. Phys. [**50**]{}, 033514 (2009).
V. Tejedor, O. Bénichou, and R. Voituriez, Phys. Rev. E [**80**]{}, 065104(R) (2009).
C. P. Haynes and A. P. Roberts, Phys. Rev. E [**78**]{}, 041111 (2008).
J. J. Kozak and V. Balakrishnan, Phys. Rev. E [**65**]{}, 021105 (2002).
J. J. Kozak and V. Balakrishnan, Int. J. Bifurcation Chaos Appl. Sci. Eng. [**12**]{}, 2379 (2002).
S. Havlin and H. Weissman, J. Phys. A [**19**]{}, L1021 (1986).
B. Kahng and S. Redner, J. Phys. A [**22**]{}, 887 (1989).
E. Agliari, Phys. Rev. E [**77**]{}, 011128 (2008).
R. A. Guyer, Phys. Rev. A [**30**]{}, 1112 (1984).
A. Volta, J. Phys. A [**42**]{}, 225003 (2009).
A. N. Berker and S. Ostlund, J. Phys. C [**12**]{}, 4961 (1979).
Z. Z. Zhang, W. L. Xie, S. G. Zhou, M. Li, and J. H. Guan, Phys. Rev. E [**80**]{}, 061111 (2009).
I. Webman and G. S. Grest, Phys. Rev. B [**31**]{}, 1689 (1985).
A. Blumen, A. Jurjiu, Th. Koslowski, and Ch. von Ferber, Phys. Rev. E [**67**]{}, 061103 (2003).
X. M. Wang, Z. F. Ling, and R. B. Tao, Phys. Rev. B [**45**]{}, 5675 (1992).
C. Stamarel, Ch. von Ferber, and A. Blumen, J. Chem. Phys. [**123**]{}, 034907 (2005).
Z. Z. Zhang, S. G. Zhou, L. C. Chen, M. Yin, and J. H. Guan, J. Phys. A [**41**]{}, 485102 (2008).
A. Ben-Israel and T. Greville, *Generalized Inverses: Theory and Applications*, 2nd ed. (Springer, New York, 2003).
C. Rao and S. Mitra, *Generalized Inverse of Matrices and Its Applications* (John Wiley and Sons, New York, 1971).
A. García Cantú and E. Abad, Phys. Rev. E [**77**]{}, 031121 (2008).
A. K. Chandra, P. Raghavan, W. L. Ruzzo, and R. Smolensky, in *Proceedings of the 21st Annnual ACM Symposium on the Theory of Computing* (ACM Press, New York, 1989), pp. 574-586.
P. Tetali, J. Theor. Probab. [**4**]{}, 101 (1991).
P. G. Doyle and J. L. Snell, *Random Walks and Electric Networks* (The Mathematical Association of America, Oberlin, OH, 1984); e-print arXiv:math.PR/0001057.
I. Gutman and B. Mohar, J. Chem. Inf. Comput. Sci. [**36**]{}, 982 (1996).
H.-Y. Zhu, D. J. Klein, and I. Lukovits, J. Chem. Inf. Comput. Sci. [**36**]{}, 420 (1996).
E. Domany, S. Alexander, D. Bensimon, and L. P. Kadanoff, Phys. Rev. B [**28**]{}, 3110 (1983).
R. Rammal, J. Phys. (France) [**45**]{}, 191 (1984).
C. S. Jayanthi, S. Y. Wu, and J. Cocks, Phys. Rev. Lett. [**69**]{}, 1955 (1992).
C. S. Jayanthi and S. Y. Wu, Phys. Rev. B [**50**]{}, 897 (1994).
A. Blumen, Ch. von Ferber, A. Jurjiu, and Th. Koslowski, Macromolecules [**37**]{}, 638 (2004).
W.-J. Tzeng and F. Y. Wu, Appl. Math. Lett. [**13**]{}, 19 (2000).
Z.-G. Huang, X.-J. Xu, Z.-X. Wu, and Y.-H. Wang, Eur. Phys. J. B [**51**]{}, 549 (2006).
E. M. Bollt and D. ben-Avraham, New J. Phys. [**7**]{}, 26 (2005).
Z. Z. Zhang, Y. Qi, S. G. Zhou, S. Y. Gao, and J. H. Guan, Phys. Rev. E [**81**]{}, 016114 (2010).
Z. Z. Zhang, Y. Lin, S. G. Zhou, B. Wu, and J. H. Guan, New J. Phys. [**11**]{}, 103043 (2009).
R. Entringer, D. Jackson, and D. Snyder, Czech. Math. J. [**26**]{}, 283 (1976).
J. Plesnik, J. Graph Theory [**8**]{}, 1 (1984).
W. S. Lovejoy and C. H. Loch, Soc. Networks [**25**]{}, 333 (2003).
Z. Z. Zhang, S. G. Zhou, Z. Y. Wang, and Z. Shen, J. Phys. A: Math. Theor. [**40**]{}, 11863 (2007).
A. Ghosh, S. Boyd, and A. Saberi, SIAM Rev. [**50**]{}, 37 (2008).
A.-L. Barabási and R. Albert, Science [**286**]{}, 509 (1999).
R. Albert and A.-L. Barabási, Rev. Mod. Phys. [**74**]{}, 47 (2002).
S. N. Dorogovtsev and J. F. F. Mendes, Adv. Phys. [**51**]{}, 1079 (2002).
Z. Z. Zhang, Y. Qi, S. G. Zhou, W. L. Xie, and J. H. Guan, Phys. Rev. E [**79**]{}, 021127 (2009).
Z. Z. Zhang, J. H. Guan, W. L. Xie, Y. Qi, and S. G. Zhou, EPL [**86**]{}, 10006 (2009).
Z. Z. Zhang, S. G. Zhou, W. L. Xie, L. C. Chen, Y. Lin, and J. H. Guan, Phys. Rev. E [**79**]{}, 061113 (2009).
E. Agliari and R. Burioni, Phys. Rev. E [**80**]{}, 031125 (2009).
Z. Z. Zhang, Y. Lin, S. Y. Gao, S. G. Zhou, J. H. Guan, and M. Li, Phys. Rev. E [**80**]{}, 051120 (2009).
---
abstract: 'We determine the Postnikov tower and Postnikov invariants of a crossed complex in a purely algebraic way. Using the fact that crossed complexes are homotopy types for filtered spaces, we use the above “algebraically defined” Postnikov tower and Postnikov invariants to obtain from them those of filtered spaces. We argue that a similar “purely algebraic” approach to Postnikov invariants may also be used in other categories of spaces.'
author:
- 'M. Bullejos[^1], E. Faro[$^*$]{}, and M. A. García-Muñoz'
title: Postnikov Invariants of Crossed Complexes
---
Introduction
============
The theory of Postnikov towers provides both a way of analyzing a space $X$ from the point of view of its homotopy groups and a prescription for the construction of spaces with specified homotopy groups in each dimension. The required data for this construction is the information contained in the Postnikov tower of the space: a diagram of spaces $$\xymatrix@C=1.5pc@R=1.35pc {\cdots
\ar[r]& X_{n+1}\ar[r]^-{\eta_{n+1}} & X_n\ar[r]^-{\eta_n} & X_{n-1}\ar[r]
&\cdots \ar[r]& X_0,}$$ whose inverse limit has the same homotopy type as the given space and where each map $\eta_n$ is a fibration whose fibers are Eilenberg-Mac Lane spaces of the type of a $K(\Pi,n)$.
The Postnikov invariants of the space $X$ are cohomology invariants, denoted $k_{n}$, $n\geq1$, which provide the necessary information in order to build the Postnikov tower of $X$ floor by floor. The Postnikov invariant $k_{n}$, associated to the fibration $\eta_n$, says how to glue $K(\Pi,n)$ spaces into the space $X_{n-1}$ to form $X_{n}$.
The purpose of this paper is to present a purely algebraic approach to the calculation of the Postnikov invariants of a space, in the sense that it avoids the use of topological tools such as the universal covering; these are replaced by algebraic tools such as free resolutions.
One of the motivations for such an algebraic approach is the fact that it offers the possibility of applying it to more complicated contexts such as categories of diagrams of spaces. Previous work in this direction can be seen in [@BuCaFa1998], where the third equivariant Postnikov invariant of a $G$-space is calculated by purely algebraic methods of the same nature as those presented here.
Our approach to Postnikov towers and Postnikov invariants of the spaces in a given category of spaces $\t$ is based on the existence of an algebraic category $\s$ with a Quillen model structure, together with a pair of functors $\Pi:\t\rightarrow\s$, $B:\s\rightarrow \t$ which induce an equivalence between the corresponding homotopy categories. Hence $\s$ is a category of algebraic models for the homotopy types of the spaces in $\t$. In this situation, the calculation of the Postnikov towers of the spaces in $\t$ can be reduced to calculating Postnikov towers in $\s$ provided that the “classifying space” functor $B$ preserves fibrations as well as the homotopy type of their fibers. This is the case for the functors which are the object of this paper. These are, on the one hand, the functor $\Pi=$[*“Fundamental crossed complex of the singular complex of a space”*]{}, and on the other hand, the functor $B=$[*“Geometric realization of the nerve of a crossed complex”*]{} (see below).
The calculation of Postnikov towers of the algebraic models which are the objects of $\s$ is based on the following general scheme: For every non-negative integer $n$ we seek a full, reflective subcategory $i_n:\s_n\rightarrow\s$ whose objects model all “*homotopy $n$-types*” in $\t$, and such that $\s_n$ is contained in $\s_{n+1}$ in such a way that the inclusions $j_n:\s_n\to\s_{n+1}$ satisfy $i_{n+1}j_n=i_n$. Then if $\tilde{P}_n$ is the left adjoint to $i_n$, the identity $\tilde{P}_{n+1}i_{n+1}=1_{\s_{n+1}}$ implies $\tilde{P}_{n+1}i_n=j_n$ and the composites $P_n=i_n\tilde{P}_n$ are idempotent endofunctors of $\s$ satisfying $P_{n+1}P_n=P_n\simeq P_nP_{n+1}$, and are related by a chain of natural transformations $$\label{chain}
\xymatrix@C=1.5pc@R=1.35pc{\cdots
\ar[r]& P_{n+1}\ar[r]^-{\eta_{n+1}} & P_n\ar[r]^-{\eta_n} & P_{n-1}\ar[r]
&\cdots \ar[r]& P_0}$$ (where $\eta_{n+1}$ is the image under $P_{n+1}$ of the unit of the adjunction $\tilde{P}_n\dashv i_n$). Then we prove that this chain is the “*universal Postnikov tower*" in $\s$, in the sense that for any object $C\in\s$ the evaluation of the above chain at $C$ yields the Postnikov tower of $C$.
With regard to the Postnikov invariants, it is noteworthy how simple a form the fibrations of the Postnikov towers of the objects of $\s$ take, making it easy to analyze them and to show that the component of $\eta_{n+1}$ at each object can be interpreted as a 2-extension, a 2-torsor, and thus gives rise to an element $k_{n+1}$ of a 2-dimensional algebraic (cotriple) cohomology in (a slice of) the category $\s_n$ of algebraic $n$-types. We regard such a 2-dimensional cohomology element as a sort of “*algebraic Postnikov invariant*", the “*topological*" one residing in an $(n+2)$-dimensional singular cohomology.
The last step in our approach consists in obtaining the topological Postnikov invariants from the algebraic ones. This is achieved by showing the existence of a natural map from the algebraic 2-cohomology of an algebraic $n$-type in $\s_n$ to the singular $(n+2)$-cohomology of its corresponding classifying space.
The ideal scenario offering the necessary tools to apply the algebraic approach just described is that in which $\t$ is the category of CW-complexes and $\s$ is the category of simplicial groupoids. The main interest of this context lies, of course, in the fact that simplicial groupoids model all homotopy types and, therefore, a procedure to calculate the algebraic Postnikov invariants of simplicial groupoids could be used to obtain the Postnikov invariants of any space. The work presented here is, however, more modest in scope and is, in fact, a preliminary step in that direction. We carry out the general method described above in the category of crossed complexes, a category which does not model all homotopy types. However, although our present results cannot be used to obtain the Postnikov invariants of all spaces, they are, of course, sufficient to obtain the Postnikov invariants of any space having the homotopy type of a crossed complex.
The general plan of the paper is as follows: Section \[xm\] serves to set up our notation and to introduce the definition and main facts about crossed modules that are used in the paper. Everything here is review material which can be found elsewhere in the literature, except that it is presented in a perhaps slightly non-conventional way, with an emphasis on the functorial aspect of the definitions. We apologize for any distraction this may cause to those readers who are already familiar with the subject. Section \[crs\] introduces crossed complexes, the categories of $n$-types in crossed complexes and the Postnikov towers they give rise to. Crossed complexes are again introduced in a slightly non-conventional way, being defined in terms of crossed modules instead of in terms of groupoids, as is customary. Choosing a definition which is based on a more elaborate concept not only simplifies the definition itself but, more importantly, allows simpler and clearer reasonings and proofs. We also show in this section that the geometric realizations of these Postnikov towers are the Postnikov towers of spaces. Section \[inv\] is the main section of the paper. Here the fibrations in the Postnikov towers of crossed complexes are analyzed and interpreted as extensions, torsors, and therefore, as a consequence of Duskin’s interpretation theorem, as cohomology elements in a cotriple cohomology. Finally a general theorem is proved showing how to map the cotriple cohomology of crossed complexes to the singular cohomology of their classifying spaces. Section \[apen\] is an appendix containing the basic definitions and results about torsors and their role in Duskin’s interpretation theorem of cotriple cohomology. This material, essential for the main results of the paper, is well known to the specialist but not so well known in larger circles. It has been put in an appendix in order not to break the discourse and to allow the reader to focus on the main line of reasoning.
Crossed modules {#xm}
===============
We denote by [[[$\mathbf{Gr}$]{}]{}]{} the category of groups and by [[[$\mathbf{Gpd}$]{}]{}]{} the category of small groupoids, that is, the category of internal groupoids in the category ${{\ensuremath{\mathbf{Set}}}}$ of sets. By [[$\mathbf{TdGpd}$]{}]{} we denote the full subcategory of ${{{\ensuremath{\mathbf{Gpd}}}}}$ determined by the totally disconnected groupoids. If $X$ is a set, ${{{\ensuremath{\mathbf{Gpd}}}}}_{X}$ denotes the subcategory of ${{{\ensuremath{\mathbf{Gpd}}}}}$ whose objects are all groupoids with set of objects $X$ and whose arrows are functors which are the identity on objects. Similarly, ${{\ensuremath{\mathbf{TdGpd}}}}_X$ denotes the full subcategory of ${{{\ensuremath{\mathbf{Gpd}}}}}_X$ determined by the totally disconnected groupoids. Clearly, ${{\ensuremath{\mathbf{TdGpd}}}}_X$ can be identified with the category ${{{\ensuremath{\mathbf{Gr}}}}}({{\ensuremath{\mathbf{Set}}}}/X)$ of internal group objects in the slice category ${{\ensuremath{\mathbf{Set}}}}/X$. For a given groupoid $\bg$ we denote by ${\text{\sf obj}}(\bg)$ its set of objects, and by ${\text{\sf arr}}(\bg)$ its set of arrows. It is clear that ${\text{\sf obj}}$ determines a functor ${\text{\sf obj}}:{{\ensuremath{\mathbf{TdGpd}}}}\rightarrow
{{\ensuremath{\mathbf{Set}}}}$ whose fiber over a set $X$ is the category ${{\ensuremath{\mathbf{TdGpd}}}}_X$.
If $\bg$ is a groupoid, a (left) $\bg$-group is a functor from $\bg$ to [[[$\mathbf{Gr}$]{}]{}]{}. We will use exponential notation to denote functor categories, so that the category of (always left) $\bg$-groups will be denoted ${{{\ensuremath{\mathbf{Gr}}}}}^\bg$. An important example of a $\bg$-group is the functor ${\text{\sf End}}_\bg:\bg\rightarrow{{{\ensuremath{\mathbf{Gr}}}}}$ taking each object of $\bg$ to its group of endomorphisms and each arrow $u$ in $\bg$ to the group homomorphism (in fact an isomorphism) given by conjugation by $u$. This $\bg$-group is often referred to as the groupoid acting on itself by conjugation.
A given $\bg$-group $C:\bg\rightarrow{{{\ensuremath{\mathbf{Gr}}}}}$ is often determined in terms of an action of (the arrows of) $\bg$ on (the arrows of) a totally disconnected groupoid, $\widehat{C}$, whose set of objects is ${\text{\sf obj}}(\bg)$ and whose endomorphism groups are ${\text{\sf End}}_{\widehat{C}}(x)=C(x)$. This action of $\bg$ on $\widehat{C}$ is traditionally denoted $$\laction t u= C(t)(u),$$ for $t:x\rightarrow y$ an arrow in $\bg$ and $u$ an element of $C(x)$. This description of $\bg$-groups is objectified by a full and faithful functor $\widehat{(\;)}:{{{\ensuremath{\mathbf{Gr}}}}}^\bg\rightarrow{{\ensuremath{\mathbf{TdGpd}}}}_{{\text{\sf obj}}(\bg)}$ which reflects zero objects and zero maps, and therefore not only preserves but also reflects chain complexes. Obviously, $\widehat{{\text{\sf End}}_{\bg}} = {\text{\sf End}}(\bg)$, the subcategory of $\bg$ consisting of just its endomorphisms.
For a given groupoid $\bg$ we denote by ${{\ensuremath{\mathbf{Pxm}_{}}}}_\bg$ the category of pre-crossed modules over $\bg$, which we define as the slice category ${{\ensuremath{\mathbf{Pxm}_{}}}}_\bg={{{\ensuremath{\mathbf{Gr}}}}}^\bg/{\text{\sf End}}_\bg$. The initial and terminal objects in ${{\ensuremath{\mathbf{Pxm}_{}}}}_\bg$ are denoted ${{\boldsymbol0}_{_{\bg}}}$ and ${{\boldsymbol1}_{_{\bg}}}$ respectively, so that ${{\boldsymbol0}_{_{\bg}}}$ is a constant zero functor $\bg\rightarrow{{{\ensuremath{\mathbf{Gr}}}}}$ together with the unique natural transformation from it to ${\text{\sf End}}_\bg$, while ${{\boldsymbol1}_{_{\bg}}}$ is the functor ${\text{\sf End}}_\bg$ together with its identity map. Note that ${{\boldsymbol0}_{_{\bg}}}={{\boldsymbol1}_{_{\bg}}}$ if and only if $\bg$ is discrete as a category (that is, all arrows in $\bg$ are identities).
If $\bg$ and $\bg'$ are any two groupoids and $(C,\delta)$, $(C',\delta')$ are pre-crossed modules over $\bg$ and $\bg'$ respectively, a morphism of pre-crossed modules from $(C,\delta)$ to $(C',\delta')$ is a pair $(f,\alpha)$ where $f:\bg\rightarrow\bg'$ is a *change-of-base* functor and $\alpha:C\rightarrow C'\circ f$ is a natural transformation such that $(\delta'*f)\circ\alpha=\tilde{f}\circ\delta$, where $\tilde{f}$ is the same functor $f$ but regarded as natural transformation from ${\text{\sf End}}_\bg$ to ${\text{\sf End}}_{\bg'}\circ f$, $$\xymatrix@C=1.5pc@R=1.75pc {C\ar[d]_\delta
\ar[r]^-\alpha & C'\circ f\ar[d]^{\delta'*f} \\
{\text{\sf End}}_\bg\ar[r]_-{\tilde{f}} & {\text{\sf End}}_{\bg'}\circ f. }$$ For an object $x\in\bg$ and an element $u\in C(x)$, this condition reads $f(\delta_x(u))=\delta'_{f(x)}(\alpha_x(u))$. The general morphisms of pre-crossed modules just defined are the arrows of the category of pre-crossed modules, denoted [[$\mathbf{Pxm}_{}$]{}]{}. The structure of an object in [[$\mathbf{Pxm}_{}$]{}]{} can be described as a triple $(\bg,C,\delta)$ where $\bg$ is a groupoid and $(C,\delta)$ is a pre-crossed module over $\bg$. By a *reduced* pre-crossed module we mean one in which $\bg$ is just a group.
The following proposition provides the definition of the fundamental groupoid of a pre-crossed module. Note that if $(C,\delta)$ is a pre-crossed module over $\bg$, by applying the functor $\widehat{(\;)}$ to the natural map $\delta:C\rightarrow {\text{\sf End}}_\bg$ we obtain a functor $\widehat{\delta}:\widehat{C}\rightarrow {\text{\sf End}}(\bg)$ which ($\widehat{C}$ being totally disconnected) is equivalent to a functor $\widehat{C}\rightarrow\bg$. The latter will not be distinguished from $\widehat{\delta}$.
\[adjs betw xm and gpd\] The categories ${{\ensuremath{\mathbf{Pxm}_{}}}}_\bg$ are the fibres of a fibration “base groupoid of a pre-crossed module”, ${\text{\sf base}}:{{\ensuremath{\mathbf{Pxm}_{}}}}\rightarrow{{{\ensuremath{\mathbf{Gpd}}}}}$. This functor has both adjoints ${\text{\sf discr}}\dashv{\text{\sf base}}\dashv{\text{\sf codiscr}}$ given by the initial (left) and terminal (right) objects in the corresponding fibres. Furthermore the left adjoint ${\text{\sf discr}}$ has a further left adjoint “*fundamental groupoid*" $\pi_1\dashv {\text{\sf discr}}$.
Everything is quite standard; we just comment on the last statement. The fundamental groupoid of a pre-crossed module $\c=(\bg,C,\delta)$ is calculated by the coequalizer $$\everyentry={\vphantom{\Big(}}
\xymatrix@C=1.75pc@R=1.75pc
{\widehat{C}\ar@<.5ex>[r]^{\widehat{\delta}}\ar@<-.5ex>[r]_0 &
\bg\ar[r]^-q & \pi_1(\c),}$$ that is, the fundamental groupoid is given by the quotient $\pi_1(\c)=\bg/{\mathop{\rm im}}(\widehat{\delta})$. Note that all functors in the above diagram are the identity on objects.
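For a reduced pre-crossed module (one whose base is a single group $G$), this coequalizer is just the quotient of $G$ by the image of $\delta$, which is a normal subgroup. As a small illustration of ours (not from the paper), taking $G=S_3$, $C=A_3$, and $\delta$ the inclusion, the fundamental group is the two-element group:

```python
from itertools import permutations

def compose(p, q):
    """Composition of permutations given as tuples: (p.q)(i) = p(q(i))."""
    return tuple(p[q[i]] for i in range(len(p)))

def sign(p):
    """Sign of a permutation, computed by counting inversions."""
    s = 1
    for i in range(len(p)):
        for j in range(i + 1, len(p)):
            if p[i] > p[j]:
                s = -s
    return s

G = list(permutations(range(3)))            # base group S3
im_delta = [p for p in G if sign(p) == 1]   # image of delta: A3, normal in S3

# pi_1 = G / im(delta): enumerate the cosets of im(delta) in G
cosets = set()
for g in G:
    cosets.add(frozenset(compose(g, h) for h in im_delta))

assert len(cosets) == 2   # pi_1(S3 / A3) is Z/2
```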
For a given pre-crossed module $\c=(\bg,C,\delta)$ the functors $C\circ\widehat\delta$ and ${\text{\sf End}}_{\widehat{C}}$ agree on objects but, in general, not on arrows. Therefore, the equality of these two functors is a special property that a pre-crossed module may have.
A crossed module is a pre-crossed module $\c=(\bg,C,\delta)$ such that $C\circ\widehat{\delta}={\text{\sf End}}_{\widehat{C}}$. The category of crossed modules, denoted [[$\mathbf{Xm}_{}$]{}]{}, is the corresponding full subcategory of [[$\mathbf{Pxm}_{}$]{}]{}. For a given groupoid $\bg$, the category of $\bg$-crossed modules, denoted ${{\ensuremath{\mathbf{Xm}_{}}}}_\bg$, is the obvious full subcategory of ${{\ensuremath{\mathbf{Pxm}_{}}}}_\bg$.
In terms of elements, the condition that $C\circ\widehat{\delta}$ and ${\text{\sf End}}_{\widehat{C}}$ agree on arrows reads: $$\laction {\delta_x(u)}v= u v u^{-1},$$ for all objects $x\in\bg$ and elements $u,v\in C(x)$. This is the well known Peiffer identity. This property implies that $\ker\delta_x$ is contained in the center of $C(x)$, and this, in turn, has the following important consequences:
1. For any given $\bg$-group $C$, the $\bg$-pre-crossed module $(C,0)$ is a crossed module if and only if every group $C(x)$ is abelian, that is, if and only if $C$ is a $\bg$-module.
2. For any $\bg$-crossed module $(C,\delta)$, the kernel of $\delta$ (calculated in ${{{\ensuremath{\mathbf{Gr}}}}}^\bg$) is a $\bg$-module (that is, $(\ker\delta)(x)=\ker\delta_x$ is an abelian group).
3. The action of ${\mathop{\rm im}}\delta$ on $\ker\delta$ is trivial, that is, the following diagram commutes $$\label{consequence xm}
\everyentry={\vphantom{\Big(}}
\xymatrix@C=1.75pc@R=1.75pc
{\widehat{C}\ar@<.6ex>[r]^{\widehat{\delta}}\ar@<-.4ex>[r]_0 & \bg
\ar[r]^-{\ker\delta} &
{{\ensuremath{\mathbf{Ab}}}},}$$ where we have denoted “0” the functor which is the identity on objects and sends every map to an identity; this functor, if regarded as a map in ${{{\ensuremath{\mathbf{Gpd}}}}}_{{\text{\sf obj}}(\bg)}$, is indeed a zero map.
[**Examples:**]{} 1. For any $\bg$-module $A:\bg\to{{\ensuremath{\mathbf{Ab}}}}$, the pre-crossed module ${\text{\sf zero}}(A) = (\bg,A,0)$ is a crossed module. 2. Any pre-crossed module $(\bg, C, \delta)$ with $\delta$ a monomorphism is a crossed module. 3. In particular, for any groupoid $\bg$, the pre-crossed modules ${{\boldsymbol0}_{_{\bg}}}$ and ${{\boldsymbol1}_{_{\bg}}}$ are crossed modules.
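The Peiffer identity can be checked mechanically in the prototypical situation of Example 2: a normal subgroup $N\trianglelefteq G$ with $\delta$ the inclusion and $G$ acting on $N$ by conjugation. A brief sketch of ours (not from the paper), with $G=S_3$ and $N=A_3$ encoded as permutation tuples:

```python
from itertools import permutations

def compose(p, q):
    """(p.q)(i) = p(q(i)), permutations as tuples."""
    return tuple(p[q[i]] for i in range(len(p)))

def inverse(p):
    inv = [0] * len(p)
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

def sign(p):
    s = 1
    for i in range(len(p)):
        for j in range(i + 1, len(p)):
            if p[i] > p[j]:
                s = -s
    return s

G = list(permutations(range(3)))        # S3
N = [p for p in G if sign(p) == 1]      # A3, normal in S3

delta = lambda u: u                                    # delta: inclusion N -> G
act = lambda g, u: compose(compose(g, u), inverse(g))  # conjugation action of G on N

# The action restricts to N: conjugates of even permutations are even
assert all(sign(act(g, u)) == 1 for g in G for u in N)

# Peiffer identity: delta(u) acts on v exactly as conjugation by u
assert all(act(delta(u), v) == compose(compose(u, v), inverse(u))
           for u in N for v in N)
```

Here $\ker\delta$ is trivial, so its centrality holds vacuously; a crossed module with nontrivial abelian kernel arises, for instance, from $\delta:G\to{\rm Inn}(G)$, whose kernel is the center $Z(G)$.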
As a consequence of the above Example 3, the right and left adjoints to “base groupoid of a pre-crossed module” are also right and left adjoints to “base groupoid of a crossed module” and furthermore “fundamental groupoid of a crossed module” is left adjoint to “discrete crossed module on a groupoid”. From now on we will regard the functors in the sequence of adjunctions $\pi_1 \dashv
{\text{\sf discr}}\dashv {\text{\sf base}}\dashv {\text{\sf codiscr}}$ of Proposition \[adjs betw xm and gpd\] as defined/taking values in ${{\ensuremath{\mathbf{Xm}_{}}}}$.
There is an important forgetful functor defined on the category of $\bg$-crossed modules, which will be used later on. This is the functor $$\label{techo}\techo_2=\techo:{{\ensuremath{\mathbf{Xm}_{}}}}_\bg\rightarrow {{{\ensuremath{\mathbf{Gr}}}}}^\bg$$ taking a $\bg$-crossed module $\c=(\bg,C,\delta)$ to the $\bg$-group $C$ and each map $(1_{\bg},\alpha):(\bg,C,\delta)\to(\bg,C',\delta')$ of $\bg$-crossed modules to the natural transformation $\alpha:C\to C'$. An important property of this functor is that it preserves finite limits and coequalizers.
Our next objective is to establish the tripleability of the category of crossed modules over a certain category (see below) so that we can define the cotriple which will be used to calculate an algebraic cohomology of crossed modules.
\[free xm on a pxm\] The inclusion functor $U:{{\ensuremath{\mathbf{Xm}_{}}}}\rightarrow{{\ensuremath{\mathbf{Pxm}_{}}}}$ has a left adjoint which is calculated by factoring out the Peiffer subgroup.
See [@BrWe1996 p. 9], [@BrHu1982], or [@HoMeSi1993].
This inclusion functor $U:{{\ensuremath{\mathbf{Xm}_{}}}}\rightarrow{{\ensuremath{\mathbf{Pxm}_{}}}}$ is in fact monadic. We will use it to obtain, by composing it with a certain forgetful functor $U':{{\ensuremath{\mathbf{Pxm}_{}}}}\rightarrow{{\ensuremath{\mathbf{A}{{{\ensuremath{\mathbf{Gpd}}}}}}}}$, another monadic functor which will determine the cotriple on ${{\ensuremath{\mathbf{Xm}_{}}}}$ by means of which we will calculate the cohomology of crossed modules. Let [[$\mathbf{A}{{{\ensuremath{\mathbf{Gpd}}}}}$]{}]{} be the category of “arrows to groupoids” whose objects are triples $(X,f,\bg)$ where $X$ is a set, $\bg$ is a groupoid, and $f:X\rightarrow
{\text{\sf End}}(\bg)$ is a map from $X$ to the set of all arrows of $\bg$ which are endomorphisms. An arrow from $(X,f,\bg)$ to $(X',f',\bg')$ in [[$\mathbf{A}{{{\ensuremath{\mathbf{Gpd}}}}}$]{}]{} is a pair $(\alpha,\beta)$ where $\alpha:X\rightarrow X'$ is a map of sets and $\beta:\bg\rightarrow\bg'$ is a functor such that $f'\alpha=\beta f$. Then we have:
\[pxm to agpd has ladj\] The obvious forgetful functor $U':{{\ensuremath{\mathbf{Pxm}_{}}}}\rightarrow{{\ensuremath{\mathbf{A}{{{\ensuremath{\mathbf{Gpd}}}}}}}}$ has a left adjoint.
The forgetful functor $U':{{\ensuremath{\mathbf{Pxm}_{}}}}\rightarrow{{\ensuremath{\mathbf{A}{{{\ensuremath{\mathbf{Gpd}}}}}}}}$ takes a pre-crossed module $(\bg,C,\delta)$ to the triple $({\text{\sf arr}}(\widehat{C}),\widehat{\delta},\bg)$. We will merely give the definition of its left adjoint $F:{{\ensuremath{\mathbf{A}{{{\ensuremath{\mathbf{Gpd}}}}}}}}\rightarrow{{\ensuremath{\mathbf{Pxm}_{}}}}$. This is defined on objects as $F(X,f,\bg)=(\bg,C,\delta)$, where $C:\bg\rightarrow{{{\ensuremath{\mathbf{Gr}}}}}$ is defined on objects as $$C(x)= F_{\text{gp}} \bigg( \coprod_{z\in {\text{\sf obj}}(\bg)} \Big( \bg(z,x)\times
\coprod_{v\in\bg(z,z)} {\text{\sf fbr}}(f,v) \Big) \bigg) ,$$ where $F_{\text{gp}}$ is the free group functor and ${\text{\sf fbr}}(f,v)$ is the fiber of the map $f$ at $v$. Thus, $C(x)$ is the free group generated by all pairs $\langle
t,u\rangle $, where $t:z\rightarrow x$ is a map in $\bg$ and $u\in X$ is such that $f(u)$ is an endomorphism of $z$ in $\bg$. The $\bg$-group $C$ is defined on arrows $s:x\rightarrow y$ in $\bg$ by defining on generators: $$C(s)(\langle t,u\rangle )=\langle st,u\rangle .$$ The natural map $\delta:C\rightarrow {\text{\sf End}}_\bg$ has components $\delta_x:C(x)\rightarrow {\text{\sf End}}_\bg(x)$ which are defined on generators as: $$\delta_x(\langle t,u\rangle )=tf(u)t^{-1}.$$ It is easy to show that this defines $F$ on objects. Now on arrows: For an arrow $(\alpha,\beta):(X,f,\bg)\rightarrow(X',f',\bg')$ in [[$\mathbf{A}{{{\ensuremath{\mathbf{Gpd}}}}}$]{}]{}, we define $F(\alpha,\beta)=(\beta,\overline{\alpha})$, where $\overline{\alpha}:C\rightarrow C'\circ \beta$ has components defined on generators by $$\overline{\alpha}_x(\langle t,u\rangle )= \langle \beta(t),\alpha(u)\rangle .$$ In this way we get a functor $F:{{\ensuremath{\mathbf{A}{{{\ensuremath{\mathbf{Gpd}}}}}}}}\rightarrow{{\ensuremath{\mathbf{Pxm}_{}}}}$. This is easily verified to be left adjoint to $U'$ (see [@Garcia2003][, Proposición 3.1.13]{} for the details).
\[pxm tripleable over agpd\] The composite functor $U_2:{{\ensuremath{\mathbf{Xm}_{}}}}\xrightarrow{U}{{\ensuremath{\mathbf{Pxm}_{}}}}\xrightarrow{U'}{{\ensuremath{\mathbf{A}{{{\ensuremath{\mathbf{Gpd}}}}}}}}$ is monadic.
We already know that $U_2$ has a left adjoint. By Beck’s tripleability theorem it is sufficient to prove that it reflects isomorphisms and that it preserves coequalizers of $U_2$-contractible pairs. The first claim is easy to verify. The second requires a careful analysis of coequalizers in ${{\ensuremath{\mathbf{Xm}_{}}}}$ and in ${{\ensuremath{\mathbf{A}{{{\ensuremath{\mathbf{Gpd}}}}}}}}$, and is proved as follows: on the coequalizer of the $U_2$-image of a $U_2$-contractible pair one can build, in a natural way, a structure of crossed module together with a map in ${{\ensuremath{\mathbf{Xm}_{}}}}$ from the codomain of the given contractible pair to this crossed module. After some tedious calculations one verifies that this map is a coequalizer in ${{\ensuremath{\mathbf{Xm}_{}}}}$ and that its image is the coequalizer in ${{\ensuremath{\mathbf{A}{{{\ensuremath{\mathbf{Gpd}}}}}}}}$ from which it was built, proving the desired property. See [@Garcia2003], Proposición 3.1.15, p. 148, for the details.
We denote by ${\ensuremath{\bbg_{2}}}$ the cotriple induced on ${{\ensuremath{\mathbf{Xm}_{}}}}$ by the monadic functor $U_2$, that is, ${\ensuremath{\bbg_{2}}} = F_2U_2$.
The next proposition reminds us of the well known fact that crossed modules can be regarded as groupoids (actually, as 2-groupoids or groupoids enriched in the category of groupoids). For some purposes in our context it will be convenient to regard 2-groupoids (i.e. crossed modules) as those special double groupoids (or internal groupoids in the category of groupoids) whose groupoids of objects and of arrows have the same set of objects and whose structural functors (domain, codomain, identity, and composition) are the identity on objects.
\[xm as groupoids\] There is a functor ${\text{\sf xm}}:{{{\ensuremath{\mathbf{Gpd}}}}}({{{\ensuremath{\mathbf{Gpd}}}}})\rightarrow{{\ensuremath{\mathbf{Xm}_{}}}}$, from the category of double groupoids to that of crossed modules, which has a pseudo section $$\label{eq xm as gpd}
{\text{\sf gpd}}:{{\ensuremath{\mathbf{Xm}_{}}}}\rightarrow {{{\ensuremath{\mathbf{Gpd}}}}}({{{\ensuremath{\mathbf{Gpd}}}}}),$$ allowing us to regard any crossed module $(\bg,C,\delta)$ as an internal groupoid in groupoids, having $\bg$ as groupoid of objects and with groupoid of arrows given by the “semidirect product” or Grothendieck construction $\semi{\bg}{C}=\int_\bg
C$. Furthermore, the above functors establish an isomorphism between the category of crossed modules and the category of 2-groupoids.
(See Proposition \[isotilden\] for a “higher dimensional version” of \[xm as groupoids\].)
Let $(\bg_0,\bg_1,s,t,i)$ be the underlying reflexive graph of a double groupoid [[$\boldsymbol{\cal G}$]{}]{}. We define a pre-crossed module ${\text{\sf xm}}({{\ensuremath{\boldsymbol{\cal G}}}})=(C,\delta)$ over the groupoid $\bg_0$ of objects of [[$\boldsymbol{\cal G}$]{}]{} by $$C=(\ker\widetilde{s})\circ i \quad
\text{and} \quad \delta=(\widetilde{t}\circ j)\ast i,$$ where $j:\ker\widetilde{s}\rightarrow {\text{\sf End}}_{\bg_1}$ is the canonical inclusion, and we again use the tilde notation $\widetilde{f}:{\text{\sf End}}_\bg\rightarrow{\text{\sf End}}_{\bg'}\circ f$ for the natural transformation induced by a functor $f:\bg\rightarrow\bg'$. It is straightforward to verify that $\delta$ satisfies Peiffer’s identity and therefore ${\text{\sf xm}}({{\ensuremath{\boldsymbol{\cal G}}}})$ is a crossed module. This defines the functor ${\text{\sf xm}}$ on objects. On arrows $(f_0,f_1): {{\ensuremath{\boldsymbol{\cal G}}}}\rightarrow
{{\ensuremath{\boldsymbol{\cal G}}}}'$ ${\text{\sf xm}}$ is defined by ${\text{\sf xm}}(f_0,f_1) =
\textcolor{black}{(f_0,\alpha)}$, where $\alpha=(\widetilde{f_1}\circ j)\ast i$.
Let us now define the functor ${\text{\sf gpd}}:{{\ensuremath{\mathbf{Xm}_{}}}}\rightarrow{{{\ensuremath{\mathbf{Gpd}}}}}({{{\ensuremath{\mathbf{Gpd}}}}})$. Given a $\bg$-crossed module $\c=(C,\delta)$, by applying the Grothendieck semidirect product construction to $C$ we get a groupoid $\semi{\bg}{C}$ together with a canonical split projection $$\xymatrix@C=1pc@R=1.75pc {**[l]\semi{\bg}{C}\ar@<-.2ex>[rr]_-s && \bg
\ar@/_2ex/[ll]_-i},$$ which is the identity on objects. Then the underlying reflexive graph of ${\text{\sf gpd}}(\c)$ is ($\bg$, $\semi{\bg}{C}$, $s,t,i$), where the functor $t$ is the identity on objects and takes any arrow $\textcolor{black}{(u,a)}:x\rightarrow
y$ in $\semi{\bg}{C}$ ($u:x\rightarrow y$ an arrow in $\bg$ and $a\in C(y)$) to the composition $\delta_y(a)\circ u$. The composition map making this graph into an internal groupoid in groupoids is the only possible one, which on the arrows $x\to y$ of $\semi{\bg}{C}$ is given by the formula $$\textcolor{black}{ (v,b)\circ (u,a)=(u,b a) \qquad \bpar{\text{provided} \ v =
\delta_y(a)u}. }$$ This double groupoid is in fact a 2-groupoid. To an arrow $(f,\alpha) :
(\bg,C,\delta) \to (\bg',C',\delta')$ ${\text{\sf gpd}}$ associates the map of crossed modules ${\text{\sf gpd}}(f,\alpha) = (f,\alpha')$ where $\alpha':\semi\bg C \to
\semi{\bg'}{C'}$ is the functor defined by $$\alpha'(u,a) =
\bpar{f(u),\alpha_y(a)}$$ for each $u:x\to y$ and $a\in C(y)$. It is easy to verify that the crossed module corresponding to ${\text{\sf gpd}}(\c)$ is isomorphic to $\c$, and also that for any 2-groupoid [[$\boldsymbol{\cal G}$]{}]{} the 2-groupoid ${\text{\sf gpd}}\big({\text{\sf xm}}({{\ensuremath{\boldsymbol{\cal G}}}})\big)$ is isomorphic to [[$\boldsymbol{\cal G}$]{}]{}.
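To make the correspondence concrete, here is a small worked illustration of ours (not from the text): the reduced case in which $\bg$ has a single object, so that a $\bg$-crossed module is an ordinary crossed module of groups $\delta:N\to G$.

```latex
% Reduced case (one object): \semi{\bg}{C} is the semidirect product
% group G \ltimes N, viewed as a one-object groupoid of 2-cells.
% A 2-cell (g,n) has
\begin{align*}
  s(g,n) &= g, & t(g,n) &= \delta(n)\,g, & i(g) &= (g,1).
\end{align*}
% Composition, following the displayed formula: (h,m)\circ(g,n)=(g,mn)
% whenever h = \delta(n)\,g.  Compatibility of t with composition:
\begin{align*}
  t\bigl((h,m)\circ(g,n)\bigr)
    = \delta(mn)\,g
    = \delta(m)\,\delta(n)\,g
    = \delta(m)\,h
    = t(h,m).
\end{align*}
% This recovers the classical strict 2-group (categorical group)
% associated with a crossed module of groups.
```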
Note that for any groupoid $\bg$ the functors ${\text{\sf xm}}$ and ${\text{\sf gpd}}$ induce an equivalence between the category ${{\ensuremath{\mathbf{Xm}_{}}}}_\bg$ and the subcategory of 2-groupoids determined by those having $\bg$ as groupoid of objects and by those functors which are the identity on objects. \[gpd of conn comps of a xm\] We also note that the fundamental groupoid of a crossed module is equal to the groupoid of connected components of the 2-groupoid ${\text{\sf gpd}}(\c)$.
Besides $\pi_0(\c)$ and $\pi_1(\c)$, the commutativity of the diagram below allows us to define the second “homotopy group”, $\pi_2(\c)$, of the crossed module $\c$, as the unique $\pi_1(\c)$-module such that $\pi_2(\c)\circ q=\ker\delta$, $$\label{first kernel induced map} \vcenter{ \xymatrix@C=1.5pc@R=1.75pc@!=1em {
{\vphantom{\Big(}}\widehat{C}
\ar@<.6ex>[r]^-{\widehat{\delta}}
\ar@<-.4ex>[r]_-0 & {\bg}
\ar@/_.1 pt/[dr]_{\ker\delta}
\ar[rr]^-q & & **[r]{\pi_1(\c).}
\ar@/^.1 pt/@{.>}[dl]^{\pi_2(\c)}
\\ & &
{{\ensuremath{\mathbf{Ab}}}}& } }$$
Crossed complexes and their Postnikov towers {#crs}
============================================
As indicated in the Introduction, we give a definition of crossed complex which rests on the concept of crossed module, instead of the usual approach of building crossed complexes all the way up from groupoids. Crossed complexes over a fixed groupoid are very easy to define as special types of chain complexes in the category of crossed modules over the given groupoid. Having done that, it is evident how to define morphisms between crossed complexes over different groupoids to get the full category of crossed complexes. Standard references for crossed complexes are [@BrHi1981a], [@BrHi1981b], and [@Tonks1993].
If $\bg$ is a fixed groupoid, a chain complex in ${{\ensuremath{\mathbf{Xm}_{}}}}_\bg$ is a diagram $$\xymatrix@C=1.75pc@R=1.75pc { \cdots \ar[r] & \c_{n+1}\ar[r]^-{\partial_{n+1}} &
\c_n\ar[r]^-{\partial_n} &
\c_{n-1}\ar[r] & \cdots \ar[r] & \c_2\ar[r]^-{\partial_2} & \c_1,}$$ of $\bg$-crossed modules whose underlying diagram of $\bg$-groups is a chain complex in ${{{\ensuremath{\mathbf{Gr}}}}}^\bg$. In such a diagram, the fact that (for $n>1$) there is a zero map from $\c_{n+1}$ to $\c_{n-1}$, in ${{\ensuremath{\mathbf{Xm}_{}}}}_\bg$, implies that in the crossed module $\c_{n+1}=(C_{n+1},\delta_{n+1})$, $\delta_{n+1}=0$ and therefore $\c_{n+1}$ is abelian, meaning that it is not just a $\bg$-group but a $\bg$-module, $C_{n+1}:\bg\rightarrow{{\ensuremath{\mathbf{Ab}}}}$. Thus, in a chain complex in ${{\ensuremath{\mathbf{Xm}_{}}}}_\bg$ such as the above one, for all $n\geq3$, $\c_n$ is an abelian crossed module.
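The step from $\delta_{n+1}=0$ to abelianness is worth spelling out; the following one-line computation (ours) assumes the Peiffer identity in its standard form ${}^{\delta(a)}b=aba^{-1}$:

```latex
% For a, b \in C_{n+1}(x):  since \delta_{n+1}(a) is the identity
% arrow at x, the Peiffer identity gives
\begin{align*}
  a\,b\,a^{-1} \;=\; {}^{\delta_{n+1}(a)}b \;=\; {}^{1_x}b \;=\; b ,
\end{align*}
% so each group C_{n+1}(x) is abelian, and C_{n+1} factors through
% \mathbf{Ab}, i.e. it is a \bg-module rather than a mere \bg-group.
```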
If $\bg$ is a groupoid, a $\bg$-crossed complex is a chain complex in ${{\ensuremath{\mathbf{Xm}_{}}}}_\bg$ of the form $$\label{a crs}
{{\ensuremath{\boldsymbol{\cal C}}}}:
\xymatrix@C=1.75pc@R=1.75pc { \cdots \ar[r] & \c_{n+1}
\ar[r]^-{\partial_{n+1}} & \c_n\ar[r]^-{\partial_n} &
\c_{n-1}\ar[r] & \cdots \ar[r] &
\c_2\ar[r]^-{\partial_2} & {{\boldsymbol1}_{_{\bg}}},}$$ such that for $n\geq 3$ the action of ${\mathop{\rm im}}(\partial_2)$ on $\widehat{C}_n$ is trivial. In other words, for every $n\geq3$ the following diagram commutes $$\label{condition crs}
\everyentry={\vphantom{\Big(}}
\xymatrix@C=1.75pc@R=1.75pc
{\widehat{C}_2\ar@<.6ex>[r]^{\widehat{\delta}_2}\ar@<-.4ex>[r]_0 & \bg
\ar[r]^-{C_n} &
{{\ensuremath{\mathbf{Ab}}}}.}$$ Here we are using the notation $\c_n=(C_n,\delta_n),\; n\geq 2,$ for the crossed modules in [[$\boldsymbol{\cal C}$]{}]{}. The groupoid $\bg$ is called the base groupoid of the crossed complex, and $\c_2$ is called the base crossed module.
For $n\geq3$, by the commutativity of \[condition crs\], $C_n$ induces a $\pi_1(\c_2)$-module $\overline{C}_n$, $$\label{induced module by a ch complex} \vcenter{ \xymatrix@C=1.6pc@R=1.75pc@!=1em {
{\vphantom{\Big(}}\widehat{C}_2
\ar@<.6ex>[r]^-{\widehat{\delta}_2}
\ar@<-.4ex>[r]_-0 & {\bg}
\ar@/_.1 pt/[dr]_{C_n}
\ar[rr]^-q & & **[r]{\pi_1(\c_2)}
\ar@/^.1 pt/@{.>}[dl]^{\overline{C}_n}
\\ & &
{{\ensuremath{\mathbf{Ab}}}}& } }$$ and the crossed complex induces a chain complex of $\pi_1(\c_2)$-modules of the form $$\label{induced chain complex}
\overline{{{\ensuremath{\boldsymbol{\cal C}}}}}:
\xymatrix@C=1.5pc@R=1.5pc {\cdots \ar[r] &
\overline{C}_{n+1}\ar[r]^-{\partial_{n+1}} &
\overline{C}_n\ar[r]^-{\partial_{n}} &
\overline{C}_{n-1}\ar[r] & \cdots \ar[r] & \overline{C}_3\ar[r]^-{\partial_{3}}
& \pi_2(\c_2)\ar[r]^-0 & 0,}$$ which will be used in the definition of the higher “homotopy groups” of a crossed complex. (We name the natural maps in this chain complex after their corresponding maps in \[a crs\] because they are essentially the same, having the same components in ${{\ensuremath{\mathbf{Ab}}}}$.)
If [[$\boldsymbol{\cal C}$]{}]{} is a $\bg$-crossed complex and ${{\ensuremath{\boldsymbol{\cal C}}}}'$ is a $\bg'$-crossed complex, a morphism $\bf: {{\ensuremath{\boldsymbol{\cal C}}}}\rightarrow {{\ensuremath{\boldsymbol{\cal C}}}}'$ is just a chain map, that is, a family $\bf = \{f_n: \c_n \rightarrow \c'_n\}_{n\geq 1}$ of maps of crossed modules such that for $n\geq 1$, $f_n\partial_{n+1}
=\partial'_{n+1}f_{n+1}$, $$\xymatrix{
\c_{n+1} \ar[d]_{f_{n+1}} \ar[r]^{\partial_{n+1}} & \c_n \ar[d]^{f_n} \\
\c'_{n+1} \ar[r]^{\partial'_{n+1}} & \c'_n . }$$ Note that this condition implies that all maps $f_n$ have the same change-of-base functor, which is equal to $f_1$. The resulting category of crossed complexes will be denoted [$\mathbf{Crs}$]{}.
A morphism of crossed complexes $\bf:{{\ensuremath{\boldsymbol{\cal C}}}}\rightarrow{{\ensuremath{\boldsymbol{\cal C}}}}'$ is a *fibration* if each component $f_n$ is a fibration of crossed modules, that is, if the functor $f_1:\bg\rightarrow\bg'$ is a fibration of groupoids and the natural map of $\bg$-groups underlying each $f_n$ is surjective. The fibrations in [$\mathbf{Crs}$]{} are part of a Quillen model structure on this category.
An $n$-crossed complex or crossed complex of rank $n$ is a crossed complex such as \[a crs\] in which all crossed modules $\c_m$ for $m>n$ are equal to ${{\boldsymbol0}_{_{\bg}}}$. The full subcategory of [$\mathbf{Crs}$]{} determined by the $n$-crossed complexes will be denoted ${\ensuremath{\mathbf{Crs}}}_n$. For a $\bg$-crossed complex to be of rank $0$ it is necessary that $\bg$ be a discrete groupoid, that is, just a set. Conversely, associated to a discrete groupoid $\bg$ there is precisely one $0$-crossed complex over $\bg$. Thus, ${\ensuremath{\mathbf{Crs}}}_0$ can be identified with the category of sets and we will put ${\ensuremath{\mathbf{Crs}}}_0={{\ensuremath{\mathbf{Set}}}}$. Similarly, since a map between two 1-crossed complexes is completely determined by the change-of-base functor, which may be arbitrary, we will identify ${\ensuremath{\mathbf{Crs}}}_1$ with the category of groupoids and we write ${\ensuremath{\mathbf{Crs}}}_1={{{\ensuremath{\mathbf{Gpd}}}}}$. Finally, we will also write ${\ensuremath{\mathbf{Crs}}}_2={{\ensuremath{\mathbf{Xm}_{}}}}$ for similar reasons.
The objects in ${\ensuremath{\mathbf{Crs}}}_n$ are homotopy $n$-types, that is, they have trivial “homotopy groups" in dimensions greater than $n$. The homotopy groups of a crossed complex are defined as follows: $\pi_0({{\ensuremath{\boldsymbol{\cal C}}}})$ is the set of connected components of the base groupoid, so $\pi_0({{\ensuremath{\boldsymbol{\cal C}}}})=\pi_0(\bg)$. Similarly, $\pi_1({{\ensuremath{\boldsymbol{\cal C}}}})=\pi_1(\c_2)=\bg/{\mathop{\rm im}}(\delta_2)$, the fundamental groupoid of the base crossed module of [[$\boldsymbol{\cal C}$]{}]{}. For $n\geq 2$, $\pi_n({{\ensuremath{\boldsymbol{\cal C}}}})$ is defined as the “homology group" $H_n(\overline{{{\ensuremath{\boldsymbol{\cal C}}}}}):\pi_1({{\ensuremath{\boldsymbol{\cal C}}}})\rightarrow {{\ensuremath{\mathbf{Ab}}}}$ of the induced chain complex of $\pi_1({{\ensuremath{\boldsymbol{\cal C}}}})$-modules \[induced chain complex\]. Note that if we consider $\pi_n({{\ensuremath{\boldsymbol{\cal C}}}})$ as a $\bg$-module via the canonical projection $q:\bg\rightarrow\pi_1({{\ensuremath{\boldsymbol{\cal C}}}})$, for $n\geq 2$, the $\bg$-crossed module $(\pi_n({{\ensuremath{\boldsymbol{\cal C}}}}),0)$ is the kernel of the induced map $\overline{\partial}_n:\c_n/{\mathop{\rm im}}(\partial_{n+1})\rightarrow \c_{n-1}$ (see below).
In the same way that the discrete inclusion of sets into groupoids is both reflective and coreflective, Proposition \[adjs betw xm and gpd\] tells us that the “discrete" inclusion $\bg\mapsto {{\boldsymbol1}_{_{\bg}}}$ of groupoids into crossed modules is both reflective and coreflective. These are particular cases of a general situation. For every $n\geq 0$, the subcategory ${\ensuremath{\mathbf{Crs}}}_n$ of ${\ensuremath{\mathbf{Crs}}}$ is both reflective and coreflective. We are mainly interested in the reflector $\tilde{P}_n:{\ensuremath{\mathbf{Crs}}}\rightarrow{\ensuremath{\mathbf{Crs}}}_n$, left adjoint to the inclusion $i_n:{\ensuremath{\mathbf{Crs}}}_n\rightarrow{\ensuremath{\mathbf{Crs}}}$. For $n=0,1,$ we have $\tilde{P}_0=\pi_0\circ {\text{\sf base}}$ (“set of connected components of the base groupoid") and $\tilde{P}_1=\pi_1\circ{\text{\sf base}}$ (“fundamental groupoid of the base crossed module"). For higher $n, \; \tilde{P}_n$ is calculated in terms of the following coequalizer in [[$\mathbf{Xm}_{}$]{}]{}: $$\xymatrix@C=1.75pc@R=1.75pc
{\c_{n+1}\ar@<.6ex>[r]^-{\partial_{n+1}}\ar@<-.4ex>[r]_-0 &
\c_n \ar[r]^-{q_n} &
\c_n/{\mathop{\rm im}}\partial_{n+1}. }$$ Thus $\tilde{P}_n$ associates to the crossed complex [[$\boldsymbol{\cal C}$]{}]{} given in \[a crs\] the following $n$-crossed complex: $$\label{ind part}
\tilde{P}_n({{\ensuremath{\boldsymbol{\cal C}}}}):
\xymatrix@C=1.25pc@R=1.75pc { \cdots \ar[r] & {{\boldsymbol0}_{_{\bg}}} \ar[r] &
\c_n/{\mathop{\rm im}}\partial_{n+1}
\ar[r]^-{\overline{\partial}_n} & \c_{n-1}\ar[r] & \cdots \ar[r] &
\c_2\ar[r]^-{\partial_2} &
{{\boldsymbol1}_{_{\bg}}} \; ,}$$ where $\overline{\partial}_n$ is the unique map of crossed modules such that $\partial_n=\overline{\partial}_n\circ q_n$, induced by the fact that $\partial_n\partial_{n+1}=0$.
The objects in ${\ensuremath{\mathbf{Crs}}}_n$ are homotopy $n$-types and, in addition, all $n$-types of crossed complexes are represented in ${\ensuremath{\mathbf{Crs}}}_n$. That is, if ${{\ensuremath{\boldsymbol{\cal C}}}}\in {\ensuremath{\mathbf{Crs}}}$ is any crossed complex which is an $n$-type, there exists an object ${{\ensuremath{\boldsymbol{\cal C}}}}_n\in{\ensuremath{\mathbf{Crs}}}_n$ which is homotopically equivalent to [[$\boldsymbol{\cal C}$]{}]{}. That object is just ${{\ensuremath{\boldsymbol{\cal C}}}}_n=\tilde{P}_n({{\ensuremath{\boldsymbol{\cal C}}}})$.
Regarding the reflectors $\tilde{P}_n$ as endofunctors, $P_n$, of [$\mathbf{Crs}$]{}, we have a situation as described in the introduction. We have idempotent endofunctors $P_n:{\ensuremath{\mathbf{Crs}}}\rightarrow{\ensuremath{\mathbf{Crs}}}$ such that $P_n=P_{n+1}P_n$, and $\eta_{n+1}=P_{n+1}*\delta^{(n)}$ is the composition of $P_{n+1}$ with the unit $\delta^{(n)}$ of the adjunction $\tilde{P}_n\dashv i_n$. Working out the components of the unit $\delta^{(n)}$, for $n>1$, one finds that for a given $\bg$-crossed complex [[$\boldsymbol{\cal C}$]{}]{}, the components of the map $(\eta_{n+1})_{{\ensuremath{\boldsymbol{\cal C}}}}:P_{n+1}({{\ensuremath{\boldsymbol{\cal C}}}})\rightarrow P_n({{\ensuremath{\boldsymbol{\cal C}}}})$ are: the trivial map $\c_{n+1}/{\mathop{\rm im}}\partial_{n+2}\rightarrow {{\boldsymbol0}_{_{\bg}}}$ at dimension $n+1$, the “projection to the quotient", $q_n:\c_n\rightarrow \c_n/{\mathop{\rm im}}\partial_{n+1}$, at dimension $n$, and an identity map at all other dimensions. The cases of $\eta_1$ and $\eta_2$ are a little different since for these maps the change-of-base functor is not an identity. For $\eta_1$ the change-of-base functor is the canonical projection $q_0:\pi_1({{\ensuremath{\boldsymbol{\cal C}}}})\rightarrow\pi_0({{\ensuremath{\boldsymbol{\cal C}}}})$, while for $\eta_2$ it is the canonical projection $q_1:\bg\rightarrow\pi_1({{\ensuremath{\boldsymbol{\cal C}}}})$.
\[fibras son kpin\] For every crossed complex ${{\ensuremath{\boldsymbol{\cal C}}}}\in{\ensuremath{\mathbf{Crs}}}$, and every $n\geq0$ the map $(\eta_{n+1})_{{\ensuremath{\boldsymbol{\cal C}}}}:P_{n+1}({{\ensuremath{\boldsymbol{\cal C}}}})\rightarrow P_n({{\ensuremath{\boldsymbol{\cal C}}}})$ is a fibration with fibers of the type of $K(\Pi,n+1)$. For $n>1$, the fiber of $(\eta_{n+1})_{{\ensuremath{\boldsymbol{\cal C}}}}$ over $x\in\bg$ has the homotopy type of $K\bpar{\pi_{n+1}({{\ensuremath{\boldsymbol{\cal C}}}})(x),n+1}$.
Let us first consider $(\eta_1)_{{{\ensuremath{\boldsymbol{\cal C}}}}}$, which is $q_0:\pi_1({{\ensuremath{\boldsymbol{\cal C}}}})\rightarrow \pi_0({{\ensuremath{\boldsymbol{\cal C}}}})$, a surjective map to a discrete groupoid and therefore it is a fibration of groupoids. The fiber over a given connected component $\bar{x}\in\pi_0({{\ensuremath{\boldsymbol{\cal C}}}})$ is a connected groupoid and therefore has the homotopy type of a $K(\Pi,1)$ (taking for $\Pi$ any of the groups of endomorphisms of any object in that connected groupoid).
Next, we look at $(\eta_2)_{{\ensuremath{\boldsymbol{\cal C}}}}:P_2({{\ensuremath{\boldsymbol{\cal C}}}})\rightarrow P_1({{\ensuremath{\boldsymbol{\cal C}}}})$. The fiber of this map over an object $x\in\bg$ is the reduced 2-crossed complex $$(\c_2/{\mathop{\rm im}}\partial_3)(x):C_2(x)/{\mathop{\rm im}}(\partial_3)_x\rightarrow
{\mathop{\rm im}}(\partial_2)_x\; .$$ This is a crossed module over the group ${\mathop{\rm im}}(\partial_2)_x$ and therefore it has $\pi_0=0$. Since the above map is surjective, this crossed module has $\pi_1=0$, while $\pi_2$ is precisely the abelian group $\pi_2({{\ensuremath{\boldsymbol{\cal C}}}})(x)$. For higher $n$, $\pi_n=0$; thus $\eta_2$ is a fibration with fiber over $x$ of the type $K\bpar{\pi_2({{\ensuremath{\boldsymbol{\cal C}}}})(x),2}$.
For $n>2$ all the $\eta_n$ are morphisms of crossed complexes whose change-of-base functor is the identity on objects (as in the case $n=2$). Therefore, their fiber over an object $x\in\bg$ is a reduced crossed complex. The special thing for $n>2$ is that the base groupoid of the fiber is trivial and therefore the fiber is just a chain complex of abelian groups. In general, the fiber of $(\eta_n)_{{\ensuremath{\boldsymbol{\cal C}}}}$ (for $n>2$) over an $x\in\bg$ is a crossed complex with trivial components below dimension $n-1$ and a surjective morphism in dimension $n$. Therefore, all homotopy groups of the fiber are trivial in dimensions other than $n$, where it is equal to $\pi_n({{\ensuremath{\boldsymbol{\cal C}}}})(x)$.
\[prop ptow of a crs\] For every crossed complex [[$\boldsymbol{\cal C}$]{}]{}, the chain of fibrations $$\label{diagr ptow of a crs}
\xymatrix@C=1.75pc@R=1.75pc { \cdots \ar[r]^-{\eta_{n+2}} &
P_{n+1}({{\ensuremath{\boldsymbol{\cal C}}}})\ar[r]^-{\eta_{n+1}}& P_n({{\ensuremath{\boldsymbol{\cal C}}}})\ar[r]^-{\eta_n} &\cdots
\ar[r]^-{\eta_1} & P_0({{\ensuremath{\boldsymbol{\cal C}}}})}$$ is a Postnikov tower for [[$\boldsymbol{\cal C}$]{}]{}.
By Proposition \[fibras son kpin\] it is sufficient to prove that the limit of diagram \[diagr ptow of a crs\] is [[$\boldsymbol{\cal C}$]{}]{}. We know that the morphisms $\delta^{(n)}:{{\ensuremath{\boldsymbol{\cal C}}}}\rightarrow P_n({{\ensuremath{\boldsymbol{\cal C}}}})$ determined by the units of the adjunctions $\tilde{P}_n\dashv i_n$ constitute a cone over \[diagr ptow of a crs\]. Given any other cone $\{\phi^{(n)}:{{\ensuremath{\boldsymbol{\cal C}}}}'\rightarrow
P_n({{\ensuremath{\boldsymbol{\cal C}}}})\}$ over \[diagr ptow of a crs\] there is a unique way of defining a map of crossed complexes $\bf:{{\ensuremath{\boldsymbol{\cal C}}}}'\rightarrow{{\ensuremath{\boldsymbol{\cal C}}}}$ such that for all $n$, $\delta^{(n)}\bf=\phi^{(n)}$. One just needs to take into account that $\delta^{(n)}_m$ is an identity map for all $m<n$ and define $f_n=\phi_n^{(n+1)}$.
The subcategories ${\ensuremath{\mathbf{Crs}}}_n$ of [$\mathbf{Crs}$]{} are not only reflective, but also coreflective, the right adjoint to the inclusion being “simple truncation", $T_n:{\ensuremath{\mathbf{Crs}}}\rightarrow{\ensuremath{\mathbf{Crs}}}_n$, so that $T_0$ is essentially the set of objects of the base groupoid, $T_1$ is “base groupoid", and $T_2$ is “base crossed module". Furthermore, $T_n$ has itself a right adjoint denoted ${\text{\sf cosk}}^n$. For $n=0$ and $n=1$ this further right adjoint is “codiscrete groupoid on a set" ($n=0$) and “trivial crossed module on a groupoid" ($n=1$). For $n>1$, the right adjoint to $T_n$, ${\text{\sf cosk}}^n:{\ensuremath{\mathbf{Crs}}}_n\rightarrow{\ensuremath{\mathbf{Crs}}}$, assigns to an $n$-crossed complex $${{\ensuremath{\boldsymbol{\cal C}}}}:
\xymatrix@C=1.25pc@R=1.75pc { \cdots\ar[r] & {{\boldsymbol0}_{_{\bg}}} \ar[r] &
{{\boldsymbol0}_{_{\bg}}}\ar[r] & \c_n\ar[r]^-{\partial_n} &
\c_{n-1}\ar[r] & \cdots \ar[r] & \c_2\ar[r]^-{\partial_2} & {{\boldsymbol1}_{_{\bg}}},}$$ the following $(n+1)$-crossed complex: $${\text{\sf cosk}}^n({{\ensuremath{\boldsymbol{\cal C}}}}):
\xymatrix@C=1.25pc@R=1.75pc { \cdots\ar[r] & {{\boldsymbol0}_{_{\bg}}} \ar[r] &
\ker\partial_n\ar[r] &
\c_n\ar[r]^-{\partial_n} & \c_{n-1}\ar[r] & \cdots \ar[r] &
\c_2\ar[r]^-{\partial_2} & {{\boldsymbol1}_{_{\bg}}}\; .}$$ For $n>2$, the functor $$\label{ntecho}\ntecho:{\ensuremath{\mathbf{Crs}}}_{n,\bg}\rightarrow{{\ensuremath{\mathbf{Ab}}}}^\bg$$ takes each $n$-crossed complex [[$\boldsymbol{\cal C}$]{}]{} having as base groupoid, to the -module $\ntecho({{\ensuremath{\boldsymbol{\cal C}}}})=\techo(\c_n)$. Note that, since finite limits and coequalizers in ${\ensuremath{\mathbf{Crs}}}_{n,\bg}$ are calculated componentwise, and since the functor $\techo = \techo_2$ preserves finite limits and coequalizers, for $n>2$ the functor also preserves finite limits and coequalizers.
We end this section with a higher-dimensional analog of Proposition \[xm as groupoids\]. The idea is to regard the $(n+1)$-crossed complexes as some kind of internal groupoids in the category of $n$-crossed complexes. For $n>1$ there is a difficulty we do not have in Proposition \[xm as groupoids\]. For example (in the case $n=2$), it is possible to carry out a construction analogous to the one defining the functor ${\text{\sf xm}}$, but starting with a groupoid internal in crossed modules: $(\c_0,\c_1,s,t,i,\gamma)$. Let the crossed modules of objects and arrows of this groupoid be $\c_i=(\bg_i,C_i,\delta_i)$, $i=0,1$, and let the domain, codomain and identity maps be $s=(f_s,\alpha_s)$, etc. We can define $\c_3\xto{\partial_3}\c_2\xto{\delta}{{\boldsymbol1}_{_{\bg_0}}}$ where $\c_3=(\bg_0,\ker(\alpha_s*f_i),\delta\partial_3)$, $\delta=\delta_0$, $\partial_3=\alpha_t*f_i$, and $\c_2=\c_0$. One does not, however, obtain directly from this construction a 3-crossed complex unless the base groupoids of the crossed modules $\c_0$, $\c_1$ are the same and the structural maps ($s$, $t$, $i$ and $\gamma$) have trivial change of base (i.e. $f_s=1_{\bg_0}$ etc.). It is of course possible to force the result to be a 3-crossed complex by making the appropriate quotients, but this would introduce unnecessary complication. Thus, we shall appropriately restrict the categories of internal groupoids in crossed complexes so that, for example, in the case $n=2$ we will only consider those internal groupoids in crossed modules satisfying the conditions stated above.
In general, for each $n>0$ let $\gcrs[n]$ be the full subcategory of ${{{\ensuremath{\mathbf{Gpd}}}}}({\ensuremath{\mathbf{Crs}}}_n)$ (internal groupoids in ${\ensuremath{\mathbf{Crs}}}_n$) determined by those groupoids ${{\ensuremath{\boldsymbol{\cal G}}}}\in{{{\ensuremath{\mathbf{Gpd}}}}}({\ensuremath{\mathbf{Crs}}}_n)$ whose $n$-crossed complex of objects has the same $(n-1)$-truncation as its $n$-crossed complex of arrows and whose structural maps (domain, codomain, identity, and composition) have the identity map as $(n-1)$-truncation. Thus, an object ${{\ensuremath{\boldsymbol{\cal G}}}}\in\gcrs[n]$ gives rise to a diagram in ${{\ensuremath{\mathbf{Xm}_{}}}}$ of the form $$\label{gcrsn}
\xy
(0,0)*+{\c_n^1 \times_{\c_n^0} \c_n^1}="c",
(30,0)*+{\c_n^1}="d",
(60,0)*+{\c_n^0}="e",
(30,-15)*+{\c_{n-1}}="g",
(30,-25)*+{\c_{n-2}}="h",
(30,-35)*+{\vdots}="i",
(30,-45)*+{\c_2}="j",
(30,-55)*+{\mathbf{1}_\g}="k",
\ar @/^-4ex/ "e";"d" |{id}
\ar @{->}^{s} "d";"e" <3pt>
\ar @{->}_{t} "d";"e" <-3pt>
\ar @{->}^-{\circ} "c";"d"
\ar @/_3ex/ "c";"g"_{\partial_n^1 \times_{\partial_n^0}\partial_n^1}
\ar @{->}^{\partial_n^1} "d";"g"
\ar @/^3ex/ "e";"g"^{\partial_n^0}
\ar @{->}^{\partial_{n-1}} "g";"h"
\ar @{->} "h";"i"
\ar @{->} "i";"j"
\ar @{->}^{\partial_2} "j";"k"
\endxy$$
We can now state:
\[isotilden\] For each $n>1$ there is a functor $\fcrs_n:\gcrs[n]\to
{\ensuremath{\mathbf{Crs}}}_{n+1}$, which has a section $$\label{eq xm as gpd general n}
{\text{\sf gpd}}_n:{\ensuremath{\mathbf{Crs}}}_{n+1}\to \gcrs[n],$$ allowing us to regard any $(n+1)$-crossed complex $${{\ensuremath{\boldsymbol{\cal C}}}}:
\xymatrix@C=1.25pc@R=1.75pc { \cdots\ar[r] & {{\boldsymbol0}_{_{\bg}}}\ar[r] &
\c_{n+1}\ar[r]^-{\partial_{n+1}} & \c_n\ar[r]^-{\partial_n} &
\c_{n-1}\ar[r] & \cdots \ar[r] & \c_2\ar[r]^-{\partial_2} & {{\boldsymbol1}_{_{\bg}}},}$$ as an internal groupoid in $n$-crossed complexes, having ${{\ensuremath{\boldsymbol{\cal F}}}}_0=T_{n}({{\ensuremath{\boldsymbol{\cal C}}}})$ as $n$-crossed complex of objects and whose $n$-crossed complex of arrows is given by $${{\ensuremath{\boldsymbol{\cal F}}}}_1: \c_{n+1} \times \c_n \xto{\partial_n \, p_0} \c_{n-1}
\xto{\partial_{n-1}} \ldots \to \c_2 \to \mathbf{1}_\bg,$$ where $\c_{n+1} \times \c_n$ is the cartesian product in ${{\ensuremath{\mathbf{Xm}_{}}}}_\bg$ and $p_0:\c_{n+1}\times\c_n\to \c_n$ is the corresponding canonical projection. The functors $\fcrs_n$ and ${\text{\sf gpd}}_n$ establish an isomorphism between ${\ensuremath{\mathbf{Crs}}}_{n+1}$ and $\gcrs[n]$.
Let us complete the definition of ${\text{\sf gpd}}_n$. The domain map $s:{{\ensuremath{\boldsymbol{\cal F}}}}_1 \to {{\ensuremath{\boldsymbol{\cal F}}}}_0$ is induced by the projection $p_0:\c_{n+1} \times \c_n \to
\c_n$, and the codomain map $t:{{\ensuremath{\boldsymbol{\cal F}}}}_1 \to {{\ensuremath{\boldsymbol{\cal F}}}}_0$ is induced by the morphism of $\bg$-groups $C_{n+1} \times C_n \to C_n$ (actually $\bg$-modules except in the case $n=2$) defined on $(u,v)\in C_{n+1}(x) \times C_n(x)$ as $(u,v)\mapsto\partial_{n+1}(u) \,v\in C_n(x)$ (note that even in the case $n=2$, in which $C_2(x)$ may not be abelian, $\partial_{n+1}$ takes its values in the center of $C_n$ and thus this is a homomorphism). Clearly the canonical map $C_n
\hookrightarrow C_{n+1} \times C_n$ is a common section for $s$ and $t$, so that we get an internal graph $$\label{grcrnt}\xymatrix{{{\ensuremath{\boldsymbol{\cal F}}}}_1 \ar@<0.6ex>[r]^s \ar@<-0.6ex>[r]_t & {{\ensuremath{\boldsymbol{\cal F}}}}_0
\ar@/_1.2pc/[l]_{id}}$$ in the category $({\ensuremath{\mathbf{Crs}}}_n)_{T_{n-1}({{\ensuremath{\boldsymbol{\cal C}}}})}$ of $n$-crossed complexes whose $(n-1)$-truncation is $T_{n-1}({{\ensuremath{\boldsymbol{\cal C}}}})$.
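The centrality remark in the definition of $t$ can be checked directly; here is our verification sketch, with the product on $C_{n+1}(x)\times C_n(x)$ taken componentwise:

```latex
% At an object x, with t(u,v) = \partial_{n+1}(u)\,v :
\begin{align*}
  t\bigl((u,v)(u',v')\bigr)
    &= t(uu',vv')
     = \partial_{n+1}(uu')\,v\,v'
     = \partial_{n+1}(u)\,\partial_{n+1}(u')\,v\,v' \\
    &= \partial_{n+1}(u)\,v\,\partial_{n+1}(u')\,v'
     = t(u,v)\,t(u',v') ,
\end{align*}
% where the middle step commutes \partial_{n+1}(u') past v: this is
% automatic for n>2 (C_n is abelian) and holds for n=2 because
% \partial_{n+1} takes values in the center of C_n(x).
```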
We want to endow this graph with a structure of internal groupoid in ${\ensuremath{\mathbf{Crs}}}_{n}$. For this it is sufficient to do it at the highest dimension, the only place where the graph structure is not trivial. In this dimension we have the internal graph $$\label{grab} \xymatrix{C_{n+1} \times C_n \ar@<0.6ex>[r]^-s \ar@<-0.6ex>[r]_-t &
C_n
\ar@/_1.5pc/[l]_{id}}$$ in the category ${{{\ensuremath{\mathbf{Gr}}}}}^\bg$ of $\bg$-groups, which admits a unique structure of internal groupoid in ${{{\ensuremath{\mathbf{Gr}}}}}^\bg$.
The structure of internal groupoid in ${{{\ensuremath{\mathbf{Gr}}}}}^\bg$ of the graph \[grab\] determines a structure of internal groupoid in $({\ensuremath{\mathbf{Crs}}}_n)_{T_{n-1}({{\ensuremath{\boldsymbol{\cal C}}}})}$ on the graph \[grcrnt\]; this internal groupoid in ${\ensuremath{\mathbf{Crs}}}_n$ will be denoted ${\text{\sf gpd}}_n({{\ensuremath{\boldsymbol{\cal C}}}})$. It is easy to show that this construction is functorial and thus we have a functor $$\label{ngd} {\text{\sf gpd}}_n: {\ensuremath{\mathbf{Crs}}}_{n+1} \to \gcrs[n].$$ In order to define its quasi-inverse $\fcrs_n:\gcrs[n]
\longrightarrow{\ensuremath{\mathbf{Crs}}}_{n+1}$ we consider an object $${{\ensuremath{\boldsymbol{\cal G}}}}: \xymatrix{{{\ensuremath{\boldsymbol{\cal C}}}}^1 \ar@<0.6ex>[r]^s \ar@<-0.6ex>[r]_t & {{\ensuremath{\boldsymbol{\cal C}}}}^0
\ar@/_1.2pc/[l]_{id}}$$ in $\gcrs[n]$, as in \[gcrsn\], and we apply $\ntecho$ to the morphism $s:{{\ensuremath{\boldsymbol{\cal C}}}}^1\to {{\ensuremath{\boldsymbol{\cal C}}}}^0$. We then obtain a morphism of $\bg$-groups $$s=\ntecho(s):\ntecho({{\ensuremath{\boldsymbol{\cal C}}}}^1)=C_n^1 \longrightarrow C_n^0=\ntecho({{\ensuremath{\boldsymbol{\cal C}}}}^0),$$ whose kernel is a $\bg$-module $K={\text{\sf ker}}(s):\bg \to {{\ensuremath{\mathbf{Ab}}}}$ associating with each object $x\in \bg$ the subgroup $K(x)$ of $C_n^1(x)$ consisting of the elements $u \in C_n^1(x)$ such that $s_x(u)= 0_{C_n^0(x)}$, with an action which is induced by the action of $C_n^1$. Evidently, this $\bg$-module determines a crossed module $\ck=(\bg,K,0)$ and $\partial_2$ acts trivially on the totally disconnected groupoid $\widehat{K}$. As a result we have an $(n+1)$-crossed complex $$\fcrs_n({{\ensuremath{\boldsymbol{\cal G}}}})=(\ck \xto{\partial_{n+1}} \c_n^0 \xto{\partial^0_n} \c_{n-1}
\to \ldots \to \c_2 \xto{\partial_2} \mathbf{1}_\bg)$$ where $\partial_{n+1}:K \to C^0_n$ is a morphism of $\bg$-groups induced by the morphism of $\bg$-groups $t=\ntecho(t):C_n^1 \to C_n^0$ associated to the codomain of ${{\ensuremath{\boldsymbol{\cal G}}}}$, that is, for each object $x \in \bg$, the $x$-component of $\partial_{n+1}$ is given by $$(\partial_{n+1})_x: K(x) \to C^0_n(x), \; (\partial_{n+1})_x(u) = t_x(u).$$ Note that $\fcrs_n({{\ensuremath{\boldsymbol{\cal G}}}})$ is really a chain complex, that is, $ \partial^0_n
\partial_{n+1} = 0.$ This construction of $\fcrs_n({{\ensuremath{\boldsymbol{\cal G}}}})$ is also functorial so that we have a functor $$\label{fcrsn} \fcrs_n: \gcrs[n] \to
{\ensuremath{\mathbf{Crs}}}_{n+1}.$$ Let us see that it is a quasi-inverse for ${\text{\sf gpd}}_n$.
If ${{\ensuremath{\boldsymbol{\cal G}}}}\in \gcrs[n]$ as in \[gcrsn\], $\ntecho({{\ensuremath{\boldsymbol{\cal G}}}})$ is an internal groupoid in the category of $\bg$-groups, hence for every $x \in {\text{\sf obj}}(\bg)$ we have an internal groupoid in the category of groups $$\xymatrix{C^1_n(x) \times_{C_n^0(x)} C^1_n(x) \ar[r] & C_n^1(x)
\ar@<0.6ex>[r]^{s_x}
\ar@<-0.6ex>[r]_{t_x} & C_n^0(x) \ar@/_1.2pc/[l]_{id_x}}.$$ Thus, we have a group isomorphism $$G_x:\semi{C_n^0(x)}{\N_{(s_x,id_x)}}=K(x) \times C_n^0(x) \xto{\cong}
C_n^1(x) ;\qquad G_x(u,v)= u \ id_x(v).$$ Since the above isomorphism is natural, it is immediate that the pair $(G,
Id_{C_n^0})$ is an isomorphism of graphs in ${{{\ensuremath{\mathbf{Gr}}}}}^\bg$ which induces a graph isomorphism in ${\ensuremath{\mathbf{Crs}}}_n$, hence an isomorphism in $\gcrs[n]$ between the groupoids ${\text{\sf gpd}}_n\fcrs_n({{\ensuremath{\boldsymbol{\cal G}}}})$ and ${{\ensuremath{\boldsymbol{\cal G}}}}$. Conversely, if [[$\boldsymbol{\cal C}$]{}]{} is an $(n+1)$-crossed complex, then $$\fcrs_n\big({\text{\sf gpd}}_n({{\ensuremath{\boldsymbol{\cal C}}}})\big) = (\ck \xto{\partial_{n+1}} \c_n
\xto{\partial^0_n} \c_{n-1}
\to \ldots \to \c_2 \xto{\partial_2} \mathbf{1}_\bg),$$ where $\ck = (\bg,K,0)$ with $K = {\text{\sf ker}}(C_{n+1}\times C_n \xto{s}C_n)$. Since $s$ is the canonical projection, it is clear that $K=C_{n+1}$ and $\ck = \c_{n+1}$. Looking closely at the connecting morphisms in $\fcrs_n({\text{\sf gpd}}_n({{\ensuremath{\boldsymbol{\cal C}}}}))$ one realizes immediately that $\fcrs_n\big({\text{\sf gpd}}_n({{\ensuremath{\boldsymbol{\cal C}}}})\big) = {{\ensuremath{\boldsymbol{\cal C}}}}$.
\[nypn\] For any $n>0$ the following equations of functors hold $$\xymatrix{{\ensuremath{\mathbf{Crs}}}_{n+1}\ar[r]^{{\text{\it i}}_{n+1}}\ar[d]_{{\text{\sf gpd}}_n} & {\ensuremath{\mathbf{Crs}}}\ar[d]^{\widetilde{P}_n} \\
\gcrs[n] \ar[r]_{\pi_0} & {\ensuremath{\mathbf{Crs}}}_n ,}\qquad
\xymatrix{{\ensuremath{\mathbf{Crs}}}_{n+1}\ar[r]^{{\text{\it i}}_{n+1}} &{\ensuremath{\mathbf{Crs}}}\ar[d]^{\widetilde{P}_n} \\
\gcrs[n]\ar[u]^{\fcrs_n}\ar[r]_{\pi_0} &{\ensuremath{\mathbf{Crs}}}_n,}$$ $$\widetilde{P}_n{\text{\it i}}_{n+1} = \pi_0{\text{\sf gpd}}_n\,, \qquad
\widetilde{P}_n{\text{\it i}}_{n+1}\fcrs_n = \pi_0,$$ in other words, for each $(n+1)$-crossed complex [[$\boldsymbol{\cal C}$]{}]{} and each groupoid ${{\ensuremath{\boldsymbol{\cal G}}}}\in\gcrs[n]$: $$P_n({{\ensuremath{\boldsymbol{\cal C}}}})=\pi_0{\text{\sf gpd}}_n({{\ensuremath{\boldsymbol{\cal C}}}})\qquad \mbox{and} \qquad
\pi_0({{\ensuremath{\boldsymbol{\cal G}}}})=P_n\fcrs_n({{\ensuremath{\boldsymbol{\cal G}}}}),$$
Since $\fcrs_n({\text{\sf gpd}}_n({{\ensuremath{\boldsymbol{\cal C}}}})) = {{\ensuremath{\boldsymbol{\cal C}}}}$, it is sufficient to verify the right hand equation, that is, $\pi_0({{\ensuremath{\boldsymbol{\cal G}}}})=P_n\fcrs_n({{\ensuremath{\boldsymbol{\cal G}}}})$. These two crossed complexes agree in dimensions less than $n$. In dimension $n$, $\pi_0({{\ensuremath{\boldsymbol{\cal G}}}})$ is the coequalizer of $s$ and $t$ (see diagram \[gcrsn\]). On the other hand, $P_n\fcrs_n({{\ensuremath{\boldsymbol{\cal G}}}})$ is, in dimension $n$, the quotient of $C^0_n$ by the image of $\partial_{n+1}(=t):\ker (s)\to C^0_n$. That this quotient is equal to the previous coequalizer is an immediate consequence of the general fact that the coequalizer of a parallel pair of group homomorphisms, $s,t:G\to
H$ having a common section is the quotient of $H$ by $t(\ker(s))$.
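For completeness, the general fact invoked here can be verified in two lines (our addition), writing $e:H\to G$ for the common section, so $se=te=\mathrm{id}_H$:

```latex
% Verification sketch (our addition).
\begin{aligned}
&\text{(1) For } g\in G \text{ put } k = e(s(g))^{-1}g; \text{ then }
  s(k)=s(g)^{-1}s(g)=1, \text{ so } k\in\ker(s), \text{ and }\\
&\qquad t(g)=t\bigl(e(s(g))\bigr)\,t(k)=s(g)\,t(k),
  \text{ so the relation } s(g)\sim t(g) \text{ reads } h\sim h\,t(k).\\
&\text{(2) } t(\ker(s)) \text{ is normal in } H:\quad
  h\,t(k)\,h^{-1}=t\bigl(e(h)\,k\,e(h)^{-1}\bigr), \text{ and }
  s\bigl(e(h)\,k\,e(h)^{-1}\bigr)=h\,1\,h^{-1}=1.
\end{aligned}
```

Hence the coequalizer, a priori the quotient of $H$ by the normal closure of $\{s(g)^{-1}t(g)\}$, is exactly $H/t(\ker(s))$.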
\[endypi\] For each groupoid ${{\ensuremath{\boldsymbol{\cal G}}}}\in\gcrs[n]$ and each $(n+1)$-crossed complex [[$\boldsymbol{\cal C}$]{}]{} we have natural isomorphisms: $$\begin{array}{c}
\ntecho{\text{\sf End}}({{\ensuremath{\boldsymbol{\cal G}}}})\cong \ntecho{\text{\sf obj}}({{\ensuremath{\boldsymbol{\cal G}}}})\times (\pi_{n+1}\fcrs_n({{\ensuremath{\boldsymbol{\cal G}}}})\circ \, q)
\qquad\mbox{and}\\ [2pc]
\ntecho{\text{\sf End}}{\text{\sf gpd}}_n({{\ensuremath{\boldsymbol{\cal C}}}})\cong \ntecho T_n({{\ensuremath{\boldsymbol{\cal C}}}})\times (\pi_{n+1}({{\ensuremath{\boldsymbol{\cal C}}}})\circ\, q),
\end{array}$$ where $\bg$ denotes both the base groupoid of ${\text{\sf obj}}({{\ensuremath{\boldsymbol{\cal G}}}})$ and that of [[$\boldsymbol{\cal C}$]{}]{}, and $q$ denotes either the canonical projection $\bg\to\pi_1{\text{\sf obj}}({{\ensuremath{\boldsymbol{\cal G}}}})$ or $\bg\to\pi_1({{\ensuremath{\boldsymbol{\cal C}}}})$.
The second isomorphism is a consequence of the first one and of the identities ${\text{\sf obj}}({\text{\sf gpd}}_n({{\ensuremath{\boldsymbol{\cal C}}}}))=T_n({{\ensuremath{\boldsymbol{\cal C}}}})$ and $\fcrs_n {\text{\sf gpd}}_n({{\ensuremath{\boldsymbol{\cal C}}}})={{\ensuremath{\boldsymbol{\cal C}}}}$.
For each object $x\in \bg$, the isomorphism $$\ntecho{\text{\sf End}}({{\ensuremath{\boldsymbol{\cal G}}}})(x) \xto{\cong} \ntecho{\text{\sf obj}}({{\ensuremath{\boldsymbol{\cal G}}}})(x)\times
(\pi_{n+1}\fcrs_n({{\ensuremath{\boldsymbol{\cal G}}}})\circ
\, q(x))$$ takes each $u\in \ntecho{\text{\sf End}}({{\ensuremath{\boldsymbol{\cal G}}}})(x)$ to the pair $$(\; s(u)=t(u)\;,\; u-id(s(u))\; ) \in\ntecho{\text{\sf obj}}({{\ensuremath{\boldsymbol{\cal G}}}})(x)\times
(\pi_{n+1}\fcrs_n({{\ensuremath{\boldsymbol{\cal G}}}})\circ \, q(x)).$$
The Postnikov invariants of a crossed complex {#inv}
=============================================
As indicated in the Introduction, we distinguish two types of Postnikov invariants of a crossed complex. On the one hand we have the “*algebraic*" invariants, which are elements of *algebraic* (cotriple) cohomologies in the categories ${\ensuremath{\mathbf{Crs}}}_n$. On the other hand, corresponding to each algebraic invariant $k_{n+1}$, we have a “*topological*" invariant, which is an element of a *singular* cohomology. In this section we first determine the algebraic invariants, characterizing them as extensions and as torsors. Then, we define the topological invariants and the singular cohomologies in which they live. Finally, we show how to map the cotriple cohomologies to the singular ones so that one can obtain the topological invariants from the algebraic ones.
The algebraic invariants
------------------------
Let [[$\boldsymbol{\cal C}$]{}]{} be a $\bg$-crossed complex. For $n\geq 0$, the $(n+1)^{th}$ algebraic Postnikov invariant, $k_{n+1}$, of [[$\boldsymbol{\cal C}$]{}]{} is determined by the fibration $\eta_{n+1}:P_{n+1}({{\ensuremath{\boldsymbol{\cal C}}}})\rightarrow P_n({{\ensuremath{\boldsymbol{\cal C}}}})$, which is completely described by the following diagram: $$\label{eta n+1}
\vcenter{
\everyentry={\vphantom{\big(}}
\xymatrix@C=1.25pc@R=1.5pc {
\cdots\ar[r] & {{\boldsymbol0}_{_{\bg}}}\ar[r] \ar@{=}[d] & \c_{n+1} / {\mathop{\rm im}}\partial_{n+2}
\ar[d]\ar[r]^-{\bar{\partial}_{n+1}} & \c_n\ar[r]^{\partial_n} \ar[d]^{q_n} &
\c_{n-1}\ar[r]^{\partial_{n-1}}\ar@{=}[d] & \cdots \\ \cdots \ar[r] &
{{\boldsymbol0}_{_{\bg}}}\ar[r] & {{\boldsymbol0}_{_{\bg}}}\ar[r] & \c_n/{\mathop{\rm im}}\partial_{n+1}
\ar[r]^-{\bar{\partial}_n} & \c_{n-1}\ar[r]^-{\partial_{n-1}} & \cdots } }$$ For $n=0$ we get the first invariant, $k_1$, determined by $\eta_1$: $$\everyentry={\vphantom{\big(}}
\xymatrix@C=1.25pc@R=1.5pc {\cdots \ar[r] &
{{\boldsymbol0}_{_{\pi_1({{\ensuremath{\boldsymbol{\cal C}}}})}}}\ar@<-1.2ex>[d]^{(q_0,0)} \ar[r] & {{\boldsymbol1}_{_{\pi_1({{\ensuremath{\boldsymbol{\cal C}}}})}}}
\ar@<-1.2ex>[d]^{(q_0,0)}
\\
\cdots \ar[r] & {{\boldsymbol0}_{_{\pi_0({{\ensuremath{\boldsymbol{\cal C}}}})}}} \ar[r] & {{\boldsymbol1}_{_{{\pi_0({{\ensuremath{\boldsymbol{\cal C}}}})}}}}}$$ where we have indicated in the vertical arrows the change of base functor $q_0$ since it is not an identity.
This invariant is an element in the topos cohomology of ${\ensuremath{\mathbf{Crs}}}_0={{\ensuremath{\mathbf{Set}}}}$, a category which has very little structure and, correspondingly, a very simple cohomology: it is trivial in dimensions $\geq 1$, so that $H^2(P_0({{\ensuremath{\boldsymbol{\cal C}}}}),\pi_1({{\ensuremath{\boldsymbol{\cal C}}}}))$ has only one element. Although it is not difficult to see that this element corresponds to $\eta_1$, there is really no need for our machinery to determine it, and we will not discuss this case any further here. For a more complete discussion of this case we refer the interested reader to [@Garcia2003].
For $n=1$ we have the second invariant, $k_2$, determined by $\eta_2$: $$\everyentry={\vphantom{\big(}} \xymatrix@C=1.25pc@R=1.5pc {\cdots \ar[r] &
{{{\boldsymbol0}_{_{\bg}}}\ }\ar@<-1ex>[d]^{(q_1,0)} \ar[r] & \c_2/{\mathop{\rm im}}\partial_3
\ar@<-1ex>[d]^{(q_1,0)}
\ar[r]^-{\bar{\partial}_2} & {{{\boldsymbol1}_{_{\bg}}}\ }\ar@<-1ex>[d]^{(q_1,q_1).} \\
\cdots \ar[r] &
{{\boldsymbol0}_{_{\pi_1({{\ensuremath{\boldsymbol{\cal C}}}})}}} \ar[r] & {{\boldsymbol0}_{_{\pi_1({{\ensuremath{\boldsymbol{\cal C}}}})}}}\ar[r] &
{{\boldsymbol1}_{_{{\pi_1({{\ensuremath{\boldsymbol{\cal C}}}})}}}}}$$ Note, however, that $q_1$ is the cokernel of $\bar{\partial}_2$, while $\ker\bar{\partial}_2$ is, as noted earlier (see page ), the $\bg$-crossed module $\a_2=(\pi_2({{\ensuremath{\boldsymbol{\cal C}}}}),0)$, where $\pi_2({{\ensuremath{\boldsymbol{\cal C}}}})$ is considered as a $\bg$-module via $q_1:\bg\rightarrow\pi_1({{\ensuremath{\boldsymbol{\cal C}}}})$. Thus $\eta_2$ is completely determined by the following sequence of crossed modules $$\label{k2 ex seq xm} \everyentry={\vphantom{\big(}} \xymatrix@C=1.25pc@R=1.5pc {
0 \ar[r] & \a_2\ar[r] & \c_2/{\mathop{\rm im}}\partial_3 \ar[r]^-{\bar{\partial}_2} &
{{\boldsymbol1}_{_{\bg}}} \ar[r]^-{q_1} & {{\boldsymbol1}_{_{{\pi_1({{\ensuremath{\boldsymbol{\cal C}}}})}}}}\ar[r] & 0,}$$ which can be regarded as a genuine exact sequence $$\label{k2 ex seq gpd} \everyentry={\vphantom{\Big(}} \xymatrix@C=1.25pc@R=1.5pc
{ 0 \ar[r] & \widehat{\pi_2({{\ensuremath{\boldsymbol{\cal C}}}})} \ar[r] &
\widehat{C}_2/{\mathop{\rm im}}\widehat{\partial}_3 \ar[r]^-{\bar{\partial}_2} & \bg
\ar[r]^-{q_1} & \pi_1({{\ensuremath{\boldsymbol{\cal C}}}})\ar[r] & 0,}$$ in the category ${{{\ensuremath{\mathbf{Gpd}}}}}_{{\text{\sf obj}}(\bg)}$.
A sequence such as or is an extension of the groupoid $\pi_1({{\ensuremath{\boldsymbol{\cal C}}}})$ by the $\pi_1({{\ensuremath{\boldsymbol{\cal C}}}})$-module $\pi_2({{\ensuremath{\boldsymbol{\cal C}}}})$, according to the following definition, of which the “*reduced*" or pointed case is the well known definition of 2-extensions of groups by modules:
\[2exgpd\] A 2-extension of a groupoid $\Pi$ by a $\Pi$-module $A:\Pi\rightarrow{{\ensuremath{\mathbf{Ab}}}}$ is a crossed module $\c=(\bg,C,\delta)$, called the fiber of the extension, together with an exact sequence
$$\everyentry={\vphantom{\Big(}}
\xymatrix@C=1.25pc@R=1.5pc { 0\ar[r] & \widehat{A} \ar[r] &
\widehat{C}\ar[r]^-{\widehat{\delta}} & \bg\ar[r]^-q & \Pi\ar[r] & 0}$$
in ${{{\ensuremath{\mathbf{Gpd}}}}}_{{\text{\sf obj}}(\Pi)}$ such that the kernel of $\delta$ factors through $\Pi$ as $A\circ q$,
$$\label{com tria}
\vcenter{
\xymatrix{\bg\ar[rr]^q\ar@/_.1pt/[dr]_-{\ker \delta} && \Pi \ar@/^.1pt/[dl]^A \\
& {{\ensuremath{\mathbf{Ab}}}}} }.$$
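In the reduced (one-object) case alluded to above, Definition \[2exgpd\] specializes to the classical notion; as a reminder (our rendering, not part of the original text), a 2-extension of a group $\Pi$ by a $\Pi$-module $A$ is an exact sequence of groups

```latex
% Classical reduced case (sketch): (G, C, \delta) is a crossed module, so
% A = ker(\delta) is central in C and inherits its \Pi-action through q.
\xymatrix@C=1.25pc{0 \ar[r] & A \ar[r] & C \ar[r]^{\delta} & G \ar[r]^{q} & \Pi \ar[r] & 0}
```

Exactness at $G$ says $\mathrm{im}\,\delta=\ker q$, and the Peiffer identity for the crossed module $(G,C,\delta)$ forces $\ker\delta$ to be central in $C$.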
The standard definition of morphism of extensions gives rise to a category, denoted ${\text{\it Ext}}^2(\Pi,A)$, whose objects are the 2-extensions of a groupoid $\Pi$ by a $\Pi$-module $A$. As usual we denote with brackets, ${\text{\it Ext}}^2[\Pi,A]$, the set of connected components of ${\text{\it Ext}}^2(\Pi,A)$.
It is now easy to show that the extensions just defined represent cohomology elements in a well known cotriple cohomology of groupoids. We use the notation and results of the Appendix, Section 5.
\[2-exts of gpd are tor\] Let $U:{{{\ensuremath{\mathbf{Gpd}}}}}\rightarrow{{{\ensuremath{\mathbf{Gph}}}}}$ be the underlying graph functor defined on the category of groupoids. If $\Pi$ is a groupoid, $A$ is a $\Pi$-module, and ${\tildeA_{1}}$ is the abelian group object in ${{{\ensuremath{\mathbf{Gpd}}}}}/\Pi$, ${\tildeA_{1}} = (\semi\Pi A\leftrightarrows\Pi)$, there is a full and faithful functor $${\text{\it Ext}}^2(\Pi,A)\rightarrow {\text{\it Tor}}^2_U(\Pi,\tilde{A}_1).$$
Consider a 2-extension of $\Pi$ by $A$, as in Definition \[2exgpd\], with $\c=(\bg,C,\delta)$ being the fiber crossed module. The commutativity of the triangle gives a commutative square of groupoids and functors $$\everyentry={\vphantom{\Big(}}
\xymatrix@C=1.25pc@R=1.5pc {\semi{\bg}{\ker\delta}\ar[d]\ar[r]^-\alpha &
\semi{\Pi}{A} \ar[d] \\ \bg\ar[r]_-q &
\Pi,}$$ which is a pullback. Let $\tilde{\c}$ be the internal groupoid in groupoids corresponding to the crossed module $\c$ by the functor ${\text{\sf gpd}}$ of Proposition \[xm as groupoids\]. The groupoid of connected components of $\tilde{\c}$ is $\pi_1(\c)=\Pi$, and the groupoid of endomorphisms of $\tilde{\c}$ is $\semi{\bg}{\ker\delta}$. Therefore $(\Pi,\tilde{\c},\alpha)$ is an $(\tilde{A}_1,2)$-torsor which is clearly $U$-split. Furthermore, it is a routine verification that the above construction is functorial. That the functor ${\text{\it Ext}}^2(\Pi,A)\rightarrow
{\text{\it Tor}}^2_U(\Pi,\tilde{A}_1)$ so defined is full and faithful is an immediate consequence of Proposition \[xm as groupoids\].
\[2-tor of gpd are ext\] For any groupoid $\Pi$ and $\Pi$-module $A$, $$H^2_{\bbg_1}(\Pi,\tilde{A}_1)\cong {\text{\it Ext}}^2[\Pi,A],$$ where $\bbg_1$ is the cotriple on ${{{\ensuremath{\mathbf{Gpd}}}}}/\Pi$ induced by the underlying graph functor $U:{{{\ensuremath{\mathbf{Gpd}}}}}\to{{{\ensuremath{\mathbf{Gph}}}}}$.
By Proposition \[same connected components\] it is sufficient to prove that the inclusion functor $ {\underline{{\text{\it Tor}}}}^2_U(\Pi,\tilde{A}_1){\hookrightarrow}{\text{\it Tor}}^2_U(\Pi,\tilde{A}_1) $ factors through the full and faithful functor of Proposition \[2-exts of gpd are tor\]. This follows from the fact that the free groupoid on a graph has as objects the vertices of the graph and that the counit map for the free adjunction is the identity on objects. This implies that the fiber groupoid of any $U$-split 2-torsor in ${\underline{{\text{\it Tor}}}}^2_U(\Pi,\tilde{A}_1)$ is actually a 2-groupoid and then, by Proposition \[xm as groupoids\], it is isomorphic to the groupoid associated by the functor ${\text{\sf gpd}}$ to a crossed module which is the fiber of a 2-extension of $\Pi$ by $A$.
The last results show that the fibration $\eta_{2}:P_{2}({{\ensuremath{\boldsymbol{\cal C}}}})\to P_{1}({{\ensuremath{\boldsymbol{\cal C}}}})$ uniquely corresponds to an element $$k_{2} \in H^2_{{\ensuremath{\bbg_{1}}}} \bpar{P_1({{\ensuremath{\boldsymbol{\cal C}}}}),
\tilde{A}_1}$$ where $A = \pi_{2}({{\ensuremath{\boldsymbol{\cal C}}}})$. This cohomology element will be called the algebraic second [[Postnikov]{} invariant]{} of the crossed complex ${{\ensuremath{\boldsymbol{\cal C}}}}$.
For $n>1$, an observation about $q_n:\c_n\rightarrow\c_n/{\mathop{\rm im}}\partial_{n+1}$ similar to the one made for $q_1$ holds, namely that $q_n$ is (not only the cokernel of $\partial_{n+1}$, but also) the cokernel of $\bar{\partial}_{n+1}:\c_{n+1}/{\mathop{\rm im}}\partial_{n+2}\rightarrow\c_n$. As a consequence, $\eta_{n+1}$ represents a 2-extension of the $n$-crossed complex $P_n({{\ensuremath{\boldsymbol{\cal C}}}})$ by the $\pi_1({{\ensuremath{\boldsymbol{\cal C}}}})$-module $\pi_{n+1}({{\ensuremath{\boldsymbol{\cal C}}}})$, according to the following definition, which extends that of 2-extensions of groupoids:
\[ext of crsn\] If ${{\ensuremath{\boldsymbol{\cal C}}}}=(\c_n\xrightarrow{\partial_n}\cdots\xrightarrow{\partial_2} {{\boldsymbol1}_{_{\bg}}})$ is an $n$-crossed complex with $n>2$, and $A:\Pi\rightarrow{{\ensuremath{\mathbf{Ab}}}}$ is a $\Pi$-module over the fundamental groupoid $\Pi=\pi_1({{\ensuremath{\boldsymbol{\cal C}}}})$ of [[$\boldsymbol{\cal C}$]{}]{}, a 2-extension of [[$\boldsymbol{\cal C}$]{}]{} by $A$ is an exact sequence in the category of $\bg$-groups $$\everyentry={\vphantom{\big(}} \xymatrix@C=1.25pc@R=1.5pc {0\ar[r] & A\ar[r] &
E_1 \ar[r]^\sigma & E_0\ar[r]^\tau & C_n\ar[r] & 0,}$$ (where $A$ is considered as a $\bg$-module via the canonical projection $q:\bg\rightarrow\Pi$), such that $$\label{n+1crs} \xymatrix@C=1.25pc@R=1.5pc {\e_1\ar[r]^\sigma & \e_0
\ar[r]^-{\partial_n\tau} & \c_{n-1}\ar[r] &\cdots\ar[r]&\c_2\ar[r]^{\partial_2}
& {{\boldsymbol1}_{_{\bg}}}}$$ is an $(n+1)$-crossed complex, where $\e_i=(\bg,E_i,0),\;
i=0,1$. For $n=2$, we define a 2-extension of a crossed module $\c=(\bg,C,\delta)$ by a $\Pi$-module $A$ as an exact sequence of $\bg$-groups $$\label{2ext xm} \everyentry={\vphantom{\Big(}} \xymatrix@C=1.25pc@R=1.5pc
{0\ar[r] & A \ar[r] & E_1 \ar[r]^\sigma & E_0\ar[r]^\tau & C\ar[r] & 0,}$$ such that $$\label{2ext xm2} \xymatrix@C=1.25pc@R=1.5pc {\e_1\ar[r]^\sigma &
\e_0\ar[r]^-{\delta\tau} & {{\boldsymbol1}_{_{\bg}}}}$$ is a 3-crossed complex, where $\e_0=(\bg,E_0,\delta\tau)$ and $\e_1=(\bg,E_1,0)$.
Let us note that, as in Definition \[2exgpd\], a 2-extension of an $n$-crossed complex [[$\boldsymbol{\cal C}$]{}]{} can be seen as an exact sequence $$\everyentry={\vphantom{\Big(}} \xymatrix@C=1.25pc@R=1.5pc {0\ar[r] & {{\ensuremath{\boldsymbol{\cal A}}}}\ar[r]
& {{\ensuremath{\boldsymbol{\cal E}}}}_1 \ar[r]^\sigma & {{\ensuremath{\boldsymbol{\cal E}}}}_0\ar[r]^\tau & {{\ensuremath{\boldsymbol{\cal C}}}}\ar[r] & 0,}$$ in the category $\bpar{{\ensuremath{\mathbf{Crs}}}_n}_{T_{n-1}({{\ensuremath{\boldsymbol{\cal C}}}})}$ of $n$-crossed complexes with a fixed $(n-1)$-truncation, together with an extra structure in the central part that makes or an $(n+1)$-crossed complex. The $n$-crossed complexes ${{\ensuremath{\boldsymbol{\cal A}}}}$ and ${{\ensuremath{\boldsymbol{\cal E}}}}_i$, $i=0,1$, have at dimension $n$ the crossed modules ${\text{\sf zero}}(A) = (\bg,A,0)$ and $\e_i$ respectively.
Using those definitions, our discussion can be summarized in the following statement:
For all $n\geq 1$ the fibration $\eta_{n+1}$ of the Postnikov tower of ${{\ensuremath{\boldsymbol{\cal C}}}}$ provides a 2-extension of $P_n({{\ensuremath{\boldsymbol{\cal C}}}})$ by the $\pi_1({{\ensuremath{\boldsymbol{\cal C}}}})$-module $\pi_{n+1}({{\ensuremath{\boldsymbol{\cal C}}}})$.
For $n=1$, the 2-extension associated to $\eta_2$ is given by the crossed module $P_2({{\ensuremath{\boldsymbol{\cal C}}}})$ and the sequence . For $n>1$ the 2-extension associated to $\eta_{n+1}$ is given by the sequence $$\everyentry={\vphantom{\big(}} \xymatrix@C=1.25pc@R=1.5pc {0\ar[r] &
\pi_{n+1}({{\ensuremath{\boldsymbol{\cal C}}}})\ar[r] & C_{n+1}/{\mathop{\rm im}}\partial_{n+2}
\ar[r]^-{\bar{\partial}_{n+1}} & C_n \ar[r]^-{q_n} &
C_n/{\mathop{\rm im}}\partial_{n+1}\ar[r] & 0}.$$
As suggested above, it is not difficult to extend this proposition to the case $n=0$ by giving an appropriate definition of a 2-extension of a set $X$ by an $X$-indexed family of groups. The details are in [@Garcia2003].
As in the case of groupoids (case $n=1$), the 2-extensions of an $n$-crossed complex [[$\boldsymbol{\cal C}$]{}]{} by a fixed $\pi_1({{\ensuremath{\boldsymbol{\cal C}}}})$-module $A$ constitute a category, denoted ${\text{\it Ext}}^2({{\ensuremath{\boldsymbol{\cal C}}}},A)$, where morphisms between extensions are defined in the obvious way, that is, as commutative diagrams $$\everyentry={\vphantom{\big(}} \xymatrix@C=1.25pc@R=1.5pc {0\ar[r] &
A\ar@{=}[d]\ar[r] & E_1\ar[d] \ar[r]^\sigma & E_0\ar[d]\ar[r]^\tau &
C_n\ar[r]\ar@{=}[d] & 0, \\ 0\ar[r] & A \ar[r] & E_1'
\ar[r]^{\sigma'} & E_0'\ar[r]^{\tau'} & C_n\ar[r] & 0.}$$ Again, ${\text{\it Ext}}^2[{{\ensuremath{\boldsymbol{\cal C}}}},A]$ denotes the set of connected components of ${\text{\it Ext}}^2({{\ensuremath{\boldsymbol{\cal C}}}},A)$.
A more elaborate analysis than the one made for groupoids is now necessary in order to show that the 2-extensions of crossed modules defined above represent cohomology elements in a cotriple cohomology of crossed modules (of course, the one corresponding to the cotriple induced by $U_2$, Proposition \[pxm tripleable over agpd\]).
Let $\c=(\bg,C,\delta)\in{\ensuremath{\mathbf{Crs}}}_2={{\ensuremath{\mathbf{Xm}_{}}}}$, let $\Pi=\pi_1(\c)$ be its fundamental groupoid and let $A:\Pi\rightarrow{{\ensuremath{\mathbf{Ab}}}}$ be a system of local coefficients. It is not difficult to prove that the resulting crossed module ${\tildeA_{2}}$ has base groupoid equal to $\bg$ and structure $\bg$-group the functor (also denoted ${\tildeA_{2}}$ by abuse of notation) ${\tildeA_{2}}:\bg\to {{{\ensuremath{\mathbf{Gr}}}}}$ given by the cartesian product of groups, ${\tildeA_{2}}(x) = C(x)\times A(x)$, with action $\laction{u}{(v,b)} = (\laction uv,\laction ub)$. Furthermore, the crossed module connecting morphism of ${\tildeA_{2}}$, $\delta'$, has components $\delta'_x:C(x)\times A(x)\to{\text{\sf End}}_\bg(x)$ given by $\delta'_x\bpar{(u,a)} =
\delta(u)$. Regarded as an internal abelian group object in ${{\ensuremath{\mathbf{Xm}_{}}}}/\c$, ${\tildeA_{2}}$ will be taken as a system of global coefficients.\[ab grp in xm\]
Let us consider now a 2-extension of $\c$ by $A$ such as . By the condition that the diagram be a 3-crossed complex, the action of ${\mathop{\rm im}}\delta\tau$ on $\widehat{E}_1$ is trivial and the cartesian product $\e_0\times\e_1$ in ${{\ensuremath{\mathbf{Xm}_{}}}}_{\bg}$ is given by $$\e_0\times\e_1 =(\bg,E_0\times E_1,\delta\tau p_0).$$ This crossed module has two obvious maps of crossed modules to $\e_0$, namely, the canonical projection $p_0:\e_0\times\e_1\rightarrow \e_0$ and the map determined by $t:(x,y)\mapsto x\sigma(y)$ from $E_0\times E_1$ to $E_0$. These two maps have a common section determined by the map $x\mapsto (x,0)$ and the resulting internal graph in [[$\mathbf{Xm}_{}$]{}]{}, $${{\ensuremath{\boldsymbol{\cal E}}}}: \xymatrix@C=1pc@R=1.75pc
{**[l]{\e_0\times\e_1}\ar@<.7ex>[rr]|-{\phantom{.}p_0\phantom{.}}\ar@<-0.5ex>[rr]_-t
& & \e_0 \ar@/_/@<-1.2ex>[ll] }$$ admits a unique groupoid structure in which the multiplication is determined by $$\langle(x,y),(x',y')\rangle\mapsto (x,yy').$$
Since the crossed module of connected components of [[$\boldsymbol{\cal E}$]{}]{} (the coequalizer of $p_0$ and $t$) is easily verified to be the canonical map $(\tau,1_\bg):\e_0\to
\c$ determined by $\tau:E_0\rightarrow C$ (and the identity of $\bg$ as change of base), we can take the structure of internal groupoid of [[$\boldsymbol{\cal E}$]{}]{} as the fiber groupoid of a $({\tildeA_{2}},2)$-torsor above $\c$. To define such a torsor we just need to give the corresponding cocycle map $\alpha:{\text{\sf End}}({{\ensuremath{\boldsymbol{\cal E}}}})\rightarrow {\tildeA_{2}}$. The domain of this map is the crossed module obtained as the equalizer of $p_0$ and $t$. Since the condition defining this equalizer is $x=x\sigma(y)$, one quickly finds that ${\text{\sf End}}({{\ensuremath{\boldsymbol{\cal E}}}})=(\bg,E_0\times\ker \sigma,\delta\tau p_0)$. The required morphism $\alpha$ is defined as the map of $\bg$-crossed modules determined by the following map of $\bg$-groups: $$E_0\times\ker\sigma\xrightarrow{\tau\times\bar{\sigma}} C\times (A\circ q),$$ where $\bar{\sigma}$ is the canonical isomorphism $\ker\sigma\cong A\circ q$ induced by the exactness of $0\rightarrow A\circ q\rightarrow
E_1\xrightarrow{\sigma} E_0$.
The above arguments have prepared the ground for the following:
\[2-ext of xm are tor\] For any crossed module $\c=(\bg,C,\delta)$ and any $\pi_1(\c)$-module $A:\pi_1(\c)\rightarrow{{\ensuremath{\mathbf{Ab}}}}$, there is a full and faithful functor $${\text{\it Ext}}^2(\c,A)\rightarrow {\text{\it Tor}}^2_{U_2}(\c,{\tildeA_{2}}),$$ where $U_2$ is the monadic functor of Proposition \[pxm tripleable over agpd\].
It is a simple exercise to verify that the construction given above indeed produces a 2-torsor above $\c$ with coefficients in ${\tildeA_{2}}$ from any 2-extension of $\c$ by $A$, and that this construction is functorial. Furthermore, if one examines the correspondence between morphisms of extensions $$\vcenter{\everyentry={\vphantom{\big(}} \xymatrix@C=1.25pc@R=1.5pc {0\ar[r] &
A\ar@{=}[d]\ar[r]^j & E_1\ar[d]_{f_1} \ar[r]^\sigma & E_0\ar[d]_{f_0}\ar[r]^\tau
& C \ar[r]\ar@{=}[d] & 0, \\ 0\ar[r] & A \ar[r]^{j'} & E_1'
\ar[r]^{\sigma'} & E_0'\ar[r]^{\tau'} & C\ar[r] & 0 }} \qquad
\begin{aligned}
\tau' f_0 &= \tau \\ f_0\sigma &=\sigma' f_1 \\ f_1 j &=j'
\end{aligned}$$ and morphisms of 2-torsors $$\everyentry={\vphantom{\big(}} \xymatrix{& & E_0 \times \ker \sigma
\ar[dd]_{f_0\times f_1} \ar[dl]_{p_0} \ar[ddr]^{\bar{\alpha}} & \\
**[l]E_0\times E_1 \ar@<.7ex>[r]|-{\phantom{.}p_0}\ar@<-0.7ex>[r]_-t
\ar@<-3ex>[dd]_{f_0\times f_1} & E_0 \ar@/_1pc/[l] \ar[dd]_{f_0}
\ar[ddr]|<>(.4){\phantom{-}\sigma\phantom{-}} & \\ & & E_0'\times \ker \sigma'
\ar[dl]|-{\phantom{.}p_0'} \ar[r]^-{\bar{\alpha}'} & **[r]C\times (A\circ q)
\ar[dl]^{p_C} & \\ **[l]E_0'\times E_1'\labelmargin{2ex}
\ar@<.7ex>[r]|-{\phantom{.}p_0'}\labelmargin{.5ex} \ar@<-0.7ex>[r]_-{t'} & E_0'
\ar[r]_{\sigma'}\ar@/_1pc/[l] & C }
\mskip - 120 mu
\begin{aligned}
\\[7 mm]
(1_{\tilde{A}}\times f_0) \alpha &=\alpha' (f_0\times f_1) \\
f_0 t &=t' (f_0\times f_1)
\end{aligned}$$ it becomes evident that this correspondence is bijective and therefore the functor from 2-extensions of $\c$ by $A$ to $({\tildeA_{2}},2)$-torsors above $\c$ is full and faithful.
It only remains to prove that the internal groupoid [[$\boldsymbol{\cal E}$]{}]{} in [[$\mathbf{Xm}_{}$]{}]{} that we have associated to an extension of $\c$ by $A$ is $U_2$-split. For that, let us consider $U_2(\tau,1_\bg)=(\tau,1_\bg)$, where $\tau$ in the right-hand side is regarded as a map of sets from the set of arrows of $\widehat{E}_0$ to the set of arrows of $\widehat{C}$. This map of sets is surjective and for any section $s$ of this map, the pair $(s,1_\bg)$ is a section of $(\tau, 1_\bg)$ in [[$\mathbf{A}{{{\ensuremath{\mathbf{Gpd}}}}}$]{}]{}.
We need to consider also the map $\langle p_0,t\rangle :\e_0\times\e_1\rightarrow\e_0\times_\c
\e_0$ and prove that $U_2(\langle p_0,t\rangle)$ has a section. Giving a section for this map is equivalent to giving, for each object $x\in\bg$, a map of sets $K_x:E_0(x)\times_{C(x)}
E_0(x)\rightarrow E_1(x)$ such that $\sigma_x(K_x(u,v))=u^{-1}v$. Taking into account that $E_0(x)\times_{C(x)}E_0(x)$ is the set of pairs $(u,v),\; u,v\in E_0(x)$ such that $\tau_x(u)=\tau_x(v)$, it is clear that for every $(u,v)\in E_0(x)\times_{C(x)} E_0(x)$ we have $u^{-1}v\in\ker \tau_x={\mathop{\rm im}}\sigma_x$. We can choose any section $\beta$ of the set map $E_1(x)\xrightarrow{\sigma_x}{\mathop{\rm im}}\sigma_x$ and define $K_x(u,v)=\beta(u^{-1}v)$. This proves $U_2(\langle p_0,t\rangle)$ has a section, and therefore [[$\boldsymbol{\cal E}$]{}]{} is $U_2$-split.
For any crossed module $\c$ and any $\pi_1(\c)$-module $A:\pi_1(\c) \to {{\ensuremath{\mathbf{Ab}}}}$, $$H_{{\ensuremath{\bbg_{2}}}}^2(\c,{\tildeA_{2}})\cong {\text{\it Ext}}^2[\c,A],$$ where ${\tildeA_{2}}$ is the abelian group object in ${{\ensuremath{\mathbf{Xm}_{}}}}$ obtained from $A$ as indicated on page .
This is a consequence of Proposition \[same connected components\] and the fact that the inclusion ${\underline{{\text{\it Tor}}}}_{U_2}^2(\c,{\tildeA_{2}}){\hookrightarrow}{\text{\it Tor}}_{U_2}^2(\c,{\tildeA_{2}})$ factors through the full and faithful inclusion of Proposition \[2-ext of xm are tor\]. To see the last statement, observe that the counit map for the free adjunction $F_2:{{\ensuremath{\mathbf{A}{{{\ensuremath{\mathbf{Gpd}}}}}}}}\leftrightarrows
{{\ensuremath{\mathbf{Xm}_{}}}}:U_2$ is always the identity at the level of base groupoid. Therefore the fiber groupoid (internal in [[$\mathbf{Xm}_{}$]{}]{}) of any 2-torsor in ${\underline{{\text{\it Tor}}}}_{U_2}^2(\c,{\tildeA_{2}})$ lives in ${{\ensuremath{\mathbf{Xm}_{}}}}_\bg$. Finally it is straightforward to see that any internal groupoid in ${{\ensuremath{\mathbf{Xm}_{}}}}_\bg$ is isomorphic to the groupoid built from a 2-extension as in the proof of Proposition \[2-ext of crsn are tor\].
The last results show that the fibration $\eta_{3}:P_{3}({{\ensuremath{\boldsymbol{\cal C}}}})\to P_{2}({{\ensuremath{\boldsymbol{\cal C}}}})$ uniquely corresponds to an element $$k_{3} \in H^2_{{\ensuremath{\bbg_{2}}}} \bpar{P_2({{\ensuremath{\boldsymbol{\cal C}}}}),
\tilde{A}_2}$$ where $A = \pi_{3}({{\ensuremath{\boldsymbol{\cal C}}}})$. This cohomology element will be called the algebraic third [[Postnikov]{} invariant]{} of the crossed complex ${{\ensuremath{\boldsymbol{\cal C}}}}$.
### The higher invariants. {#the-higher-invariants. .unnumbered}
Our next objective is to establish a general bijection between ${\text{\it Ext}}^2[P_n({{\ensuremath{\boldsymbol{\cal C}}}}),\pi_{n+1}({{\ensuremath{\boldsymbol{\cal C}}}})]$ and the set of elements in a certain cotriple cohomology in the category of $n$-crossed complexes for $n\geq3$. An essential step in this process is determining the coefficients to be used to calculate the cohomology. These coefficients (constituting an internal abelian group object in the category ${\ensuremath{\mathbf{Crs}}}_n/P_n({{\ensuremath{\boldsymbol{\cal C}}}})$) are obtained from the “homotopy group” $\pi_{n+1}({{\ensuremath{\boldsymbol{\cal C}}}})$ using the fact that $\pi_{n+1}({{\ensuremath{\boldsymbol{\cal C}}}})$ is a module over the fundamental groupoid of $P_n({{\ensuremath{\boldsymbol{\cal C}}}})$ (equal to the fundamental groupoid of ${{\ensuremath{\boldsymbol{\cal C}}}}$, in turn equal to $P_1({{\ensuremath{\boldsymbol{\cal C}}}})$). In general, if ${{\ensuremath{\boldsymbol{\cal C}}}}\in{\ensuremath{\mathbf{Crs}}}_n$ is an $n$-crossed complex for $n>1$, pulling back along the canonical map ${{\ensuremath{\boldsymbol{\cal C}}}}\to \Pi=\pi_1({{\ensuremath{\boldsymbol{\cal C}}}}) = P_1({{\ensuremath{\boldsymbol{\cal C}}}})$ (a finite product-preserving functor) produces an abelian group object in ${\ensuremath{\mathbf{Crs}}}_n/{{\ensuremath{\boldsymbol{\cal C}}}}$ from any abelian group object in ${\ensuremath{\mathbf{Crs}}}_n/\Pi$. This allows us to reduce the search for our coefficients to obtaining an abelian group object in ${\ensuremath{\mathbf{Crs}}}_n/\Pi$. For every $n\geq2$ there is a functor ${\text{\sf ins}}_n:{{\ensuremath{\mathbf{Ab}}}}^\Pi\to{\ensuremath{\mathbf{Crs}}}_n$ taking a $\Pi$-module $A:\Pi\to{{\ensuremath{\mathbf{Ab}}}}$ to the $n$-crossed complex over $\Pi$ $$\label{insert n} {\text{\sf ins}}_n(A) = ({\text{\sf zero}}(A)\to{{\boldsymbol0}_{_{\Pi}}}\to
\cdots\to{{\boldsymbol0}_{_{\Pi}}}\to{{\boldsymbol1}_{_{\Pi}}}).$$ This functor can be regarded as taking its values in ${\ensuremath{\mathbf{Crs}}}_n/\Pi$ via the obvious map $(1_\Pi,{\boldsymbol 0}):{\text{\sf ins}}_n(A)\to\Pi$, and when regarded this way it becomes a finite product preserving functor whose value at a $\Pi$-module $A$ will be denoted ${\tildeA_{n}}$. Since (the theory of abelian groups being commutative) $A$ has the structure of an internal abelian group object in ${{\ensuremath{\mathbf{Ab}}}}^\Pi$, the fact that ${\text{\sf ins}}_n:{{\ensuremath{\mathbf{Ab}}}}^\Pi\to{\ensuremath{\mathbf{Crs}}}_n/\Pi$ preserves finite products implies that ${\tildeA_{n}}$ has a structure of internal abelian group object in ${\ensuremath{\mathbf{Crs}}}_n/\Pi$.
If ${{\ensuremath{\boldsymbol{\cal C}}}}$ is fixed by the context and $\Pi = \pi_1({{\ensuremath{\boldsymbol{\cal C}}}})$, the abelian group object in ${\ensuremath{\mathbf{Crs}}}_n/{{\ensuremath{\boldsymbol{\cal C}}}}$ obtained from a $\Pi$-module $A$ after pulling ${\text{\sf ins}}_n(A)\to\Pi$ back along the canonical map ${{\ensuremath{\boldsymbol{\cal C}}}}\to\Pi$ will be denoted ${\tildeA_{n}}$, so that we have, $$\label{abelian group in xm} \everyentry={\vphantom{\big(}} \vcenter{
\xymatrix@C=1.25pc@R=1.5pc {{\tildeA_{n}}\ar[d]\ar@{}[dr]|{\text{pb}} \ar[r] &
{\text{\sf ins}}_n(A) \ar[d] \\ {{\ensuremath{\boldsymbol{\cal C}}}}\ar[r]_-{\text{can.}} & \Pi} }$$
Let $n \geq 3$, [[$\boldsymbol{\cal C}$]{}]{} an $n$-crossed complex, $\Pi=\pi_1({{\ensuremath{\boldsymbol{\cal C}}}})$ and $A:\Pi\rightarrow{{\ensuremath{\mathbf{Ab}}}}$ a $\Pi$-module. We define ${\tildeA_{n}}$ by and take the resulting abelian group object in ${\ensuremath{\mathbf{Crs}}}_n/{{\ensuremath{\boldsymbol{\cal C}}}}$ (also denoted ${\tildeA_{n}}$) as a system of global coefficients for 2-torsors.
\[2-ext of crsn are tor\] Let $n \geq 3$. For any $n$-crossed complex [[$\boldsymbol{\cal C}$]{}]{} and any $\pi_1({{\ensuremath{\boldsymbol{\cal C}}}})$-module $A$ there is a full and faithful functor $${\text{\it Ext}}^2({{\ensuremath{\boldsymbol{\cal C}}}},A)\rightarrow {\text{\it Tor}}^2({{\ensuremath{\boldsymbol{\cal C}}}},{\tildeA_{n}}).$$
Let us consider a 2-extension of [[$\boldsymbol{\cal C}$]{}]{} by $A$, $$\everyentry={\vphantom{\big(}} \xymatrix@C=1.25pc@R=1.75pc {0\ar[r] & A\ar[r] &
E_1 \ar[r]^\sigma & E_0\ar[r]^\tau & C_n\ar[r] & 0,}$$ let us also denote $\e_i=(\bg,E_i,0)$ as in Definition \[ext of crsn\]. The $\bg$-module $E_0\oplus E_1$ with the zero map gives a $\bg$-crossed module $\e_0\oplus\e_1$ which, substituted for $\c_n$ in [[$\boldsymbol{\cal C}$]{}]{} with the boundary map $\partial_n\tau p_0:\e_0\oplus \e_1\rightarrow \c_{n-1}$, gives rise to an $n$-crossed complex over $\bg$, $${{\ensuremath{\boldsymbol{\cal F}}}}_1: \e_0\oplus \e_1\xrightarrow{\partial_n\tau p_0}
\c_{n-1}\longrightarrow\cdots\longrightarrow\c_2\longrightarrow {{\boldsymbol1}_{_{\bg}}}.$$ We will take this as the object of arrows of an internal groupoid in ${\ensuremath{\mathbf{Crs}}}_n$. As the object of objects we take the $n$-crossed complex $${{\ensuremath{\boldsymbol{\cal F}}}}_0: \e_0\xrightarrow{\partial_n\tau}
\c_{n-1}\longrightarrow\cdots\longrightarrow\c_2\longrightarrow {{\boldsymbol1}_{_{\bg}}}.$$ The “source" map $s:{{\ensuremath{\boldsymbol{\cal F}}}}_1\rightarrow{{\ensuremath{\boldsymbol{\cal F}}}}_0$ is the obvious map of crossed complexes induced by the projection $p_0:E_0\oplus E_1\rightarrow E_0$, and the “target" map $t:{{\ensuremath{\boldsymbol{\cal F}}}}_1\rightarrow{{\ensuremath{\boldsymbol{\cal F}}}}_0$ is the one induced by the map $x\oplus
y\mapsto x\sigma(y)$ from $E_0\oplus E_1$ to $E_0$. Then, the canonical inclusion $E_0\hookrightarrow E_0\oplus E_1$ determines a common section for $s$ and $t$, and we obtain an internal groupoid in ${\ensuremath{\mathbf{Crs}}}_n$ in which composition is determined by the map $(x\oplus y,x'\oplus y')\mapsto x\oplus
yy'.$
It is a simple matter to show that the $n$-crossed complex of endomorphisms of this groupoid (the equalizer of $s$ and $t$ in ${\ensuremath{\mathbf{Crs}}}_n$) is $${{\ensuremath{\boldsymbol{\cal E}}}}: E_0\oplus (A\circ q) \longrightarrow
{{\boldsymbol0}_{_{\bg}}}\longrightarrow\cdots\longrightarrow {{\boldsymbol0}_{_{\bg}}}\longrightarrow
{{\boldsymbol1}_{_{\bg}}},$$ and the $n$-crossed complex of connected components of this groupoid (the coequalizer of $s$ and $t$ in ${\ensuremath{\mathbf{Crs}}}_n$) is [[$\boldsymbol{\cal C}$]{}]{}. Thus, the above internal groupoid could be taken as the fiber of a torsor in ${\text{\it Tor}}^2({{\ensuremath{\boldsymbol{\cal C}}}},{\tilde{A}_{n}})$ if a cocycle map $\alpha:{{\ensuremath{\boldsymbol{\cal E}}}}\rightarrow{{\ensuremath{\boldsymbol{\cal C}}}}\times{\tilde{A}_{n}}$ can be given. A simple calculation shows that ${{\ensuremath{\boldsymbol{\cal C}}}}\times{\tilde{A}_{n}}$ is the $n$-crossed complex $$\left( C_n\oplus (A\circ q), 0\right)\longrightarrow
\c_{n-1}\longrightarrow\cdots\longrightarrow\c_2\longrightarrow {{\boldsymbol1}_{_{\bg}}}$$ and the cocycle map $\alpha$ can be defined as the obvious map induced by $$\tau\oplus (A\circ q):E_0\oplus (A\circ q)\longrightarrow C_n\oplus (A\circ q).$$ Note that the square $$\xymatrix@C=10ex{ E_0\oplus (A\circ q) \ar[d] \ar@{}[dr]|{\text{pb}}
\ar[r]^-{\tau\oplus (A\circ q)} & C_n\oplus (A\circ q) \ar[d] \\ E_0
\ar[r]_-\tau & C_n }$$ is a pullback in the category of $\bg$-modules.
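Indeed, computed objectwise, the pullback of $\tau$ along the projection $C_n\oplus (A\circ q)\rightarrow C_n$ consists of the pairs $\big(e,(c,a)\big)$ with $\tau(e)=c$, so that $$E_0\times_{C_n}\big(C_n\oplus (A\circ q)\big)\cong E_0\oplus (A\circ q),\qquad (e,a)\mapsto \big(e,(\tau(e),a)\big),$$ and under this isomorphism the projections of the pullback become the vertical and horizontal maps of the square.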
This defines the functor ${\text{\it Ext}}^2({{\ensuremath{\boldsymbol{\cal C}}}},A)\rightarrow {\text{\it Tor}}^2({{\ensuremath{\boldsymbol{\cal C}}}},{\tilde{A}_{n}})$ on objects. On morphisms the functor is given by the obvious assignment, which establishes a bijection between the morphisms between two extensions and the morphisms between the corresponding torsors. In this way one gets the desired full and faithful functor.
We will now define a monadic functor $U_n$ on ${\ensuremath{\mathbf{Crs}}}_n$ and prove that the torsors obtained by the functor of Proposition \[2-ext of crsn are tor\] are $U_n$-split. Let ${{\ensuremath{\mathbf{A}{\ensuremath{\mathbf{Crs}}}{}}}}_{n-1}$ be the category whose objects are triples $(X,f,{{\ensuremath{\boldsymbol{\cal C}}}})$ where $X$ is a set, [[$\boldsymbol{\cal C}$]{}]{} is an ${(n-1)}$-crossed complex, and $f:X\rightarrow {\text{\sf arr}}(\widehat{\pi_{n-1}({{\ensuremath{\boldsymbol{\cal C}}}})})$ is a map from $X$ to the set of arrows of the totally disconnected groupoid associated to $\pi_{n-1}({{\ensuremath{\boldsymbol{\cal C}}}}):\pi_1({{\ensuremath{\boldsymbol{\cal C}}}})\rightarrow
{{\ensuremath{\mathbf{Ab}}}}$. An arrow $(X,f,{{\ensuremath{\boldsymbol{\cal C}}}})\rightarrow(X',f',{{\ensuremath{\boldsymbol{\cal C}}}}')$ in ${{\ensuremath{\mathbf{A}{\ensuremath{\mathbf{Crs}}}{}}}}_{n-1}$ is a pair $(\alpha,\beta)$ where $\alpha:X\rightarrow X'$ is a map of sets and $\beta:{{\ensuremath{\boldsymbol{\cal C}}}}\rightarrow{{\ensuremath{\boldsymbol{\cal C}}}}'$ is a map in ${\ensuremath{\mathbf{Crs}}}_{n-1}$ such that $\tilde{\beta} f = f'
\alpha$ (where $\tilde{\beta}:\pi_{n-1}({{\ensuremath{\boldsymbol{\cal C}}}})\rightarrow\pi_{n-1}({{\ensuremath{\boldsymbol{\cal C}}}}')$ is the obvious natural map induced by $\beta$).
There is an obvious forgetful functor $U_{n}:{\ensuremath{\mathbf{Crs}}}_{n}\rightarrow{{\ensuremath{\mathbf{A}{\ensuremath{\mathbf{Crs}}}{}}}}_{n-1}$, taking an $n$-crossed complex [[$\boldsymbol{\cal C}$]{}]{} to the triple $\big({\text{\sf arr}}(\widehat{C}_{n}),\widehat{\partial}_{n}, T_{n-1}({{\ensuremath{\boldsymbol{\cal C}}}})\big)$ (the “simple truncation" functor $T_{n-1}$ is defined in section \[crs\]).
\[crsn tripleable over acrs\] For $n> 2$, the forgetful functor $U_{n}:{\ensuremath{\mathbf{Crs}}}_{n}\rightarrow{{\ensuremath{\mathbf{A}{\ensuremath{\mathbf{Crs}}}{}}}}_{n-1}$ is monadic.
We begin by defining a left adjoint. This is very similar to the case of Proposition \[pxm tripleable over agpd\]. The main difference is that in this case we directly get a crossed module with trivial connecting morphism instead of a simple pre-crossed module. Given $(X,f,{{\ensuremath{\boldsymbol{\cal C}}}})$ in ${{\ensuremath{\mathbf{A}{\ensuremath{\mathbf{Crs}}}{}}}}_{n-1}$, with $${{\ensuremath{\boldsymbol{\cal C}}}}: \c_{n-1}\xrightarrow{\partial_{n-1}}\c_{n-2}
\longrightarrow\cdots\longrightarrow\c_2\xrightarrow{\partial_2} {{\boldsymbol1}_{_{\bg}}}$$ and $\c_2=(\bg,C,\delta)$, we need to define a trivial $\bg$-crossed module $\c_{n}=(\bg,C_{n},0)$ on which ${\mathop{\rm im}}(\delta)$ acts trivially. For this, it is sufficient to define a $\pi_1({{\ensuremath{\boldsymbol{\cal C}}}})$-module $\overline{C}_{n}:\pi_1({{\ensuremath{\boldsymbol{\cal C}}}})\rightarrow{{\ensuremath{\mathbf{Ab}}}}$ and to put $C_{n}=\overline{C}_{n}\circ q$, where $q:\bg\rightarrow\pi_1({{\ensuremath{\boldsymbol{\cal C}}}})$ is the canonical map. For each object $x\in\bg$ we define the abelian group $$\overline{C}_{n}(x)= {\sff}_{\text{ab}}\bigg( \coprod_{z\in{\text{\sf obj}}(\bg)}
\Big(\pi_1({{\ensuremath{\boldsymbol{\cal C}}}})(z,x)\times\coprod_{v\in\pi_{n-1}({{\ensuremath{\boldsymbol{\cal C}}}})(z)} {\text{\sf fbr}}(f,v)\Big)
\bigg),$$ that is, $\overline{C}_{n}(x)$ is the free abelian group generated by all pairs $\langle t,u\rangle$ where $t:z\to x$ is a map in $\pi_1({{\ensuremath{\boldsymbol{\cal C}}}})$ and $u\in
X$ such that $f(u)\in \pi_{n-1}({{\ensuremath{\boldsymbol{\cal C}}}})(z)$.
Given $s:x\to y$ in $\pi_1({{\ensuremath{\boldsymbol{\cal C}}}})$, $\overline{C}_{n}(s)$ is the homomorphism defined on the generators of $\overline{C}_{n}(x)$ by $$\overline{C}_{n}(s)\big({\langle t,u\rangle}\big) = {\langle st,u\rangle}.$$ This defines the functor $\overline{C}_{n}$. We now take $C_{n} =
\overline{C}_{n}\circ q$ and we obtain a -crossed module $\c_{n} = (C_{n},0)$ on which (by construction) ${\mathop{\rm im}}(\delta)$ acts trivially. Thus, to obtain an object in ${\ensuremath{\mathbf{Crs}}}_{n}$ we only need a morphism $\partial_{n}:\c_{n}\to\c_{n-1}$ such that $\partial_{n-1}\partial_{n} = 0$. Given $x\in\bg$ we define $(\partial_{n})_x:C_{n}(x)\to C_{n-1}(x)$ as the group homomorphism defined on the generators of $C_{n}(x)$ by $$(\partial_{n})_x({\langle t,u\rangle}) = \laction t{f(u)}
=C_{n-1}(t)\big(f(u)\big).$$
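Note that $\partial_{n-1}\partial_{n}=0$ indeed holds: recalling that in the top dimension $\pi_{n-1}({{\ensuremath{\boldsymbol{\cal C}}}})$ is the kernel of $\partial_{n-1}$, we have $f(u)\in\ker\big((\partial_{n-1})_z\big)$, and the naturality of $\partial_{n-1}$ gives $$(\partial_{n-1})_x(\partial_{n})_x\big({\langle t,u\rangle}\big) = (\partial_{n-1})_x\, C_{n-1}(t)\big(f(u)\big) = C_{n-2}(t)\,(\partial_{n-1})_z\big(f(u)\big) = 0.$$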
We have thus defined an $n$-crossed complex $$F_{n}\big((X,f,{{\ensuremath{\boldsymbol{\cal C}}}})\big):\quad\c_{n} \xto{\partial_{n}} \c_{n-1}
\xto{\partial_{n-1}} \c_{n-2}\to \cdots \to \c_2 \xto{\partial_2}
\c_1={{\boldsymbol1}_{_{\bg}}}.$$ It remains to define $F_{n}$ on arrows. Given an arrow $(\alpha,\beta):(X,f,{{\ensuremath{\boldsymbol{\cal C}}}})\to(X',f',{{\ensuremath{\boldsymbol{\cal C}}}}')$ in ${{\ensuremath{\mathbf{A}{\ensuremath{\mathbf{Crs}}}{}}}}_{n-1}$, a map of crossed complexes is determined by $\beta$ together with the additional component $\alpha_{n}$, the natural transformation whose components are defined on the generators of $\c_{n}(x)$ by $$\big(\alpha_{n}\big)_x(\langle t,u\rangle) = \langle\beta_0(t),\alpha(u)\rangle,$$ where $\beta_0$ is the map induced on $\pi_1({{\ensuremath{\boldsymbol{\cal C}}}})$ by the “change of base groupoid” part of $\beta$. It is now a straightforward exercise to verify that $F_{n}$ is a functor ${{\ensuremath{\mathbf{A}{\ensuremath{\mathbf{Crs}}}{}}}}_{n-1}\to{\ensuremath{\mathbf{Crs}}}_{n}$ and that this functor is left adjoint to the forgetful functor $U_{n}$ (see [@Garcia2003 Proposición 3.2.10, pp. 161–166] for the details).
As in Proposition \[pxm tripleable over agpd\], it is easy to see that $U_n$ reflects isomorphisms. The proof is completed by a calculation similar to the one used in that proposition, showing that $U_n$ preserves coequalizers of $U_n$-contractible pairs.
Note that the counit $\epsilon_{{\ensuremath{\boldsymbol{\cal C}}}}:F_{n}U_{n}({{\ensuremath{\boldsymbol{\cal C}}}}) \to {{\ensuremath{\boldsymbol{\cal C}}}}$ of the adjunction $F_{n}\dashv U_{n}$ is an identity at dimensions other than $n$. The same is true of the unit $\eta_{U_{n}({{\ensuremath{\boldsymbol{\cal C}}}})}: U_{n}({{\ensuremath{\boldsymbol{\cal C}}}}) \to U_{n}F_{n}U_{n}({{\ensuremath{\boldsymbol{\cal C}}}})$ and of its image by $F_{n}$.
\[2-ext of crsn are uspl tor\] Under the hypotheses of Proposition \[2-ext of crsn are tor\], the 2-torsor obtained from any 2-extension of ${{\ensuremath{\boldsymbol{\cal C}}}}$ by $A$ is $U_n$-split. Therefore, the functor defined in Proposition \[2-ext of crsn are tor\] actually represents a full and faithful functor $${\text{\it Ext}}^2({{\ensuremath{\boldsymbol{\cal C}}}},A)\rightarrow {\text{\it Tor}}_{U_n}^2({{\ensuremath{\boldsymbol{\cal C}}}},{\tilde{A}_{n}}).$$
Since all components of the coequalizer map ${{\ensuremath{\boldsymbol{\cal F}}}}_0 \to {{\ensuremath{\boldsymbol{\cal C}}}}$ in dimensions $<n$ are identities, the obtained torsor is $U_n$-split if and only if the $n$-dimensional component $\tau: E_0 \to C_n$ is surjective, which it is by the exactness of the 2-extension.
We denote by ${\ensuremath{\bbg_{n}}}$ the cotriple induced on ${\ensuremath{\mathbf{Crs}}}_n$ by the monadic functor $U_n$, that is, ${\ensuremath{\bbg_{n}}} = F_nU_n$. Given an $n$-crossed complex [[$\boldsymbol{\cal C}$]{}]{}, the adjoint pair $(F_n,U_n)$ induces an adjoint pair (also denoted $(F_n,U_n)$) between the corresponding slice categories ${\ensuremath{\mathbf{Crs}}}_{n}/{{\ensuremath{\boldsymbol{\cal C}}}}$ and ${{\ensuremath{\mathbf{A}{\ensuremath{\mathbf{Crs}}}{}}}}_{n-1}/U_n({{\ensuremath{\boldsymbol{\cal C}}}})$. Furthermore, the induced $U_n$ on the slice is again monadic. Thus we obtain again a cotriple on the category ${\ensuremath{\mathbf{Crs}}}_{n}/{{\ensuremath{\boldsymbol{\cal C}}}}$, and this cotriple will also be denoted ${\ensuremath{\bbg_{n}}}$.
For any ${{\ensuremath{\boldsymbol{\cal C}}}}\in{\ensuremath{\mathbf{Crs}}}_n$ and any $A:\pi_1({{\ensuremath{\boldsymbol{\cal C}}}}) \to {{\ensuremath{\mathbf{Ab}}}}$, the inclusion functor ${\underline{{\text{\it Tor}}}}^2_{U_n}({{\ensuremath{\boldsymbol{\cal C}}}},{\tilde{A}_{n}}) {\hookrightarrow}{{\text{\it Tor}}}^2_{U_n}({{\ensuremath{\boldsymbol{\cal C}}}},{\tilde{A}_{n}})$ factors through the full and faithful functor of Proposition \[2-ext of crsn are uspl tor\]. As a consequence, $$H_{{\ensuremath{\bbg_{n}}}}^2({{\ensuremath{\boldsymbol{\cal C}}}},{\tilde{A}_{n}})\cong {\text{\it Ext}}^2[{{\ensuremath{\boldsymbol{\cal C}}}},A].$$
Again the proof is based on the fact that the counit of the adjunction $F_n\dashv U_n$ is the identity on the $(n-1)$-truncation; hence the groupoid fiber of any 2-torsor in ${\underline{{\text{\it Tor}}}}^2_{U_n}({{\ensuremath{\boldsymbol{\cal C}}}},{\tilde{A}_{n}})$ is isomorphic to one which comes from a 2-extension.
In view of the last results, the fibrations in the [[Postnikov]{} tower]{} of a [crossed complex]{} [[$\boldsymbol{\cal C}$]{}]{} can be regarded as 2-extensions, and we have seen that they represent a cotriple cohomology of crossed complexes.
Singular cohomology and the topological invariants {#sec the sing coh}
--------------------------------------------------
The knowledge of the above algebraic Postnikov invariants is sufficient information to reconstruct the (homotopy type of the) crossed complex [[$\boldsymbol{\cal C}$]{}]{}. This solves the problem of calculating the Postnikov invariants of any space having the homotopy type of a crossed complex. However, in order to relate the above invariants to the usual calculation of the Postnikov invariants of a space (as elements in the singular cohomology of the space), we would like to show that our algebraically obtained invariants determine, in a natural way, singular cohomology elements.
Remember that the “fundamental crossed complex" functor (see [@BrHi1991]) associates a crossed complex to each simplicial set, $$\Pi:{{\ensuremath{\boldsymbol{\mathcal{S}}}}}\rightarrow{\ensuremath{\mathbf{Crs}}}.$$ This functor has a right adjoint “nerve of a crossed complex" which associates to each crossed complex [[$\boldsymbol{\cal C}$]{}]{} the simplicial set whose set of $n$-simplices is the set of maps of crossed complexes $$\ner({{\ensuremath{\boldsymbol{\cal C}}}})_n={\ensuremath{\mathbf{Crs}}}\bpar{\Pi(\Delta[n]),{{\ensuremath{\boldsymbol{\cal C}}}}}.$$ Applying this nerve functor to a fibration of crossed complexes one obtains a fibration of simplicial sets whose fibers have the same homotopy type as those of the initial fibration. Since the geometric realization functor ${{\ensuremath{\boldsymbol{\mathcal{S}}}}}\rightarrow{{\ensuremath{\mathbf{Top}}}}$ also preserves fibrations and the homotopy type of their fibers, we can define a functor “classifying space of a crossed complex" which again has the same property. If we restrict ourselves to the category of those spaces having the homotopy type of a crossed complex we obtain two functors $$\label{fund crs ladj to geom realiz}
\Pi:\t \to {\ensuremath{\mathbf{Crs}}}\,,\qquad B: {\ensuremath{\mathbf{Crs}}}\to \t$$ which induce an equivalence in the corresponding homotopy categories and which allow us to reduce the calculation of the Postnikov towers of the spaces in $\t$ to the calculation of Postnikov towers in [$\mathbf{Crs}$]{}.
We define the singular cohomology of a crossed complex [[$\boldsymbol{\cal C}$]{}]{} as the singular cohomology of its geometric realization $B({{\ensuremath{\boldsymbol{\cal C}}}})$ or, equivalently, of the geometric realization of its nerve. Thus, if $\Pi = \pi_1({{\ensuremath{\boldsymbol{\cal C}}}})$ and $A:\Pi\to{{\ensuremath{\mathbf{Ab}}}}$, we define $$\scoH^m({{\ensuremath{\boldsymbol{\cal C}}}},A) = \scoH^m\bpar{B\bpar{\ner({{\ensuremath{\boldsymbol{\cal C}}}})},A},$$ and our aim is to establish a natural map $$H^2_{{\ensuremath{\bbg_{n}}}}\bpar{P_n({{\ensuremath{\boldsymbol{\cal C}}}}),{\tilde{A}_{n}}} \xto{\ \ }\scoH^{n+2}\bpar{P_n({{\ensuremath{\boldsymbol{\cal C}}}}), A}.$$ In order to establish this map we will obtain a representation of the singular cohomology of an $n$-crossed complex by homotopy classes of maps in a certain category of simplicial $n$-crossed complexes (denoted $\mathbf{SCrs}_{n}$) which is naturally associated to the cotriple ${\ensuremath{\bbg_{n}}}$. The desired map will then be induced by a natural map from Duskin’s representation of the cotriple cohomology to our representation in $\mathbf{SCrs}_{n}$ of the singular cohomology. This representation is actually obtained as a “lifting” of the generalized Eilenberg–MacLane representation in simplicial sets (see [@GoJa1999] and [@BuFaGa2002]): $$\label{gener eilmacl}
\scoH^m\bpar{{{\ensuremath{\boldsymbol{\cal C}}}},A} =
\big[\ner({{\ensuremath{\boldsymbol{\cal C}}}}), L_\Pi(A,m)\big]_{{{\ensuremath{\boldsymbol{\mathcal{S}}}}}/\ner(\Pi)},$$ where $L_\Pi(A,m)$ is the canonical fibration from the homotopy colimit of the functor $K\bpar{A(\cdot),m}$ to $K(\Pi,1)$ ($= \ner(\Pi)$) (see [@GoJa1999] and [@BuFaGa2002]). The “ladder” through which this lifting is achieved is a chain of adjunctions $$\xymatrix{{{\ensuremath{\boldsymbol{\mathcal{S}}}}}\ar@<.5 ex>[rr]^-{\f_1=G} && {\mathbf{SCrs}_{1}} \ar@<.5 ex>[ll]^-{{{\ensuremath{\overline{W}}}}_1=\ner} \ar@<.5 ex>[r]^-{\f_2} &{\mathbf{SCrs}_{2}} \ar@<.5 ex>[l]^-{{{\ensuremath{\overline{W}}}}_2}
\quad\cdots\quad {\mathbf{SCrs}_{n-1}}\ar@<.5 ex>[r]^-{\f_n} &{\mathbf{SCrs}_{n}} \ar@<.5 ex>[l]^-{{{\ensuremath{\overline{W}}}}_n}\ \cdots }$$ having sufficiently good properties so as to preserve homotopy classes of certain maps.
We will first establish the adjunctions and their properties in the cases $n=1$ (essentially done in [@DwKa1984]) and $n=2$, which are the harder cases. Later, the cases $n\geq 3$ will be dealt with.
Let us begin by introducing the categories $\mathbf{SCrs}_{n}$, $n\geq0$, whose definition is motivated by a special property of the simplicial [[crossed complex]{}es]{} arising from the cotriple ${\ensuremath{\bbg_{n}}}$.
Let $\cotrsr : {\ensuremath{\mathbf{Crs}}}_n \to {{\ensuremath{\mathbf{Crs}}}_n}^{\boldsymbol{\Delta}^{\mathrm{op}}}$ denote the functor associating to each $n$-[crossed complex]{} [[$\boldsymbol{\cal C}$]{}]{} its [$\bbg_{n}$]{}-simplicial resolution, so that $\cotrsr({{\ensuremath{\boldsymbol{\cal C}}}})$ is a simplicial [crossed complex]{} which at dimension $k$ is equal to $\cotrsr({{\ensuremath{\boldsymbol{\cal C}}}})[k] = {\ensuremath{\bbg_{n}}}^{k+1}({{\ensuremath{\boldsymbol{\cal C}}}})$ (where ${\ensuremath{\bbg_{n}}}^k$ is the $k$-fold composition ${\ensuremath{\bbg_{n}}}\circ \cdots \circ{\ensuremath{\bbg_{n}}}$). We can state the following (obvious) lemma, which provides the definition of $\mathbf{SCrs}_{n}$:
\[cotriple factors\] For all $n\geq1$ and all $k\geq 0$ the [[crossed complex]{}es]{} ${\ensuremath{\bbg_{n}}}^k({{\ensuremath{\boldsymbol{\cal C}}}})$ have the same $(n-1)$-truncation, and all faces and degeneracies of $\cotrsr({{\ensuremath{\boldsymbol{\cal C}}}})$ at all dimensions are morphisms of [[crossed complex]{}es]{} whose $(n-1)$-truncation is the identity of $T_{n-1}({{\ensuremath{\boldsymbol{\cal C}}}})$. The functor $\cotrsr$ factors through the subcategory $\mathbf{SCrs}_{n}$ defined by the following pullback of categories: $$\label{k2 ex seq xmm}
\vcenter{
\everyentry={\vphantom{\big(}}
\xymatrix@C=1.5pc@R=1.5pc { &{\mathbf{SCrs}_{n}}\ar[r] \ar@{^{(}->}[d]
\ar@{}[dr]|{\text{\rm pb}} &
{\ensuremath{\mathbf{Crs}}}_{n-1}\ar@{^{(}->}[d]^{\text{\rm diag}}\\
{{\ensuremath{\mathbf{Crs}}}_{n}}\ar[r]_-{\cotrsr}\ar@{.>}[ur]&{{{\ensuremath{\mathbf{Crs}}}_{n}}^{\boldsymbol{\Delta}^{\mathrm{op}}}}\ar[r]_-{(T_{n-1})_*}
& {{{\ensuremath{\mathbf{Crs}}}_{n-1}}^{\boldsymbol{\Delta}^{\mathrm{op}}}} }}$$
The category $\mathbf{SCrs}_{1}$ is the category of those simplicial groupoids having the same set of objects in all dimensions and all faces and degeneracies equal to the identity at the level of objects. This is precisely the category (denoted *Gd* in [@DwKa1984]) on which Dwyer and Kan define the *classifying complex* functor, ${{\ensuremath{\overline{W}}}}$, extending the classical functor ${{\ensuremath{\overline{W}}}}:\mathbf{Gr}^{\boldsymbol{\Delta}^{\mathrm{op}}} \to \mathbf{Set}^{\boldsymbol{\Delta}^{\mathrm{op}}}$ of [@EiMLa1954a] and [@May1967]. We will find it convenient to define $\mathbf{SCrs}_{0} = \mathbf{Set}^{\boldsymbol{\Delta}^{\mathrm{op}}}$ $(={{\ensuremath{\boldsymbol{\mathcal{S}}}}})$ and to refer to ${{\ensuremath{\overline{W}}}}$ as ${{\ensuremath{\overline{W}}}}_1:\mathbf{SCrs}_{1} \to \mathbf{SCrs}_{0}$. Note that $\mathbf{SCrs}_{1}$ can also be described as the category of groupoids *enriched* in simplicial sets.
The category $\mathbf{SCrs}_{2}$ is the subcategory of ${{\ensuremath{\mathbf{Xm}_{}}}}^{\boldsymbol{\Delta}^{\mathrm{op}}}$ whose objects are the simplicial crossed modules $\Sigma:{{\ensuremath{\Delta^{\text{\rm\kern-0.1em o\kern-0.1em p}}}}} \to {{\ensuremath{\mathbf{Xm}_{}}}}$ such that in all dimensions the crossed modules $\Sigma_n$ have the same base groupoid, and whose morphisms from $\Sigma$ to $\Sigma'$ are the simplicial maps $\alpha:\Sigma\to\Sigma'$ having in each dimension the same change-of-base functor.
### The lifting to $Gd = \mathbf{SCrs}_{1}$.
As we said above, ${{\ensuremath{\overline{W}}}}_1$ is the functor ${{\ensuremath{\overline{W}}}}$ defined in [@DwKa1984]. We repeat the definition here in order to have it handy. If $\Sigma\in\mathbf{SCrs}_{1}$, then the vertices of ${{\ensuremath{\overline{W}}}}_1(\Sigma)$ are the objects of $\Sigma$, the $n$-simplices ($n>0$) are the sequences of arrows $(z_n\xto{u_{n-1}}z_{n-1}\to \cdots \to z_1\xto{u_0} z_0)$ where $u_i$ is an arrow in the groupoid $\Sigma_i$ for $i=0,\dots,n-1$, with faces and degeneracies given by the formulas $$\begin{aligned}
d_i(u_0,\dots,u_{n-1}) &= (u_0,\dots,u_{n-i-2},u_{n-i-1}\cdot
d_0u_{n-i},d_1u_{n-i+1},\dots,d_{i-1}u_{n-1}),\\ s_i(u_0,\dots,u_{n-1})
&=(u_0,\dots,u_{n-i-1},id, s_0u_{n-i},\dots,s_{i-1}u_{n-1}).\end{aligned}$$ (Note the discrepancy with [@DwKa1984 p.383] where there is an error in the indices.)
Evidently, if $\Pi$ is a groupoid regarded as a constant simplicial groupoid, then the $n$-simplices of ${{\ensuremath{\overline{W}}}}_1(\Pi)$ are precisely the $n$-simplices of $\ner (\Pi)$, and in fact we have ${{\ensuremath{\overline{W}}}}_1(\Pi) = \ner(\Pi)$. Thus, ${{\ensuremath{\overline{W}}}}_1$ can be regarded as a functor $$\label{wbar}
{{\ensuremath{\overline{W}}}}_1:\mathbf{SCrs}_{1}/\Pi \longrightarrow \mathbf{Set}^{\boldsymbol{\Delta}^{\mathrm{op}}}/\ner(\Pi).$$
Let $\Pi$ be a groupoid, let $A:\Pi\to{{\ensuremath{\mathbf{Ab}}}}$ be a $\Pi$-module and let $\tilde{A}_1 = {\text{\sf ins}}_1(A) = \semi\Pi A$, seen as an abelian group object in ${{{\ensuremath{\mathbf{Gpd}}}}}/\Pi$. Then the simplicial object $K(\tilde{A}_1,n)\in\bpar{{{{\ensuremath{\mathbf{Gpd}}}}}/\Pi}^{\boldsymbol{\Delta}^{\mathrm{op}}} = {{{\ensuremath{\mathbf{Gpd}}}}}^{\boldsymbol{\Delta}^{\mathrm{op}}}/\Pi$ actually belongs to $\mathbf{SCrs}_{1}/\Pi$ (where $\Pi$ is regarded as the trivial “constant” simplicial groupoid) and $$\label{w bar uno} {{\ensuremath{\overline{W}}}}_1\bpar{K(\tilde{A}_1,n)} =
L_\Pi(A,n+1).$$
Most of what is said in the statement is evident, the essential part being the proof of \[w bar uno\]. This can actually be deduced from a more general isomorphism between $L_\Pi(A,n+1)$ and the nerve of a certain higher-dimensional groupoid which is actually equal to ${{\ensuremath{\overline{W}}}}_1\bpar{K(\tilde{A}_1,n)}$ (see , Prop. 2.4.9). Here we will not make use of these general facts but will indicate just how to verify the equality at the level of simplices so as to justify the dimensional jump, and will not bore the reader with the tedious verification of the equality of faces and degeneracies. Since the $m$-simplices in both simplicial sets are open horns for every $m>n+2$, it is sufficient to prove $${{{\ensuremath{\overline{W}}}}_1\bpar{K(\tilde{A}_1,n)}}_m = {L_\Pi(A,n+1)}_m,\quad
m=0,\dots,n+2.$$ On the right-hand side we have $${L_\Pi(A,n+1)}_m =
\begin{cases}
\ner (\Pi)_m &\text{if}\quad m\leq n\\
\\
\displaystyle\coprod_{\xi\in\ner (\Pi)_{n+1}} A(x_0^\xi) &\text{if}\quad m = n+1,
\\
\displaystyle\coprod_{\xi\in\ner (\Pi)_{n+2}} A(x_0^\xi)^{n+2} &\text{if}\quad m
= n+2.
\end{cases}$$ where we used the representation of the generalized Eilenberg–MacLane spaces $L_\Pi(A,n)$ given in [@BuFaGa2002]. On the other hand it is clear, using the fact that $K(\tilde{A}_1,n)_m = (\Pi\xto{1_\Pi}\Pi)\in{{{\ensuremath{\mathbf{Gpd}}}}}/\Pi$ for $m=0,\dots,n-1$, and the definition of the $m$-simplices of ${{\ensuremath{\overline{W}}}}$ given in [@DwKa1984], that for $m<n$, ${{\ensuremath{\overline{W}}}}_1\bpar{K(\tilde{A}_1,n)}_m = \ner (\Pi)_m.$ By the same token, an $n$-simplex in ${{\ensuremath{\overline{W}}}}_1 \bpar{K(\tilde{A}_1,n)}$ is a sequence $(z_n\xto{u_{n-1}}z_{n-1}\to \cdots
\to z_1\xto{u_0} z_0)$ in $\Pi$ and we get again ${{\ensuremath{\overline{W}}}}_1\bpar{K(\tilde{A}_1,n)}_n = \ner (\Pi)_n.$ Let’s now consider an $(n+1)$-simplex in ${{\ensuremath{\overline{W}}}}_1\bpar{K(\tilde{A}_1,n)}$, that is, a sequence $\xi =
(z_{n+1}\xto{(u_n,a)}z_n\xto{u_{n-1}}z_{n-1}\to \cdots \to z_1\xto{u_0} z_0)$ where the $u_i$ are arrows in $\Pi$ and furthermore $z_{n+1}\xto{(u_n,a)}z_n$ is an arrow in $\semi\Pi A$. This means that $a$ is an arbitrary element in $A(z_{n}) \cong A(z_{n+1}) = A(x_0^\xi)$ (using the isomorphism $A(u_n^{-1}):A(z_n) \to A(z_{n+1})$). Thus, ${{\ensuremath{\overline{W}}}}_1\bpar{K(\tilde{A}_1,n)}_{n+1} = \coprod_{\xi\in\ner (\Pi)_{n+1}}
A(x_0^\xi) = {L_\Pi(A,n+1)}_{n+1}$. Finally, let’s consider an $(n+2)$-simplex in ${{\ensuremath{\overline{W}}}}_1\bpar{K(\tilde{A}_1,n)}$, that is, a sequence $\xi =
(z_{n+2}\xto{(u_{n+1},\alpha)}z_{n+1}\xto{(u_{n},a)}z_{n}\to \cdots \to
z_1\xto{u_0} z_0)$ where the $u_i$ are arrows in $\Pi$ and furthermore $z_{n+1}\xto{(u_n,a)}z_n$ and $z_{n+2}\xto{(u_{n+1},\alpha)}z_{n+1}$ are arrows in $\semi\Pi A$ and $\big({\semi\Pi
A}\big)^{n+1}$ respectively. This means that $a\in A(z_n)$ and $\alpha=(a_1,\dots,a_{n+1})\in
{A(z_{n+1})}^{n+1}$, while $$\xi' = (z_{n+2}\xto{u_{n+1}}z_{n+1}\xto{u_n}z_n\to \cdots \to z_1\xto{u_0}
z_0) \in
\ner (\Pi)_{n+2}.$$ Thus, we get an $(n+2)$-simplex $(\xi',\alpha')$ in $L_\Pi(A,n+1)$ with $\alpha' = (a'_0, \dots, a'_{n+1}) \in
{A(z_{n+2})}^{n+2}$ where $$\begin{aligned}
a'_0 &= A(u_{n}u_{n+1})^{-1}(a)\\ a'_1 &= A(u_{n+1})^{-1}(a_1)\\
&\dots\\ a'_n &= A(u_{n+1})^{-1}(a_n)\\ a'_{n+1} &=
A(u_{n+1})^{-1}\Big(\sum_{i=1}^{n+1}(-1)^{n+1-i} a_{i}\Big).\end{aligned}$$ It is now straightforward to verify that the correspondence we have established between simplices in $L_\Pi(A,n+1)$ and in ${{\ensuremath{\overline{W}}}}_1\bpar{K(\tilde{A}_1,n)}$ is an $(n+2)$-truncated bijective simplicial map whose $(n+2)$-component satisfies the cocycle condition, thus determining a simplicial isomorphism.
A simple extension of Theorem 3.3 in [@DwKa1984] yields without difficulty the following
\[dwka 3.3 extended\] The functor ${{\ensuremath{\overline{W}}}}_1$ in \[wbar\] preserves fibrations and weak equivalences; it has a left adjoint $\f_1$ which also preserves fibrations and weak equivalences and, for every pair of objects $X\in\mathbf{SCrs}_{1}/\Pi$, $Y\in\mathbf{Set}^{\boldsymbol{\Delta}^{\mathrm{op}}}/\ner(\Pi)$, a map $Y\to{{\ensuremath{\overline{W}}}}_1X$ is a weak equivalence if and only if its adjoint $\f_1Y\to X$ is a weak equivalence.
It follows from this that the adjunction passes to the corresponding homotopy categories and, as a consequence, the set of classes of homotopic maps $Y\to{{\ensuremath{\overline{W}}}}_1X$ in $\mathbf{Set}^{\boldsymbol{\Delta}^{\mathrm{op}}}/\ner(\Pi)$ is in bijection with the set of classes of homotopic maps $\f_1Y\to X$ in $\mathbf{SCrs}_{1}/\Pi$. Taking $X=K(\tilde{A}_1,m)$ and $Y = \ner({{\ensuremath{\boldsymbol{\cal C}}}})$, we have
\[cor of dwka 3.3 extended\] For any crossed complex ${{\ensuremath{\boldsymbol{\cal C}}}}$, let $\Pi=\pi_1({{\ensuremath{\boldsymbol{\cal C}}}})$ be its fundamental groupoid and let $A:\Pi\to{{\ensuremath{\mathbf{Ab}}}}$ be any $\Pi$-module. Then, if $\tilde{A}_1 = \semi\Pi A$ is regarded as an abelian group object in ${{{\ensuremath{\mathbf{Gpd}}}}}/\Pi$, $$\scoH^{m+1}\bpar{{{\ensuremath{\boldsymbol{\cal C}}}},A} = \big[\f_1\ner({{\ensuremath{\boldsymbol{\cal C}}}}),
K(\tilde{A}_1,m)\big]_{\mathbf{SCrs}_{1}/\Pi}.$$
\[cor of dwka 3.3 extended continued\] In higher dimensions we will not attempt to generalize Proposition \[dwka 3.3 extended\]; instead we prove directly the higher-dimensional analogue of Corollary \[cor of dwka 3.3 extended\].
### The lifting to ${\ifthenelse{\equal{}{}}
{{\ensuremath{\mathbf{SCrs}_{2}}}}
{{\ensuremath{\left(\mathbf{SCrs}_{2}\right)_{}}}}}$.
We now define the functor ${{\ensuremath{\overline{W}}}}_2:{\ifthenelse{\equal{}{}}
{{\ensuremath{\mathbf{SCrs}_{2}}}}
{{\ensuremath{\left(\mathbf{SCrs}_{2}\right)_{}}}}} \to {\ifthenelse{\equal{}{}}
{{\ensuremath{\mathbf{SCrs}_{1}}}}
{{\ensuremath{\left(\mathbf{SCrs}_{1}\right)_{}}}}}$.
Let ${{\ensuremath{\mathbf{SGd}}}}\subseteq {\ifthenelse{\equal{{{{\ensuremath{\mathbf{Gpd}}}}}}{}}
{{\ensuremath{{{\ensuremath{\mathbf{Set}}}}^{{{\ensuremath{{{\ensuremath{\boldsymbol{\Delta}}}}^{\text{\rm\kern-0.1em o\kern-0.1em p}}}}}\times {{\ensuremath{{{\ensuremath{\boldsymbol{\Delta}}}}^{\text{\rm\kern-0.1em o\kern-0.1em p}}}}}}}}}
{{\ensuremath{{{{\ensuremath{\mathbf{Gpd}}}}}^{{{\ensuremath{{{\ensuremath{\boldsymbol{\Delta}}}}^{\text{\rm\kern-0.1em o\kern-0.1em p}}}}}\times {{\ensuremath{{{\ensuremath{\boldsymbol{\Delta}}}}^{\text{\rm\kern-0.1em o\kern-0.1em p}}}}}}}}}}$ be the full subcategory of double simplicial groupoids determined by those all of whose vertical and horizontal faces and degeneracies are (functors of groupoids which are) the identity on objects. Equivalently, [[$\mathbf{SGd}$]{}]{} is the category of groupoids enriched in double simplicial sets. We first notice that the Artin-Mazur diagonal functor, ${{\ensuremath{\overline{W}}}}_{\!\text{A-M}}$, takes objects in [[$\mathbf{SGd}$]{}]{} to objects in ${\ifthenelse{\equal{}{}}
{{\ensuremath{\mathbf{SCrs}_{1}}}}
{{\ensuremath{\left(\mathbf{SCrs}_{1}\right)_{}}}}}$, $$\xymatrix{{{\ensuremath{\mathbf{SGd}}}}\ar@{-->}[r]^{{{\ensuremath{\overline{W}}}}} \ar@{^{(}->}[d] & {\ifthenelse{\equal{}{}}
{{\ensuremath{\mathbf{SCrs}_{1}}}}
{{\ensuremath{\left(\mathbf{SCrs}_{1}\right)_{}}}}}\ar@{^{(}->}[d] \\
{{{\ensuremath{\mathbf{Gpd}}}}}^{{{\ensuremath{{{\ensuremath{\boldsymbol{\Delta}}}}^{\text{\rm\kern-0.1em o\kern-0.1em p}}}}}\times{{\ensuremath{{{\ensuremath{\boldsymbol{\Delta}}}}^{\text{\rm\kern-0.1em o\kern-0.1em p}}}}}} \ar[r]_{{{\ensuremath{\overline{W}}}}_{\!\text{A-M}}} & {{{\ensuremath{\mathbf{Gpd}}}}}^{{{\ensuremath{{{\ensuremath{\boldsymbol{\Delta}}}}^{\text{\rm\kern-0.1em o\kern-0.1em p}}}}}}
\; .}$$ On the other hand, there is an isomorphism between the category of crossed modules and the subcategory $({\ifthenelse{\equal{}{}}
{{\ensuremath{\mathbf{SCrs}_{1}}}}
{{\ensuremath{\left(\mathbf{SCrs}_{1}\right)_{}}}}})_2 \subseteq {\ifthenelse{\equal{}{}}
{{\ensuremath{\mathbf{SCrs}_{1}}}}
{{\ensuremath{\left(\mathbf{SCrs}_{1}\right)_{}}}}}$ of simplicial groupoids with trivial Moore complex in dimensions $\geq 2$ (Corollary 3.1.10 in [@Garcia2003]) which, composed with the inclusion $({\ifthenelse{\equal{}{}}
{{\ensuremath{\mathbf{SCrs}_{1}}}}
{{\ensuremath{\left(\mathbf{SCrs}_{1}\right)_{}}}}})_2\hookrightarrow{\ifthenelse{\equal{}{}}
{{\ensuremath{\mathbf{SCrs}_{1}}}}
{{\ensuremath{\left(\mathbf{SCrs}_{1}\right)_{}}}}}$ induces a functor ${\ifthenelse{\equal{{{\ensuremath{\mathbf{Xm}_{}}}}}{}}
{{\ensuremath{{{\ensuremath{\mathbf{Set}}}}^{{{\ensuremath{{{\ensuremath{\boldsymbol{\Delta}}}}^{\text{\rm\kern-0.1em o\kern-0.1em p}}}}}}}}}
{{\ensuremath{{{\ensuremath{\mathbf{Xm}_{}}}}^{{{\ensuremath{{{\ensuremath{\boldsymbol{\Delta}}}}^{\text{\rm\kern-0.1em o\kern-0.1em p}}}}}}}}}}
\rightarrow {\ifthenelse{\equal{{\ifthenelse{\equal{}{}}
{{\ensuremath{\mathbf{SCrs}_{1}}}}
{{\ensuremath{\left(\mathbf{SCrs}_{1}\right)_{}}}}}}{}}
{{\ensuremath{{{\ensuremath{\mathbf{Set}}}}^{{{\ensuremath{{{\ensuremath{\boldsymbol{\Delta}}}}^{\text{\rm\kern-0.1em o\kern-0.1em p}}}}}}}}}
{{\ensuremath{{\ifthenelse{\equal{}{}}
{{\ensuremath{\mathbf{SCrs}_{1}}}}
{{\ensuremath{\left(\mathbf{SCrs}_{1}\right)_{}}}}}^{{{\ensuremath{{{\ensuremath{\boldsymbol{\Delta}}}}^{\text{\rm\kern-0.1em o\kern-0.1em p}}}}}}}}}}$ whose restriction to ${\ifthenelse{\equal{}{}}
{{\ensuremath{\mathbf{SCrs}_{2}}}}
{{\ensuremath{\left(\mathbf{SCrs}_{2}\right)_{}}}}}$ takes its values in [[$\mathbf{SGd}$]{}]{}, $$\xymatrix{{\ifthenelse{\equal{}{}}
{{\ensuremath{\mathbf{SCrs}_{2}}}}
{{\ensuremath{\left(\mathbf{SCrs}_{2}\right)_{}}}}} \ar@{-->}[r]^\Theta \ar@{^{(}->}[d] & {{\ensuremath{\mathbf{SGd}}}}\ar@{^{(}->}[d] \\
{{\ensuremath{\mathbf{Xm}_{}}}}^{{{\ensuremath{{{\ensuremath{\boldsymbol{\Delta}}}}^{\text{\rm\kern-0.1em o\kern-0.1em p}}}}}} \ar[r] & ({\ifthenelse{\equal{}{}}
{{\ensuremath{\mathbf{SCrs}_{1}}}}
{{\ensuremath{\left(\mathbf{SCrs}_{1}\right)_{}}}}})^{{{\ensuremath{{{\ensuremath{\boldsymbol{\Delta}}}}^{\text{\rm\kern-0.1em o\kern-0.1em p}}}}}}\; .}$$ We define ${{\ensuremath{\overline{W}}}}_2$ as the composition $${{\ensuremath{\overline{W}}}}_2:{\ifthenelse{\equal{}{}}
{{\ensuremath{\mathbf{SCrs}_{2}}}}
{{\ensuremath{\left(\mathbf{SCrs}_{2}\right)_{}}}}}\xrightarrow{\Theta} {{\ensuremath{\mathbf{SGd}}}}\xrightarrow{{{\ensuremath{\overline{W}}}}} {\ifthenelse{\equal{}{}}
{{\ensuremath{\mathbf{SCrs}_{1}}}}
{{\ensuremath{\left(\mathbf{SCrs}_{1}\right)_{}}}}}.$$ We next describe the action of ${{\ensuremath{\overline{W}}}}_2$ on objects. Let $(\bg,\Sigma)\in{\ifthenelse{\equal{}{}}
{{\ensuremath{\mathbf{SCrs}_{2}}}}
{{\ensuremath{\left(\mathbf{SCrs}_{2}\right)_{}}}}}$ so that for each $n\geq0$, $\Sigma_n = (\bg,C_n,\delta_n)$ is a $\bg$-crossed module. The simplicial groupoid ${{\ensuremath{\overline{W}}}}_2(\bg,\Sigma)$ has the same objects as $\bg$. Its groupoid of 0-simplices is $ {{\ensuremath{\overline{W}}}}_2(\bg,\Sigma)_0 = \bg$; its groupoid of 1-simplices can be identified with $ {{\ensuremath{\overline{W}}}}_2(\bg,\Sigma)_1 = \semi \bg C_0$ with face and degeneracy morphisms given by the following formulas: $$\begin{split} d_0(f,a_0)& = (\delta_0)_y(a_0)f, \\ d_1(f,a_0)& =d_1^v(f)= f, \\
s_0(f)& = (f,0_{C_0(y)}).
\end{split}$$
In general, for $n\geq 2$, the set of arrows from $x$ to $y$ in the groupoid of $n$-simplices of ${{\ensuremath{\overline{W}}}}_2(\bg,\Sigma)$ is given by $$\label{formula w bar 2} Hom_{{{\ensuremath{\overline{W}}}}_2(\bg,\Sigma)_n}(x,y) = Hom_\bg(x,y) \times
C_0(y)
\times \ldots
\times C_{n-1}(y).$$ We therefore write $${{\ensuremath{\overline{W}}}}_2(\bg,\Sigma)_n= \bg * C_0 *\cdots*C_{n-1}$$ for the groupoid of $n$-simplices of ${{\ensuremath{\overline{W}}}}_2(\bg,\Sigma)$. The composition in this groupoid is given by the formula: $$\begin{gathered}
(g,b_0,b_1,\dots,b_{n-1})(f,a_0,a_1,\dots,a_{n-1}) \\ =
(gf,\,b_0+{^{(\delta_1)_z(b_1)\dots (\delta_{n-1})_z(b_{n-1})g}a_0},\dots\\
\dots,b_{n-2}+{^{(\delta_{n-1})_z(b_{n-1})g}a_{n-2}}, b_{n-1}+{^ga_{n-1}}).\end{gathered}$$ Note that again this groupoid is a kind of semidirect product. The face and degeneracy operators $$({{\ensuremath{\overline{W}}}}_2(\bg,\Sigma))_{n+1} \xleftarrow{s_j} ({{\ensuremath{\overline{W}}}}_2(\bg,\Sigma))_n
\xrightarrow{d_i} ({{\ensuremath{\overline{W}}}}_2(\bg,\Sigma))_{n-1}$$ are given by the formulas: $$\label{cydw2}
\begin{split} d_0(f,a_0,a_1,\dots,a_{n-1}) & =
((\delta_{n-1})_y(a_{n-1})f,a_0,\dots,a_{n-2}), \\
d_i(f,a_0,a_1,\dots,a_{n-1}) & =
(f,a_0,\dots\\
&\phantom{mmn}\dots,a_{n-i-2},a_{n-i-1}+d_0a_{n-i},d_1a_{n-i+1},\dots
,d_{i-1}a_{n-1}),\\
d_n(f,a_0,a_1,\dots,a_{n-1}) & = (f,d_1a_1,\dots,d_{n-1}a_{n-1}),\\
s_j(f,a_0,a_1,\dots,a_{n-1}) & =
(f,a_0,\dots,a_{n-j-1},0_{C_{n-j}(y)},s_0a_{n-j},\dots,s_{j-1}a_{n-1}). \\
\end{split}$$
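[**Remark:**]{} The composition law above can be checked mechanically in a toy case. The following Python sketch (an illustration added here, not part of the formal development; the names `act` and `compose` are ad hoc) takes $\bg$ to be the group $\mathbb{Z}/2$ acting on $A = C_0 = C_1 = \mathbb{Z}/5$ by negation, with trivial boundary maps $\delta_i = 0$; the twisting by $(\delta_1)_z(b_1)$ then disappears and the law reduces to the semidirect product $\bg\ltimes(A\times A)$, whose group axioms are verified exhaustively.

```python
# Toy check of the composition law in W_bar_2(g, Sigma)_2 = g * C_0 * C_1,
# in the one-object case with delta_i = 0, G = Z/2 acting on A = Z/5.
# With trivial delta the law reduces to
#   (g, b0, b1)(f, a0, a1) = (g f, b0 + g.a0, b1 + g.a1),
# i.e. the semidirect product G |x (A x A).
from itertools import product

G = [0, 1]           # Z/2, written additively
A = list(range(5))   # Z/5

def act(g, a):
    # Action of G on A: the nontrivial element of Z/2 negates.
    return (-a) % 5 if g == 1 else a

def compose(x, y):
    g, b0, b1 = x
    f, a0, a1 = y
    return ((g + f) % 2, (b0 + act(g, a0)) % 5, (b1 + act(g, a1)) % 5)

elems = list(product(G, A, A))
e = (0, 0, 0)  # identity arrow

# Unit and associativity hold, as they must for a groupoid of 2-simplices.
assert all(compose(e, x) == x == compose(x, e) for x in elems)
assert all(compose(compose(x, y), z) == compose(x, compose(y, z))
           for x in elems for y in elems for z in elems)
print("unit and associativity verified on", len(elems), "arrows")
```

With nonzero $\delta_i$ the first components additionally acquire the twists $(\delta_1)_z(b_1)\cdots(\delta_{n-1})_z(b_{n-1})$ appearing in the general formula.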
[**Example:**]{} If $\Pi$ is a groupoid regarded as a discrete constant simplicial crossed module, then ${{\ensuremath{\overline{W}}}}_2(\Pi)$ is equal to $\Pi$ regarded as a constant simplicial groupoid.
If $\Pi$ is a groupoid, as a consequence of ${{\ensuremath{\overline{W}}}}_2(\Pi) = \Pi$, the functor ${{\ensuremath{\overline{W}}}}_2:{\ifthenelse{\equal{}{}}
{{\ensuremath{\mathbf{SCrs}_{2}}}}
{{\ensuremath{\left(\mathbf{SCrs}_{2}\right)_{}}}}} \to
{\ifthenelse{\equal{}{}}
{{\ensuremath{\mathbf{SCrs}_{1}}}}
{{\ensuremath{\left(\mathbf{SCrs}_{1}\right)_{}}}}}$ induces a functor ${{\ensuremath{\overline{W}}}}_2:{\ifthenelse{\equal{}{}}
{{\ensuremath{\mathbf{SCrs}_{2}}}}
{{\ensuremath{\left(\mathbf{SCrs}_{2}\right)_{}}}}}/\Pi \to {\ifthenelse{\equal{}{}}
{{\ensuremath{\mathbf{SCrs}_{1}}}}
{{\ensuremath{\left(\mathbf{SCrs}_{1}\right)_{}}}}}/\Pi$. Let $A:\Pi \to
{{\ensuremath{\mathbf{Ab}}}}$ be a $\Pi$-module. If, as before, for $n\geq1$, $\textcolor{black}{\tilde{A}_n} =
{\text{\sf ins}}_n(A)$ is regarded as an abelian group object in ${\ensuremath{\mathbf{Crs}}}_n/\Pi$, we have:
\[case n eq 2 of the wbars compose to the hocol\] $${{\ensuremath{\overline{W}}}}_2\bpar{K(\textcolor{black}{\tilde{A}_2}, m)} =
K(\textcolor{black}{\tilde{A}_1},m+1).$$
$K(\textcolor{black}{\tilde{A}_2}, m)$ is the simplicial crossed module $(\Pi,\Sigma)$ where $\Sigma_n=(\Pi,C_n,0)$ with $$C_n(y) = \begin{cases} 0 &\text{if }n<m,\\ A(y) &\text{if }n = m,\\ A(y)^{m+1}
&\text{if }n = m+1.
\end{cases}$$ Thus, formula \[formula w bar 2\] yields in this case $${{\ensuremath{\overline{W}}}}_2\bpar{K(\textcolor{black}{\tilde{A}_2}, m)}_n(x,y) =
\begin{cases}
\Pi(x,y) &\text{if }n \leq m,\\
\Pi(x,y)\times A(y) &\text{if }n = m+1,\\
\Pi(x,y)\times A(y)^{m+2} &\text{if }n = m+2.
\end{cases}$$ This is the same as what one obtains for $K(\textcolor{black}{\tilde{A}_1}, m+1)$.
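For instance, spelled out for concreteness in the lowest case $m=1$, the computation above gives $${{\ensuremath{\overline{W}}}}_2\bpar{K(\textcolor{black}{\tilde{A}_2}, 1)}_n(x,y) =
\begin{cases}
\Pi(x,y) &\text{if }n \leq 1,\\
\Pi(x,y)\times A(y) &\text{if }n = 2,\\
\Pi(x,y)\times A(y)^{3} &\text{if }n = 3,
\end{cases}$$ which agrees, degree by degree, with the hom-sets of $K(\textcolor{black}{\tilde{A}_1}, 2)$.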
We now define a left adjoint to ${{\ensuremath{\overline{W}}}}_2$, similar to the “loop groupoid” functor $\f_1= G$ defined in [@DwKa1984].
Let $\Sigma_n$ be the groupoid of $n$-simplices of the simplicial groupoid $\Sigma$. We have a simplicial diagram of pre-crossed modules which in dimension $n$ has the pre-crossed module $(\Sigma_0,K_{n-1},\delta)$ and which, furthermore, has a split augmentation by $(\Sigma_0, K_0, \delta)$. Here, for $n \geq 0$, $K_n$ denotes the $\Sigma_0$-group associated to the totally disconnected groupoid defined by $\ker(d_1d_2\cdots d_{n+1}:\Sigma_{n+1}
\rightarrow
\Sigma_0)$ with action given by conjugation via $s_ns_{n-1}\cdots s_0:\Sigma_0
\rightarrow
\Sigma_{n+1}$, and $\delta$ is the natural transformation whose $x$-component, for $x \in
{\text{\sf obj}}(\Sigma_0)$, is $$\delta_x(u) = \begin{cases}d_0(u), &\text{ if } u \in
K_0(x),\\ d_0d_2\cdots d_{n+1}(u), &\text{ if }u \in K_n(x)\ \text{with}\ n\geq
1.\end{cases}$$ Note that the face and degeneracy operators $$(\Sigma_0, K_{n+1}, \delta)\xleftarrow{\sigma_j} (\Sigma_0, K_n,
\delta)\xrightarrow{\delta_i} (\Sigma_0, K_{n-1}, \delta)$$ for $1 \leq i \leq n$ and $0 \leq j \leq n$, and also the augmentation $\sigma_0:(\Sigma_0, K_n, \delta)
\rightarrow (\Sigma_0, K_{n+1}, \delta)$ for all $n \geq 0$, are given by restrictions of the faces $d_{i+1}:\Sigma_{n+1}
\rightarrow \Sigma_n$ and of the degeneracies $s_{j+1}:\Sigma_{n+1}
\rightarrow \Sigma_{n+2}$ of the simplicial groupoid $\Sigma$.
From this we build an augmented split simplicial crossed module by factoring out, in each $\Sigma_0$-group, the Peiffer elements as well as those which are images under $s_0$. This quotient determines, in each dimension, a crossed module $(\Sigma_0,\tilde{K}_n,\delta)$, and the face and degeneracy operators are compatible with the quotients. In order to obtain the simplicial crossed module $\f_2(\Sigma)$ we just need to add in each dimension a new morphism of crossed modules, given by the morphism of $\Sigma_0$-groups $\delta_0:\tilde{K}_n
\rightarrow \tilde{K}_{n-1}$, in turn determined by the natural transformation $[d_0,d_1]:K_n \rightarrow K_{n-1}$ whose component on an object $x \in {\text{\sf obj}}(\Sigma_0)$ is given by $$[d_0,d_1]_x(u) = (d_1)_x(u) (d_0)_x(u)^{-1}(s_0d_1d_0)_x(u)$$ for each $u \in K_n(x)$. The functor so defined is left adjoint to ${{\ensuremath{\overline{W}}}}_2$ (see [@Garcia2003] Prop. 4.3.9).
If we now regard a groupoid $\Pi$ first as a constant simplicial groupoid and then as a constant simplicial crossed module, it is easy to check that $\f_2(\Pi)=\Pi$. In fact, we still have an adjoint pair of functors $$\f_2\dashv{{\ensuremath{\overline{W}_{\!2}}}}, \quad
\xymatrix@C=2.2pc@R=1.2pc{ {\vphantom{{{{\ensuremath{\mathbf{Gr}}}}}^\bg}}{\ifthenelse{\equal{}{}}
{{\ensuremath{\mathbf{SCrs}_{2}}}}
{{\ensuremath{\left(\mathbf{SCrs}_{2}\right)_{}}}}}/\Pi
\ar@<-.6ex>[r]_-{{{\ensuremath{\overline{W}_{\!2}}}}}
&{\vphantom{{{\ensuremath{\mathbf{Pxm}_{\bg}}}}}}{\ifthenelse{\equal{}{}}
{{\ensuremath{\mathbf{SCrs}_{1}}}}
{{\ensuremath{\left(\mathbf{SCrs}_{1}\right)_{}}}}}/\Pi\ar@<-.4ex>[l]_-{\f_2}
} .$$ We next show that in certain cases the functors $\f_2$ and ${{\ensuremath{\overline{W}}}}_2$ preserve homotopy classes. First we note that in order that two morphisms $(F,\boldsymbol{\alpha}),(G,\boldsymbol{\beta}):(\bg, \Sigma) \rightarrow
(\bg',\Sigma')$ in ${\ifthenelse{\equal{}{}}
{{\ensuremath{\mathbf{SCrs}_{2}}}}
{{\ensuremath{\left(\mathbf{SCrs}_{2}\right)_{}}}}}$ be homotopic, both must have the same functor at the level of base groupoids, that is, $F=G$; and also that, if ${{\ensuremath{\bar{h}}}}=(F,\bh)$ is a homotopy between them with $$\bh=\{h_j^n;\; 0\leq j\leq
n\}:\boldsymbol{\alpha}\rightsquigarrow\boldsymbol{\beta}:
\Sigma\rightarrow\Sigma'$$ where $h_j^n=(F,\gamma_j^n)$, then the homotopy identities for ${{\ensuremath{\bar{h}}}}$ are essentially the homotopy identities for the homotopy $\boldsymbol{\gamma}=\{\gamma_j^n:C_n\rightarrow C'_{n+1}F\}$ between simplicial morphisms of $\bg$-groups.
The functor ${{\ensuremath{\overline{W}}}}_2$ preserves homotopy classes of simplicial morphisms.
Given a homotopy ${{\ensuremath{\bar{h}}}}=(F,\bh)$ as before, if we denote $\mathbf{F}={{\ensuremath{\overline{W}}}}_2(F,\boldsymbol{\alpha})$ and $\mathbf{F}'={{\ensuremath{\overline{W}}}}_2(F,\boldsymbol{\beta})$, then the components of $\mathbf{F}$ and $\mathbf{F}'$ in dimension $0$ are given by the functor $F_0=F'_0=F$ and in dimension $n$ by the functors $$F_n, F'_n: \bg*C_0*\cdots *C_{n-1} \rightarrow \bg'*C'_0*\cdots *C'_{n-1}.$$ These functors act on objects as the functor $F$ and, on an arrow $(f,a_0,\dots,a_{n-1})$ with codomain $y$, they act thus: $$\begin{array}{c} F_n(f,a_0,\dots,a_{n-1}) =
\big(F(f),(\alpha_0)_y(a_0),\dots,(\alpha_{n-1})_y(a_{n-1})\big),\\ [1pc]
F'_n(f,a_0,\dots,a_{n-1}) = \big(F(f),(\beta_0)_y(a_0),\dots,(\beta_{n-1})_y(a_{n-1})\big).
\end{array}$$ We have a homotopy $\mathbf{H}:\mathbf{F}\rightsquigarrow \mathbf{F}':
{{\ensuremath{\overline{W}}}}_2(\bg,\Sigma)
\rightarrow {{\ensuremath{\overline{W}}}}_2(\bg',\Sigma')$ with $$\mathbf{H}=\{H_j^n:\bg*C_0*\cdots *C_{n-1} \rightarrow \bg'*C'_0*\cdots *C'_n;\; 0
\leq j \leq n
\},$$ where $H_j^n:\bg*C_0*\cdots *C_{n-1} \rightarrow \bg'*C'_0*\cdots *C'_n$, for $0\leq j \leq n$, acts on objects as the functor $F$ and, on each arrow $(f,a_0,\dots,a_{n-1})$ with codomain $y$, it acts as follows: $$\begin{gathered}
H_j^n(f,a_0,\dots,a_{n-1})=
\big(F(f),(\alpha_0)_y(a_0),\dots\\
\dots,(\alpha_{n-j-1})_y(a_{n-j-1}),0,
(\gamma_0^{n-j})_y(a_{n-j}),\dots, (\gamma_{j-1}^{n-1})_y(a_{n-1})\big).\end{gathered}$$ It is easy to check that the homotopy identities for $\mathbf{H}$ follow from the corresponding identities for $\boldsymbol{\gamma}$. The details are in [@Garcia2003].
The functor $\f_2$ does not behave the same way as ${{\ensuremath{\overline{W}}}}_2$ with respect to homotopies. However, in certain cases $\f_2$ does take homotopic morphisms to homotopic morphisms. One of these cases is the following:
Let $\mathbf{F},\mathbf{F}':\Sigma \rightarrow \Sigma'$ be two morphisms in ${\ifthenelse{\equal{}{}}
{{\ensuremath{\mathbf{SCrs}_{1}}}}
{{\ensuremath{\left(\mathbf{SCrs}_{1}\right)_{}}}}}$ such that $F_0 = F'_0$, and let $\mathbf{H}$ be a homotopy between them such that $$H_0^0 = s_0F_0 \qquad\mbox{and}\qquad H_i^j = s_iF_j \mbox{\; if \;} i<j$$ (note that $H_j^j$ is arbitrary for $j>0$). Then the morphisms of simplicial crossed modules $\f_2(\mathbf{F})$ and $\f_2(\mathbf{F}')$ in ${\ifthenelse{\equal{}{}}
{{\ensuremath{\mathbf{SCrs}_{2}}}}
{{\ensuremath{\left(\mathbf{SCrs}_{2}\right)_{}}}}}$ are homotopic.
The two previous lemmas extend easily to slice categories. We have:
\[w2 conserva homo\] For each groupoid $\Pi$, the functor ${{\ensuremath{\overline{W}}}}_2:{\ifthenelse{\equal{}{}}
{{\ensuremath{\mathbf{SCrs}_{2}}}}
{{\ensuremath{\left(\mathbf{SCrs}_{2}\right)_{}}}}}/\Pi
\rightarrow
{\ifthenelse{\equal{}{}}
{{\ensuremath{\mathbf{SCrs}_{1}}}}
{{\ensuremath{\left(\mathbf{SCrs}_{1}\right)_{}}}}}/\Pi$ preserves homotopy classes of simplicial morphisms in the corresponding slice categories.
\[f2 conserva homo\] Let $\Pi$ be a groupoid, $\mathbf{F}$ and $\mathbf{F}'$ two morphisms in ${\ifthenelse{\equal{}{}}
{{\ensuremath{\mathbf{SCrs}_{1}}}}
{{\ensuremath{\left(\mathbf{SCrs}_{1}\right)_{}}}}}/\Pi$ $$\xymatrix{ \Sigma \ar@<0.6ex>[rr]^{\mathbf{F}} \ar@<-0.6ex>[rr]_{\mathbf{F}'}
\ar[dr] & & \Sigma'
\ar[dl] \\ &\Pi& }$$ such that $F_0 = F'_0$ and let $\mathbf{H}$ be a homotopy between them in ${\ifthenelse{\equal{}{}}
{{\ensuremath{\mathbf{SCrs}_{1}}}}
{{\ensuremath{\left(\mathbf{SCrs}_{1}\right)_{}}}}}/\Pi$ such that $$H_0^0 = s_0F_0 \qquad\mbox{and}\qquad H_i^j = s_iF_j, \mbox{ if }\ i<j.$$ Then the morphisms of simplicial crossed modules $\f_2(\mathbf{F})$ and $\f_2(\mathbf{F}')$ in the slice category ${\ifthenelse{\equal{}{}}
{{\ensuremath{\mathbf{SCrs}_{2}}}}
{{\ensuremath{\left(\mathbf{SCrs}_{2}\right)_{}}}}}/\Pi$ are homotopic.
Note that if in Lemma \[f2 conserva homo\] we take $\Sigma'=K(\tilde{A}_1,n) \in {\ifthenelse{\equal{}{}}
{{\ensuremath{\mathbf{SCrs}_{1}}}}
{{\ensuremath{\left(\mathbf{SCrs}_{1}\right)_{}}}}}/\Pi$ for any $\Pi$-module $A$, then for any two homotopic morphisms in ${\ifthenelse{\equal{}{}}
{{\ensuremath{\mathbf{SCrs}_{1}}}}
{{\ensuremath{\left(\mathbf{SCrs}_{1}\right)_{}}}}}/\Pi$ with codomain $K(\tilde{A}_1,n)$ there is a homotopy $\mathbf{H}$ satisfying the hypotheses of that lemma, and therefore we can conclude that the functor $\f_2$ preserves homotopy classes of morphisms with codomain $K(\tilde{A}_1,n)$.
From the preceding reasoning it follows immediately:
Let $\Pi$ be a groupoid, $X$ a simplicial groupoid above $\Pi$, and let $A$ be a $\Pi$-module. Then the adjunction $\f_2\dashv{{\ensuremath{\overline{W}_{\!2}}}}$, $$\xymatrix@C=2.2pc@R=1.2pc{ {\vphantom{{{{\ensuremath{\mathbf{Gr}}}}}^\bg}}{\ifthenelse{\equal{}{}}
{{\ensuremath{\mathbf{SCrs}_{2}}}}
{{\ensuremath{\left(\mathbf{SCrs}_{2}\right)_{}}}}}/\Pi
\ar@<-.6ex>[r]_-{{{\ensuremath{\overline{W}_{\!2}}}}}
&{\vphantom{{{\ensuremath{\mathbf{Pxm}_{\bg}}}}}}{\ifthenelse{\equal{}{}}
{{\ensuremath{\mathbf{SCrs}_{1}}}}
{{\ensuremath{\left(\mathbf{SCrs}_{1}\right)_{}}}}}/\Pi\ar@<-.4ex>[l]_-{\f_2}
}$$ induces an isomorphism in homotopy classes: $$\label{eq iso in homotopy}
\left[\f_2(X), K(\tilde{A}_{2},n) \right]_{{\ifthenelse{\equal{}{}}
{{\ensuremath{\mathbf{SCrs}_{2}}}}
{{\ensuremath{\left(\mathbf{SCrs}_{2}\right)_{}}}}}/\Pi} \cong
\left[X , {{\ensuremath{\overline{W}_{\!2}}}}\bpar{K(\tilde{A}_{2},n)} \right]_{{\ifthenelse{\equal{}{}}
{{\ensuremath{\mathbf{SCrs}_{1}}}}
{{\ensuremath{\left(\mathbf{SCrs}_{1}\right)_{}}}}}/\Pi} .$$
### The lifting to ${\ifthenelse{\equal{}{}}
{{\ensuremath{\mathbf{SCrs}_{n}}}}
{{\ensuremath{\left(\mathbf{SCrs}_{n}\right)_{}}}}}$ for $n \geq 3$.
For $n\geq3$ each object of ${\ifthenelse{\equal{}{}}
{{\ensuremath{\mathbf{SCrs}_{n}}}}
{{\ensuremath{\left(\mathbf{SCrs}_{n}\right)_{}}}}}$ can be represented by a diagram of this form: $$\label{scrsn obj}
\xy
(-10,0)*+{\ldots}="z",
(0,0)*+{\c_n^i}="a",
(20,0)*+{\c_n^{i-1}}="b",
(30,0)*+{\ldots}="c",
(40,0)*+{\c_n^1}="d",
(60,0)*+{\c_n^0\,.}="e",
(30,-10)*+{\c_{n-1}}="g",
(30,-20)*+{\c_{n-2}}="h",
(30,-30)*+{\vdots}="i",
(30,-40)*+{\c_2}="j",
(30,-50)*+{{{\boldsymbol1}_{_{\bg}}}}="k",
\ar @/^-4ex/ "b";"a" |{s_0}
\ar @/^-5ex/@{..} "b";"a"
\ar @/^-6ex/ "b";"a" |{s_{i-1}}
\ar @{->}^{d_i} "a";"b" <3pt>
\ar @{..} "a";"b"
\ar @{->}_{d_0} "a";"b" <-3pt>
\ar @/^-4ex/ "e";"d" |{s_0}
\ar @{->}^{d_1} "d";"e" <3pt>
\ar @{->}_{d_0} "d";"e" <-3pt>
\ar @/^-3ex/ "a";"g"_{\partial_n^i}
\ar @{->}_{\partial_n^{i-1}} "b";"g"
\ar @{->}^{\partial_n^1} "d";"g"
\ar @/^3ex/ "e";"g"^{\partial_n^0}
\ar @{->}^{\partial_{n-1}} "g";"h"
\ar @{->} "h";"i"
\ar @{->} "i";"j"
\ar @{->}^{\partial_2} "j";"k"
\endxy$$ Giving such an object is equivalent to giving its “head”: $$\label{scrsn head}
\xy
(-20,0)*+{\bc_n:}="w",
(-10,0)*+{\ldots}="z",
(0,0)*+{\c_n^i}="a",
(20,0)*+{\c_n^{i-1}}="b",
(30,0)*+{\ldots}="c",
(40,0)*+{\c_n^1}="d",
(60,0)*+{\c_n^0}="e",
\ar @/^-4ex/ "b";"a" |{s_0}
\ar @/^-5ex/@{..} "b";"a"
\ar @/^-6ex/ "b";"a" |{s_{i-1}}
\ar @{->}^{d_i} "a";"b" <3pt>
\ar @{..} "a";"b"
\ar @{->}_{d_0} "a";"b" <-3pt>
\ar @/^-4ex/ "e";"d" |{s_0}
\ar @{->}^{d_1} "d";"e" <3pt>
\ar @{->}_{d_0} "d";"e" <-3pt>
\endxy$$ (a simplicial complex of $\bg$-modules, where $\bg$ is the base groupoid of all the involved crossed complexes), its “tail” $${{\ensuremath{\boldsymbol{\cal C}}}}=(\c_{n-1} \xto{\partial_{n-1}} \c_{n-2} \rightarrow \ldots \to \c_2
\xto{\partial_2}
{{\boldsymbol1}_{_{\bg}}})$$ (an $(n-1)$-crossed complex), and an augmentation $\bc_n\xrightarrow{\partial_n^0} C_{n-1}$ of $\bc_n$ over $C_{n-1}=\techo_{n-1}({{\ensuremath{\boldsymbol{\cal C}}}})$ such that the compositions $$C_n^0\xrightarrow{\partial_n^0}C_{n-1}\xrightarrow{\partial_{n-1}}
C_{n-2}\qquad\mbox{and}\qquad
\hat{C}_2\xrightarrow{\partial_2}\g\xrightarrow{C_n^0}{{\ensuremath{\mathbf{Ab}}}}$$ are trivial.
We will represent the above $n$-crossed complex by the pair $({{\ensuremath{\boldsymbol{\cal C}}}},\bc_n)$ or the triple $(\bg,{{\ensuremath{\boldsymbol{\cal C}}}},\bc_n)$ in case we want to make explicit the base groupoid.
A map from $({{\ensuremath{\boldsymbol{\cal C}}}}, \bc_n)$ to $({{\ensuremath{\boldsymbol{\cal C}}}}', \bc_n')$ in ${\ifthenelse{\equal{}{}}
{{\ensuremath{\mathbf{SCrs}_{n}}}}
{{\ensuremath{\left(\mathbf{SCrs}_{n}\right)_{}}}}}$ is a pair $(\bf,\boldsymbol{\alpha})$ where $\bf:{{\ensuremath{\boldsymbol{\cal C}}}}\rightarrow {{\ensuremath{\boldsymbol{\cal C}}}}'$ is a morphism of $(n-1)$-crossed complexes with $F=
{\text{\sf base}}(\bf)$ and $\boldsymbol{\alpha}:\bc_n \rightarrow F^\ast\,\bc_n'$ is a simplicial map of $\bg$-modules where $\bg={\text{\sf base}}({{\ensuremath{\boldsymbol{\cal C}}}})$, $\bg'={\text{\sf base}}({{\ensuremath{\boldsymbol{\cal C}}}}')$, and $F^\ast:{{\ensuremath{\mathbf{Ab}}}}^{\bg'} \to {{\ensuremath{\mathbf{Ab}}}}^\bg$ is the functor induced by $F={\text{\sf base}}(\bf)$.
It is possible to give a definition of ${{\ensuremath{\overline{W}}}}_n$ in terms of the Artin-Mazur diagonal as for the case $n=2$. But for $n>2$ it is also possible to give a direct description without going through double simplicial $n$-crossed complexes: $${{\ensuremath{\overline{W}}}}_n({{\ensuremath{\boldsymbol{\cal C}}}}, \bc_n) = (T_{n-2}({{\ensuremath{\boldsymbol{\cal C}}}}), \bc_{n-1})$$ where $\bc_{n-1}$ is the simplicial $\bg$-module (where $\bg={\text{\sf base}}({{\ensuremath{\boldsymbol{\cal C}}}})$) augmented over $C_{n-2} = \techo_{n-2}({{\ensuremath{\boldsymbol{\cal C}}}})$ given by $$C^0_{n-1}=C_{n-1}\qquad\mbox{and}\qquad C^i_{n-1}= C_n^{i-1} \oplus \cdots
\oplus C_n^0 \oplus C_{n-1},\qquad\mbox{if}\; 1\leq i,$$ where $\c_n^i =(\bg,C_n^i,0)$ for $i \geq 0$ and with augmentation $\partial_{n-1}^0=\partial_{n-1}:C_{n-1}\rightarrow C_{n-2}$ and with face and degeneracy operators $$\begin{aligned}
\label{cydwn} (d_0)_x(u_{i-1},\dots,u_0,u) & =(u_{i-2},\dots,u_0,
\partial_n^{i-1}(u_{i-1})+u)\\ (d_i)_x(u_{i-1},\dots,u_0,u) & =
(d_{i-1}u_{i-1},\dots,d_1u_1, u)\\ (d_j)_x(u_{i-1},\dots,u_0,u) & =
(d_{i-1}u_{i-1},\dots,d_1u_{i-j+1},d_0u_{i-j}+u_{i-j-1},\dots,u_0,u)\tag{if $1\leq
j < i$}\\ (s_j)_x(u_{i-1},\dots,u_0,u) & =
(s_{j-1}u_{i-1},\dots,s_0u_{i-j},0,u_{i-j-1},\dots,u_0, u)\tag{if $0
\leq j \leq i$}.\end{aligned}$$ It is obvious how ${{\ensuremath{\overline{W}}}}_n$ acts on morphisms.
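[**Remark:**]{} The simplicial identities for the operators \[cydwn\] can be verified mechanically in the degenerate situation where all internal faces and degeneracies of $\bc_n$ are identities and all boundary maps $\partial_n^i$ vanish, so that an $i$-simplex is just a tuple $(u_{i-1},\dots,u_0,u)$ of integers. The following Python sketch (added here as an illustration; the names `face` and `degeneracy` are ad hoc) encodes the formulas under these assumptions and checks $d_ad_b=d_{b-1}d_a$ for $a<b$, $s_as_b=s_{b+1}s_a$ for $a\leq b$, and the mixed identities numerically.

```python
import random
from itertools import combinations

def face(k, xs):
    # xs = [u_{p-1}, ..., u_0, u] is a p-simplex, p = len(xs) - 1.
    # Internal faces/degeneracies are identities, boundaries are zero.
    p = len(xs) - 1
    if k == 0:            # d_0 drops u_{p-1} (its boundary term vanishes here)
        return xs[1:]
    if k == p:            # the top face drops u_0
        return xs[:p-1] + [xs[p]]
    # 1 <= k < p: add the adjacent entries u_{p-k} + u_{p-k-1}
    return xs[:k-1] + [xs[k-1] + xs[k]] + xs[k+1:]

def degeneracy(j, xs):
    # s_j inserts a zero in slot j.
    return xs[:j] + [0] + xs[j:]

random.seed(0)
for p in range(2, 6):
    xs = [random.randint(-9, 9) for _ in range(p + 1)]
    # d_a d_b = d_{b-1} d_a  for a < b
    for a, b in combinations(range(p + 1), 2):
        assert face(a, face(b, xs)) == face(b - 1, face(a, xs))
    # s_a s_b = s_{b+1} s_a  for a <= b
    for a in range(p + 1):
        for b in range(a, p + 1):
            assert degeneracy(a, degeneracy(b, xs)) == degeneracy(b + 1, degeneracy(a, xs))
    # mixed identities d_a s_b
    for b in range(p + 1):
        sx = degeneracy(b, xs)          # a (p+1)-simplex
        for a in range(p + 2):
            if a < b:
                assert face(a, sx) == degeneracy(b - 1, face(a, xs))
            elif a in (b, b + 1):
                assert face(a, sx) == xs
            else:
                assert face(a, sx) == degeneracy(b, face(a - 1, xs))
print("simplicial identities hold in degrees 2..5")
```

With nontrivial internal operators and boundaries the entries are additionally transported by the internal $d$'s, $s$'s and $\partial_n^i$ exactly as in \[cydwn\].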
If a groupoid $\bg$ is regarded as the simplicial $n$-crossed complex $(\bg,0)$, we have $${{\ensuremath{\overline{W}}}}_n(\bg)={{\ensuremath{\overline{W}}}}_n(\bg,0)=(\bg,0)=\bg.$$ As a consequence, ${{\ensuremath{\overline{W}}}}_n$ induces a functor between slice categories, $${{\ensuremath{\overline{W}}}}_n: {\ifthenelse{\equal{}{}}
{{\ensuremath{\mathbf{SCrs}_{n}}}}
{{\ensuremath{\left(\mathbf{SCrs}_{n}\right)_{}}}}}/\bg \to {\ifthenelse{\equal{}{}}
{{\ensuremath{\mathbf{SCrs}_{n-1}}}}
{{\ensuremath{\left(\mathbf{SCrs}_{n-1}\right)_{}}}}}/\bg\,.$$
If $({{\ensuremath{\boldsymbol{\cal C}}}},\bc_n)\in{\ifthenelse{\equal{}{}}
{{\ensuremath{\mathbf{SCrs}_{n}}}}
{{\ensuremath{\left(\mathbf{SCrs}_{n}\right)_{}}}}}$ is such that $\bc_n$ has trivial Moore complex in dimensions $\geq m$, then upon applying ${{\ensuremath{\overline{W}}}}_n$, we get ${{\ensuremath{\overline{W}}}}_n({{\ensuremath{\boldsymbol{\cal C}}}},
\bc_n) = (T_{n-2}({{\ensuremath{\boldsymbol{\cal C}}}}), \bc_{n-1})$, where $\bc_{n-1}$ again has trivial Moore complex in dimensions $\geq m+1$.
[**Example:**]{} Let $\Pi$ be a groupoid and $A:\Pi
\rightarrow
{{\ensuremath{\mathbf{Ab}}}}$ a $\Pi$-module. For each $m \geq 1$, the triple $(\Pi,\Pi,K(A, m))$ determines an $n$-simplicial crossed complex in ${\ifthenelse{\equal{}{}}
{{\ensuremath{\mathbf{SCrs}_{n}}}}
{{\ensuremath{\left(\mathbf{SCrs}_{n}\right)_{}}}}}$. By the previous lemma, $${{\ensuremath{\overline{W}}}}_n\big(\Pi, \Pi, K(A, m)\big) \cong (\Pi,\Pi,K(A,m+1)).$$ On the other hand, $$K(\tilde{A}_n,m) = \big(\Pi, \Pi, K(A, m)\big)$$ and therefore we have,
\[the wbars compose to the hocol\] For every $n \geq 1$, $${{\ensuremath{\overline{W}}}}_n\big(K(\tilde{A}_n,m)\big) = K(\tilde{A}_{n-1},m+1).$$
We now define the left adjoint to ${{\ensuremath{\overline{W}}}}_n$ on objects. Given $(\bg,{{\ensuremath{\boldsymbol{\cal C}}}},\bc_{n-1})
\in {\ifthenelse{\equal{}{}}
{{\ensuremath{\mathbf{SCrs}_{n-1}}}}
{{\ensuremath{\left(\mathbf{SCrs}_{n-1}\right)_{}}}}}$, the $\bg$-modules $$\begin{array}{lc} K_0={\text{\sf ker}}(d_1:C_{n-1}^1 \rightarrow C_{n-1}^0), & \\ [1pc] K_i
=
{\text{\sf ker}}(d_1d_2\cdots d_{i+1}:C_{n-1}^{i+1} \rightarrow C_{n-1}^0), & i \geq 1.
\end{array}$$ together with the operators $d_j:K_i \rightarrow K_{i-1}$ and $s_{j-1}:K_{i-1} \rightarrow K_i$ induced by the face operators $d_j:C_{n-1}^{i+1} \rightarrow
C_{n-1}^i$ and degeneracies $s_{j-1}:C_{n-1}^i \rightarrow C_{n-1}^{i+1}$, for $2 \leq j \leq i+1$, define an augmented split simplicial complex of $\bg$-modules. Factoring out by the image under $s_0:C_{n-1}^i \rightarrow C_{n-1}^{i+1}$ of $K_{i-1}$ we get $\bg$-modules $$\widetilde{K}_i= K_i/ s_0(K_{i-1})$$ which again determine an augmented split simplicial complex of $\bg$-modules with face operators $\delta_j$ and degeneracies $\sigma_j$ induced on the corresponding quotients by the operators $d_{j+1}$ and $s_{j+1}$. Finally, the natural transformation whose component at an object $x\in {\text{\sf obj}}(\bg)$ is given by $$(\delta_0)_x(u) = (d_1)_x(u)\, (d_0)_x(u)^{-1}(s_0d_1d_0)_x(u),$$ for each $u \in K_i(x)$, determines another morphism, also denoted $\delta_0:
\widetilde{K}_i
\rightarrow \widetilde{K}_{i-1}$, for $i>0$, which together with the previous diagram provides us with a simplicial complex of $\bg$-modules which will be denoted $\widetilde{\mathbf{K}}_{n}$. Furthermore, the restriction to $K_0$ of the face operator $d_0:C_{n-1}^1
\rightarrow C_{n-1}^0$ induces a morphism $\partial_n^0:\widetilde{K}_0 \rightarrow C_{n-1}^0$ such that $\widetilde{\mathbf{K}}_n
\xrightarrow{\partial_n^0} C_{n-1}^0$ is an augmented simplicial $\bg$-module. Thus, we define $$\f_n(\bg,{{\ensuremath{\boldsymbol{\cal C}}}},\bc_{n-1}) = (\bg,{{\ensuremath{\boldsymbol{\cal C}}}}_{n-1}^0,\widetilde{\mathbf{K}}_n),$$ where ${{\ensuremath{\boldsymbol{\cal C}}}}_{n-1}^0$ is the $(n-1)$-crossed complex whose $(n-1)$-truncation is ${{\ensuremath{\boldsymbol{\cal C}}}}$ ($T_{n-1}({{\ensuremath{\boldsymbol{\cal C}}}}_{n-1}^0) ={{\ensuremath{\boldsymbol{\cal C}}}}$), and is such that $\techo_{n-1}({{\ensuremath{\boldsymbol{\cal C}}}}_{n-1}^0) = C_{n-1}^0$.
It is now a routine exercise to define this functor on arrows and to verify that it is left adjoint to ${{\ensuremath{\overline{W}}}}_n$.
For every $n > 3$, the functor ${{\ensuremath{\overline{W}}}}_n:{\ifthenelse{\equal{}{}}
{{\ensuremath{\mathbf{SCrs}_{n}}}}
{{\ensuremath{\left(\mathbf{SCrs}_{n}\right)_{}}}}} \to {\ifthenelse{\equal{}{}}
{{\ensuremath{\mathbf{SCrs}_{n-1}}}}
{{\ensuremath{\left(\mathbf{SCrs}_{n-1}\right)_{}}}}}$ is an equivalence of categories whose inverse is $\f_n$, and as a consequence, for $n>3$ the categories ${\ifthenelse{\equal{}{}}
{{\ensuremath{\mathbf{SCrs}_{n}}}}
{{\ensuremath{\left(\mathbf{SCrs}_{n}\right)_{}}}}}$ are equivalent to ${\ifthenelse{\equal{}{}}
{{\ensuremath{\mathbf{SCrs}_{3}}}}
{{\ensuremath{\left(\mathbf{SCrs}_{3}\right)_{}}}}}$.
Let $\Pi$ be a groupoid, regarded as a crossed complex. Evidently $\Pi$ can be regarded as a $k$-crossed complex for any $k$, and we will do so as needed. Furthermore, we can consider the constant simplicial crossed complex ${{\ensuremath{\Pi^{\scriptscriptstyle\text{ct}}}}}$ as an object in ${\ifthenelse{\equal{}{}}
{{\ensuremath{\mathbf{SCrs}_{k}}}}
{{\ensuremath{\left(\mathbf{SCrs}_{k}\right)_{}}}}}$ for any $k$. Such objects satisfy ${{\ensuremath{\overline{W}}}}({{\ensuremath{\Pi^{\scriptscriptstyle\text{ct}}}}}) =
{{\ensuremath{\Pi^{\scriptscriptstyle\text{ct}}}}}$ so that the functors ${{\ensuremath{\overline{W}}}}_k:{\ifthenelse{\equal{}{}}
{{\ensuremath{\mathbf{SCrs}_{k}}}}
{{\ensuremath{\left(\mathbf{SCrs}_{k}\right)_{}}}}} \to {\ifthenelse{\equal{}{}}
{{\ensuremath{\mathbf{SCrs}_{k-1}}}}
{{\ensuremath{\left(\mathbf{SCrs}_{k-1}\right)_{}}}}}$ induce functors ${{\ensuremath{\overline{W}}}}_k:{\ifthenelse{\equal{}{}}
{{\ensuremath{\mathbf{SCrs}_{k}}}}
{{\ensuremath{\left(\mathbf{SCrs}_{k}\right)_{}}}}}/{{\ensuremath{\Pi^{\scriptscriptstyle\text{ct}}}}} \to {\ifthenelse{\equal{}{}}
{{\ensuremath{\mathbf{SCrs}_{k-1}}}}
{{\ensuremath{\left(\mathbf{SCrs}_{k-1}\right)_{}}}}}/{{\ensuremath{\Pi^{\scriptscriptstyle\text{ct}}}}}$. In fact, for every $k\geq1$, we still have an adjoint pair of functors $$\f_k\dashv{{\ensuremath{\overline{W}_{\!k}}}}, \quad
\xymatrix@C=2.2pc@R=1.2pc{ {\vphantom{{{{\ensuremath{\mathbf{Gr}}}}}^\bg}}{\ifthenelse{\equal{}{}}
{{\ensuremath{\mathbf{SCrs}_{k}}}}
{{\ensuremath{\left(\mathbf{SCrs}_{k}\right)_{}}}}}/\Pi
\ar@<-.6ex>[r]_-{{{\ensuremath{\overline{W}_{\!k}}}}}
&{\vphantom{{{\ensuremath{\mathbf{Pxm}_{\bg}}}}}}{\ifthenelse{\equal{}{}}
{{\ensuremath{\mathbf{SCrs}_{k-1}}}}
{{\ensuremath{\left(\mathbf{SCrs}_{k-1}\right)_{}}}}}/\Pi\ar@<-.4ex>[l]_-{\f_k}
} .$$ We next show that in certain cases the functors $\f_n$ and ${{\ensuremath{\overline{W}}}}_n$ preserve homotopy classes. First we note that the restriction to ${\ifthenelse{\equal{}{}}
{{\ensuremath{\mathbf{SCrs}_{n}}}}
{{\ensuremath{\left(\mathbf{SCrs}_{n}\right)_{}}}}}$ of the functor $(T_{n-1})_\ast:{\ifthenelse{\equal{{{\ensuremath{\mathbf{Crs}}}_n}}{}}
{{\ensuremath{{{\ensuremath{\mathbf{Set}}}}^{{{\ensuremath{{{\ensuremath{\boldsymbol{\Delta}}}}^{\text{\rm\kern-0.1em o\kern-0.1em p}}}}}}}}}
{{\ensuremath{{{\ensuremath{\mathbf{Crs}}}_n}^{{{\ensuremath{{{\ensuremath{\boldsymbol{\Delta}}}}^{\text{\rm\kern-0.1em o\kern-0.1em p}}}}}}}}}} \rightarrow {\ifthenelse{\equal{{{\ensuremath{\mathbf{Crs}}}_{n-1}}}{}}
{{\ensuremath{{{\ensuremath{\mathbf{Set}}}}^{{{\ensuremath{{{\ensuremath{\boldsymbol{\Delta}}}}^{\text{\rm\kern-0.1em o\kern-0.1em p}}}}}}}}}
{{\ensuremath{{{\ensuremath{\mathbf{Crs}}}_{n-1}}^{{{\ensuremath{{{\ensuremath{\boldsymbol{\Delta}}}}^{\text{\rm\kern-0.1em o\kern-0.1em p}}}}}}}}}}$ takes homotopies to homotopies. Thus, a homotopy ${{\ensuremath{\bar{h}}}}:(F,\bf,\boldsymbol{\alpha})\rightsquigarrow (F,\bf,\boldsymbol{\beta})$ is given as a pair $${{\ensuremath{\bar{h}}}}=(\bf,\bh),$$ with $$\bh =\{h_j^i:C_n^i \rightarrow C_n^{\prime\, i+1}F;\; 0\leq j\leq
i\}:\boldsymbol{\alpha}\rightsquigarrow \boldsymbol{\beta}: \bc_n\rightarrow
F^\ast \bc_n'$$ a homotopy in the category ${\ifthenelse{\equal{({{\ensuremath{\mathbf{Ab}}}}^\bg)}{}}
{{\ensuremath{{{\ensuremath{\mathbf{Set}}}}^{{{\ensuremath{{{\ensuremath{\boldsymbol{\Delta}}}}^{\text{\rm\kern-0.1em o\kern-0.1em p}}}}}}}}}
{{\ensuremath{({{\ensuremath{\mathbf{Ab}}}}^\bg)^{{{\ensuremath{{{\ensuremath{\boldsymbol{\Delta}}}}^{\text{\rm\kern-0.1em o\kern-0.1em p}}}}}}}}}}$. Furthermore, the homotopy identities for ${{\ensuremath{\bar{h}}}}$ follow from the homotopy identities for $\bh$.
We can now prove
\[wnpch\] The functor ${{\ensuremath{\overline{W}}}}_n$ preserves homotopy classes of simplicial morphisms.
Let ${{\ensuremath{\bar{h}}}}=(\bf,\bh)$ be a homotopy as before, and let us put ${{\ensuremath{\overline{W}}}}_n(F,\bf,\boldsymbol{\alpha})= \big(F,T_{n-2}(\bf),\boldsymbol{\alpha}'\big)$ and ${{\ensuremath{\overline{W}}}}_n(F, \bf, \boldsymbol{\beta}) = \big(F, T_{n-2}(\bf),
\boldsymbol{\beta}'\big)$, where $\boldsymbol{\alpha}'$ and $\boldsymbol{\beta}'$ are simplicial morphisms of $\bg$-modules given in dimension $i$ by $$\begin{array}{c}
\alpha_{n-1}^{\prime\, i}(u_{i-1},\dots,u_0,u) = (\alpha_n^{i-1}(u_{i-1}),\dots,
\alpha_n^0(u_0),\alpha_{n-1}(u)),\\ [1pc] \beta_{n-1}^{\prime\,
i}(u_{i-1},\dots,u_0,u) =
(\beta_n^{i-1}(u_{i-1}),\dots,\beta_n^0(u_0),\beta_{n-1}(u)),
\end{array}$$ for each $(u_{i-1},\dots,u_0,u) \in C_n^{i-1}(x)\oplus \cdots \oplus C_n^0(x)
\oplus C_{n-1}(x)$. Then the required homotopy is ${{\ensuremath{\bar{h}}}}'=(T_{n-2}(\bf),\bar{\bh})$, with $$\bar{\bh}=\{\bar{h}_j^i;\; 0 \leq j \leq i\}:\boldsymbol{\alpha}' \rightsquigarrow
\boldsymbol{\beta}'$$ where $\bar{h}_j^i$ is the natural transformation whose $x$-component for $x
\in {\text{\sf obj}}(\bg)$ acts thus: $$\begin{gathered}
{{\ensuremath{\bar{h}}}}_j^i(u_{i-1},\dots,u_0,u) =
\big(h_{j-1}^{i-1}(u_{i-1}),\dots,h_0^{i-j}(u_{i-j}),0,\alpha_n^{i-j-1}(u_{i-j-1}),\dots
\\
\dots, \alpha_n^0(u_0), \alpha_{n-1}(u)\big),\end{gathered}$$ for each $(u_{i-1},\dots,u_0,u) \in C_n^{i-1}(x)\oplus \cdots
\oplus C_n^0(x) \oplus C_{n-1}(x)$. It is immediate to check that the homotopy identities for $\bar{\bh}$ follow from the corresponding identities satisfied by $\bh$ (see [@Garcia2003], Lemma 4.3.23, for the details).
The functor $\f_n$ does not behave the same way as ${{\ensuremath{\overline{W}}}}_n$ with respect to homotopies. However, in certain cases $\f_n$ does take homotopic morphisms to homotopic morphisms. One of these cases is the following:
Let $(F,\bf,\boldsymbol{\alpha}),
(F,\bf,\boldsymbol{\beta}):(\bg,{{\ensuremath{\boldsymbol{\cal C}}}},\bc_{n-1}) \rightarrow (\bg',{{\ensuremath{\boldsymbol{\cal C}}}}',\bc'_{n-1})$ be morphisms in the category ${\ifthenelse{\equal{}{}}
{{\ensuremath{\mathbf{SCrs}_{n-1}}}}
{{\ensuremath{\left(\mathbf{SCrs}_{n-1}\right)_{}}}}}$ such that $\alpha_{n-1}^0 = \beta_{n-1}^0:C_{n-1}^0 \rightarrow
C_{n-1}^{' 0}F$ and let ${{\ensuremath{\bar{h}}}}=(\bf,\bh)$ be a homotopy between the above morphisms such that $$h_0^0 = s_0\alpha_{n-1}^0 \qquad\mbox{and}\qquad h_i^j = s_i \alpha_{n-1}^j
\mbox{\; if \;} i<j$$ (note that $h_j^j$ is arbitrary for $j>0$). Then the morphisms of simplicial $n$-crossed complexes $\f_n(F,\bf,\boldsymbol{\alpha})$ and $\f_n(F,\bf,\boldsymbol{\beta})$ in ${\ifthenelse{\equal{}{}}
{{\ensuremath{\mathbf{SCrs}_{n}}}}
{{\ensuremath{\left(\mathbf{SCrs}_{n}\right)_{}}}}}$ are homotopic.
The two previous lemmas extend easily to slice categories. We have:
\[wn conserva homo\] For each groupoid $\Pi$, the functor ${{\ensuremath{\overline{W}}}}_n:{\ifthenelse{\equal{}{}}
{{\ensuremath{\mathbf{SCrs}_{n}}}}
{{\ensuremath{\left(\mathbf{SCrs}_{n}\right)_{}}}}}/\Pi
\rightarrow
{\ifthenelse{\equal{}{}}
{{\ensuremath{\mathbf{SCrs}_{n-1}}}}
{{\ensuremath{\left(\mathbf{SCrs}_{n-1}\right)_{}}}}}/\Pi$ preserves homotopy classes of simplicial morphisms in the corresponding slice categories.
\[fn conserva homo\] Let $\Pi$ be a groupoid, $(F,\bf,\boldsymbol{\alpha})$ and $(F,\bf,\boldsymbol{\beta})$ two morphisms in the slice category ${\ifthenelse{\equal{}{}}
{{\ensuremath{\mathbf{SCrs}_{n-1}}}}
{{\ensuremath{\left(\mathbf{SCrs}_{n-1}\right)_{}}}}}/\Pi$, $$\xymatrix{ (\bg, {{\ensuremath{\boldsymbol{\cal C}}}},\bc_{n-1}) \ar@<0.6ex>[rr]^{(F,\bf, \boldsymbol{\alpha})}
\ar@<-0.6ex>[rr]_{(F,\bf,\boldsymbol{\beta})} \ar[dr] & & (\bg',{{\ensuremath{\boldsymbol{\cal C}}}}',\bc'_{n-1})
\ar[dl] \\ &\Pi & },$$ such that $\alpha_{n-1}^0 = \beta_{n-1}^0:C_{n-1}^0 \rightarrow C_{n-1}^{\,\prime\, 0}F$, and let ${{\ensuremath{\bar{h}}}}=(\bf,\bh):(F,\bf, \boldsymbol{\alpha}) \rightsquigarrow
(F,\bf,\boldsymbol{\beta})$, with $$\bh=\{h_j^i:C_{n-1}^i \rightarrow C_{n-1}^{\, \prime\, i+1}F,\; 0 \leq j \leq
i\},$$ be a homotopy in ${\ifthenelse{\equal{}{}}
{{\ensuremath{\mathbf{SCrs}_{n-1}}}}
{{\ensuremath{\left(\mathbf{SCrs}_{n-1}\right)_{}}}}}/\Pi$ from $(F, \bf,\boldsymbol{\alpha})$ to $(F,
\bf,\boldsymbol{\beta})$ such that $$h_0^0 = s_0 \alpha_{n-1}^0 \qquad \mbox{and} \qquad h_i^j= s_i \alpha_{n-1}^j
\; \mbox{if $i < j$}.$$ Then the morphisms $\f_n(F,\bf, \boldsymbol{\alpha})$ and $\f_n(F,\bf,
\boldsymbol{\beta})$ in the slice category ${\ifthenelse{\equal{}{}}
{{\ensuremath{\mathbf{SCrs}_{n}}}}
{{\ensuremath{\left(\mathbf{SCrs}_{n}\right)_{}}}}}/\Pi$ are homotopic.
Note that if $A$ is any $\Pi$-module and in Lemma \[fn conserva homo\] we take $$(\bg',{{\ensuremath{\boldsymbol{\cal C}}}}',\bc'_{n-1}) = K(\tilde{A}_{n-1},m)=(\Pi,\Pi,K(A,m)) \in
{\ifthenelse{\equal{}{}}
{{\ensuremath{\mathbf{SCrs}_{n-1}}}}
{{\ensuremath{\left(\mathbf{SCrs}_{n-1}\right)_{}}}}}/\Pi$$ for $m>0$, then for any two homotopic morphisms in ${\ifthenelse{\equal{}{}}
{{\ensuremath{\mathbf{SCrs}_{n-1}}}}
{{\ensuremath{\left(\mathbf{SCrs}_{n-1}\right)_{}}}}}/\Pi$ with codomain $K(\tilde{A}_{n-1},m)$, there is a homotopy ${{\ensuremath{\bar{h}}}}$ satisfying the hypotheses of that lemma. Therefore the functor $\f_n$ preserves homotopy classes of morphisms with codomain $K(\tilde{A}_{n-1},m)$, and we have:
Let $\bg$ be a groupoid, $A$ a $\Pi$-module and $(\bg,{{\ensuremath{\boldsymbol{\cal C}}}},\bc_{n-1})$ a simplicial object in ${\ifthenelse{\equal{}{}}
{{\ensuremath{\mathbf{SCrs}_{n-1}}}}
{{\ensuremath{\left(\mathbf{SCrs}_{n-1}\right)_{}}}}}/\Pi$. Then the adjunction $\f_n\dashv{{\ensuremath{\overline{W}_{\!n}}}}$, $$\xymatrix@C=2.2pc@R=1.2pc{ {\vphantom{{{{\ensuremath{\mathbf{Gr}}}}}^\bg}}{\ifthenelse{\equal{}{}}
{{\ensuremath{\mathbf{SCrs}_{n}}}}
{{\ensuremath{\left(\mathbf{SCrs}_{n}\right)_{}}}}}/\Pi
\ar@<-.6ex>[r]_-{{{\ensuremath{\overline{W}_{\!n}}}}}
&{\vphantom{{{\ensuremath{\mathbf{Pxm}_{\bg}}}}}}{\ifthenelse{\equal{}{}}
{{\ensuremath{\mathbf{SCrs}_{n-1}}}}
{{\ensuremath{\left(\mathbf{SCrs}_{n-1}\right)_{}}}}}/\Pi\ar@<-.4ex>[l]_-{\f_n}
}$$ induces an isomorphism in homotopy: $$\label{eq iso in homotopyn} \left[\f_n(\bg,{{\ensuremath{\boldsymbol{\cal C}}}},\bc_{n-1}), K(\tilde{A}_n,m)
\right]_{{\ifthenelse{\equal{}{}}
{{\ensuremath{\mathbf{SCrs}_{n}}}}
{{\ensuremath{\left(\mathbf{SCrs}_{n}\right)_{}}}}}/\Pi} \cong \left[(\bg,{{\ensuremath{\boldsymbol{\cal C}}}},\bc_{n-1}) ,
{{\ensuremath{\overline{W}_{\!n}}}}(K(\tilde{A}_n,m))
\right]_{{\ifthenelse{\equal{}{}}
{{\ensuremath{\mathbf{SCrs}_{n-1}}}}
{{\ensuremath{\left(\mathbf{SCrs}_{n-1}\right)_{}}}}}/\Pi} .$$
The proof follows trivially from the previous observations, together with Lemma \[wn conserva homo\] and the fact that the adjunction isomorphisms for $\f_n\dashv
{{\ensuremath{\overline{W}}}}_n$ are obtained by applying the functors ${{\ensuremath{\overline{W}}}}_n$ and $\f_n$, and composing with the unit and counit of the said adjunction.
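Explicitly (a standard fact about adjunctions, recalled here only for orientation), if $\eta$ and $\varepsilon$ denote the unit and counit of $\f_n\dashv{{\ensuremath{\overline{W}}}}_n$, the adjunction bijections are $$g \longmapsto {{\ensuremath{\overline{W}}}}_n(g)\circ\eta_X \qquad\mbox{and}\qquad h \longmapsto \varepsilon_Y\circ\f_n(h),$$ for $g:\f_n X\to Y$ and $h:X\to {{\ensuremath{\overline{W}}}}_n Y$. Both assignments are built only from the functors ${{\ensuremath{\overline{W}}}}_n$, $\f_n$ and composition with fixed morphisms, which is why preservation of homotopy classes by these functors suffices.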
### The representation of singular cohomology of $n$-crossed complexes as homotopy classes of maps of simplicial $n$-crossed complexes.
Let now $${\frak{F}}_n = \f_n\f_{n-1}\cdots \f_2
\f_1.$$ Then, by repeated application of \[eq iso in homotopyn\] one gets: $$\big[{\frak{F}}_n\ner({{\ensuremath{\boldsymbol{\cal C}}}}), K(\tilde{A}_{n},m)\big]_{{\ifthenelse{\equal{}{}}
{{\ensuremath{\mathbf{SCrs}_{n}}}}
{{\ensuremath{\left(\mathbf{SCrs}_{n}\right)_{}}}}}/\Pi} \xto{\ \
\cong\ \ }
\big[\ner({{\ensuremath{\boldsymbol{\cal C}}}}), {{\ensuremath{\overline{W}}}}_1\cdots{{\ensuremath{\overline{W}}}}_n\bpar{K(\tilde{A}_{n}, m)}\big]_{{{\ensuremath{\boldsymbol{\mathcal{S}}}}}/\ner(\Pi)}.$$ Combining this isomorphism with Proposition \[the wbars compose to the hocol\], and the generalized Eilenberg-MacLane representation of the singular cohomology, we get:
\[cor nat iso\] Let $1\leq m \leq n$. There is a natural isomorphism $$\scoH^{n+m}({{\ensuremath{\boldsymbol{\cal C}}}},A) \xto{\ \ \cong\ \ } \big[{\frak{F}}_n\ner({{\ensuremath{\boldsymbol{\cal C}}}}), K(\tilde{A}_{n},m)\big]_{{\ifthenelse{\equal{}{}}
{{\ensuremath{\mathbf{SCrs}_{n}}}}
{{\ensuremath{\left(\mathbf{SCrs}_{n}\right)_{}}}}}/\Pi}.$$
Obtaining the topological invariants from the algebraic ones {#sec the top invars}
------------------------------------------------------------
The main tool we use in this section is Duskin’s representation theorem, which in our context specializes as follows:
\[duskins repr theo\]
If $\tilde{A}_n$ is any internal abelian group object in ${\ifthenelse{\equal{}{}}
{{\ensuremath{\mathbf{SCrs}_{n}}}}
{{\ensuremath{\left(\mathbf{SCrs}_{n}\right)_{}}}}}/\Pi$, $$H^m_{{\ensuremath{\bbg_{n}}}}({{\ensuremath{\boldsymbol{\cal C}}}}, \tilde{A}_n) \cong \big[\cotrsr({{\ensuremath{\boldsymbol{\cal C}}}}), K(\tilde{A}_n,
m)\big]_{{\ifthenelse{\equal{}{}}
{{\ensuremath{\mathbf{SCrs}_{n}}}}
{{\ensuremath{\left(\mathbf{SCrs}_{n}\right)_{}}}}}/\Pi} .$$
We use this theorem together with Corollary \[cor nat iso\] to prove the following:
Let ${{\ensuremath{\boldsymbol{\cal C}}}}\in{\ensuremath{\mathbf{Crs}}}_n$, $\Pi = \pi_1({{\ensuremath{\boldsymbol{\cal C}}}})$, and let $A$ be a $\Pi$-module, $A:\Pi \to {{\ensuremath{\mathbf{Ab}}}}$, and $\tilde{A}_n$ the abelian group object in ${\ifthenelse{\equal{}{}}
{{\ensuremath{\mathbf{SCrs}_{n}}}}
{{\ensuremath{\left(\mathbf{SCrs}_{n}\right)_{}}}}}/\Pi$ defined in . For every $m\geq0$ there is a natural map $$H^m_{{\ensuremath{\bbg_{n}}}}({{\ensuremath{\boldsymbol{\cal C}}}}, \tilde{A}_{n}) \xto{\ \ \ } \scoH^{n+m}({{\ensuremath{\boldsymbol{\cal C}}}}, A).$$
By Corollary \[cor nat iso\] and Duskin’s representation theorem (Theorem \[duskins repr theo\]) it is sufficient to define a map $$\alpha : \big[\cotrsr({{\ensuremath{\boldsymbol{\cal C}}}}), K(\tilde{A}_{n}, m) \big]_{{\ifthenelse{\equal{}{}}
{{\ensuremath{\mathbf{SCrs}_{n}}}}
{{\ensuremath{\left(\mathbf{SCrs}_{n}\right)_{}}}}}/\Pi} \xto{\ \ \ }
\big[{\frak{F}}_n\ner({{\ensuremath{\boldsymbol{\cal C}}}}), K(\tilde{A}_{n},m)\big]_{{\ifthenelse{\equal{}{}}
{{\ensuremath{\mathbf{SCrs}_{n}}}}
{{\ensuremath{\left(\mathbf{SCrs}_{n}\right)_{}}}}}/\Pi}.$$ For this, in turn, it is sufficient to give a morphism $$\label{the map} \boldsymbol{\eta} : {\frak{F}}_n\ner({{\ensuremath{\boldsymbol{\cal C}}}}) \to \cotrsr({{\ensuremath{\boldsymbol{\cal C}}}})$$ in ${\ifthenelse{\equal{}{}}
{{\ensuremath{\mathbf{SCrs}_{n}}}}
{{\ensuremath{\left(\mathbf{SCrs}_{n}\right)_{}}}}}$ such that it defines a map in ${\ifthenelse{\equal{}{}}
{{\ensuremath{\mathbf{SCrs}_{n}}}}
{{\ensuremath{\left(\mathbf{SCrs}_{n}\right)_{}}}}}/\Pi$. Note that in that case the induced map of sets $$\boldsymbol{\eta}_* : {\ifthenelse{\equal{}{}}
{{\ensuremath{\mathbf{SCrs}_{n}}}}
{{\ensuremath{\left(\mathbf{SCrs}_{n}\right)_{}}}}}\bpar{\cotrsr({{\ensuremath{\boldsymbol{\cal C}}}}), K(\tilde{A}_{n}, m)} \xto{\ \ \ }
{\ifthenelse{\equal{}{}}
{{\ensuremath{\mathbf{SCrs}_{n}}}}
{{\ensuremath{\left(\mathbf{SCrs}_{n}\right)_{}}}}}\bpar{{\frak{F}}_n\ner({{\ensuremath{\boldsymbol{\cal C}}}}), K(\tilde{A}_{n},m)}$$ automatically preserves homotopy classes.
To define $\boldsymbol{\eta}$ we first observe that the simplicial object ${\frak{F}}_n\ner({{\ensuremath{\boldsymbol{\cal C}}}})$ is free with respect to the cotriple ${{\ensuremath{\bbg_{n}}}}$ [@Garcia2003], and therefore to give $\boldsymbol{\eta}$ it is sufficient to give a morphism $\eta_{-1}:\pi_0{\frak{F}}_n\ner({{\ensuremath{\boldsymbol{\cal C}}}})\rightarrow {{\ensuremath{\boldsymbol{\cal C}}}}$, where $\pi_0:{\ifthenelse{\equal{}{}}
{{\ensuremath{\mathbf{SCrs}_{n}}}}
{{\ensuremath{\left(\mathbf{SCrs}_{n}\right)_{}}}}} \rightarrow {\ensuremath{\mathbf{Crs}}}_n$ is the connected components functor, defined as the left adjoint to the diagonal $\Delta: {\ensuremath{\mathbf{Crs}}}_n \rightarrow {\ifthenelse{\equal{}{}}
{{\ensuremath{\mathbf{SCrs}_{n}}}}
{{\ensuremath{\left(\mathbf{SCrs}_{n}\right)_{}}}}}$. Then we consider the two adjunctions $$\pi_0\dashv \Delta, \;\everyentry={\vphantom{\Bigg(}} \xymatrix@C=1.5pc@R=1.2pc{
{\ensuremath{\mathbf{Crs}}}_n \ar@<-.6ex>[r]_-{\Delta} & {\ifthenelse{\equal{}{}}
{{\ensuremath{\mathbf{SCrs}_{n}}}}
{{\ensuremath{\left(\mathbf{SCrs}_{n}\right)_{}}}}}\ar@<-.4ex>[l]_-{\pi_0} } \mbox{\ \
and\ \ } {\frak{F}_n}\dashv {\frak W}_n={{\ensuremath{\overline{W}}}}_1\cdots{{\ensuremath{\overline{W}}}}_n,\;
\xymatrix@C=1.5pc@R=1.2pc{ {\ifthenelse{\equal{}{}}
{{\ensuremath{\mathbf{SCrs}_{n}}}}
{{\ensuremath{\left(\mathbf{SCrs}_{n}\right)_{}}}}} \ar@<-.6ex>[r]_-{{\frak W}_n} &
{\ifthenelse{\equal{}{}}
{{\ensuremath{\mathbf{SCrs}_{0}}}}
{{\ensuremath{\left(\mathbf{SCrs}_{0}\right)_{}}}}}\ar@<-.4ex>[l]_-{{\frak F}_n}, }$$ and observe that the composition $\frak{W}_n \Delta$ is just the nerve functor $\ner$. Therefore $\pi_0{\frak{F}}_n\ner({{\ensuremath{\boldsymbol{\cal C}}}})=\pi_0\frak{F}_n\frak{W}_n
\Delta ({{\ensuremath{\boldsymbol{\cal C}}}})$ and we can take $\eta_{-1}$ as the ${{\ensuremath{\boldsymbol{\cal C}}}}$-component of the counit of the adjunction $\pi_0\frak{F}_n \dashv \frak{W}_n \Delta$.
By taking the case $m=2$ we obtain the desired map $$H^2_{{\ensuremath{\bbg_{n}}}}\bpar{{{\ensuremath{\boldsymbol{\cal C}}}}, \tilde{A}_{n}} \xto{\ \ \ } \scoH^{n+2}\bpar{{{\ensuremath{\boldsymbol{\cal C}}}}, A}.$$ This implies that for an arbitrary crossed complex ${{\ensuremath{\boldsymbol{\cal C}}}}$, if we denote by $\tilde\pi^{(n)}_{n+1}({{\ensuremath{\boldsymbol{\cal C}}}})$ the abelian group object $\tilde{A}_{n}$ corresponding to the local coefficient system determined by $A=\tilde\pi_{n+1}({{\ensuremath{\boldsymbol{\cal C}}}})$, then we have a morphism $$H^2_{{\ensuremath{\bbg_{n}}}}\bpar{P_n({{\ensuremath{\boldsymbol{\cal C}}}}), \tilde\pi^{(n)}_{n+1}({{\ensuremath{\boldsymbol{\cal C}}}})} \xto{\ \ \ }
\scoH^{n+2}\bpar{P_n({{\ensuremath{\boldsymbol{\cal C}}}}), \pi_{n+1}({{\ensuremath{\boldsymbol{\cal C}}}})}.$$
The topological Postnikov invariant of a crossed complex ${{\ensuremath{\boldsymbol{\cal C}}}}$ is the image under this map of the algebraic Postnikov invariant $k_{n+1}\in H^2_{{\ensuremath{\bbg_{n}}}}
\bpar{P_n({{\ensuremath{\boldsymbol{\cal C}}}}), \tilde\pi^{(n)}_{n+1}({{\ensuremath{\boldsymbol{\cal C}}}})}$.
For any space $X$ having the homotopy type of a crossed complex, we can obtain its Postnikov invariants by simply calculating the topological Postnikov invariants of the fundamental crossed complex of its singular complex, ${{\ensuremath{\boldsymbol{\cal C}}}}= \Pi(X)$.
2-Torsors and Cotriple Cohomology {#apen}
=================================
Torsors were developed by Duskin as the appropriate algebraic structure “representing” general cotriple cohomology. The main references for this subject are [@Duskin1975] and [@Glenn1982]. Our definitions differ slightly from those found in those references in the sense that we put special emphasis on the groupoid fibre of a torsor. This is closer to the way torsors are defined and used, for example, in [@BuCaFa1998].
2-Torsors
---------
Every arrow in a groupoid establishes a group isomorphism between the endomorphism group of its domain and that of its codomain. A *connected* 2-torsor with coefficients in a given abstract group $G$ is a connected groupoid (meaning that it has a single connected component) together with a “coherent” system of isomorphisms from the different groups of endomorphisms to the abstract group $G$. Thus, in order to specify a connected 2-torsor with coefficients in $G$ we must give a connected groupoid (called “*the fiber*” of the 2-torsor) and a natural system $\alpha=\{\alpha_x\}_{x\in{\text{\sf obj}}(\g)}$ of group isomorphisms $\alpha_x:{\text{\sf End}}_\g(x)\to G$.
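As a toy illustration (our own, not taken from the references, and requiring $G$ abelian), let $S$ be any set and take as fiber the groupoid with object set $S$ and arrows $(y,g,x):x\to y$ for $g\in G$, composed by $(z,h,y)\circ(y,g,x)=(z,hg,x)$. Then ${\text{\sf End}}(x)=\{(x,g,x):g\in G\}$ and we may set $$\alpha_x(x,g,x)=g;$$ coherence holds because conjugation leaves these values unchanged: $$(y,f,x)\,(x,g,x)\,(y,f,x)^{-1}=(y,\,fgf^{-1},\,y)=(y,g,y) \quad\mbox{when $G$ is abelian}.$$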
This definition can be easily generalized to 2-torsors with a not necessarily connected fiber groupoid. As it turns out, the general definition can be obtained as a particular case of the above, provided it is expressed in such a way that it makes sense in more general categories.
Let $\e$ be a Barr exact category. Associated to $\e$ we have the category ${{{\ensuremath{\mathbf{Gpd}}}}}(\e)$ of internal groupoid objects in $\e$ and internal functors between them. If $s,t:M\to O$ are the “source” and “target” structural morphisms of an internal groupoid $\g$, we say that $\g$ is connected if the coequalizer of $s$ and $t$ is the terminal object, and we say that $\g$ is totally disconnected if $s=t$.
For a given object $O$ in $\e$, ${{\ensuremath{\mathbf{TdGpd}}}}_O(\e)$ denotes the category whose objects are totally disconnected internal groupoids in $\e$ having $O$ as object of objects, and whose arrows are internal functors whose component at the level of objects is the identity of $O$. This category can be identified with the category of internal group objects in the slice category $\e/O$.
If $\g$ is an internal groupoid in $\e$, having $O$ as object of objects, the category ${{{\ensuremath{\mathbf{Gr}}}}}(\e)^\g$ of internal $\g$-groups in $\e$ is now defined in terms of $\g$-actions. Then an internal $\g$-group consists of a totally disconnected groupoid $\h\in{{\ensuremath{\mathbf{TdGpd}}}}_O$ together with a $\g$-action, that is, a map in $\e$: $$M\times_O H\to H;\quad (f,h)\mapsto\; \laction fh,$$ where $M$ and $H$ denote the objects of arrows of $\g$ and $\h$ respectively, and $M\times_O H$ is the pullback object of the diagram $M\xto{s}O
\xleftarrow{s=t}H$, such that it satisfies the usual axioms for an action of groups. A morphism of internal $\g$-groups $\alpha:\h\to\h'$ is an equivariant functor in ${{\ensuremath{\mathbf{TdGpd}}}}_O(\e)$.
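Spelled out elementwise (a standard formulation, included here only for orientation), these axioms read $$\laction{1_x}{h}=h,\qquad \laction{g}{\big(\laction{f}{h}\big)}=\laction{(gf)}{h},\qquad \laction{f}{(hh')}=\big(\laction{f}{h}\big)\,\big(\laction{f}{h'}\big),$$ for $h,h'$ endomorphisms of $\h$ over $x$ and composable arrows $f:x\to y$, $g:y\to z$ of $\g$; moreover $\laction{f}{h}$ lies over $t(f)$ whenever $h$ lies over $s(f)$.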
One of the basic examples of internal $\g$-group is the $\g$-group of endomorphisms of $\g$, ${\text{\sf End}}_\g$, defined by the totally disconnected groupoid ${\text{\sf End}}(\g)$ (the equalizer of $s$ and $t$ with the groupoid structure given by restriction of that in $\g$) and the action by conjugation in $\g$.
If we write $\mathbf{1}$ for the internal groupoid in $\e$ having the terminal object as both object of arrows and object of objects, we can identify the category ${{{\ensuremath{\mathbf{Gr}}}}}(\e)^\mathbf{1}$ with ${{{\ensuremath{\mathbf{Gr}}}}}(\e)$ and therefore the canonical functor $\g\to
\mathbf{1}$ induces a functor ${{{\ensuremath{\mathbf{Gr}}}}}(\e)\to{{{\ensuremath{\mathbf{Gr}}}}}(\e)^\g$ which allows us to regard any internal group $G$ in $\e$ as a $\g$-group (with the trivial action of $\g$ on $G$).
Note that any internal functor $f:\g'\to\g$ defines, in a standard way, a functor $$f^\ast:{{{\ensuremath{\mathbf{Gr}}}}}(\e)^\g\to{{{\ensuremath{\mathbf{Gr}}}}}(\e)^{\g'}.$$
A *connected 2-torsor* in $\e$ consists then of a triple $(\g,G,\alpha)$ where $G$ is an internal group in $\e$, called the coefficients, $\g$ is a connected groupoid in $\e$, called the fiber, and $\alpha$, the cocycle, is an isomorphism in the category ${{{\ensuremath{\mathbf{Gr}}}}}(\e)^\g$ from ${\text{\sf End}}_\g$ to $G$ (with the trivial action of $\g$ on $G$). Note that to give the cocycle $\alpha$ is equivalent to giving an arrow $\alpha:{\text{\sf End}}(\g)\to G$ that makes the following square a pullback in $\e$, $$\xymatrix@C=2.5pc@R=1.75pc@!=1em {{\text{\sf End}}(\g)\ar[d]_{s\,=\, t}\ar[r]^-\alpha & G \ar[d]\\
{\text{\sf obj}}(\g)\ar[r] & \mathbf{1}}$$ and satisfies:
- $\alpha(a b)=\alpha(a) \alpha(b)$ for all composable endomorphisms $a,b$ of $\g$,
- $\alpha(\laction fa)=\alpha(f a f^{-1})= \alpha(a)$, for all arrows $f$ and all endomorphisms $a$ in $\g$, such that $s(f)=s(a)$.
The connected 2-torsors in $\e$ whose group of coefficients is $G$ are the objects of a category, denoted ${\text{\it Tor}}^2(1,G)$, whose arrows from $(\g,G,\alpha)$ to $(\g',G,\alpha')$ are internal functors $f:\g\to\g'$ compatible with the cocycles $\alpha,\alpha'$ in the sense that $$\alpha = f^\ast(\alpha')= \alpha'\circ f.$$
If $T$ is an object in a Barr exact category $\e$, then the slice category $\e/T$ is again a Barr exact category, and a *2-torsor above* $T$ in $\e$ is defined as a connected 2-torsor in $\e/T$. In this definition it is understood that the coefficients are taken in an internal group object in $\e / T$. If $T$ is an object and $G$ an internal group in $\e$, then the canonical projection $G\times
T\to T$ gives an object of $\e/T$ which has a canonical structure of internal group object in $\e/T$. In this situation, a $(G,2)$-torsor above $T$ in $\e$ is defined as a connected 2-torsor in $\e/T$ with coefficients in $G\times T$, and the category of such torsors is denoted ${\text{\it Tor}}^2(T,G)$. By ${\text{\it Tor}}^2[T,G]$ we denote the set of connected components of ${\text{\it Tor}}^2(T,G)$.
Let us suppose now that we have a functor $U:\e\to\s$ from a Barr exact category $\e$ to a category $\s$ with finite limits. Let $\g$ be an internal groupoid in $\e$ with source and target maps $s,t:M\to O$. If $q:O\to T$ is the coequalizer of $s$ and $t$ and $O\times_T O$ is the pullback of $q$ with itself, there is an induced map $(s,t):M\to O\times_T O$. We say that the groupoid $\g$ is *$U$-split* if the maps $U(q)$ and $U(s,t)$ split in $\s$.
A $U$-*split* $(G,2)$-torsor above $T$ is a $(G,2)$-torsor above $T$ such that its fiber groupoid is $U$-split. The full subcategory of ${\text{\it Tor}}^2(T,G)$ determined by those $(G,2)$-torsors which are $U$-split is denoted ${\text{\it Tor}}_U^2(T,G)$. Correspondingly, the set of connected components of ${\text{\it Tor}}_U^2(T,G)$ is denoted ${\text{\it Tor}}_U^2[T,G]$.
\[condition U-split\] ${\text{\it Tor}}^2(T,G)$ is a category fibred over $\e$ via the composite functor $$\everyentry={\vphantom{\Big(}} \xymatrix@C=1.5pc@R=1.75pc
{{\text{\it Tor}}^2(T,G)\ar[r]^-{\fib} & {{{\ensuremath{\mathbf{Gpd}}}}}(\e)\ar[r]^-{{\text{\sf obj}}} & \e.}$$ Furthermore, if $(\g,\alpha)\in {\text{\it Tor}}^2(T,G)$ is $U$-split for some left exact functor $U:\e\to\s$, then $(\g',\alpha')$ is $U$-split if and only if the projection $q':O'\to T$ corresponding to $(\g',\alpha')$ is $U$-split.
For the first part we have to prove that if $(\g,\alpha)$ is a $(G,2)$-torsor above $T$ and $f:O'\to O={\text{\sf obj}}(\g)$ is a morphism in $\e$, then there is a $(G,2)$-torsor $(\g',\alpha')$ above $T$ and an $({\text{\sf obj}}\circ\fib)$-cartesian morphism of torsors $f':(\g',\alpha')\to(\g,\alpha)$ above $f$.
The idea for the construction of $\g'$ with object of objects $O'$ is that its arrows from $x\in O'$ to $y\in O'$ are the arrows $f(x)\to f(y)$ in $\g$ and that the identity of $x\in O'$ is the identity of $f(x)$. The composition in $\g'$ will then be clearly induced by that of $\g$. Thus, the object of arrows of $\g'$, together with its source and target maps, can internally be defined by the following pullback $$\xymatrix@C=1.6pc@R=1.75pc@!=1em {M'\ar[d]_u \ar[rr]^-{(s',t')} & & O'\times_T O' \ar[d]^{f\times
f}
\\ M\ar[rr]_-{(s,t)} && O\times_T O}$$ where $O\times_TO$ is the pullback of $q$ with itself and $O'\times_TO'$ that of $q'=qf$ with itself. This construction produces a functor $f':\g'\to\g$ whose component on arrows is $u$ (and it is $f$ on objects). Note that ${\text{\sf End}}_{\g'}=
f^{\prime\ast}({\text{\sf End}}_\g)$. The cocycle map $\alpha'$ is defined as the image of $\alpha$ by the induced functor $f^{\prime\ast}:{{{\ensuremath{\mathbf{Gr}}}}}(\e)^\g\to{{{\ensuremath{\mathbf{Gr}}}}}(\e)^{\g'}$, that is $\alpha'=f^{\prime\ast}\alpha$.
Let us now assume that $(\g,\alpha)$ is $U$-split. For $(\g',\alpha')$ to be $U$-split it is necessary that $q'$ be $U$-split. From the exactness of $U$ and a splitting of $U(s,t)$ it follows that $(s',t')$ is $U$-split; therefore, the condition that $q'$ be $U$-split is also sufficient for $(\g',\alpha')$ to be $U$-split.
It only remains to prove that $f'$ is cartesian. Let $g:\g''\to\g$ be a morphism of internal groupoids in $\e$ such that $g_0={\text{\sf obj}}(g):O''\to O$ factors through $f$ as $g_0=fh$. Then there is clearly a unique way to define a factorization $g=f' h'$ such that ${\text{\sf obj}}(h')=h$: this condition determines $h'$ on objects, and $g$ determines it on arrows.
By a reasoning similar to the one given in [@Glenn1982] to prove Theorem 5.7.5, it is easy to show that from any diagram in ${\text{\it Tor}}_U^2(T,G)$ of the form $(\g,\alpha) \to (\tilde\g,\tilde\alpha)\lto(\g',\alpha')$ one can obtain another of the form $ (\g,\alpha) \lto (\g'', \alpha'') \to (\g',\alpha')$ (see [@Garcia2003], Lemma 4.2.6, for the details). As a consequence we have the following useful necessary (and, obviously, also sufficient) condition satisfied by torsors in the same connected component of ${\text{\it Tor}}_U^2(T,G)$.
\[adapted from Glenn\] If $(\g,\alpha)$ and $(\g',\alpha')$ are $U$-split 2-torsors which are in the same connected component of ${\text{\it Tor}}_U^2(T,G)$, then there is a torsor $(\g'',\alpha'')$ and maps $$(\g,\alpha) \lto
(\g'',\alpha'')\to(\g',\alpha')$$ in ${\text{\it Tor}}_U^2(T,G)$.
Cotriple Cohomology
-------------------
Let $\e$ be a category tripleable over a category $\s$, with associated cotriple $\bbg$. For any object $T\in\e$ and any abelian group object $A$ in $\e/T$, the cotriple cohomology groups $H^n_{\bbg}(T,A)$ can be represented in terms of homotopy classes of simplicial maps from the cotriple simplicial resolution of $T$ to the Eilenberg-Mac Lane complex $K(A,n)$. On the other hand, Duskin’s interpretation theorem for cotriple cohomology [@Duskin1975] provides an interpretation of the elements of $H^n_{\bbg}(T,A)$ in terms of $U$-split torsors, where $U:\e\to\s$ is the monadic functor associated to the cotriple $\bbg$. In the particular case $n=2$, Duskin’s theorem implies the following
\[Duskin’s interpretation\] Under the above conditions, for any object $T\in\e$ and any abelian group object $A$ in $\e/T$, there is a natural bijection $$H^2_{\bbg}(T,A)\cong {\text{\it Tor}}^2_U[T,A].$$
In the presence of a cotriple, Proposition \[condition U-split\] has the following consequence:
\[cotriple torsor in same component\] Under the hypotheses of Theorem \[Duskin’s interpretation\], if $(\g,\alpha)$ is a $U$-split $(A,2)$-torsor above $T$ with object of objects $O$, there is a $U$-split $(A,2)$-torsor above $T$, $(\g',\alpha')$, whose object of objects is $\bbg(T)$ and whose projection is the counit $\varepsilon_T:\bbg(T)\to T$. Furthermore, $(\g',\alpha')$ is connected to $(\g,\alpha)$ by a morphism $(\g',\alpha')\to(\g,\alpha)$ in ${\text{\it Tor}}^2_U(T,A)$.
Let $s:U(T)\to U(O)$ be a section of the image $U(q)$ of the projection $q:O\to T$ of $(\g,\alpha)$. Use Proposition \[condition U-split\] with $O'=\bbg(T)$ and $f$ equal to the composite $\bbg(T)\xrightarrow{F(s)}\bbg(O)\xrightarrow{\varepsilon_O} O$, where $F$ is the left adjoint to $U$ and $\varepsilon$ is the counit of $\bbg$ ($\bbg=FU$). Then we obtain an $(A,2)$-torsor above $T$, $(\g',\alpha')$, whose projection is the composite $$qf=q\,\varepsilon_O F(s)= \varepsilon_T\, FU(q)F(s)=\varepsilon_T,$$ and a map $f':(\g',\alpha')\to(\g,\alpha)$ which is given by $f$ at the level of objects. Since $\varepsilon_T$ is a $U$-split map (with $U$-section given by $\eta_{U(T)}$), it follows that $(\g',\alpha')$ is $U$-split.
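The chain of equalities for the projection is a routine verification (recorded here for convenience; it uses only naturality of $\varepsilon$ and the fact that $s$ is a section of $U(q)$): $$qf \;=\; q\circ\varepsilon_O\circ F(s) \;=\; \varepsilon_T\circ FU(q)\circ F(s) \;=\; \varepsilon_T\circ F\big(U(q)\circ s\big) \;=\; \varepsilon_T\circ F\big(\mathrm{id}_{U(T)}\big) \;=\; \varepsilon_T,$$ where the second equality is naturality of $\varepsilon$ at $q$ and the third is functoriality of $F$.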
Let now ${\underline{{\text{\it Tor}}}}_U^2(T,A)$ denote the full subcategory of ${\text{\it Tor}}_U^2(T,A)$ determined by those torsors whose object of objects is $\bbg(T)$ and whose projection is the counit $\varepsilon_T$. Then Propositions \[cotriple torsor in same component\] and \[adapted from Glenn\] imply the following:
\[same connected components\] Under the hypotheses of Theorem \[Duskin’s interpretation\], let $F:\be\to{\text{\it Tor}}_U^2(T,A)$ be a full and faithful functor such that the inclusion ${\underline{{\text{\it Tor}}}}_U^2(T,A){\hookrightarrow}{\text{\it Tor}}_U^2(T,A)$ factors through $F$. Then, $F$ establishes a bijection between the set $[\be]$ of connected components of $\be$ and ${\text{\it Tor}}_U^2[T,A]$. Hence there is a natural bijection $$H^2_\bbg(T,A)\cong [\be].$$
Let $A,B\in\be$ such that $F(A)$ and $F(B)$ are in the same connected component of ${\text{\it Tor}}_U^2(T,A)$. We just need to show that $A$ and $B$ are in the same connected component in $\be$. By Proposition \[cotriple torsor in same component\] there are 2-torsors $X, Y\in {\underline{{\text{\it Tor}}}}_U^2(T,A)$ and morphisms $h:X\to F(A)$ and $k:Y\to F(B)$, such that $X$ and $Y$ are in the same connected component of ${\text{\it Tor}}_U^2(T,A)$. Hence, by Proposition \[adapted from Glenn\] we get a diagram $\xymatrix@1@C=1pc{X&Z\labelmargin{1pt} \ar[l]_(.45)f
\labelmargin{1.5pt}\ar[r]^g &Y}$ in ${\text{\it Tor}}_U^2(T,A)$ where, by Proposition \[cotriple torsor in same component\], we can suppose that $Z$ is in ${\underline{{\text{\it Tor}}}}_U^2(T,A)$. By the hypothesis that the inclusion of ${\underline{{\text{\it Tor}}}}_U^2(T,A)$ factors through $F$, we get a diagram $$\label{first diagram}A' \lto C \to B'$$ in $\be$ such that $F(A') = X$ and $F(B') = Y$. Thus, the maps $h, k$ and the fullness of $F$ allow us to extend diagram \[first diagram\] to a diagram $$A \lto A' \lto C \to B' \to B$$ proving that $A$ and $B$ are in the same connected component.
[10]{}
M. Barr and J. Beck. , volume 80 of [*Lecture Notes in Math.*]{} Springer, New York, 1969.
F. Borceux. , volume 51 of [*Encyclopedia of Mathematics and its Applications*]{}. Cambridge Univ Press, 1994.
R. Brown and P. J. Higgins. On the algebra of cubes. , 21:233–260, 1981.
R. Brown and P. J. Higgins. Colimit theorems for relative homotopy groups. , 22:11–41, 1981. Communicated by A. Heller.
R. Brown and P. J. Higgins. The classifying space of a crossed complex. , pages 95–119, 1991.
R. Brown and J. Huebschmann. Identities among relations. In R. Brown and T. L. Thickstun, editors, [*Low-dimensional topology*]{}, volume 48 of [*London Math. Soc. Lecture Note Ser.*]{} Cambridge Univ. Press, Cambridge, 1982.
R. Brown and C. D. Wensley. Computing crossed modules induced by an inclusion of a normal subgroup, with applications to homotopy 2-types. , 2(1):3–16, 1996.
M. Bullejos. The [P]{}ostnikov tower of a crossed complex. unpublished, 1998.
M. Bullejos, J. G. Cabello, and E. Faro. On the equivariant 2-type of a $g$-space. , 129:215–245, 1998.
M. Bullejos, E. Faro, and M. A. Garc[í]{}a. Homotopy colimits and cohomology with local coefficients. Accepted for publication in the Cahiers Topologie Géom. Différentielle Catég., 2002.
P. Carrasco, A. Cegarra, and A. R. Grandjean. (co)homology of crossed modules. , 2001.
A. Cegarra and E. Aznar. An exact sequence in the first variable for torsor cohomology: The 2-dimensional theory of obstructions. , 39:197–250, 1986.
E. B. Curtis. Simplicial homotopy theory. , 6:107–209, 1971.
J. Duskin. Simplicial methods and the interpretation of triple cohomology. , 3 - 2, 1975.
W. G. Dwyer and D. M. Kan. Homotopy theory and simplicial groupoids. , 87(4):379–389, Dec 1984.
G. J. Ellis. Homology of 2-types. , (2) 46:1–27, 1991.
S. Eilenberg and S. Mac Lane. On the groups $h(\pi, n)$ [I]{}. , 58:55–106, 1953.
S. Eilenberg and S. Mac Lane. On the groups $h(\pi, n)$ [II]{}. , 60:513–557, 1954.
S. Eilenberg and S. Mac Lane. On the groups $h(\pi, n)$ [III]{}. , 70:49–137, 1954.
M. A. García-Muñoz. . thesis, Universidad de Jaén, 2003. (http://www4.ujaen.es/$\sim$magarcia/TESIS.pdf).
P. G. Glenn. Realization of cohomology classes in arbitrary exact categories. , (25):33–105, 1982.
P. G. Goerss and J. F. Jardine. , volume 174 of [*Progress in Mathematics*]{}. Birkhäuser, Basel, Oct 1999.
P. A. Griffiths and J. W. Morgan. , volume 16 of [*Progress in Mathematics*]{}. Birkhäuser, Boston, 1981.
C. Hog-[A]{}ngeloni, W. Metzler, and A. J. Sieradski, editors. . Number 197 in London Math. Soc. Lecture Notes Series. Cambridge University Press, Cambridge, 1993.
J. Howie. Pullback functors and crossed complexes. , 20:281–295, 1970.
D. M. Kan. On homotopy theory and c. s. s. groups. , 68, No 1:38–53, 1958.
J. P. May. . Van Nostran, 1967.
T. Porter. N-types of simplicial groups and crossed n-cubes. , 32(1):5–24, 1993.
T. Porter. Crossed modules, crossed $n$-cubes and simplicial groups. Preprint, University College of North Wales, 1989. For René Lavendhomme on his birthday.
T. Porter. Abstract homotopy theory. the interaction of category theory and homotopy theory. Preprint, University College of North Wales, 1992. Lectures at the Corso Estivo Categorie e Topologia. Bressanone, Settembre 1991.
D. Quillen. , volume 43 of [*Lecture Notes in Math.*]{} Springer, New York, 1967.
Ratcliffe. , volume 43 of [*Lecture Notes in Math.*]{} Springer, New York, 1980.
A. P. Tonks. . dissertation, University of Wales, Sep 1993.
J. H. C. Whitehead. Combinatorial homotopy [II]{}. , 55:453–496, 1949.
[^1]: Supported by DGI: BFM2001-2886.
|
---
abstract: 'Evolutionary biology shares many concepts with statistical physics: both deal with populations, whether of molecules or organisms, and both seek to simplify evolution in very many dimensions. Often, methodologies have undergone parallel and independent development, as with stochastic methods in population genetics. We discuss aspects of population genetics that have embraced methods from physics: amongst others, non-equilibrium statistical mechanics, travelling waves, and Monte-Carlo methods have been used to study polygenic evolution, rates of adaptation, and range expansions. These applications indicate that evolutionary biology can further benefit from interactions with other areas of statistical physics, for example, by following the distribution of paths taken by a population through time.'
address:
- 'I.S.T. Austria. Klosterneuburg A-3400, Austria'
- 'Institute of Evolutionary Biology, University of Edinburgh. Edinburgh, United Kingdom'
author:
- 'Harold P. de Vladar'
- 'Nicholas H. Barton'
title: The contribution of statistical physics to evolutionary biology
---
Population Genetics, Statistical Thermodynamics, Evolutionary dynamics, Haldane’s Principle, Selection, Drift, Diffusion Equation, Fitness Flux, Entropy, Information, Travelling Waves, Monte Carlo.
Parallel foundations of evolution and statistical physics {#sect:intro .unnumbered}
=========================================================
In the late 19th century, Boltzmann established the theoretical foundations of [**statistical mechanics**]{}, in which the behaviour of ensembles of particles explains large-scale phenomena [@Boltzmann1896]. For example, the position and velocity of the particles in a gas can fluctuate between very many states (termed micro-states), but one averages over all the configurations that give the same observable macroscopic state (temperature and pressure, say) [@Callen1985]. A similar averaging over equivalent micro-states is made in both population and quantitative genetics: we average over individual gene combinations to describe a population by its allele frequencies, and we can further average over all the allele frequencies that are consistent with a given mean and variance of a quantitative trait. In this sense, physicists and evolutionary biologists both model populations (a gas or a gene pool) rather than precise types (individual particles or genotypes). This “statistical” description in terms of a few variables, the macro-states, summarizes the many possible configurations of the micro-states (degrees of freedom), which cannot be accurately measured or described. Furthermore, the macro-states are then sufficient to predict other properties without reference to the micro-states. For example, thermodynamics describes macroscopic properties without referring to individual particles; similarly, quantitative genetics does not refer to allele frequencies to predict the trait mean in the next generation.
Hence, evolutionary biology and statistical physics often use similar theoretical methodologies, although studying very different phenomena. We argue that there are close analogies between evolutionary genetics and statistical physics. Physical techniques had an early influence on molecular biology (\[Box:History\]). But more specifically, non-equilibrium methods are based on the same theory of stochastic processes that is used in population genetics. Thus, some physical theories promise further developments that can deepen our understanding of evolution in two ways: either by applying common mathematical techniques (e.g. diffusion equations, see \[Box:DiffEq\]), or by developing precise analogies that incorporate new concepts (e.g. ensemble averaging, information, or [**entropy**]{}). These techniques are being applied in different aspects of evolutionary biology. This article focuses mainly on those that we consider most promising for population genetics. We aim to introduce the reader to these methods by reviewing representative examples in the literature.
The cost of selection, entropy and information {#sect:SelectionEntropyInformation .unnumbered}
==============================================
It is extraordinary that the selection of random mutations has created complex organisms that appear exquisitely designed to fit their environment. Selection can be seen as taking information from the environment, and coding it into the DNA sequence [@MaynardSmith:2000p4611]: thus, the gene pool contains information about those specific sequences that confer high fitness. This idea can be quantified using the concepts of entropy and information [@Shannon:1948p4599]. Entropy is a measure of the number of different states in which a population is likely to be found: thus, selection of one specific genotype, or genotype frequency, corresponds to minimal entropy. The question of how the genotype of an individual, or of a population, depends on the selection that it has experienced can be quantified by an entropy that measures how strongly selection has clustered the population around a specific genotype. Haldane [@Haldane:1957p213] showed that the number of [**selective deaths**]{} needed to fix an allele is independent of the selection pressure, and Kimura [@Kimura:1961p5973] pointed out that Haldane’s “cost of natural selection” is exactly the information gained by fixing a specific allele. This relation applies very generally to asexual populations [@Worden:1995fk] but fails with recombination (see below). The theory of [**quasispecies**]{} (a model of mutation-selection balance), emphasizes that the reproductive rate (selection) limits the amount of information that can be maintained in the face of random mutations [@Eigen:1977p4220]. However, this constraint can be relaxed with recombination and epistasis [@Kimura:1966p7222]. Analogies with statistical physics help us to understand how selection accumulates information. We first consider infinite populations – evolving deterministically – and then the more general case where random drift in finite populations drives evolution.
Deterministic evolution and the role of recombination {#deterministic-evolution-and-the-role-of-recombination .unnumbered}
-----------------------------------------------------
The information content of a single large population that is evolving deterministically is measured by the entropy, defined as $S=-\sum_x p_x \log(p_x)$ where $p_x$ is the frequency of allele or genotype $x$ [@Haldane:1957p213; @Eigen:1977p4220]. This entropy reflects the information accumulated and maintained by evolution, and is closely related to Shannon’s information \[3-4\]. Remarkably, the [**replicator dynamics**]{} can be obtained by maximizing a different measure, [**Fisher information**]{}: $F=\sum_x p_x \left(\frac{d}{dt}\log(p_x) \right)^2$, which measures divergence between two distributions [@Frieden:2001p4597; @Frank:2009p4318]. Fisher’s information is the “acceleration” of the entropy, i.e. $d^2S/dt^2=F$, where we interpret $\frac{d}{dt}\log(p_x)$ as the information contained about selection when we observe how the frequency of $x$ has changed. For example, for a beneficial allele under selection, this would be proportional to the selective value $s$. Normally, we predict the change of the frequency distribution when we know $s$. Fisher’s information takes the parent and the offspring distributions as given, and measures the effect of selection from the difference between these two [@Frank:2009p4318].
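Both quantities can be computed explicitly for the simplest case of a single biallelic locus. The Python sketch below (our own illustration; the parameter values and the logistic dynamics $dp/dt = s\,p(1-p)$ are assumptions chosen for concreteness, not taken from the text) evaluates $S$ and $F$ and checks that, for two alleles, Fisher information collapses to $s^2 p(1-p)$, i.e. it is proportional to the genetic variance at the locus:

```python
import numpy as np

# Two-allele haploid model under constant selection s, with logistic
# (replicator) dynamics dp/dt = s * p * (1 - p).
s, p = 0.1, 0.3
freqs = np.array([p, 1.0 - p])

# Shannon entropy of the allele-frequency distribution.
S = -np.sum(freqs * np.log(freqs))

# d/dt log p_x for each type along the trajectory:
# d/dt log p = s (1 - p),  d/dt log(1 - p) = -s p.
dlogp = np.array([s * (1.0 - p), -s * p])

# Fisher information F = sum_x p_x (d/dt log p_x)^2.
F = np.sum(freqs * dlogp**2)

# For two alleles this collapses to s^2 p (1 - p): Fisher information
# is proportional to the genetic variance at the locus.
assert abs(F - s**2 * p * (1.0 - p)) < 1e-12
```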
#### **Sexual vs. asexual reproduction**
It has long been understood that, when combined with [**truncation selection**]{}, sexual reproduction is much more efficient than asexual in fixing beneficial genotypes [@Crow:1964uq; @Ewens:1979 Ch. 2]. In the former case, the maximum information increases as $n^{1/2}$ ($n$ being the number of loci), while in the latter, it increases by only one unit per generation [@MaynardSmith:2000p4611; @MacKay2003]. The maximum number of loci that can be maintained despite the randomizing effect of mutation is $\sim 1/\mu$ for asexuals [@MacKay2003], whilst for sexuals (with free recombination) it can be as high as $\sim 1/\mu^2$, where $\mu$ is the mutation rate at each locus [@MacKay2003; @Peck:2010p6334]. (More loci would produce more mutants, and hence decrease the amount of information). According to Haldane’s principle [@Haldane:1937kx] every deleterious mutation must be eliminated by a failure to reproduce (a “selective death”). Therefore, the mutation load is independent of selection strength, and is half as great if selection eliminates two copies in a recessive homozygote at the same time. In haploids, redundancy leads to a similar gain in efficiency [@Peck:2010p6334; @Watkins:2002vn].
Stochastic evolution: the diffusion of allele frequencies {#sect:StochasticEvolution .unnumbered}
=========================================================
The diffusion approximation shows how the distribution of allele frequencies at many loci changes through time in finite populations. In this case, selection, mutation and migration are modelled as deterministic factors, and genetic drift introduces random fluctuations to populations within an ensemble (\[Box:DiffEq\]). (Other treatments are possible, where mutations are regarded as carrying random changes to individuals within a single population.) In other fields, constant diffusion coefficients have been widely used, leading to simple Gaussian solutions (\[Box:DiffEq\]). However, Gaussian solutions are not appropriate for population genetics, because allele frequencies range between zero and one, and sometimes cluster near fixation, in a bimodal distribution. After its introduction by Fisher [@Fisher:1922lh], Kolmogorov applied the more general diffusion method to the neutral island model [@Kolmogorov:1935zr], which Wright [@Wright:1931] had already solved by different means. Kimura relied on the diffusion approximation to model the evolution of finite populations [@Kimura:1955ly], and for his neutral theory of molecular evolution [@Kimura:1985].
The diffusion approximation, central to both population genetics and statistical physics, provides a way to model many factors in a mathematically tractable way. Crucially, it approximates a wide variety of more detailed models. Mathematically, it is equivalent to the coalescent process that describes the evolution of samples from a population, and to path ensemble methods that describe the distribution of population histories (see below). In physics, diffusion equations describe non-equilibrium processes and are hard to relate to quantities like temperature, entropy, or free energy, which are well-defined only in thermodynamic equilibrium through the [**Boltzmann distribution.**]{}
Wright showed that selection, mutation and drift give an explicit distribution, proportional to $\bar{W}^{2N}$ , where $\bar{W}$ is the mean fitness of a population of size $N$ [@Wright:1931]. This is closely analogous to the Boltzmann distribution $ \left( \sim e^{-E/kT} \right) $ [@Boltzmann1896; @Callen1985], with $\log(\bar{W})$ corresponding to (negative) energy, $-E$, and $1/2N$ to the temperature, $kT$ (\[Box:DiffEq\]). This result was the basis for Wright’s metaphor of an adaptive landscape: a surface of mean fitness laid over the multidimensional space of allele frequencies [@Wright:1988p1226] (\[Box:DiffEq\] and Fig. \[Fig:DESols\]).
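Wright's Boltzmann-like distribution can be made concrete for a single biallelic locus. In the minimal sketch below (our own illustration; the symmetric mutation scheme, the parameter values, and the weak-selection approximation $\log \bar W \approx s\,p$ are assumptions, not the authors' worked example) we evaluate the stationary density $\psi(p) \propto \bar{W}^{2N}\,[p(1-p)]^{4N\mu - 1}$ on a grid and confirm that selection shifts probability mass toward the favoured allele, just as an external field biases a Boltzmann distribution:

```python
import numpy as np

# Wright's stationary density for one biallelic locus: with symmetric
# mutation at rate mu and population size N,
#   psi(p) ∝ Wbar(p)^(2N) * [p(1-p)]^(4*N*mu - 1),
# i.e. a Boltzmann factor exp(2N log Wbar) times the neutral measure.
# We take log Wbar(p) ≈ s*p (weak additive selection).
N, mu, s = 100, 0.005, 0.02
p = np.linspace(1e-6, 1.0 - 1e-6, 100001)
dp = p[1] - p[0]

def psi(sel):
    logpsi = 2*N*sel*p + (4*N*mu - 1.0)*(np.log(p) + np.log(1.0 - p))
    w = np.exp(logpsi - logpsi.max())   # subtract max to avoid overflow
    return w / (w.sum() * dp)           # normalize on the grid

neutral, selected = psi(0.0), psi(s)
mean_neutral = (p * neutral).sum() * dp
mean_selected = (p * selected).sum() * dp

# Selection shifts probability mass toward high frequency of the
# favoured allele; without selection the density is symmetric.
assert abs(mean_neutral - 0.5) < 1e-3
assert mean_selected > mean_neutral
```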
#### **Jumps between adaptive peaks**
When the [**stationary distribution**]{} is clustered around alternative peaks in the adaptive landscape, the rate at which random drift causes shifts between these states is approximated by a general formula that is proportional to the probability of being at the saddle point (adaptive valley) that separates them, and to the leading eigenvalue that describes the instability at that point [@Barton:1987p4186; @Wright:1941ve]. Wright [@Wright:1941ve] worked out transition rates for chromosome rearrangements, ideas rigorously formulated later using diffusions [@Lande:1985qf]. Rouhani and Barton [@Barton:1987p4186; @Rouhani:1987bh] found the rate of peak shifts in a spatially structured population, borrowing from an identical analysis of transitions between alternative vacuum states.
#### **Traveling waves**
The distribution of a quantitative trait, or of fitness itself, can be seen as a [**traveling wave**]{} that advances at a steady rate as the population adapts, either in real (geographic) space or in phenotypic space. Most analyses have been of asexuals, which increase their fitness by accumulation of favourable mutations, or decline under Muller’s ratchet [@Wilke:2004dq; @Rouzine:2007p3220; @Burger:1999cr]. Beneficial mutations increase in frequency independently at the wave front, where frequencies are low and subject to drift, but the rest of the wave follows deterministically [@Hallatschek:2011p7231]. The wave thus moves at a velocity proportional to the mutation rate, and depends logarithmically on the population size because of strong random drift at the leading edge [@Wilke:2004dq]. This approach has been extended to low rates of recombination [@Rouzine:2010nx; @Neher:2010oq]. With sexual reproduction, random drift has much less effect, and the population adapts much more quickly [@Burger:1999cr; @Peck:1999kl]. However, when there is a very high rate of substitution and recombination, [**Hill-Robertson interference**]{} limits the adaptation rate [@Barton:2009tg].
#### **Spatial evolution and range expansions**
Fisher introduced a simple non-linear diffusion equation describing the spread of a beneficial mutation through space [@Fisher:1937p5980]. Though motivated by an evolutionary problem, this model raised interest among physicists and mathematicians, establishing a sub-discipline studying the *Fisher-KPP model* (KPP for Kolmogorov, Petrovski and Piskounov, its co-discoverers [@Kolmogorov:1991fk]). Travelling waves explain the decreased genetic diversity that arises from hitchhiking at the leading edge [@Hallatschek:2008hc; @Ralph:2010ij]. This approach also provides a practical way to measure selection coefficients [@Bauer:1989jl], and perhaps, a means to distinguish fixation due to selective sweeps from simple drift [@Korolev:2010gb; @Hallatschek:2010bs].
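The classical front speed of the Fisher-KPP equation, $c = 2\sqrt{sD}$, can be checked with a few lines of numerical integration. This is a minimal finite-difference sketch (the grid sizes, domain, and step initial condition are our own illustrative choices, not the authors' setup):

```python
import numpy as np

# Explicit finite-difference integration of the Fisher-KPP equation
#   dp/dt = s*p*(1-p) + D * d^2p/dx^2.
# The pulled front should advance at close to c = 2*sqrt(s*D).
s, D = 1.0, 1.0
dx, dt = 0.5, 0.05                    # D*dt/dx**2 = 0.2 < 0.5: stable
x = np.arange(0.0, 400.0, dx)
p = (x < 20.0).astype(float)          # allele fixed on the left, absent on the right

times, front = [], []
t = 0.0
while t < 60.0:
    lap = np.zeros_like(p)
    lap[1:-1] = (p[2:] - 2.0*p[1:-1] + p[:-2]) / dx**2
    p = np.clip(p + dt*(D*lap + s*p*(1.0 - p)), 0.0, 1.0)
    t += dt
    if t >= 20.0:                     # discard the initial transient
        times.append(t)
        front.append(x[np.argmax(p < 0.5)])   # leftmost point with p < 0.5

speed = np.polyfit(times, front, 1)[0]        # least-squares front velocity
assert abs(speed - 2.0*np.sqrt(s*D)) < 0.2
```

The measured speed sits slightly below $2\sqrt{sD}$ because KPP fronts approach their asymptotic speed only slowly (a well-known $O(1/t)$ correction), which is why the tolerance is loose.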
Statistical mechanics and the quantitative genetics of finite populations {#sect:StatMech .unnumbered}
=========================================================================
Although the diffusion equation provides an exact description of evolution, the joint distribution at many loci is hard to grasp. Statistical mechanics simplifies the problem by following just a few variables that summarize all the allele frequencies (or, in physics, the particles’ states). This maps the fitness landscape for allele frequencies onto a simpler one for quantitative traits [@Lande:1976fv], which are analogous to macroscopic quantities in statistical physics (\[Box:DiffEq\], Fig. \[Fig:SimpsonLandscape\]) [@Rattray:2001p19].
#### **Maximization of entropy**
This reduction in dimensionality requires a way to account for the degrees of freedom lost in averaging over the underlying genetic states. This can be achieved by applying the principle of entropy maximization: we assume that the unknown micro-states follow a distribution that maximizes their entropy, $S$, given the values of macroscopic quantities [@PrugelBennett:1997p202]. Entropy can be defined in several ways. The definition appropriate here is analogous to the above, but extends to the case when $\underline{p}$ (the vector of allele frequencies at each locus) is the random variable: $S=-\int \psi \log[\psi/\varphi]d\underline{p} $. This defines the dispersion of the distribution of allele frequencies, $\psi$, relative to a base distribution, $\varphi$: it is maximized when the distribution is selectively neutral ($\psi=\varphi$) and decreases as the distribution becomes more tightly clustered around states that are a priori improbable [@Barton:2008p226; @Barton:2009p952; @Iwasa:1988p12]. If $S$ is maximized whilst constraining the expectations of some macroscopic variables, $\langle A_i \rangle= \int A_i \psi d\underline{p}$, we obtain a distribution of allele frequencies $\psi = Z^{-1} \varphi \exp[2N \sum_i \alpha_i A_i]$ [@Barton:2008p226], where $Z$ normalizes the distribution and $N$ is the population size. Remarkably, this distribution corresponds exactly to the stationary solution of the diffusion equation (\[Box:DiffEq\]), when the $A$’s are chosen according to the particular mode of selection (quantitative traits, genetic variance, etc.) and heterozygosity, and are conjugate to the $\alpha$’s, which are the selection coefficients, mutation rates, etc. [@Barton:2008p226; @Barton:2009p952; @Iwasa:1988p12]. This analogy between statistical mechanics and evolution of a finite population has yielded several results, of which we will mention a few.
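The pairing between a macroscopic observable and its conjugate coefficient can be illustrated numerically. In this sketch (our own toy setup: a single locus with $A = p$, a symmetric-mutation base distribution $\varphi \propto [p(1-p)]^{4N\mu-1}$, and illustrative parameter values) we solve by bisection for the $\alpha$ that makes the maximum-entropy distribution match a prescribed expectation $\langle p \rangle$:

```python
import numpy as np

# Entropy maximization gives psi ∝ phi * exp(2N * alpha * A).  Here the
# macroscopic variable is A = p (directional selection on one locus),
# phi is the neutral base distribution [p(1-p)]^(4*N*mu - 1), and we
# solve for the conjugate coefficient alpha that yields a target <p>.
N, mu = 100, 0.005
grid = np.linspace(1e-6, 1.0 - 1e-6, 20001)
log_phi = (4*N*mu - 1.0) * (np.log(grid) + np.log(1.0 - grid))

def mean_p(alpha):
    logpsi = log_phi + 2*N*alpha*grid
    w = np.exp(logpsi - logpsi.max())
    return float((grid * w).sum() / w.sum())

def solve_alpha(target, lo=-1.0, hi=1.0):
    # alpha -> <p> is monotone increasing (exponential family), so bisect.
    for _ in range(60):
        mid = 0.5*(lo + hi)
        lo, hi = (mid, hi) if mean_p(mid) < target else (lo, mid)
    return 0.5*(lo + hi)

alpha = solve_alpha(0.7)
assert abs(mean_p(alpha) - 0.7) < 1e-6
assert alpha > 0.0        # pushing <p> above 1/2 requires positive selection
```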
The dynamics of polygenic evolution can be approximated by a quasi-equilibrium assumption, that is, that the transient distribution of allele frequencies behaves as if the entropy is maximized at all times, given the current values of macroscopic variables. In this way, the change through time of quantitative characters – including their genetic variance – can be computed for populations affected by mutation, selection and drift, for an arbitrary number of loci [@Barton:2008p226; @deVladar:2011p5333]. In physics, macroscopic systems often change far more slowly than the microscopic fluctuations, justifying this approximation. In biology, we do not have such a stark separation. But nevertheless, the approximation is remarkably accurate even when the environment changes abruptly [@Barton:2008p226; @Barton:2009p952; @deVladar:2011p5333]; traveling waves may provide an explanation [@Hallatschek:2011p7231].
#### **Adaptive landscapes and detailed balance**
Wright’s formula for the stationary distribution [@Wright:1931] requires [**detailed balance**]{} [@Sella:2005p1584]. Population geneticists have shown that detailed balance is generally violated when there are more than two alleles at a locus [@Wright:1931], when recombination or migration are comparable with the strength of selection, or under frequency-dependent selection [@Taylor:2006p7233]. Without detailed balance, the dynamics cannot be represented by an adaptive landscape, and can be mathematically intractable (though see [@Ao:2008p1853]). Phylogenetic analysis reveals deviations from detailed balance – for example, when genomic GC content changes over time [@Galtier:2001p7165]. So, we need methods for analyzing populations that are in a stationary state that violates detailed balance, or that are not at a statistical equilibrium at all.
#### **Path ensembles**
An alternative method that holds without detailed balance is the [**path ensemble**]{} [@Barton:1987p4186; @Mustonen:2010p5306]. Instead of describing the distribution of allele frequencies at any single time, we follow the distribution of paths of allele frequencies between two states at different time-points (Fig. \[Fig:FitnessFlux\]). The probability of any path can be written down in a simple form, and the chance of a transition from one state to another obtained (in principle) by integrating over paths (\[Box:DiffEq\]). The trajectories are weighted with respect to an optimal one, through three terms: Fisher’s information, the variance in fitness, and the [**fitness flux**]{}, $\phi$ (\[Box:PathEnsemble\]) [@Mustonen:2010p5306]. The latter measures the net amount of adaptation given a population’s history. It is defined as $\phi=s \frac{dp}{dt}$, where $s$ is the selective coefficient of the beneficial allele; $\phi$ is the increase in mean fitness that is expected from changes in allele frequency, but without allowing for changes in selection. The fitness flux is distinct from the change in mean fitness, which in general is not well-defined when selection changes through time. Fitness flux includes changes in allele frequencies due to all evolutionary processes, and to the extent that these interfere with selection, can be negative. In considering the history of a population, the path ensemble methods give an understanding of the adaptation and evolution of complex traits that accounts for historical contingencies, an advantage over models that only consider a population’s state at a given time.
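As a deliberately simple worked example (our own, with illustrative parameter values): integrating $\phi = s\,dp/dt$ along a deterministic logistic sweep recovers the cumulative flux $\Phi = s\,(p_T - p_0)$ expected when the selection coefficient is constant:

```python
# Cumulative fitness flux along a deterministic selective sweep: with a
# constant selection coefficient s, phi = s * dp/dt integrates to
# Phi = s * (p_final - p_initial).
s, p0, dt, T = 0.05, 0.01, 0.01, 400.0
p, Phi = p0, 0.0
for _ in range(int(T / dt)):
    dpdt = s * p * (1.0 - p)     # logistic increase of the beneficial allele
    Phi += s * dpdt * dt         # fitness flux phi = s * dp/dt, accumulated
    p += dpdt * dt

assert p > 0.99                  # the sweep is essentially complete (s*T = 20)
assert abs(Phi - s * (p - p0)) < 1e-9
```

Under fluctuating selection $s(p,t)$ the same integral no longer telescopes, which is exactly why $\Phi$, rather than the change in mean fitness, is the well-defined historical quantity.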
Evolutionary biology and Monte-Carlo methods {#sect:EvolBiolMC .unnumbered}
============================================
Monte Carlo methods are now widely used in statistical inference. When many variables are involved it is not feasible to explore the whole space of possible states (e.g. all possible phylogenetic trees amongst multiple species). A group working on nuclear weapon development at Los Alamos introduced a simple but widely used algorithm [@Metropolis:1953p4288]. One simply makes a random change to the microscopic variables, accepting it if it increases some measure, $L$ (for example, mean fitness). Changes that decrease $L$ to $L^*$ are accepted with probability $L^*/L$. This ensures that the microscopic variables will follow a distribution proportional to the stationary distribution of the diffusion equation, which is, in turn, determined solely by the random changes, multiplied by $L$ (Fig. \[Fig:DESols\]). This *Metropolis algorithm* has been developed in a statistical context [@Hastings:1970p4287], and applied to generate likelihood surfaces for statistical inference [@Szymura:1986dz; @Geyer:1992fu; @Beaumont:2010uq]. Intriguingly, this algorithm uses a simple form of selection to generate a distribution equal to the product of a neutral base distribution, and the measure $L$ – just as selection and random drift lead to Wright’s distribution under the diffusion approximation (see \[Box:DiffEq\]). Both rely on detailed balance, but a path ensemble approach allows extension to more general cases [@Barton:1987p4186].
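The parallel between the Metropolis algorithm and Wright's distribution can be made explicit in a few lines. In this sketch (our own toy setup: a flat base distribution on $(0,1)$, $\log \bar W = s\,p$, and illustrative parameter values) allele frequencies are sampled from $\psi(p) \propto \bar{W}^{2N} = e^{2Nsp}$ and the sample mean is checked against the analytic value:

```python
import math, random

# Metropolis sampling of Wright's stationary distribution for a single
# locus: psi(p) ∝ Wbar(p)^(2N) with log Wbar = s*p and a flat base
# distribution on (0, 1), so psi(p) ∝ exp(2*N*s*p).
random.seed(1)
N, s = 20, 0.05                  # 2Ns = 2
log_psi = lambda p: 2*N*s*p

p, total, n = 0.5, 0.0, 200_000
for _ in range(n):
    q = p + random.uniform(-0.2, 0.2)          # symmetric random-walk proposal
    if 0.0 < q < 1.0 and random.random() < math.exp(min(0.0, log_psi(q) - log_psi(p))):
        p = q                                  # accept; otherwise keep current p
    total += p

mean = total / n
# Analytic mean of the density ∝ exp(a*p) on (0, 1), with a = 2Ns:
a = 2*N*s
exact = (math.exp(a)*(a - 1.0) + 1.0) / (a*(math.exp(a) - 1.0))
assert abs(mean - exact) < 0.02
```

Proposals outside $(0,1)$ are rejected outright, which preserves detailed balance since the target density vanishes there; here $e^{2Nsp}$ plays precisely the role of the measure $L$ above.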
Obstacles to overcome {#section:obstacles .unnumbered}
=====================
#### **Toy models and method-oriented analyses**
Over the last decade, physicists have shown strong interest in evolution. For example, in the last five years, over 2000 publications on evolution appeared in physics journals (chiefly *Physical Review* journals, *Physica A*, and *PNAS*). Unfortunately, most of these works pay little attention to the fundamental biology, because the motivation is often the specific methods rather than the biological questions. Consequently, many of these contributions remain unconnected to the rest of evolutionary theory; for the most part, there is very little communication between the disciplines. Two examples follow. In the *Bak-Sneppen model* [@Bak:1993kl], populations evolve by removing the least fit individual together with two unrelated neighbours, and replacing them by three new individuals with random fitness. A “critical value” is reached, but with repeated periods where the fitness distribution spreads, and then re-organizes to the critical value. The Bak-Sneppen model attempted to explain the distributions of extinction episodes [@SNEPPEN:1995p5544], and patterns of experimental evolution [@Elena:2005qa], but had little impact in biology because it lacks any mechanistic basis. Notably, only 13 of 700 citations of the Bak-Sneppen model [@Bak:1993kl] were by non-physicists. Second, in the *Penna bit-string model of ageing* [@Penna:1995mi], the position in the genome of an allele dictates the age at which its detrimental effect is expressed. A threshold for the total number of such deleterious mutations is set arbitrarily, and the population evolves under mutation and competition. Senescence arises because selection is less effective in late life, a phenomenon already well understood from Hamilton’s general analysis [@Hamilton:1966zt]. Here, out of roughly 230 citations of ref. [@Penna:1995mi], only 5 did not include physicists. These two approaches, and others like them, are not taken seriously by biologists, since they rest on “toy models” that are not connected with biological reality.
#### **Two problems that restrict communication between disciplines**
First, the language and nomenclature employed by physicists are often not consistent with basic concepts in genetics: they employ terms such as energy, spin glass, magnetization, Ising chain, etc. where they should use mean fitness, polygenes, directional selection, or polygenic trait [@Baake:1997ys; @Baake:2001p5991; @Barton:2009p952; @Hermisson:2002pi]. Standard population genetics notation is largely ignored, making even the most basic equations appear unfamiliar. To take a central example, the diffusion equation includes deterministic and stochastic “forces”. In evolution, the stochastic part models genetic drift. However, the term “drift” is used in physics to refer to the deterministic part! Different nomenclatures make it difficult for physicists to address important biological questions, and for biologists to understand the questions posed by physicists. This is amplified when new ideas are introduced. For example, in an explanation of the advantages of sex, the idea of mixability was introduced [@Livnat:2010p5546]: i.e. sex favours alleles that are fit across different genetic backgrounds. A recently proposed measure of “mixability” [@Livnat:2010p5546] is identical to Fisher’s analysis of variance, which was devised precisely as a measure of epistasis [@Fisher1918]. Take another example: a statistical mechanics approach was used to find the distributions of contributions made by individual ancestors to future generations. This defined the statistical “weight” of each individual’s contribution in a lineage [@Derrida:2000ff], which, in biological terms, is just the reproductive value of an individual, again a concept introduced by Fisher [@Fisher30].
Second, known results are often rediscovered due to the lack of a common language. For example, the original result that [**free fitness**]{} increases in evolution was illustrated with several examples from population and quantitative genetics, and was interpreted in terms of selection and drift [@Iwasa:1988p12]. Yet, the same principle was twice rediscovered by physicists decades later, with more restricted scope [@Sella:2005p1584]. Another example is the *NK model*, in which the fitness landscape can be “tuned” to alter the degree of epistasis for fitness; it was used to show that recombination is an evolvable trait [@Kauffman1993]. Yet, the theoretical analysis of the evolution of sex and recombination has been a thriving field since the 1970’s [@MaynardSmith1978]. No doubt population geneticists have re-derived results well known in physics (e.g. Wright’s calculation of rates of shift between adaptive peaks), but these are not usually published as new physics, and are typically studied for their biological implications. Nevertheless, physicists have also had a serious commitment to subjects meaningful to evolution. Significant works include those discussed in this article, clonal interference in asexuals [@Rouzine:2007p3220; @Rouzine:2010nx; @Neher:2010oq; @Hallatschek:2008hc], an application of percolation theory to speciation [@Gavrilets:1997p155], extending Haldane’s principle to a multilocus trait with partial dominance, epistasis and sexual reproduction [@Baake:1997ys; @Baake:2001p5991], and ecological explanations of replicator dynamics [@Demetrius:1983kx; @Demetrius:2007p7235]. All these are aimed directly at a biological audience, published in appropriate journals. Generally, physicists often have a sharp intuition about their models, which greatly helps in finding solutions.
Statistical physics is based on universal physical laws. In contrast, biological concepts are relative, plastic, or even arbitrary (e.g. mean fitness, traits). Hence the analogies with statistical-mechanical models are limited, depending on the nature of epistasis, physical linkage of the genes, unpredictable fluctuating selection, etc. Moreover, there are different ways in which precise analogies can be drawn, limiting their scope: some factors act deterministically (e.g. selection) and other stochastically (mutations or drift).
Conclusions {#setc:conclusions .unnumbered}
===========
Many of the fundamental processes of both population genetics and statistical physics are described by diffusion. In evolution, it provides a common framework for features such as the change in allele frequencies [@Wright:1931; @Wright:1937p5322], genealogies [@Barton:2004p4637], and spatial dispersal [@Fisher:1937p5980]. All these, and others, can benefit from methods of non-equilibrium statistical mechanics, which is a major and active field in physics.
The concept of a path ensemble is especially useful, shifting the paradigm from tracking frequencies at each point in time, to considering selection over the whole history of the alleles [@Barton:2004p4637]. This can be applied to both deterministic [@Leibler:2010p6108] and stochastic evolution [@Barton:1987p4186]. In turn, long-standing questions about the efficiency of natural selection in building complex phenotypes [@MaynardSmith:2000p4611; @Haldane:1957p213; @Kimura:1961p5973], and evolution under fluctuating selection, can be re-addressed.
Of course, we can ask whether the mathematical paraphernalia that we advocate is of any practical use. Although we should not take mathematical models too literally, they are useful both for generating hypotheses about evolution, and for making sense of ecological and genetic data. Most notably, the neutral theory provides the conceptual framework for analyses of sequence data [@Kimura:1985], and quantitative genetics predicts the effects of selection on complex traits [@LynchWalsh:1998]. Ideas from statistical mechanics may help by providing new ways to describe the evolution of complex traits, and by suggesting constraints on the efficacy of selection. A clearer understanding of concepts such as fitness flux and entropy suggests new ways to think about the evolution of quantitative traits. To understand adaptation, we need to contemplate not only the current state of populations, but also their history. This is of course an old idea, but the rationale that we review suggests new ways to understand the process of adaptation in a historical and quantitative way.
#### **Acknowledgments**
We would like to thank J.P. Bollback, R. Cipriani, J. Hermisson, J. Polechova, and D. Weissman for their comments and observations. This research was funded by the ERC-2009-AdG Grant for project 250152 SELECTIONINFORMATION.
[**Boltzmann distribution**]{}: A probability measure of the microscopic states of a physical system that is composed of classical (i.e. not quantum) particles in thermodynamic equilibrium. This distribution has a density proportional to the factor $\exp (-E/kT)$, where $E$ is the energy of a state, $k$ is Boltzmann’s constant, and $T$ is the absolute temperature.
[**Detailed balance**]{}: An equilibrium where the probability flux of the transitions between any two states is equal in either direction. In population genetics this implies that the numbers of adaptive and deleterious substitutions have to be equal on average.
A measure of the number of possible configurations of a system. The classical measure of entropy is due to Boltzmann: $S=-k \log \Omega $, where $\Omega $ is the number (or density) of microscopic states (e.g. allele frequencies) that a system can realize for a given macroscopic state (mean fitness, a quantitative variable, etc.) and $k$ is Boltzmann’s constant. Relative entropy is defined as $S=-\int \psi \log (\psi /\varphi ) d\underline {p} $, where the $\underline {p}$ are the microscopic states, and the integral goes over all possible realizations; $\psi $ is the distribution of micro-states, and $\varphi $ is a base or reference distribution (satisfying $ \varphi = 2N V_{\delta p}$). However, when $\varphi =$ const. we have Shannon’s entropy, which is the form used in statistical physics. Entropy is also equivalent to the log-likelihood of $\varphi$ (the proposed distribution), where $\psi$ is the sampling probability of the actual distribution.
A measure of how much an infinitesimal change in an unknown parameter $\theta $ affects the likelihood $\psi $ of an observed data set, $p$. Fisher’s information is defined as $F=\int \psi (p;\theta ) \left ( \frac {\partial }{\partial \theta }\log [\psi (p;\theta )] \right )^2 dp $. When the parameter $\theta $ is time, Fisher’s information describes the amount of information gained through selection.
A measure of adaptation defined as $\phi (t)=s(p,t) dp/dt$, where $s$ is the selection coefficient (fitness gradient) and $p$ is the allelic frequency. Geometrically, it is the strength of fitness change (since s is the gradient of fitness, $W$), along the direction of evolution (given by $dp/dt$). The cumulative fitness flux, $ \Phi = \int \phi dt$, is a measure of the total amount of adaptation through the history of a population.
The expected gain in log-mean fitness after selection; in analogy with the free energy of a physical system, i.e. the amount of work that can be extracted from a thermodynamic system. Free fitness ($I$) emerges naturally when computing the gain in entropy $S$ when an allele or a trait undergoes selection [@Iwasa:1988p12], and has an expression equivalent to free energy, i.e. $I=\langle \log (\bar {W})\rangle -S/2N$ (in physics $\langle \log (\bar {W})\rangle $ should be replaced by $\langle E \rangle $ , and $2N$ by $1/kT$; see entry for Boltzmann distribution).
Interference in the selective sweep of an allele, due to selective effects at other linked loci. Hill-Robertson interference implies that, in the presence of recombination, genotypes with multiple mutations arise more easily by recombining existing single mutants than by multiple mutation events.
A formalism of non-equilibrium statistical mechanics and quantum mechanics where the description of the system emphasizes not the states of a population of entities, but rather the distribution of possible stochastic paths that such a population can follow.
Population of replicators (typically asexual) with a high genotypic variability maintained by elevated mutation rates.
Dynamical equations that describe the change in time of the frequency $p$ of the different types (in particular genotypes). They have the general form $dp/dt = p\Delta W + T$, where $\Delta W$ is the difference between the fitness of the type and the mean fitness, and $T$ are the “transmission” terms, that may involve mutation, migration, recombination, etc.
Failure to survive or reproduce due to differences in genotype.
A probability distribution that does not change in time. This is found from the diffusion equation by setting $\partial \psi / \partial t = 0$ , and solving the resulting differential equation that is independent of time. A stationary solution might not exist (e.g. if selection is changing in time in particular ways), and if it exists, it might require detailed balance.
A mathematical framework explaining the macroscopic properties of a system in terms of the dynamics of its microscopic variables. At equilibrium, it leads to the classical concepts of entropy, free energy, and temperature, for example. Out of equilibrium, these quantities cannot be defined formally, and current research focuses on finding probabilistic measures that apply in general, but are still based on the microscopic dynamics. Based principally on the properties of stochastic processes (e.g. the diffusion equations, or path ensembles), these measures can be applied to the distribution of allele frequencies (e.g. Fisher’s information and fitness flux).
Solutions to non-linear differential equations characterized by functions that are of stable shape, and move at a certain velocity either in physical space, or in genetic space. (Traveling waves are also known as *solitons* in the physics and mathematics literature.)
Scheme where individuals that have traits outside a prescribed range are eliminated. This type of selection is popular in artificial selection.
![In the solution to the diffusion equation, the effects of fitness (blue) combine with neutral factors (green) to give the distribution of allele frequencies (red). The Metropolis-Hastings algorithm has an analogous structure: the acceptance weights (blue) and the random fluctuations (green) combine to give the distribution that is being estimated (red).[]{data-label="Fig:DESols"}](./DE.pdf)
![Mapping the genetic fitness landscape to a quantitative-trait fitness landscape. Left: different combinations of allele frequencies, lie in a hyperspace (shown only for a projection of 4 loci), where the axes represent the frequency of each allele. In this plot each point represents a population. The dense cloud of points towards the centre is an optimal peak, set at 0011. The other clouds are at sub-optimal adaptive peaks one mutation away from the optimum. However, each genotype determines a trait, and the population is mapped to a space of trait means, $z$, and genetic variance, $\nu$. Thus, mean fitness, trait mean, and genetic variance, although related by the allele frequencies, generate a fitness landscape in quantitative variables (yellow surface, the height indicating log-mean fitness). The number of variables (degrees of freedom) is collapsed from a hyperspace of an arbitrary number of allele frequencies at each locus to two quantitative variables: trait mean and genetic variance.[]{data-label="Fig:SimpsonLandscape"}](./SimpsonLandscape.pdf)
![The top panel shows the distribution of allele frequencies through time (shown as contour levels). Initially, populations follow the neutral distribution (left axis; $N\mu=0.7$). Directional selection $Ns=2.5$ is then applied, and populations settle to a new distribution (right axis). Any actual realization (red curve) fluctuates stochastically around an optimal one (illustrated with the green curve). The white dashed line is the deterministic solution, shown as a reference. The lower panel shows the fitness flux (upper curve, green) and the decrease in entropy (lower curve, red). When selection changes abruptly, as here, fitness flux is substantially greater than the decrease in entropy. However, if selection were to change slowly, the two would be equal throughout.[]{data-label="Fig:FitnessFlux"}](./Ensemple.pdf "fig:") ![The top panel shows the distribution of allele frequencies through time (shown as contour levels). Initially, populations follow the neutral distribution (left axis; $N\mu=0.7$). Directional selection $Ns=2.5$ is then applied, and populations settle to a new distribution (right axis). Any actual realization (red curve) fluctuates stochastically around an optimal one (illustrated with the green curve). The white dashed line is the deterministic solution, shown as a reference. The lower panel shows the fitness flux (upper curve, green) and the decrease in entropy (lower curve, red). When selection changes abruptly, as here, fitness flux is substantially greater than the decrease in entropy. However, if selection were to change slowly, the two would be equal throughout.[]{data-label="Fig:FitnessFlux"}](./FitnesFlux.pdf "fig:")
Evolution and the material basis of heredity {#Box:History}
============================================
Early in the last century, Fisher embarked on the mathematical formalization of the Mendelian principles of heredity, following the earlier development of biometrics by Galton, Pearson and Weldon, all with the aim of quantifying evolution by natural selection. Fisher used the diffusion approximation (see \[Box:DiffEq\]) to describe the evolution of allele frequencies [@Fisher:1922lh]. In 1930, he introduced the Fundamental Theorem of Natural Selection [@Fisher30]; comparing it to the second law of thermodynamics, the increase of entropy, he intended the theorem to be an exact result, a “biological law” [@Price:1972tw; @Ewens:1989qo]. Although his comparison with the second law is flawed, it shows how the quantitative approach to heredity was influenced by statistical thermodynamics (indeed, Fisher had studied with the physicist J.H. Jeans). That mechanistic basis of evolution, population genetics, was formulated without knowledge of the physical nature of the Mendelian genes, which was still unknown in the 1930’s: the structure of DNA was not established until 1953. In the following decades, Delbr[ü]{}ck, formerly an astrophysicist, started a collaboration employing ionizing radiation on *Drosophila* to understand the physical nature of the genes, as a working system to try to identify fundamental physical laws that would account for living and non-living matter [@Timofeeff-Ressovsky:1935il]. Later, the ingenious Luria-Delbrück experiment demonstrated the random, undirected origin of mutations, a cornerstone of Darwinian evolution: they performed a statistical comparison between the number of bacteria developing resistance to lysogenic viruses and its expected distribution, derived from a mathematical analysis, an unusual quantitative approach for the biologists of the time [@Luria:1943p4720]. Soon after, the quantum theorist Schrödinger published *What is life?* [@Schroedinger1944], posing fundamental biological questions in physicists’ language, partly based on Delbrück’s discoveries.
This book gave strong motivation to the first molecular biologists (among them Perutz, Wilkins, Crick and Watson) to find how DNA transmitted the heritable information to future generations [@Judson1998]. Molecular biology was influenced in large part by the use of physical techniques, such as X-ray crystallography, to determine biological structures. Evolution, however, while resting on that material basis of DNA, is not explained by it. Indeed, the population genetic framework that we use today was developed prior to the discovery of the structure of DNA, and was not changed by the establishment of molecular biology, since it rests only on Mendel’s laws. However, the theoretical methods that are common to statistical physics and to evolutionary biology give a deeper understanding of the evolutionary consequences of heredity.
The Diffusion Equation {#Box:DiffEq}
======================
The diffusion equation originated in Bachelier’s models of fluctuations in share prices in 1900, and was rediscovered 73 years later as the Black-Scholes formula, disastrously popular amongst economists. Diffusion theory in economics is equivalent to the theory of Brownian motion, devised by Einstein to explain random molecular collisions, and soon after extended in physical applications [@Davis:2006mb]. Fisher [@Fisher:1922lh] compared Mendelian genetics to “the theory of gases” and introduced the diffusion methods for the allele frequencies. Kolmogorov [@Kolmogorov:1931ye] gave a more formal approach to selection and drift. Kimura [@Kimura:1964p5898] later extended this formalism to non-equilibrium cases. For population genetics, the diffusion equation is a rather convenient representation of the evolution of finite populations where genetic drift is present. We could choose to model the change in allele frequencies directly, in what is known as a Wright-Fisher process. But allele frequencies evolve stochastically under genetic drift, making the outcomes of evolution unpredictable. The diffusion equation gauges these outcomes in a probabilistic way, describing the distribution of allele frequencies at each time. (A third way to describe an evolving population is to use the whole history as a random variable instead of the allele frequencies at a given time, in what is known as a path ensemble; see \[Box:PathEnsemble\]). In short, the diffusion equation is a partial differential equation describing the change in time of the probability density $\psi$ of the allele frequencies $p$, namely $$\frac{\partial \psi}{\partial t}=-\frac{\partial}{\partial p}(M_{\delta p} \psi)+\frac{1}{2}\frac{\partial^2}{\partial p^2}(V_{\delta p}\psi) ~,$$ where $M_{\delta p} $ are the deterministic factors, due to selection, mutation, migration, etc., and $V_{\delta p}$ is the variance of the fluctuations by drift, usually of the form $p(1-p)/2N$.
Making the left-hand side equal to zero leads to the stationary solution derived by Wright by other means \[16\]: $$\psi =C\bar{W}^{2N} \left[p(1-p)\right]^{4N\mu-1} .$$ For details of the derivation see [@Kimura:1964p5898]. In particular, the term $\bar{W}$ defines the “fitness landscape”, which can be thought of as a surface in the space of allele frequencies (Fig. \[Fig:DESols\]), or in the quantitative variables (Fig. \[Fig:SimpsonLandscape\]).
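As a numerical illustration of this stationary solution (and of the Metropolis-Hastings analogy drawn in Fig. \[Fig:DESols\]), the following sketch samples Wright's distribution under directional selection, $\bar{W}=\exp(sp)$, and compares the sampled mean allele frequency with direct numerical integration. The parameter values are illustrative, chosen so that $Ns=2.5$ and $N\mu=0.7$ as in Fig. \[Fig:FitnessFlux\]; they are not taken from the text.

```python
import numpy as np

# Wright's stationary density (up to the constant C) with Wbar = exp(s*p):
#   log psi(p) = 2*N*s*p + (4*N*mu - 1) * log(p*(1-p))
# Illustrative parameters: Ns = 2.5, N*mu = 0.7 (an assumed choice).
N, s, mu = 100, 0.025, 0.007

def log_psi(p):
    return 2 * N * s * p + (4 * N * mu - 1) * np.log(p * (1 - p))

# Metropolis-Hastings: random proposals (the "green" ingredient) filtered
# by acceptance weights (the "blue" ingredient) sample psi (the "red" one).
rng = np.random.default_rng(1)
p, samples = 0.5, []
for _ in range(200_000):
    q = p + 0.05 * rng.normal()
    if 0 < q < 1 and np.log(rng.uniform()) < log_psi(q) - log_psi(p):
        p = q
    samples.append(p)
mean_mcmc = float(np.mean(samples[20_000:]))   # discard burn-in

# Reference value by direct numerical integration on a fine grid.
grid = np.linspace(1e-4, 1 - 1e-4, 20_001)
w = np.exp(log_psi(grid))
w /= w.sum()
mean_num = float((grid * w).sum())
print(mean_mcmc, mean_num)   # the two estimates should agree to ~1e-2
```

Setting $s=0$ in the same chain recovers the neutral mutation-drift distribution on the left axis of the figure.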
The diffusion equation, the coalescent process, and the path ensemble all describe the same process and are mathematically equivalent. Each has different advantages and limitations; whereas a stochastic differential equation, the diffusion equation and the path ensemble do not require detailed balance, the stationary distribution above does. Yet, this solution is exact, quite general and relatively simple.
Path ensembles and fitness flux {#Box:PathEnsemble}
===============================
A path ensemble considers all the possible histories of a population between two fixed states $p_0$ and $p_T$ at times $0$ and $T$. In this description, each history is the variable being described. The probability of a particular trajectory $\rho(t)$ is proportional to the factor $\exp\left[ -N\int \left(\frac{dp}{dt}- M_{\delta p} \right)^2\frac{dt}{p(1-p)}\right]~,$ where the allele frequencies $p$ are evaluated at each point of the history $\rho(t)$. Here, $M_{\delta p}$ is the same factor as in the diffusion equation –suggesting the connection between the two methods. The path integral can be understood as a sum if the history is sampled at discrete times, $\rho=\{\rho_0, \rho_1, \ldots, \rho_T\}$. Notice that because the integral is always positive, if it achieves a minimum for a given history, then that history has the highest probability. To understand the meaning of the integral we may expand the square inside the integral into three terms, $F-2\phi+\nu$, and consider the case of selection, $M_{\delta p}=p(1-p)s$; $F$ is Fisher’s information, $\phi=\frac{M_{\delta p}}{p(1-p)}\frac{dp}{dt}=s\frac{dp}{dt}$ is the fitness flux, and $\nu$ is the additive genetic variance in fitness. Thus the histories occur as a compromise between minimizing Fisher’s information and genetic variance –both regarded as measures of the speed of adaptation– and maximizing the fitness flux.
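Explicitly, with $M_{\delta p}=p(1-p)s$, expanding the square in the exponent gives (notation as in the text, with the three terms identified underneath):

```latex
\frac{1}{p(1-p)}\left(\frac{dp}{dt}-s\,p(1-p)\right)^{2}
  = \underbrace{\frac{1}{p(1-p)}\left(\frac{dp}{dt}\right)^{2}}_{F}
  \;-\; \underbrace{2\,s\,\frac{dp}{dt}}_{2\phi}
  \;+\; \underbrace{s^{2}\,p(1-p)}_{\nu}\,.
```

The cross term is $2\phi$ because the factor $p(1-p)$ in $M_{\delta p}$ cancels against the denominator, and the last term reduces to $s^{2}p(1-p)$, the expression for the additive variance in fitness used below.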
Fitness flux is a measure of the adaptation of beneficial alleles [@Mustonen:2010p5306]; the cumulative flux $\left(\Phi=\int \phi dt \right)$ of a population history is the equivalent measure to the fitness of a population (if we think of successive substitutions, it is the total of all the selection coefficients associated with each substitution). The expectation of cumulative fitness flux is necessarily greater than the reduction in entropy between the initial and final equilibrium states (which can be understood as the information gained by the population) \[49\]: $2N\langle \Phi \rangle\geq -\Delta S$. That is, it takes a certain amount of selection (measured precisely by the fitness flux) to move the allele frequency distribution away from its neutral state (as measured by the decrease in entropy). This result is quite generally valid, and is not restricted to, say, constant selection. Moreover, if selection changes slowly so that the distribution stays close to the stationary state, then $2N\langle \Phi \rangle = -\Delta S$; such changes are termed “reversible”. For example, assume that the allele frequencies initially follow a neutral distribution (mutation-drift balance). Suddenly, directional selection is applied so that $\bar{W}=\exp(sp)$, and loci move toward a new distribution under selection and drift. The fitness flux is then substantially greater than the decrease in entropy (Fig. \[Fig:FitnessFlux\]). If, on the other hand, selection were increased very slowly, eventually to reach the same strength, the net fitness flux would necessarily be much smaller, and equal to the decrease in entropy (lower curve in Fig. \[Fig:FitnessFlux\]). The fitness flux method is surprisingly general. However, its relation to quantities that might actually constrain the extent of selection remains to be clarified.
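These quantities can be evaluated on any discretized history. The sketch below (with illustrative parameter values, not taken from the text) computes the log path weight of \[Box:PathEnsemble\] and the cumulative flux $2N\Phi$ for the deterministic trajectory, which makes the integrand vanish and therefore carries the highest weight:

```python
import numpy as np

# Discretized log path weight, -N * sum (dp/dt - M)^2 * dt / (p*(1-p)),
# with M = s*p*(1-p) (selection only), and cumulative fitness flux
# Phi = int s*(dp/dt) dt = s*(p_T - p_0). Parameters are illustrative.
def log_path_weight(path, dt, N, s):
    p = np.asarray(path)
    mid = 0.5 * (p[1:] + p[:-1])      # frequency along each small step
    dpdt = np.diff(p) / dt
    M = s * mid * (1 - mid)
    return -N * dt * np.sum((dpdt - M) ** 2 / (mid * (1 - mid)))

def cumulative_flux(path, s):
    return s * (path[-1] - path[0])   # telescoped sum of s * dp

dt, N, s = 0.01, 100, 0.05
t = np.arange(0, 50 + dt, dt)
det = 1 / (1 + 9 * np.exp(-s * t))    # deterministic (logistic) path, p(0)=0.1
lw = log_path_weight(det, dt, N, s)
flux = cumulative_flux(det, s)
print(lw, 2 * N * flux)               # lw is ~0: the most probable history
```

Any stochastic history with the same endpoints gives a strictly more negative log weight; averaging $2N\Phi$ over such histories gives the left-hand side of the inequality $2N\langle \Phi \rangle\geq -\Delta S$.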
In particular, the additive variance in fitness is proportional to $s^2p(1-p)$; we see that the additive genetic variance in fitness is just twice the fitness flux, when the latter includes only the change in allele frequency due to selection, $\Delta_s p=s p(1-p)$. Further understanding could emerge by relating the decrease in entropy due to selection to the additive genetic variance in fitness [@Barton:2000p5970].
Lastly, it is relevant that fitness flux represents an extension of Fisher’s Fundamental Theorem of Natural Selection [@Fisher30]: it considers not only the change due to selection, but also the effects of drift, and unlike Fisher’s theorem, the fitness flux theorem holds also for weak selection ($Ns\sim 1$) [@Mustonen:2010p5306].
L. Boltzmann, Vorlesungen [ü]{}ber Gastheorie, J.A. Barth, 1896.
H. B. Callen, Thermodynamics and an introduction to thermostatics, John Wiley $\&$ Sons, 1985.
J. Maynard-Smith, The concept of information in biology, Phil. Sci. 67 (2000) 177–194.
C. E. Shannon, A mathematical theory of communication, Bell Syst. Tech. J. 27 (1948) 379–423.
J. B. S. Haldane, The cost of natural selection, Genetics 55 (1957) 511–524.
M. Kimura, Natural selection as the process of accumulating genetic information in adaptive evolution, Genet. Res. 2 (1) (1961) 127–140.
R. P. Worden, A speed limit for evolution, J. Theor. Biol. 176 (1995) 137–152.
M. Eigen, P. Schuster, The hypercycle: A principle of natural self-organization. [P]{}art [A]{}: Emergence of the hypercycle, Naturwissenschaften 64 (11) (1977) 541–565.
M. Kimura, T. Maruyama, Mutational load with epistatic gene interactions in fitness, Genetics 54 (6) (1966) 1337–1351.
B. Frieden, A. Plastino, B. Soffer, Population genetics from an information perspective, J. Theor. Biol. 208 (1) (2001) 49–64.
S. Frank, Natural selection maximizes fisher information, J. Evol. Biol 22 (2) (2009) 231.
J. Crow, M. Kimura, The theory of genetic loads, Proc. 11th Intern. Congr. Genet 3 (1964) 495–506.
W. J. Ewens, Mathematical Population Genetics, Springer - Verlag, Berlin, Germany, 1979.
D. J. C. Mackay, Information theory, inference, and learning algorithms, Cambridge University Press, 2003.
J. Peck, D. Waxman, Is life impossible? [I]{}nformation, sex and the origin of complex organism, Evolution 64 (11) (2010) 3300–3309.
J. B. S. Haldane, The effect of variation in fitness, Am. Nat. 72 (1937) 337–349.
C. J. C. H. Watkins, The channel capacity of evolution: ultimate limits on the amount of information maintainable in the genome, Proc. 3rd Intern. Conf. Bioinf. Genome Reg. Struct. 2 (2002) 58–60.
R. A. Fisher, On the dominance ratio, Proc. Roy. S. Edinb. 42 (1922) 321–341.
A. N. Kolmogorov, Deviations from [H]{}ardy’s formula in partial isolation, C.R. Acad Sci. U.R.S.S. 8 (1935) 129–132.
S. Wright, Evolution in [M]{}endelian populations, Genetics 16 (1931) 97–159.
M. Kimura, Stochastic processes and distribution of gene frequencies under natural selection, Cold Spring Harb. Symp. Quant. Biol. 20 (1955) 33–53.
M. Kimura, The neutral theory of molecular Evolution, Cambridge Univ. Press, Cambridge, UK, 1985.
S. Wright, Surfaces of selective value revisited, Am. Nat. 131 (1) (1988) 115–123.
N. H. Barton, S. Rouhani, The frequency of shifts between alternative equilibria, J. Theor. Biol. 125 (4) (1987) 397–418.
S. Wright, On the probability of fixation of reciprocal translocations, Am. Nat. 75 (1941) 513–522.
R. Lande, The fixation of chromosomal rearrangements in a subdivided population with local extinction and colonization, Heredity 54 (1985) 323–332.
S. Rouhani, N. H. Barton, Speciation and the shifting balance in a continuous population, Theor. Popul. Biol. 31 (1987) 465–492.
C. O. Wilke, The speed of adaptation in large asexual populations, Genetics 167 (2004) 2045–2053.
I. M. Rouzine, [É]{}. Brunet, C. Wilke, The traveling-wave approach to asexual evolution: Muller’s ratchet and speed of adaptation, Theor. Pop. Biol. 73 (2007) 24–46.
R. B[ü]{}rger, Evolution of genetic variability and the advantage of sex and recombination in changing environments, Genetics 153 (1999) 1055–1069.
O. Hallatschek, The noisy edge of traveling waves, P. Natl. Acad. Sci. USA 108 (5) (2011) 1783–1787.
I. M. Rouzine, J. M. Coffin, Multi-site adaptation in the presence of infrequent recombination, Theor. Popul. Biol. 77 (2010) 189–204.
R. A. Neher, B. I. Shraiman, D. S. Fisher, Rate of adaptation in large sexual populations, Genetics 184 (2010) 467–481.
J. R. Peck, D. Waxman, Sex and adaptation in a changing environment, Genetics 153 (1999) 1041–1053.
N. H. Barton, Why sex and recombination?, Cold Spring Harb. Symp. Quant. Biol. 74 (2009) 158–170.
R. Fisher, The wave of advance of advantageous genes, Ann. Eugen. 7 (1937) 355–369.
A. N. Kolmogorov, I. Petrovskii, N. Piscounov, A study of the diffusion equation with increase in the amount of substance, and its application to a biological problem, in: V. M. Tikhomirov (Ed.), Selected Works of [A. N. K]{}olmogorov [I]{}, Kluwer, 1991, pp. 248–270.
O. Hallatschek, D. R. Nelson, Gene surfing in expanding populations, Theor. Pop. Biol. 73 (2008) 158–170.
P. Ralph, G. Coop, Parallel adaptation: one or any waves of advance of an advantageous allele?, Genetics 186 (2010) 647–668.
G. J. Bauer, J. S. McCaskill, H. Otten, Traveling waves of in vitro evolving [RNA]{}, Proc. Natl. Acad. Sci. U.S.A. 20 (1989) 7937–7941.
K. S. Korolev, M. Avlund, O. Hallatschek, D. R. Nelson, Genetic demixing and evolution in linear stepping stone models, Rev. Mod. Phys. 82 (2010) 1691–1718.
O. Hallatschek, D. R. Nelson, Life at the front of an expanding population, Evolution 64 (2010) 193–206.
R. Lande, Natural selection and random genetic drift in phenotypic evolution, Evolution 30 (1976) 314–334.
M. Rattray, J. Shapiro, Cumulant dynamics of a population under multiplicative selection, mutation, and drift, Theor. Pop. Biol. 60 (2001) 17–32.
A. Pr[ü]{}gel-Bennett, Modelling evolving populations, J. Theor. Biol. 185 (1) (1997) 81–95.
N. H. Barton, H. P. de Vladar, Statistical mechanics and the evolution of polygenic quantitative traits, Genetics 181 (3) (2009) 997–1011.
N. H. Barton, J. Coe, On the application of statistical physics to evolutionary biology, J. Theor. Biol. 259 (2009) 317–324.
Y. Iwasa, Free fitness that always increases in evolution, J. Theor. Biol. 135 (1988) 265–281.
H. P. de Vladar, N. H. Barton, The statistical mechanics of a polygenic character under stabilizing selection, mutation and drift, J. Roy. Soc. Interface 8 (58) (2011) 720–739.
G. Sella, A. Hirsh, The application of statistical physics to evolutionary biology, P. Natl. Acad. Sci. USA 102 (27) (2005) 9541–9546.
C. Taylor, Y. Iwasa, M. A. Nowak, A symmetry of fixation times in evolutionary dynamics, J. Theor. Biol. 243 (2) (2006) 245–251.
P. Ao, Emerging of stochastic dynamical equalities and steady state thermodynamics from darwinian dynamics, Commun. Theor. Phys. 49 (5) (2008) 1073–1090.
N. Galtier, G. Piganeau, D. Mouchiroud, L. Duret, [GC]{}-content evolution in mammalian genomes: The biased gene conversion hypothesis, Genetics 159 (2) (2001) 907–911.
V. Mustonen, M. Lassig, Fitness flux and ubiquity of adaptive evolution, P. Natl. Acad. Sci. USA 107 (9) (2010) 4248–4253.
N. Metropolis, A. Rosenbluth, M. Rosenbluth, A. Teller, E. Teller, Equation of state calculations by fast computing machines, J. Chem. Phys. 21 (6) (1953) 1087.
W. Hastings, Monte [C]{}arlo sampling methods using [M]{}arkov chains and their applications, Biometrika 57 (1) (1970) 97.
J. Szymura, N. H. Barton, Genetic-analysis of a hybrid zone between the fire-bellied toads, *Bombina bombina* and *Bombina variegata*, near [C]{}racow in southern [P]{}oland, Evolution 40 (1986) 1141–1159.
C. J. Geyer, E. A. Thompson, Constrained Monte Carlo maximum likelihood for dependent data (with discussion), J. Roy. Statist. Soc. B 54 (1992) 657–699.
M. A. Beaumont, Approximate [B]{}ayesian computation in evolution and ecology, Ann. Rev.Ecol. Evol. Syst. 41 (2010) 379–406.
P. Bak, K. Sneppen, Punctuated equilibrium and criticality in a simple model of evolution, Phys. Rev. Lett. 71 (1993) 4083–4086.
K. Sneppen, P. Bak, H. Flyvbjerg, M. Jensen, Evolution as a self-organized critical phenomenon, P. Natl. Acad. Sci. USA 92 (11) (1995) 5209–5213.
S. F. Elena, R. Sanju[á]{}n, [RNA]{} viruses as complex adaptive systems, Biosystems 81 (2005) 31–41.
T. J. P. Penna, A bit-string model for biological aging, J. Stat. Phys. 78 (1995) 1629–1633.
W. Hamilton, Moulding of senescence by natural selection, J. Theor. Biol. 12 (1966) 12–45.
E. Baake, M. Baake, H. Wagner, Ising quantum chain is equivalent to a model of biological evolution, Phys. Rev. Lett. 78 (1997) 559–562.
E. Baake, H. Wagner, Mutation-selection models solved exactly with methods of statistical mechanics, Genet. Res. 78 (1) (2001) 93–117.
J. Hermisson, O. Redner, H. Wagner, E. Baake, Mutation-selection balance: ancestry, load and maximum principle., Theor. Pop. Biol. 62 (2002) 9–46.
A. Livnat, C. Papadimitriou, N. Pippenger, M. W. Feldman, Sex, mixability, and modularity, P. Natl. Acad. Sci. USA 107 (4) (2010) 1452–1457.
R. A. Fisher, The correlation between relatives on the supposition of [M]{}endelian inheritance, Trans. Roy. Soc. Edinb. 52 (1918) 399–433.
B. Derrida, S. Manrubia, D. Zanette, Distribution of repetitions of ancestors in genealogical trees, Physica A 281 (2000) 1–16.
R. A. Fisher, The genetical theory of natural selection, 1st Edition, Clarendon, Oxford, UK, 1930.
S. A. Kauffman, The origins of order, Oxford University Press, 1993.
J. Maynard-Smith, The evolution of sex, Cambridge University Press, 1978.
S. Gavrilets, Evolution and speciation on holey adaptive landscapes, Trends Ecol. Evol. 12 (8) (1997) 307–312.
L. Demetrius, Statistical mechanics and population biology, J. Stat. Phys. 30 (1983) 709–753.
L. Demetrius, M. Ziehe, Darwinian fitness, Theor. Pop. Biol. 72 (3) (2007) 323–345.
S. Wright, The distribution of gene frequencies in populations, P. Natl. Acad. Sci. USA 23 (1937) 307–320.
N. H. Barton, A. Etheridge, The effect of selection on genealogies, Genetics 166 (2) (2004) 1115.
S. Leibler, E. Kussell, Individual histories and selection in heterogeneous populations, P. Natl. Acad. Sci. USA 107 (29) (2010) 13183–13188.
M. Lynch, B. Walsh, Genetics and Analysis of Quantitative Traits, Sinauer Associates, Sunderland, USA, 1998.
G. R. Price, Fisher’s fundamental theorem made clear, Ann. Hum. Genet. 36 (1972) 129–140.
W. J. Ewens, An interpretation and proof of the fundamental theorem of natural selection, Theor. Popul. Biol. 36 (1989) 167–180.
N. Timof[é]{}eff-Ressovsky, K. Zimmer, M. Delbr[ü]{}ck, [Ü]{}ber die natur der genmutation und der genstruktur, Nachr. Ges. Wiss. Gottingen 1 (1935) 189–245.
S. Luria, M. Delbr[ü]{}ck, Mutations of bacteria from virus sensitivity to virus resistance, Genetics 28 (6) (1943) 491–511.
E. Schr[ö]{}dinger, What is life?, Cambridge University Press, 1944.
H. F. Judson, The eighth day of creation: makers of the revolution in biology, Plainview, 1998.
M. Davis, A. Etheridge, Louis Bachelier’s Theory of Speculation: the origins of modern finance, Princeton Univ. Press, 2006.
A. N. Kolmogorov, On the analytical methods in probability calculations, Math. Ann. 104 (1931) 415–458.
M. Kimura, Diffusion models in population genetics, J. Appl. Probab. 1 (2) (1964) 177–232.
N. Barton, L. Partridge, Limits to natural selection, Bioessays 22 (12) (2000) 1075–1084.
---
abstract: 'Lindsay and Basak (2000) posed the question of how far from normality could a distribution be if it matches $k$ normal moments. They provided a bound on the maximal difference in c.d.f.’s, and implied that these bounds were attained. It will be shown here that in fact the bound is not attained if the number of even moments matched is odd. An explicit solution is developed as a symmetric distribution with a finite number of mass points when the number of even moments matched is even, and this bound for the even case is shown to hold as an explicit limit for the subsequent odd case.'
author:
- 'Stephen Portnoy[$ ^1 $]{}'
title: 'Exact Probability Bounds under Moment-matching Restrictions'
---
Portnoy (2015) presents results partially correcting claims in Lindsay and Basak (2000) concerning the worst-case approximation of a normal distribution by a distribution that matches a given number of moments. The formal mathematical statements and proofs are given here.
\[Vandersol\] Let $\{ x_1 , \, \cdots \, , \, x_n \}$ be any domain of distinct non-zero values with associated probabilities $\{ p_1 , \, \cdots \, , \, p_n \}$. Suppose the moments are matched $$\label{momeq}
\sum_{i=1}^n p_i x_i^j = M_j \qquad j = 1 , \, \cdots \, , \ n$$ where $$\label{normom}
M_\ell \equiv E \, Z^\ell \,\, , \qquad Z \sim {\cal{N}}(0, \, 1) \,\, .$$ Then, for $\, j = 1 , \, \cdots \, , \ n \,$, $$\label{pjdef}
p_j = \frac{ \sum_{i=1}^n (-1)^{n-i} \, M_{n-i+1} \, e_{i-1}(\sim x_j) }
{ x_j \, \prod_{i \ne j} \, (x_i - x_j) }$$ where $\, e_m(y_1 , \, \cdots \, , \, y_n) \,$ denotes the $m$th elementary symmetric function of its arguments, and the argument $( \sim y_j)$ denotes the $(n-1)$-vector $\, (y_1 , \, \cdots \, , \, y_n) \,$ with $\, y_j \,$ deleted. Furthermore, $$\label{sumpsol}
r \equiv \sum_{j=1}^n p_j = \frac{ \sum_{i=1}^n (-1)^{n-i} \, M_{n-i+1} \,
e_{i-1}(x_1, \, x_2, \, . . . \, , \, x_n ) }
{ \prod_{i=1}^n \, x_i } \,\, .$$
Consider the Vandermonde matrix $$\label{Vandef}
V \equiv \left( \begin{array}{c c c c c}
1 & 1 & 1 & \cdots & 1 \\
0 & x_1 & x_2 & \cdots & x_n \\
0 & x_1^2 & x_2^2 & \cdots & x_n^2 \\
\cdots & \cdots & \cdots & \cdots & \cdots \\
0 & x_1^n & x_2^n & \cdots & x_n ^n
\end{array} \right)$$ Introduce $\, p_0 \equiv 1 - r = 1 - \sum_{j=1}^n p_j \, $. Then with $p$ denoting the $n$-vector with coordinates $p_j$, equation [(\[momeq\]) ]{} yields the matrix equations: $$\label{Vaneqs}
V \left( \begin{array}{c} p_0 \\ p \end{array} \right) = \left( \begin{array}{c} 1 \\ M \end{array} \right)
\qquad \qquad V_{22} \, p = M \, ,$$ where $V_{22}$ is the lower right $\, n \times n \,$ submatrix of $V$ and $M$ is the vector of moments. Note also that the argument here does not require $M$ to be the moments of a Normal distribution: any vector will provide the same formulas, though I have no general result providing conditions under which the $p_j$’s solving [(\[momeq\]) ]{} need be in $[ 0 , \, 1 ]$.
Eisinberg and Fedele (2006) provide a formula for the elements of the inverse of $V$ (where in the notation of that paper, we have taken $\, x_0 = 0$). Specifically, for $\, i = 0 , \, 1 , \, \cdots \, , \, n \,$:
$$(V^{-1})_{ij} = \phi_{nj} \, \Psi_{n1i}$$ where $$\phi_{nj} = 1 / \prod_{k=0, \, k \ne j}^n (x_k - x_j)$$ from the recursion in equation (6) of Eisinberg and Fedele (2006) plus a direct induction argument; and $$\Psi_{n1i} = (-1)^{i+1} \, e_{n+1-i}(\sim x_j)$$ from equation (26) of that paper.
Thus, noting that $\, V_{11}^{-1} = 1 \,$, the first row of [(\[Vaneqs\]) ]{} yields $$\begin{aligned}
r = \sum_{j=1}^n p_j & = & \sum_{i=1}^n (-1)^{i+1} \, M_i
e_{n-i}(x_1 , \, \cdots \, , \, x_n) / \prod_{j=1}^n (x_j - x_0) \\
& = & \sum_{i=1}^n (-1)^i \, M_{n-i+1}
e_{i-1}(x_1 , \, \cdots \, , \, x_n) / \prod_{j=1}^n x_j \,\, .\end{aligned}$$
Note that the block-triangular form for $V$ shows that the formula above also gives the corresponding elements of $\, V_{22}^{-1} $. Thus, as above, the second row of matrix equation [(\[Vaneqs\]) ]{} immediately provides [(\[pjdef\]) ]{}.
To show that it suffices to consider symmetric discrete distributions, it is convenient to work only with the positive values in a symmetric domain. Specifically, let $\, \{ \, 0 < y_1 < \, \cdots \, < y_n \, \} \, $ be any domain of positive mass points. Consider the symmetric mixture with (positive) probabilities $\, p_j \,$ at each of $\, \pm y_j \,$ and probability $\, p_0 \,$ at zero. Suppose the $\, p_j$’s generate $m$ even moments: $$\sum_{j=1}^n \, p_j \, y_j^{2i} = M_{2i} / 2 \equiv E Z^{2i} / 2 \qquad i = 1 , \, \cdots \, , \, m \,\, .$$ Define $$\label{r*def}
r^* \equiv \sum_{j=1}^n p_j \,\, .$$ Then, the c.d.f. of the mixture at zero is just $\, F(0) = 1 - r^* \,$ and $\, p_0 = 1 - 2 r^* \,$. Thus, maximizing the c.d.f. at zero is the same as minimizing $r^*$.
The following result shows that symmetric distributions suffice. Since the result is not required for the subsequent optimality results, the proof is only sketched.
\[thm2\] Let $\, \epsilon > 0 \,$ be given. Let $\{ x_1 , \, \cdots \, , \, x_n \}$ be any domain for which there are (non-zero) associated probabilities with moments matching $k$ Normal moments and whose c.d.f. at zero is within $\epsilon$ of the maximal value (over all distributions with $k$ matching moments). Then there is a set of symmetric points, $\{ \pm y_j \, : \, y_j > 0 \}$, with associated positive probabilities also matching $k$ Normal moments and for which the c.d.f. at zero is also within $\, \epsilon \,$ of its maximal value.
Choose a perturbation of the domain, $ \{ x_i^* \}$, so that all $\, | x_i^* | \,$ are different and so that the (associated) probabilities $\, p_i^* \,$ satisfying the moment equalities are all positive and such that the c.d.f. at zero is within $\, \epsilon \,$ of the optimum. Introduce the symmetrized domain $\, \{ \{ x_i^* \} , \, \{ - x_i^* \} \} \,$. Consider the linear programming problem given by equations (1) and (2) of Portnoy (2015) applied to the symmetrized domain. Since the original distribution in the statement of the Theorem provides a feasible solution, the linear programming solution provides an optimal value that is also within $\epsilon$ of the original optimal value. By symmetry, the solution to the linear programming problem is also symmetric.
Theorem \[oddNoSol\] (below) requires the derivatives of the probabilities in [(\[pjdef\]) ]{} (with respect to the mass points), which can be computed easily:
\[sgnAlt\] Consider a set of positive mass points $\, \{ 0 < y_1 \, < \, y_2 \, < \, \cdots \, < \, y_n \} \,$. Let $p_j$ be the corresponding probabilities matching moments and let $r^*$ be the sum of probabilities given by [(\[sumpsol\]) ]{} (see Theorem \[Vandersol\]). Then $\, \frac{\partial r^*}{\partial y_j} \,$ alternates in sign for $\, j = 1 , \, \cdots \, , \, n \,$, with the partial derivative with respect to $y_1$ being positive.
For each $\, j \,$, factor $\, 1 / y_j \,$ from $\, r^* \,$: $$\begin{aligned}
\label{roveryj}
r^* & = & \frac{1}{y_j}
\frac{ \sum_{i=1}^n (-1)^i \, M_{n-i+1} \, e_{i-1}(\sim y_j) } { 2 \, \prod_{i \ne j} \, y_i }
\, + \, g(\sim y_j) \\
& = & p_j^* \frac{ \prod_{i \neq j} \, (y_i - y_j) } { \prod_{i \neq j} \, y_i } \, + \, g(\sim y_j)\end{aligned}$$ where the term $ \, g(\sim y_j) \,$ does not depend on $\, y_j \,$ (since each of the numerator terms in $g$ has a factor of $y_j$ that is cancelled by the $y_j$ factor in the denominator product). Since the $\, y_j$’s are positive, the product $\, \prod_{i \neq j} \, (y_i - y_j) \,$ alternates in sign, and the theorem follows.
Finally, the basic optimality results are presented. First, assume the number of even moments, $k$, is even, and let $\{ y_1 , \, \, \cdots \, , \, y_{k/2} \}$ be the squares of the non-zero roots of the Hermite polynomial $He(2k+1)$. Note that the $k$ non-zero roots of $He(2k+1)$ are located symmetrically about zero, and so there are only $k/2$ squares.
Let $M_j$ denote the Normal moments (see [(\[normom\]) ]{}). By the standard theory for Gaussian quadrature and symmetry, $M_j$ is given by twice the sum over the $k/2$ positive nodes of the Gaussian quadrature weights times the $j$th power of $y_i$ ($\, j = 1 , \, \cdots \, , \, k \, )$. Thus, the weights can be determined by any $k/2$ even moment equalities. Let $p_j$ $( \, j = 1 , \, \cdots \, , \, k/2 \, )$ satisfy: $$\sum_{i=1}^{k/2} p_i y_i^{2j} = M_{2j}/2 \qquad j = 1 , \, \cdots \, , \, k/2 \, .$$
That is, the $p_j$’s are just the even weights. Since these are known to be positive and sum to less than one, they can define a discrete probability distribution by symmetrization and introduction of $p_0$. The following theorem shows that this distribution is least favorable in the sense of maximizing $p_0$ (or, equivalently, maximizing the difference from the normal c.d.f. at zero).
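The moment-matching and normalization properties of such a Gauss–Hermite construction can be checked numerically. The sketch below uses NumPy's probabilists' Gauss–Hermite rule (`numpy.polynomial.hermite_e.hermegauss`) with an illustrative degree; it verifies only that the symmetric discrete distribution reproduces the Normal moments and carries positive mass at zero, not the optimality claim of the theorem:

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss

# Probabilists' Gauss-Hermite rule: sum_i w_i f(x_i) ~= ∫ f(x) e^{-x^2/2} dx.
deg = 5                                   # illustrative; odd deg puts a node at 0
nodes, w = hermegauss(deg)
p = w / np.sqrt(2.0 * np.pi)              # normalize the weights into probabilities

# The symmetric discrete distribution {nodes, p} reproduces the standard
# Normal moments exactly up to order 2*deg - 1; the even ones are
# M_{2j} = (2j - 1)!!, i.e. M_2 = 1, M_4 = 3, M_6 = 15, M_8 = 105.
even_moments = {2: 1.0, 4: 3.0, 6: 15.0, 8: 105.0}
errors = {m: abs(p @ nodes**m - val) for m, val in even_moments.items()}
p0 = p[np.isclose(nodes, 0.0)].sum()      # mass placed at zero
print(p.sum(), p0, max(errors.values()))
```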
\[p0EQlb\] If the number of even moments, $k$, is even, then the solution described above achieves 1/2 of the bound given in Theorem 2 of Lindsay (2000), and therefore is least favorable.
From [(\[sumpsol\]) ]{} the sum of the $p_j$’s is $$\label{sumnum}
r^* = (1 - s/M_{2k})/2
s & = & \sum_{i=1}^k (-1)^i \, M_{2(k-i+1)} \, e_{i-1}^k (y_1, \, y_2, \, \cdots \, , \, y_k ) \\
& = & \sum_{i=1}^{k} He[2k+2]_{i+2} M_{2i}
\end{aligned}$$ Here, $M_\ell$ is given by [(\[normom\]) ]{} above, and we use the fact that the coefficients of the Hermite polynomial $He[n]$ are just the elementary symmetric functions of the squared roots, $\{ y_j \}$ in opposite order.
Let $M$ be the Hankel moment matrix of even moments: $$\label{Hankel}
M = \left( \begin{array}{ c c c c c }
1 & M_2 & M_4 & \cdots & M_{2k} \\
M_2 & M_4 & M_6 & \cdots & M_{2k+2} \\
& \cdots & & & \\
M_{2k} & M_{2k+2} & M_{2k+4} & \cdots & M_{4k}
\end{array} \right) \, .$$
The upper bound from Lindsay (2000) is $\, 1 / M^{-1}_{1,1} \, $. As noted by Lindsay, by symmetry of the normal distribution, if there is a distribution achieving $\, .5 / M^{-1}_{1,1} \, $, this distribution must be least favorable (that is, it maximizes the difference from the normal distribution evaluated at zero).
Let $C$ be the matrix whose rows contain zeros and the non-zero coefficients of the Hermite polynomials as follows: $$\label{Cmatrix}
\begin{array}{c}
He[2k+2]_{(2, \, 4 , \, \cdots \, , \, 2k+2)} \\
(0 \,\, , \, He[2k]_{(2, \, 4 , \, \cdots \, , \, 2k)} ) \\
(He[2k-1]_{(1, \, 3, \, \cdots \, , \, 2k-1) } , \,\, 0 ) \\
(0, \,\, 0, \,\, He[2k-2]_{(2, \, \cdots \, , 2k-2) } ) \\
(0, \,\, He[2k-3]_{(1, \, 3, \, \cdots \, , \,2k-3) } , \,\, 0) \\
\cdots \\
(-1, \, 1, \, 0, \, \cdots \, , \, 0)
\end{array}$$
Note that $$\label{zeroBYortho}
\sum_{j=0}^\ell M_{2j+2i} \, He[2 \ell+2i +2]_{2j}
= \int_{-\infty}^{\infty} x^{2i} \sum_{j=0}^\ell He[2 \ell+2i +2]_{2j} \, x^{2j} \, \varphi(x) \, dx = 0$$ by orthogonality of the Hermite polynomials.
Note also that the signs of the coefficients alternate, and $He[2k]_2$ is the first nonzero coefficient and it equals $M_{2k}$.
Thus, using [(\[zeroBYortho\]) ]{}, $D \equiv M C\, $ has first row and first column equal to $ ( D_{1 1} , \, 0 \, , 0 , \, \cdots \, , \,0 ) $, where $$\label{D11}
D_{11} = \sum_{i=0}^{k} He[2k+2]_{i+2} \, M_{2i} \, = \, He[2k+2]_2 - s$$ (from [(\[sumnum\]) ]{}).
That is, $M C$ equals a partitioned block diagonal matrix with a $\, 1 \times 1 \,$ and a $(2k-1) \times (2k-1)$ submatrix. Thus, the inverse of $D$ is a similarly partitioned block-diagonal matrix with first row and column $$(1/D_{11} , \, 0 , \, 0 , \, \cdots \, , \, 0) = ( 1 / (He[2k+2]_2 - s) , \, 0 , \, \cdots \, , \, 0)$$
Now since $\, M = D C^{-1} \,$ , $\, M^{-1} = C D^{-1}$. It follows that the upper (1, 1) element of $M^{-1}$ is $$He[2k+2]_2 / (He[2k+2]_2 - s) = M_{2k} / (M_{2k} -s)$$
Finally, $$.5/M^{-1}[1,1] = .5 (M_{2k} - s)/M_{2k} = (1 - s/M_{2k})/2 \, ,$$ which agrees with $r^*$ [(\[sumnum\]) ]{}.
Finally, consider the case when the number of matched even moments is odd.
\[oddNoSol\] If the number of matched even moments, $k$, is odd, then there is no distribution matching the $k$ moments that maximizes the difference from the normal c.d.f. at zero. In fact, the maximum difference among moment-matching distributions approaches the maximal value for matching $(k-1)$ even moments, and is the limit through a sequence of discrete mixtures whose maximal mass point, $y_k \rightarrow \infty$.
Assume there is a solution (to the symmetrized problem) with a finite number of mass points $\, \{ y_1 , \, y_2 , \, \cdots \, , \, y_n \} \,$. (Note, there is a solution with $\, n=k$). If $\, n > k \,$, the moment equalities determine a manifold of dimension $\, n-k > 0 \,$, and so it must be possible to move at least one $y_j$ to make $p_0$ larger (since all partial derivatives are non-zero). Thus, if there is a solution, there is one with $k$ (odd) mass points.
Since all $p_j$’s are strictly positive for such a solution (in order to match moments), $r^*$ would increase as $y_k$ increases, since the derivative of $r^*$ with respect to $y_k$ is positive by Lemma \[sgnAlt\]. This contradicts the assumed optimality, and thus proves nonexistence of a solution.
Note that $\, p_k \leq M_k / y_k^k \,$ (since $M_k$ is matched by positive values). Thus, $p_k$ must decrease (to zero) as $y_k$ increases. Furthermore, for $j < k$ , $$p_k \, y_k^j \leq M_k / y_k^{k-j} \rightarrow 0 \, .$$
Thus the first $k-1$ moments are nearly determined by $ \, \{ y_1, \, \cdots \, , \, y_{k-1} \} \,$, and (since $ p_k$ also tends to 0), the optimal $p_j^* $ is also (nearly) determined by the first $(k-1)$ $y_i$’s. That is, the optimal $\, p_j^*$’s (for case $k$) are obtained as the limit (as $\, y_k \rightarrow \infty \,$) of terms converging to the optimal $p_j$’s for the $(k-1)$ case.
[99]{}
Eisinberg, A. and Fedele, G. (2006). On the inversion of the Vandermonde matrix, [*Applied Mathematics and Computation*]{}, 174, 1384-1397.
Lindsay, B. and Basak, P. (2000). Moments determine the tail of a distribution (but not much else), [*The American Statistician*]{}, 54, 248-251.
Portnoy, S. (2015). Maximizing Probability Bounds under Moment-matching Restrictions, [*The American Statistician*]{}, 69, 41-44.
---
abstract: |
The widespread popularity of replica exchange and expanded ensemble algorithms for simulating complex molecular systems in chemistry and biophysics has generated much interest in discovering new ways to enhance the phase space mixing of these protocols in order to improve sampling of uncorrelated configurations. Here, we demonstrate how both of these classes of algorithms can be considered as special cases of Gibbs sampling within a Markov chain Monte Carlo (MCMC) framework. Gibbs sampling is a well-studied scheme in the field of statistical inference in which different random variables are alternately updated from conditional distributions. While the update of the conformational degrees of freedom by Metropolis Monte Carlo or molecular dynamics unavoidably generates correlated samples, we show how judicious updating of the thermodynamic state indices—corresponding to thermodynamic parameters such as temperature or alchemical coupling variables—can substantially increase mixing while still sampling from the desired distributions. We show how state update methods in common use can lead to poor mixing, and present some simple, inexpensive alternatives that can increase mixing of the overall Markov chain, reducing simulation times necessary to obtain estimates of the desired precision. These improved schemes are demonstrated for several common applications, including an alchemical expanded ensemble simulation, parallel tempering, and multidimensional replica exchange umbrella sampling.
*Keywords: replica exchange simulation, expanded ensemble simulation, the method of expanded ensembles, parallel tempering, simulated scaling, generalized ensemble simulations, extended ensemble, Gibbs sampling, enhanced mixing, convergence rates, Markov chain Monte Carlo (MCMC), alchemical free energy calculations*
author:
- 'John D. Chodera'
- 'Michael R. Shirts'
bibliography:
- 'gibbs-sampling.bib'
title: |
Replica exchange and expanded ensemble simulations as Gibbs sampling:\
Simple improvements for enhanced mixing
---
[^1]
Introduction {#section:introduction}
============
A broad category of simulation methodologies known as *generalized ensemble* [@okamoto:biopolymers:2001:generalized-ensemble] or *extended ensemble* [@iba:intl-j-mod-phys-c:2001:extended-ensemble] algorithms has enjoyed increasing popularity in the field of biomolecular simulation over the last decade. The two most popular algorithmic classes within this category are undoubtedly *replica exchange*, [@geyer:conference-proceedings:1991:replica-exchange] which includes parallel tempering [@hukushimi-nemoto:j-phys-soc-jpn:1996:parallel-tempering; @hansmann:chem-phys-lett:1997:parallel-tempering-monte-carlo; @sugita-okamoto:chem-phys-lett:1999:parallel-tempering-md] and Hamiltonian exchange [@sugita-kitao-okamoto:jcp:2000:hamiltonian-exchange; @fukunishi-watanabe-takada:jcp:2002:hamiltonian-exchange; @jang-shin-pak:prl:2003:hamiltonian-exchange; @kwak-hansmann:prl:2005:hamiltonian-exchange], among others, and its serial equivalent, the method of *expanded ensembles* [@lyubartsev:jcp:1992:expanded-ensembles], which includes simulated tempering [@marinari-parisi:europhys-lett:1992:simulated-tempering; @geyer-thompson:j-am-stat-assoc:1995:expanded-ensembles] and simulated scaling [@li-fajer-yang:jcp:2007:simulated-scaling]. In both classes of algorithms, a mixture of thermodynamic states is sampled within the same simulation, with each simulation walker able to access multiple thermodynamic states through a stochastic hopping process, which we will generically refer to as consisting of *swaps* or *exchanges*. In expanded ensemble simulations, the states are explored via a biased random walk in state space; in replica exchange simulations, multiple coupled walks are carried out in parallel without biasing factors. Both methods allow estimation of equilibrium expectations at each state as well as free energy differences between states.
In both cases, stochastic transitions between different thermodynamic states can reduce correlation times and increase sampling efficiency relative to straightforward Monte Carlo or molecular dynamics simulations by allowing the system to avoid barriers between important configuration substates.
Because of their popularity, these algorithms and their properties have been the subject of intense study over recent years. For example, given optimal weights, expanded ensemble simulations have been shown to have provably higher exchange acceptance rates than replica exchange simulations using the same set of thermodynamic states [@park:pre:2008:simulated-tempering]. Higher exchange attempt frequencies have been demonstrated to improve mixing for replica exchange simulations [@sindhikara-meng-roitberg:jcp:2008:exchange-frequency; @sindhikara-emerson-roitberg:jctc:2010:exchange-often-and-properly]. Alternative velocity rescaling schemes have been suggested to improve exchange probabilities [@nadler-hansmann:pre:2007:optimized-replica-exchange-moves]. Other work has examined the degree to which replica exchange simulations enhance sampling relative to straightforward molecular dynamics simulations [@rhee-pande:biophys-j:2003:multiplexed-replica-exchange; @zuckerman-lyman:jctc:2006:replica-exchange-efficiency; @gallicchio-levy:pnas:2007:replica-exchange; @nymeyer:jctc:2008:replica-exchange-efficiency; @tavan:cpl:2008:pseudoconvergence; @rosta-hummer:jcp:2009:replica-exchange-efficiency; @rosta-hummer:jcp:2010:simulated-tempering-efficiency]. 
Numerous studies have examined the issue of how to optimally choose thermodynamic states to enhance sampling in systems with second-order phase transitions [@kofke:2002:jcp:acceptance-probability; @katzberger-trebst-huse-troyer:j-stat-mech:2006:feedback-optimized-parallel-tempering; @trebst-troyer-hansmann:jcp:2006:optimized-replica-selection; @nadler-hansmann:pre:2007:generalized-ensemble; @gront-kolinski:j-phys-condens-matter:2007:optimized-replica-selection; @park-pande:pre:2007:choosing-weights-simulated-tempering; @shenfeld-xu:pre:2009:thermodynamic-length], though systems with strong first-order-like phase transitions (such as two-state protein systems) remain challenging [@neuhaus-magiera-hansmann:pre:2007:parallel-tempering-first-order; @straub:jcp:2010:generalized-replica-exchange]. A number of combinations [@fenwick-escobedo:jcp:2003:replica-exchange-expanded-ensembles; @mitsutake-okamoto:jcp:2004:rest] and elaborations [@mitsutake-sugita-okamoto:2003:remuca; @rhee-pande:biophys-j:2003:multiplexed-replica-exchange; @simmerling:jctc:2007:reservoir-replica-exchange; @gallicchio-levy-parashar:j-comput-chem:2008:asynchronous-replica-exchange; @hansmann:physica-a:2010:replica-exchange] of these algorithms have also been explored. A small number of publications have examined the mixing and convergence properties of replica exchange and expanded ensemble algorithms with mathematical rigor [@madras-randall:annals-appl-prob:2002:decomposition-theorem; @bhatnagar-randall:acm:2004:torpid-mixing; @woodard_conditions_2009; @woodard_sufficient_2009], but there remain many unanswered questions about these sampling algorithms at a deep theoretical level.
Standard practice for expanded ensemble and replica exchange simulations is that exchanges are to be attempted only between “neighboring” thermodynamic states—for example, the states with temperatures immediately above or below the current temperature in a simulated or parallel tempering simulation [@hukushimi-nemoto:j-phys-soc-jpn:1996:parallel-tempering; @hansmann:chem-phys-lett:1997:parallel-tempering-monte-carlo; @sugita-okamoto:chem-phys-lett:1999:parallel-tempering-md; @sugita-kitao-okamoto:jcp:2000:hamiltonian-exchange; @fukunishi-watanabe-takada:jcp:2002:hamiltonian-exchange; @jang-shin-pak:prl:2003:hamiltonian-exchange; @kwak-hansmann:prl:2005:hamiltonian-exchange]. The rationale behind this choice is that states further away in state space will have low probability of acceptance due to diminished phase space overlap, and thus attempts should focus on the states for which exchange attempts are most likely to be accepted. Increasing the proximity of neighboring thermodynamic states in both kinds of simulations can further increase the probability that exchange attempts will be accepted. However, restricting exchange attempts to neighboring states can then result in slow overall diffusion in state space due to the larger number of replicas needed to span the thermodynamic range of interest [@machta:pre:2009:parallel-tempering]. Some exchange schemes have been proposed to improve this diffusion process, such as all-pairs exchange [@izaguirre:jcp:2007:all-pairs-exchange], and optimized exchange moves [@nadler-hansmann:pre:2007:optimized-replica-exchange-moves] but the problem is still very much a challenge (see Ref. [@tavan:cpl:2009:exchange-schemes] for a recent comparison). 
The problem of slow diffusion is exacerbated in “multidimensional” simulations that make use of a 2D or 3D grid of thermodynamic states [@sugita-kitao-okamoto:jcp:2000:hamiltonian-exchange; @paschek-garcia:prl:2004:temperature-pressure-replica-exchange; @jiang_free_2010], where diffusion times in state space increase greatly due to the increase in dimensionality [@rudnick_elements_2010].
Here, we show how the many varieties of expanded ensemble and replica exchange simulations can all be considered to be forms of *Gibbs sampling*, a sampling scheme well-known to the statistical inference literature [@geman-geman:1984:gibbs-sampling; @jun-s-liu:mcmc], though unrelated to simulations in the “Gibbs ensemble” for determining phase equilibria [@panagiotopoulos:mol-phys:1987:gibbs-ensemble; @panagiotopoulos:mol-phys:1988:gibbs-ensemble; @footnote1]. When viewed in this statistical context, a number of alternative schemes can readily be proposed for updating the thermodynamic state while preserving the distribution of configurations and thermodynamic states sampled by the algorithm. By making simple modifications to the exchange attempt schemes, we show that great gains in sampling efficiency can be achieved under certain conditions with little or no extra cost. There is essentially no drawback to implementing these algorithmic improvements, as the additional computational cost is negligible, their implementation sufficiently simple to encourage widespread adoption, and there appears to be no hindrance of sampling in cases where these schemes offer no great efficiency gain. Importantly, we also demonstrate that schemes that encourage mixing in state space can also encourage mixing of the overall Markov chain, reducing correlation times in coordinate space, leading to more uncorrelated samples being generated for a fixed amount of computer time.
This paper is organized as follows. In *Theory* (Section \[section:theory\]), we describe expanded ensemble and replica exchange algorithms in a general way, casting them as a form of Gibbs sampling. [In *Algorithms* (Section \[section:algorithms\]), we propose multiple approaches to the state exchange process in both classes of algorithm with the aim of encouraging faster mixing among the thermodynamic states accessible in the simulation, and hence the overall Markov chain.]{} In *Illustration* (Section \[section:illustration\]), we illustrate how and why these modified schemes enhance mixing of the overall chain for a simple one-dimensional model system. In *Applications* (Section \[section:applications\]), we apply these algorithmic variants to some examples from physical chemistry, using several different common benchmark systems from biomolecular simulation, and examine several metrics of simulation efficiency. Finally, we make recommendations for the adoption of simple algorithmic variants that will improve efficiency in *Discussion* (Section \[section:discussion\]).
Theory {#section:theory}
======
Before describing our suggested algorithmic modifications (*Algorithms*, Section \[section:algorithms\]), we first present some theoretical tools we will use to analyze expanded ensemble and replica exchange simulations in the context of Gibbs sampling.
Thermodynamic states and thermodynamic ensembles
------------------------------------------------
To be as general as possible, we describe the expanded ensemble and replica exchange algorithms as sampling a mixture of $K$ thermodynamic states. Here, a *thermodynamic state* is parameterized by a vector of time-independent thermodynamic parameters $\lambda$. For notational convenience and to make what follows general, we define the *reduced potential* [@shirts-chodera:jcp:2008:mbar] $u(x)$ of a physical system, $$\begin{aligned}
u(x) &=& \beta \left[ H(x) + p V(x) + \sum_i\mu_i n_i(x) + \cdots \right], \label{equation:reduced-potential}\end{aligned}$$ corresponding to its thermodynamic state, where $x$ denotes the configuration of the system specifying any physical variables allowed to change, including the volume $V(x)$ (in the case of a constant pressure ensemble) and $n_i(x)$ the number of molecules of each of $i=1,\ldots,M$ components of the system, in the case of a (semi)grand ensemble. The reduced potential is a function of the Hamiltonian $H$, the inverse temperature $\beta = (k_B T)^{-1}$, the pressure $p$, and the vector of chemical potentials for each of $M$ components $\mu_i$. Other thermodynamic parameters and their conjugate coordinates can be included in a similar manner, or some of these can be omitted, as required by the physics of the system. We denote the set of all thermodynamic parameters by $\lambda \equiv \{\beta, H, p, \vec{\mu}, \ldots\}$.
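A minimal sketch of assembling the reduced potential of equation [(\[equation:reduced-potential\]) ]{}, with purely hypothetical numerical values; terms whose thermodynamic parameters are not specified are simply omitted, as described above:

```python
def reduced_potential(H, beta, p=None, V=None, mu=None, n=None):
    """u(x) = beta * [H(x) + p V(x) + sum_i mu_i n_i(x)] (illustrative).

    H: potential energy of configuration x; beta: inverse temperature;
    p, V: pressure and volume (constant-pressure ensembles);
    mu, n: chemical potentials and molecule numbers ((semi)grand ensembles).
    """
    u = H
    if p is not None and V is not None:
        u += p * V
    if mu is not None and n is not None:
        u += sum(mu_i * n_i for mu_i, n_i in zip(mu, n))
    return beta * u

# Hypothetical values in consistent units:
u_nvt = reduced_potential(H=-100.0, beta=0.4)                  # u = beta * H
u_npt = reduced_potential(H=-100.0, beta=0.4, p=1.0, V=30.0)   # u = beta * (H + p V)
print(u_nvt, u_npt)
```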
We next denote a configuration of the molecular system by $x \in \Omega$, where $\Omega$ is the allowed configuration space, which may be continuous or discrete. A choice of thermodynamic state gives rise to a set of configurations of the system that are sampled by a given time-independent probability distribution at equilibrium. Each $x$ then has an associated unnormalized probability density $q(x)$, which is a function of $\lambda$, where $q(x) > 0$ for all $x \in \Omega$. If we define the normalization constant, or *partition function*, $Z$ as: $$\begin{aligned}
Z &\equiv& \int_\Omega dx \, q(x) \end{aligned}$$ we can define a normalized probability density $$\begin{aligned}
\pi(x) &=& Z^{-1} \, q(x).\end{aligned}$$
A physical system in equilibrium with its environment obeying classical statistical mechanics will sample configurations distributed according to the Boltzmann distribution, $$\begin{aligned}
q(x) &\equiv& e^{-u(x)} .\end{aligned}$$ In this paper, we consider a set of $K$ thermodynamic states defined by their thermodynamic parameter vectors, $\lambda_k \equiv \{\beta_k, H_k, p_k, \vec{\mu}_k, \ldots\}$, with $k = 1,\ldots,K$, where $H_k$ denotes any modifications of the Hamiltonian $H$ as a function of $k$, including biasing potentials. Each new choice of $k$ gives rise to a reduced potential $u_k$, unnormalized and normalized probability distributions $q_k(x)$ and $\pi(x,k)$, and a partition function $Z_k$. Although in this paper, we generally assume a Boltzmann distribution, there is nothing to prevent some or all of the states from being sampled using non-thermodynamic (non-Boltzmann) statistics using alternative choices of the unnormalized density $q_k(x)$, as in the case of multicanonical simulations [@mezei:j-comp-phys:1987:muca] or Tsallis statistics [@tsallis:j-stat-phys:1988:tsallis-statistics]. To ensure that any configuration $x$ has finite, nonzero density in all $K$ thermodynamic states, we additionally require that the same thermodynamic parameters be specified for all thermodynamic states, though their values may of course differ.
Gibbs sampling
--------------
Suppose we wish to sample from the joint distribution of two random variables, $x$ and $y$. We denote this joint distribution by $\pi(x,y)$. Often, it is not possible to directly generate uncorrelated sample pairs $(x,y)$ from the joint distribution due to the complexity of the function $\pi(x,y)$. In these cases, a standard approach to sampling is to use some form of Markov chain Monte Carlo (MCMC) [@jun-s-liu:mcmc], such as the Metropolis-Hastings algorithm [@metropolis:jcp:1953:metropolis-monte-carlo; @hastings:biometrika:1970:metropolis-hastings] or hybrid Monte Carlo [@duane:1987:phys-lett-b:hybrid-monte-carlo]. While general in their applicability, MCMC algorithms suffer from the drawback that they often must generate *correlated* samples, potentially requiring long running times to produce a sufficient number of effectively uncorrelated samples to allow the computation of properties to the desired precision [@mueller-krumbhaar:j-stat-phys:1973:monte-carlo; @janke:2002:statistical-error].
Assume we can draw samples, either independently or through some Markov chain Monte Carlo procedure, from the *conditional* distributions of one or more of the variables, $\pi(x | y)$ or $\pi(y | x)$, where the value of the second variable is fixed. To generate a set of sample pairs $\{(x^{(1)}, y^{(1)}), \, (x^{(2)}, y^{(2)}), \ldots\}$ from $\pi(x,y)$, we can iterate the update scheme: $$\begin{aligned}
x^{(n+1)} | y^{(n)} \:\:\:\:\: &\sim& \pi(x | y^{(n)}) \nonumber \\
y^{(n+1)} | x^{(n+1)} &\sim& \pi(y | x^{(n+1)}) \nonumber\end{aligned}$$ where $x \sim \pi$ denotes that the random variable $x$ is sampled (or “updated”) from the distribution $\pi(x)$.
This procedure is termed *Gibbs sampling* or *the Gibbs sampler* in the statistical literature, and has been employed and studied extensively [@geman-geman:1984:gibbs-sampling; @jun-s-liu:mcmc]. In many cases, it may be possible to draw uncorrelated samples from either or both distributions, but this is not required [@footnote2]. The choice of which variable to update—in this example, $x$ or $y$—can be either deterministic (e.g. update $x$ then $y$) or stochastic (e.g. a random number determines whether $x$ or $y$ is to be updated); both schemes sample from the desired joint distribution $\pi(x,y)$. However, each method has different dynamic properties and can introduce different correlation structure in the sequence of sample pairs. In particular, we note that a stochastic choice of which variable to update obeys detailed balance, while a deterministic choice obeys the weaker balance condition [@deem:jcp:1999:balance]. In both cases, the distribution $\pi(x,y)$ is preserved.
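A toy illustration of the update scheme above (not from the original text): for a bivariate normal with correlation $\rho$, both conditionals are univariate normals with mean $\rho$ times the other variable and variance $1 - \rho^2$, so each Gibbs update can be drawn exactly:

```python
import numpy as np

rng = np.random.default_rng(0)
rho = 0.8                        # target correlation of the bivariate normal
sd = np.sqrt(1.0 - rho**2)       # conditional std. dev. of x|y and of y|x
n_samples = 200_000

x = y = 0.0
xs = np.empty(n_samples)
ys = np.empty(n_samples)
for i in range(n_samples):
    x = rng.normal(rho * y, sd)  # x^{(n+1)} ~ pi(x | y^{(n)})
    y = rng.normal(rho * x, sd)  # y^{(n+1)} ~ pi(y | x^{(n+1)})
    xs[i], ys[i] = x, y

print(np.corrcoef(xs, ys)[0, 1])  # sample correlation approaches rho
```

Even though each conditional draw is exact, successive $(x, y)$ pairs remain correlated along the chain, which is the generic situation described above.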
In the sections below, we describe how expanded ensemble and replica exchange simulations can be considered as special cases of Gibbs sampling on the probability distribution $\pi(x,k)$, which is now a function of both coordinates and thermodynamic states, and how this recognition allows us to consider simple variations of these techniques that will enhance mixing in phase space with little or no extra cost. In the algorithms we consider here, the thermodynamic state variable $k$ is discrete, but a continuous $k$ is also completely valid in this formalism.
Expanded ensembles
------------------
In an expanded ensemble simulation [@lyubartsev:jcp:1992:expanded-ensembles], a single replica (or “walker”) samples pairs $(x,k)$ from a joint distribution of configurations $x \in \Omega$ and state indices $k \in \{1,\ldots,K\}$ given by $$\begin{aligned}
\pi(x,k) &\propto& \exp[-u_k(x) + g_k] ,\end{aligned}$$ where $g_k$ is a state-dependent weighting factor. This space is therefore a [*mixed*]{}, [*generalized*]{}, or [*expanded*]{} ensemble which samples from multiple thermodynamic ensembles simultaneously. $g_k$ is chosen to give a specific weighting of each subensemble in the expanded ensemble, and is generally determined through some iterative procedure [@lyubartsev:jcp:1992:expanded-ensembles; @marinari-parisi:europhys-lett:1992:simulated-tempering; @wang-landau:prl:2001:wang-landau; @park-ensign-pande:pre:2006:bayesian-weight-update; @park-pande:pre:2007:choosing-weights-simulated-tempering; @li-fajer-yang:jcp:2007:simulated-scaling; @chelli:jctc:2010:optimal-weights-expanded-ensembles]. The set of $g_k$ is frequently chosen to give each thermodynamic ensemble equal probability, in which case $g_k=-\ln Z_k$, but they can be set to arbitrary values as desired.
In the context of Gibbs sampling, an expanded ensemble simulation proceeds by alternating between sampling from the two conditional distributions, $$\begin{aligned}
\pi(x | k) &=& \frac{q_k(x)}{\int_\Omega dx \, q_k(x)} = \frac{e^{-u_k(x)}}{\int_\Omega dx \, e^{-u_k(x)}} \\
\pi(k | x) &=& \frac{e^{g_k}q_k(x)}{\sum\limits_{k'=1}^K e^{g_{k'}}q_{k'}(x)} = \frac{e^{g_k - u_k(x)}}{\sum\limits_{k'=1}^K e^{g_{k'} - u_{k'}(x)}} .\label{equation:expanded-ensemble-gibbs-update}\end{aligned}$$ In all but trivial cases, sampling from the conditional distribution $\pi(x | k)$ must be done using some form of Markov chain Monte Carlo sampler that generates correlated samples, due to the complex form of $u_k(x)$ and the difficulty of computing the normalizing constant in the denominator [@jun-s-liu:mcmc]. Typically, Metropolis-Hastings Monte Carlo [@metropolis:jcp:1953:metropolis-monte-carlo; @hastings:biometrika:1970:metropolis-hastings] or molecular dynamics is used [@footnote3], generating an updated configuration $x^{(n+1)}$ that is correlated with the previous configuration $x^{(n)}$. However, as we will see in Algorithms (Section \[section:algorithms-expanded-ensemble\]), multiple choices for sampling from the conditional distribution $\pi(k | x)$ are possible due to the simplicity of its form.
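Because $\pi(k | x)$ is a discrete distribution over only $K$ states, it can be sampled from directly. A sketch of such an update, using hypothetical reduced potentials and weights (the log-sum-exp shift guards against overflow):

```python
import numpy as np

def sample_state(u_x, g, rng):
    """Draw k ~ pi(k | x) ∝ exp(g_k - u_k(x)).

    u_x[k] = u_k(x): reduced potentials of the current configuration
    evaluated in all K states; g[k]: expanded-ensemble weights g_k.
    """
    logp = g - u_x
    logp -= logp.max()            # stabilize the exponentials
    p = np.exp(logp)
    p /= p.sum()
    return rng.choice(len(p), p=p)

# Hypothetical reduced potentials for one fixed configuration x:
rng = np.random.default_rng(0)
u_x = np.array([0.0, 1.0, 2.5])
g = np.zeros(3)
draws = [sample_state(u_x, g, rng) for _ in range(50_000)]
freq = np.bincount(draws, minlength=3) / len(draws)
print(freq)   # ~ exp(g - u_x) / sum(exp(g - u_x))
```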
Replica exchange ensembles {#section:replica-exchange-ensembles}
--------------------------
In replica exchange, we consider $K$ simulations, with one simulation in each of the $K$ thermodynamic states. The current state of the replica exchange simulation is given by $(X,S)$, where $X \equiv \{x_1, x_2, \ldots, x_K\}$ is the vector of replica configurations and $S \equiv \{s_1,\ldots,s_K\} \in \mathcal{S}_K$ is a permutation of the state indices $\{1, \ldots, K\}$ associated with these configurations. Then: $$\begin{aligned}
\pi(X, S) &\propto& \prod_{i=1}^{K} q_{s_i}(x_i) \propto \exp\left[-\sum_{i=1}^K u_{s_i}(x_i)\right]
\label{eq:parallereplica}\end{aligned}$$ with the conditional densities therefore given by $$\begin{aligned}
\pi(X | S) &=& \prod_{i=1}^K \left[ \frac{e^{-u_{s_i}(x_i)}}{\int_\Omega dx \, e^{-u_{s_i}(x_i)}}\right] \\
\pi(S | X) &=& \frac{\exp\left[- \sum\limits_{i=1}^K u_{s_i}(x_i) \right]}{\sum\limits_{S' \in \mathcal{S}_K} \exp\left[- \sum\limits_{i=1}^K u_{s'_i}(x_i) \right]}\end{aligned}$$ As in the case of expanded ensemble simulations, updating of configurations $X$ must be performed by some form of Markov chain Monte Carlo or molecular dynamics simulation, invariably generating configurations with some degree of correlation. Unlike the case of expanded ensembles, generating independent samples in the conditional permutation space is very challenging for even moderate numbers of states because of the expense of computing the denominator of $\pi(S | X)$ [@footnote4], which includes a sum over all permutations in the set $\mathcal{S}_K$. However, as we shall see in Section \[section:algorithms-replica-exchange\], there are still effective ways to generate *nearly* uncorrelated permutations that have improved mixing properties over traditional exchange attempt schemes.
Algorithms {#section:algorithms}
==========
We now describe the *algorithms* used in sampling from the expanded ensemble and replica exchange ensembles described in *Theory* (Section \[section:theory\]). We start with the typical neighbor exchange schemes commonly used in the literature, and then describe additional novel schemes based on Gibbs sampling that can encourage more rapid mixing among the accessible thermodynamic states.
Expanded ensemble simulation {#section:algorithms-expanded-ensemble}
----------------------------
For an expanded ensemble simulation, the *conditional* distribution of the state index $k$ given $x$ is, again: $$\begin{aligned}
\pi(k | x) &=& \frac{e^{g_k - u_k(x)}}{\sum\limits_{k'=1}^K e^{g_{k'} - u_{k'}(x)}} . \nonumber\end{aligned}$$ We can use any proposal/acceptance scheme that ensures this conditional distribution is sampled in the long run for any fixed $x$. We can choose at each step whether to sample in $k$ or in $x$ according to some fixed probability $p$, in which case detailed balance is obeyed. We can also alternate $N_k$ and $N_x$ steps of $k$ and $x$ sampling, respectively; although this algorithm does not satisfy detailed balance, it does satisfy the weaker condition of *balance* [@deem:jcp:1999:balance], which is sufficient to preserve sampling from the joint stationary distribution $\pi(x,k)$. If, however, the proposal probabilities are based on past history, the algorithm will not preserve the equilibrium distribution [@reinhardt:cpl:2000:step-size-adjustment].
### Neighbor exchange {#section:algorithms:expanded-ensemble:neighbor-exchange}
In the neighbor exchange scheme, the proposed state index $j$ given the current state index $i$ is chosen randomly from one of the neighboring states, $i \pm 1$, with probability, $$\begin{aligned}
\alpha(j | x, i) &=& \begin{cases}
\frac{1}{2} & \mathrm{if}\:\:j = i-1 \\
\frac{1}{2} & \mathrm{if}\:\:j = i+1 \\
0 & \mathrm{else}
\end{cases}
\label{eq:replica-up-and-down}\end{aligned}$$ and accepted with probability, $$\begin{aligned}
\lefteqn{P_\mathrm{accept}(j | x, i) =} \nonumber \\
&&\hspace{0.3in}\mbox{} \begin{cases}
0 & \mathrm{if}\:\:j \notin \{1, \ldots, K\} \\
\min\left\{1, \frac{e^{g_{j} - u_{j}(x)}}{e^{g_{i} - u_{i}(x)}} \right\} & \mathrm{else} \\
\end{cases}
\label{eq:mc-with-ends}\end{aligned}$$ This scheme was originally suggested by Marinari and Parisi [@marinari-parisi:europhys-lett:1992:simulated-tempering] and has been used extensively in subsequent work [@hansmann-okamoto:1996:pre:simulated-tempering; @fenwick-escobedo:jcp:2003:replica-exchange-expanded-ensembles]. A slight variation of this scheme considers the set $\{1, \ldots, K\}$ to lie on a torus, such that state $i + n K$ is equivalent to state $i$ for integral $n$, with the proposal and acceptance probability otherwise left unchanged.
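A minimal sketch of this neighbor-exchange update, using the proposal of Eq. \[eq:replica-up-and-down\] and the acceptance criterion of Eq. \[eq:mc-with-ends\], follows; the weights `g` and the reduced potentials `u_of_x` at the current configuration are illustrative stand-ins, and states are indexed from 0 rather than 1:

```python
# Neighbor-exchange state update: propose i -> i +/- 1 with probability
# 1/2 each; moves off the ends are rejected outright.
import numpy as np

rng = np.random.default_rng(1)
K = 5
g = np.zeros(K)                                # log-weights g_k
u_of_x = np.array([0.0, 0.5, 1.0, 1.5, 2.0])   # u_k(x) at the current x

def neighbor_exchange(i):
    j = i + rng.choice([-1, +1])               # alpha(j | x, i) = 1/2
    if j < 0 or j >= K:                        # off the ends: reject
        return i
    log_ratio = (g[j] - u_of_x[j]) - (g[i] - u_of_x[i])
    if np.log(rng.random()) < min(0.0, log_ratio):
        return int(j)
    return i

i = 2
states = []
for _ in range(1000):
    i = neighbor_exchange(i)
    states.append(i)
```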
An alternative scheme avoids having to reject choices of $j$ that lead to $j \notin \{1, \ldots, K\}$ by modifying the proposal scheme, $$\begin{aligned}
\alpha(j | x, i) &=& \begin{cases}
\frac{1}{2} & \mathrm{if}\:\:i \in \{2,\ldots,K-1\}, |j-i| = 1 \\
1 & \mathrm{if}\:\:i=1,j=i+1 \le K\\
1 & \mathrm{if}\:\:i=K,j=i-1 \ge 1\\
0 & \mathrm{else}
\end{cases}\end{aligned}$$ and modifying the acceptance criteria for these two moves to be [@fenwick-escobedo:jcp:2003:replica-exchange-expanded-ensembles] $$\begin{aligned}
P_\mathrm{accept}(j | x, i) &=& \min\left\{1, \frac{1}{2} \frac{e^{g_{j} - u_{j}(x)}}{e^{g_{i} - u_{i}(x)}} \right\}\end{aligned}$$ to include the correct Metropolis-Hastings ratio of proposal probabilities.
### Independence sampling {#section:algorithms:expanded-ensemble:gibbs-sampling}
The most straightforward way of generating an uncorrelated state index from the conditional distribution $\pi(k | x)$ is by *independence sampling*, in which we propose an update of the current state index $i$ by drawing a new index $j$ from $\pi(k | x)$ with probability $$\begin{aligned}
\alpha(j | x, i) &=& \pi(j | x) \end{aligned}$$ and always accepting this new $j$. While well-known in the statistical inference literature [@jun-s-liu:mcmc]—and the update scheme most closely associated with the use of the Gibbs sampler there—this scheme has been recently discovered independently in the context of molecular simulation [@rosta-hummer:jcp:2010:simulated-tempering-efficiency]. A straightforward way to implement this update scheme is to generate a uniform random number $r$ on the interval $[0,1)$, and select the smallest $k$ for which $r < \sum_{k'=1}^{k} \pi(k'|x)$.
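The cumulative-sum implementation just described amounts to inverting the empirical CDF of $\pi(k|x)$ with a single uniform variate. A sketch with illustrative weights and reduced potentials:

```python
# Independence sampling from pi(k | x) by inverse-CDF lookup.
import numpy as np

rng = np.random.default_rng(2)
g = np.zeros(4)                                # log-weights g_k
u_of_x = np.array([0.2, 1.0, 3.0, 6.0])        # u_k(x) at the current x

def independence_sample(rng):
    logw = g - u_of_x
    p = np.exp(logw - logw.max())
    p /= p.sum()                               # normalized pi(k | x)
    r = rng.random()                           # uniform on [0, 1)
    # smallest k with r < cumulative sum of pi(k' | x)
    return int(np.searchsorted(np.cumsum(p), r, side="right"))

draws = np.array([independence_sample(rng) for _ in range(20000)])
```

Each draw is an exact, uncorrelated sample from $\pi(k|x)$ for the fixed configuration, regardless of the previous state index.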
### Metropolized independence sampling {#section:algorithms:expanded-ensemble:metropolized-gibbs}
In what we term a *Metropolized independence sampling* move [@liu:biometrika:1996:metropolized-gibbs], a new state index $j$ is proposed from the distribution, $$\begin{aligned}
\alpha(j | x, i) &=& \begin{cases}
\frac{\pi(j | x)}{1 - \pi(i | x)} & j \ne i \\
0 & j = i
\end{cases}\end{aligned}$$ and accepted with probability, $$\begin{aligned}
P_\mathrm{accept}(j | x, i) &=& \min\left\{ 1, \frac{1 - \pi(i | x)}{1 - \pi(j | x)} \right\} .\end{aligned}$$ This scheme has the surprising property that, despite including a rejection step (unlike the independence sampling in Section \[section:algorithms:expanded-ensemble:gibbs-sampling\] above), the mixing rate in $\pi(k | x)$ can be proven to be greater than that of independence sampling [@liu:biometrika:1996:metropolized-gibbs], using the same arguments that Peskun used to demonstrate the optimality of the Metropolis-Hastings criterion over other criteria for swaps between two states. This can be rationalized by noting that Metropolized independence sampling updates always propose moving away from the current state, whereas standard independence sampling has some nonzero probability of proposing to remain in the current state.
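A sketch of one such Metropolized update, here acting directly on an illustrative conditional distribution $\pi(k|x)$ assumed to be precomputed:

```python
# Metropolized independence sampling: propose j != i with probability
# pi(j|x)/(1 - pi(i|x)), accept with min{1, (1-pi(i|x))/(1-pi(j|x))}.
import numpy as np

rng = np.random.default_rng(3)
pi = np.array([0.5, 0.3, 0.2])                 # pi(k | x), assumed known

def metropolized_independence(i):
    q = pi.copy()
    q[i] = 0.0
    q /= q.sum()                               # equals pi(j|x)/(1 - pi(i|x))
    j = int(rng.choice(len(pi), p=q))          # always proposes j != i
    if rng.random() < min(1.0, (1.0 - pi[i]) / (1.0 - pi[j])):
        return j
    return i

k = 0
visits = np.zeros(len(pi))
for _ in range(50000):
    k = metropolized_independence(k)
    visits[k] += 1
freq = visits / visits.sum()                   # should approach pi
```

Zeroing out the current index and renormalizing reproduces the proposal $\pi(j|x)/(1-\pi(i|x))$, since the remaining mass sums to $1-\pi(i|x)$.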
### Restricted range sampling {#section:algorithms:expanded-ensemble:restricted-range-gibbs}
In some situations, such as simulated scaling [@li-fajer-yang:jcp:2007:simulated-scaling] or other schemes in which the Hamiltonian differs in a non-trivial way among thermodynamic states, there may be a non-negligible cost in evaluating the unnormalized probability distributions $q_k(x)$ for all $k$. Because transitions to states with minimal phase space overlap will have very low probability, prior knowledge of which states have the highest phase space overlap could reduce computational effort with little loss in sampling efficiency if states with poor overlap are excluded from consideration for exchange.
One straightforward way to implement such a *restricted range sampling* scheme is to define a set of proposal states $\mathcal{S}_i$ for each state $i \in \{1, \ldots, K\}$, with the requirement that $i \in \mathcal{S}_j$ if and only if $j \in \mathcal{S}_i$, and propose transitions from the current $(x,i)$ to a new state $j$ with probability, $$\begin{aligned}
\alpha(j | x, i) &=& \begin{cases}
\frac{e^{g_j - u_j(x)}}{\sum\limits_{k \in S_i} e^{g_k - u_k(x)}} & j \in \mathcal{S}_i \\
\hspace{0.8cm} 0 & j \notin \mathcal{S}_i
\end{cases} .\end{aligned}$$ This proposal is accepted with probability, $$\begin{aligned}
P_{\text{accept}}(j | x, i) &=& \min\left(1,\frac{\sum\limits_{k \in S_i} e^{g_k - u_k(x)}}{\sum\limits_{k' \in S_j} e^{g_{k'} - u_{k'}(x)}}\right). \label{equation:restricted-range-acceptance-criteria}\end{aligned}$$
We can easily see that this scheme satisfies detailed balance for fixed $x$. The probability the sampler is initially in $i \in \mathcal{S}_j$ and transitions to $j \in \mathcal{S}_i$, where $j \ne i$, is given by, $$\begin{aligned}
\lefteqn{\pi(i | x)\alpha(j | x, i)P_{\text{accept}}(j | x, i)} \nonumber \\
&=& \left[\frac{e^{g_i - u_i(x)}}{Z(\mathcal{S}_{\text{all}})}\right]\left[\frac{e^{g_j - u_j(x)}}{Z(\mathcal{S}_i)}\right] \left[\min\left(1,\frac{Z(\mathcal{S}_i)}{Z(\mathcal{S}_j)}\right)\right] \\
&=& \left[\frac{e^{g_j - u_j(x)}e^{g_i - u_i(x)}}{Z(\mathcal{S}_{\text{all}})}\right]\left[\min\left(Z^{-1}(\mathcal{S}_i),Z^{-1}(\mathcal{S}_j)\right)\right]\\
&=& \left[\frac{e^{g_j - u_j(x)}}{Z(\mathcal{S}_{\text{all}})}\right]\left[\frac{e^{g_i - u_i(x)}}{Z(\mathcal{S}_j)}\right] \left[\min\left(1,\frac{Z(\mathcal{S}_j)}{Z(\mathcal{S}_i)}\right)\right] \\
&=& \pi(j | x)\alpha(i | x, j)P_{\text{accept}}(i | x, j) \end{aligned}$$ where $Z(\mathcal{S}_i) = \sum_{k \in S_i} e^{g_k - u_k(x)}$, and $\mathcal{S}_{\text{all}} = \{1,\ldots,K\}$. This is simply the detailed balance condition, ensuring that this scheme samples from the stationary distribution $\pi(k | x)$.
For example, we can define $\mathcal{S}_i = \{i-n,\ldots,i+n\}$, with $n \ll K$, for all $i$, making appropriate adjustments to this range for $i \le n$ and $i > K-n$. Then we only need to compute the reduced potentials for states $\{ \max(1, i-2n),\ldots, \min(K,i+2n) \}$, rather than all states $\{1, \ldots, K\}$. The additional evaluations for $\{ \max(1,i-2n),\ldots,i-n-1 \}$ and $\{ i+n+1,\ldots,\min(K,i+2n) \}$ are required to ensure that we can calculate both sums in the acceptance criterion (Eq. \[equation:restricted-range-acceptance-criteria\]).
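One restricted-range update with the window choice $\mathcal{S}_i = \{i-n,\ldots,i+n\}$ (clipped at the ends, and 0-indexed here) can be sketched as follows; the weights and reduced potentials are illustrative:

```python
# Restricted-range sampling: propose within S_i, accept with
# min(1, Z(S_i)/Z(S_j)) as in the acceptance criterion above.
import numpy as np

rng = np.random.default_rng(4)
K, n = 10, 2
g = np.zeros(K)                                # log-weights g_k
u_of_x = np.linspace(0.0, 4.5, K)              # u_k(x) at the current x

def S(i):                                      # clipped proposal set S_i
    return np.arange(max(0, i - n), min(K, i + n + 1))

def Z(i):                                      # Z(S_i): sum over S_i
    return np.exp(g[S(i)] - u_of_x[S(i)]).sum()

def restricted_range_update(i):
    cand = S(i)
    w = np.exp(g[cand] - u_of_x[cand])
    j = rng.choice(cand, p=w / w.sum())        # proposal alpha(j | x, i)
    if rng.random() < min(1.0, Z(i) / Z(j)):   # acceptance criterion
        return int(j)
    return i

i = 5
visits = np.zeros(K)
for _ in range(50000):
    i = restricted_range_update(i)
    visits[i] += 1
freq = visits / visits.sum()
```

The clipped windows satisfy the symmetry requirement $i \in \mathcal{S}_j \Leftrightarrow j \in \mathcal{S}_i$, since membership reduces to $|i-j| \le n$.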
Restricted range sampling simply reduces to independence sampling, as presented in Section \[section:algorithms:expanded-ensemble:gibbs-sampling\], when $\mathcal{S}_i = \{1,\ldots,K\}$, and all proposals are therefore accepted. We also note that Metropolized independence sampling, in Section \[section:algorithms:expanded-ensemble:metropolized-gibbs\] is exactly equivalent to using the restricted range scheme with $\mathcal{S}_i = \{1,\ldots,K\}$ *excluding* $i$, such that $\alpha(i|x,i) =0$ for all $i$. Any other valid scheme of sets $\mathcal{S}_i$ can be Metropolized by removing $i$ from $\mathcal{S}_i$.
Clearly, other state decomposition schemes exist, though the efficiency of such schemes will almost certainly depend on the underlying nature of the thermodynamic states under study. It is possible to define state schemes that preserve detailed balance, but that are not ergodic, such as $\mathcal{S}_1=\mathcal{S}_3=\mathcal{S}_5=\{1,3,5\}$ and $\mathcal{S}_2=\mathcal{S}_4=\mathcal{S}_6=\{2,4,6\}$ for $K=6$, so some care must be taken. In most cases, users will likely use straightforward rules to find locally defined sets such as $\mathcal{S}_i = \{i-n,\ldots,i+n\}$ or the Metropolized version $\mathcal{S}_i = \{i-n,\ldots,i-1,i+1,\ldots,i+n\}$, and ergodicity as well as detailed balance will be satisfied. Further analysis of the performance tradeoffs involved in choices of the sets, situations where sets might be chosen stochastically, or more efficient choices of sets that satisfy only balance is beyond the scope of this study.
### Other schemes
The list above is by no means intended to be exhaustive—many other schemes can be used for updating the state index $k$, provided they sample from $\pi(k | x)$. Compositions of different schemes are also permitted—even something as simple as applying the neighbor exchange scheme a number of times, rather than just once, could potentially improve mixing properties at little or no additional computational cost.
Replica exchange simulation {#section:algorithms-replica-exchange}
---------------------------
### Neighbor exchange {#section:algorithms:replica-exchange:neighbor-exchange}
In standard replica exchange simulation algorithms, an update of the state permutation $S$ of the $(X,S)$ sampler state only considers exchanges between neighboring states [@hukushimi-nemoto:j-phys-soc-jpn:1996:parallel-tempering; @hansmann:chem-phys-lett:1997:parallel-tempering-monte-carlo; @sugita-okamoto:chem-phys-lett:1999:parallel-tempering-md; @sugita-kitao-okamoto:jcp:2000:hamiltonian-exchange; @fukunishi-watanabe-takada:jcp:2002:hamiltonian-exchange; @jang-shin-pak:prl:2003:hamiltonian-exchange; @kwak-hansmann:prl:2005:hamiltonian-exchange]. One such scheme involves attempting to exchange either the set of state index pairs $\{(1,2), (3,4), \ldots\}$ or $\{(2,3), (4,5), \ldots\}$, chosen with equal probability [@footnote5].
Each state index pair $(i,j)$ exchange is attempted independently, with the exchange of states $i$ and $j$, associated with configurations $x_i$ and $x_j$, respectively, accepted with probability $$\begin{aligned}
P_\mathrm{accept}(x_i, i, x_j, j) &=& \min\left\{ 1, \frac{e^{-[u_i(x_j)+u_j(x_i)]}}{e^{-[u_i(x_i) + u_j(x_j)]}}\right\}
\label{eq:metropolis-replica}\end{aligned}$$
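One pass of this scheme can be sketched as follows, representing the exchange state by a permutation `perm` with `perm[k]` the replica currently at state `k` (0-indexed); the precomputed energy matrix `U[i, j]` $= u_i(x_j)$ is filled with illustrative random values:

```python
# One neighbor-swap pass of standard replica exchange: pick the even or
# odd set of adjacent state pairs at random and attempt each swap with
# the Metropolis criterion of Eq. (eq:metropolis-replica).
import numpy as np

rng = np.random.default_rng(5)
K = 6
U = rng.normal(size=(K, K))                    # U[i, j] = u_i(x_j)

def neighbor_swap_pass(perm):
    """perm[k] = index of the replica currently at state k."""
    start = int(rng.integers(2))               # even or odd pair set
    for k in range(start, K - 1, 2):
        ri, rj = perm[k], perm[k + 1]          # replicas at states k, k+1
        log_ratio = (U[k, ri] + U[k + 1, rj]) - (U[k, rj] + U[k + 1, ri])
        if np.log(rng.random()) < min(0.0, log_ratio):
            perm[k], perm[k + 1] = rj, ri      # swap state assignments
    return perm

perm = list(range(K))
for _ in range(50):
    perm = neighbor_swap_pass(perm)
```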
### Independence sampling {#section:algorithms:replica-exchange:gibbs-sampling}
Independence sampling in replica exchange would consist of generating an uncorrelated, independent sample from $\pi(S|X)$. The most straightforward scheme for doing so would require compiling a list of all possible $K!$ permutations of $S$, evaluating the unnormalized probability $\exp\left[-\sum_i u_{s_i}(x_i)\right]$ for each, normalizing by their sum, and then selecting a permutation $S'$ according to this normalized probability. Even if the entire $K \times K$ matrix ${{\bf U}} \equiv (u_{ij})$ with $u_{ij} \equiv u_i(x_j)$ is precomputed, the cost of this sampling scheme becomes impractical even for modestly large $K$.
Instead, we note that an *effectively* uncorrelated sample from $\pi(S|X)$ can be generated by running an MCMC sampler for a short time with trivial or small additional computational expense. For each step of the MCMC sampler, we pick a pair of state indices $(i,j)$, with $i \ne j$, uniformly from the set $\{1, \ldots, K\}$. The states associated with the configurations $x_i$ and $x_j$ are swapped using the same replica exchange Metropolis-like criterion shown in Eq. \[eq:metropolis-replica\], with the labels of the states updated after each swap. If we precompute the matrix ${{\bf U}}$, then these updates are extremely inexpensive, and many Monte Carlo update steps of the state permutation vector $S$ can be taken to decorrelate from the previous sample for a fixed set of configurations $X$, effectively generating an uncorrelated sample $S' \sim \pi(S | X)$.
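A sketch of this permutation-mixing update, again with an illustrative random energy matrix standing in for the precomputed $u_i(x_j)$ values:

```python
# Draw a nearly uncorrelated permutation from pi(S | X) by attempting
# many cheap random-pair swaps against the precomputed matrix U.
import numpy as np

rng = np.random.default_rng(6)
K = 8
U = rng.normal(size=(K, K))                    # U[i, j] = u_i(x_j)

def mix_permutation(perm, nswaps):
    """perm[k] = replica at state k; nswaps Metropolis swap attempts."""
    for _ in range(nswaps):
        k, l = rng.choice(K, size=2, replace=False)
        ri, rj = perm[k], perm[l]
        log_ratio = (U[k, ri] + U[l, rj]) - (U[k, rj] + U[l, ri])
        if np.log(rng.random()) < min(0.0, log_ratio):
            perm[k], perm[l] = rj, ri
    return perm

perm = list(range(K))
perm = mix_permutation(perm, K**3)             # e.g. K^3 swaps per iteration
```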
In the case where all $u_{ij}$ are equal, the number of swaps required is $K \ln K$—a well-known result due to @aldous-diaconis:1986:amer-math-monthly:shuffling. Empirically, we have found that swapping $K^3$ to $K^5$ times per state update iteration appears to be sufficient for the molecular cases examined in this paper and in our own work without consuming a significant portion of the replica exchange iteration time, but further experimentation may be required for some systems. Instead of performing random pair selections, we could also apply multiple passes of the neighbor exchange algorithm (Section \[section:algorithms:replica-exchange:neighbor-exchange\]). We note that complete mixing in state space is not a requirement for validity of the algorithm, but increasing the number of swaps will lead to increased mixing in state space until the limit of independent sampling is reached.
The method of multiple consecutive state swaps between configuration sampling is not entirely novel—we have heard several anecdotal examples of people experimenting with multiple consecutive state swaps, with sparse mentions in the literature [@pitera_understanding_2003; @martin-mayor:prb:2009:replica-exchange]. However, we believe this is the first study to characterize the theory and properties of this particular modification of standard replica exchange.
For parallel tempering, in which only the inverse temperature $\beta_k$ varies with state index $k$, computation of ${{\bf U}}$ is trivial if the potential energies of all $K$ states are known. On the other hand, computation of all $u_i(x_j)$ for all $i,j = 1,\ldots,K$ may be time-consuming if the potential energy must be recomputed for each state, such as in an alchemical simulation. If the Bennett acceptance ratio (BAR) [@bennett:jcp:1976:fe-estimate] or the improved multistate version MBAR [@shirts-chodera:jcp:2008:mbar] are used to analyze data generated during the simulation, however, all such energies are required anyway, and so no extra work is needed if the state update interval matches the interval at which energies are written to disk. Alternatively, if the number of Monte Carlo or molecular dynamics time steps in between each state update is large compared to $K$, the overall impact on simulation time of the need to compute ${{\bf U}}$ will be minimal.
### Other schemes
The list of replica exchange methods above is by no means exhaustive—other schemes can be used for updating the state index $k$, provided they sample from the space of permutations $\pi(S | X)$ in a way that preserves the conditional distribution. For example, it may be efficient for a node of a parallel computer to perform many exchanges only among replicas held in local memory, and to attempt few exchanges between nodes due to network constraints. Compositions of different schemes are again also permitted.
Metrics of efficiency {#section:algorithms:metrics-of-efficiency}
---------------------
There is currently no universally accepted metric for assessing sampling efficiency in molecular simulation, and thus it is difficult to quantify exactly how much our proposed algorithmic modifications improve sampling efficiency. In the end, efficient algorithms will decrease the computational effort to achieve an estimate of the desired statistical precision for the expectations or free energy differences of interest. Unfortunately, this can depend strongly on the property of interest, the thermodynamic states that are being sampled, and the dynamics of the system studied. While there exist metrics that describe the *worst case* convergence behavior by approximating the slowest eigenvalue of the Markov chain [@garren-smith:bernoulli:2000:estimating-second-eigenvalue; @zuckerman:jctc:2010:effective-sample-size], the worst case behavior can often differ from practical behavior by orders of magnitude [@diaconis:contemporary-math:1992:metropolis-running-time]. Here, we make use of a few metrics that will help us understand the time scale of these correlations in sampling under practical conditions.
Complex systems often get stuck in metastable states in configuration space with residence times a substantial fraction of the total available simulation time. This dynamical behavior hinders the sampling of uncorrelated configurations by molecular dynamics simulation or Metropolis Monte Carlo schemes [@schuette:j-comput-phys:1999:conformational-dynamics; @schuette:2002:metastable-states]. Systems can remain stuck in these metastable traps even as a replica in an expanded ensemble or replica exchange simulation travels through multiple thermodynamic states [@huang-bowman-bacallado-pande:pnas:2009:adaptive-seeding], either because the trap exists in multiple thermodynamic states or because the system does not have enough time to escape the trap before returning to states where the trap exists. While approaches for detecting and characterizing the kinetics of these metastable states exist [@defulhard-weber:lin-alg-appl:2005:pcca+; @huang-bowman-bacallado-pande:pnas:2009:adaptive-seeding], the combination of conformation space discretization error and statistical error makes the use of these approaches to compute relaxation times in configuration space not ideal for our purposes.
Here, we instead consider three simple statistics of the observed state index of each replica trajectory as surrogates to assess the improvements in overall efficiency of sampling. Instead of considering the full expanded ensemble simulation trajectory $\{(x^{(0)},k^{(0)}), (x^{(1)},k^{(1)}), \ldots \}$ or the replica exchange simulation trajectory $\{(X^{(0)},S^{(0)}), (X^{(1)},S^{(1)}), \ldots \}$, we consider the trajectory of individual replicas projected onto the sequence of thermodynamic state indices ${{\bf s}} \equiv \{s_0, s_1, \ldots\}$ visited during the simulation. In long replica exchange simulations, each replica executes an equivalent random walk, and statistics can be pooled [@chodera:jctc:2007:parallel-tempering-wham]. If significant metastabilities in configuration space exist, we hypothesize that these configurational states will have different typical reduced potential $u(x)$ distributions, and therefore induce metastabilities in the state index trajectory ${{\bf s}}$ as well that will be detectable by the methods described below. Each of the measures provides a different way to interpret the mixing of the simulation in state space; we will refer to all of them in the rest of the paper as “mixing times.”
### Relaxation time from empirical state transition matrix, $\tau_2$ {#section:applications:metrics-of-efficiency:second-eigenvalue}
One way to characterize how rapidly the simulation is mixing in state space is to examine the *empirical transition matrix* among states, the $K \times K$ row-stochastic matrix ${{\bf T}}$. An individual element of this matrix, $T_{ij}$, is the probability that an expanded ensemble or replica exchange walker currently in state $i$ will be found in state $j$ at the next iteration. From a given expanded ensemble or replica exchange simulation, we can estimate ${{\bf T}}$ by examining the expanded ensemble trajectory history or pooled statistics from individual replicas, $$\begin{aligned}
T_{ij} &\approx& \frac{N_{ij} + N_{ji}}{\sum\limits_{k=1}^K [N_{ik} + N_{ki}]} \label{equation:empirical-state-transition-matrix}\end{aligned}$$ where $N_{ij}$ is the number of times the replica is observed to be in state $j$ one update interval after being in state $i$. To obtain a transition matrix ${{\bf T}}$ with purely real eigenvalues, we have assumed both forward and time-reversed transitions in state indices are equally probable, which is true in the limit of infinite time for all methods described in this paper. To assess how quickly the simulation is transitioning between different thermodynamic states, we compute the eigenvalues $\{\mu_1, \mu_2, \ldots, \mu_K\}$ of ${{\bf T}}$ and sort them in descending order, such that $1 = \mu_1 \ge \mu_2 \ge \cdots \ge \mu_K$. If $\mu_2 = 1$, the Markov chain is *decomposable*, meaning that two or more subsets of the thermodynamic states exist between which *no* transitions have been observed, a clear indicator of very poor mixing in the simulation. In this case, the thermodynamic states characterized by $\{\lambda_1, \ldots, \lambda_K\}$ should be adjusted, or additional thermodynamic states inserted to enhance overlap in problematic regions. Several schemes for optimizing the choice of these state vectors exist [@kofke:2002:jcp:acceptance-probability; @katzberger-trebst-huse-troyer:j-stat-mech:2006:feedback-optimized-parallel-tempering; @trebst-troyer-hansmann:jcp:2006:optimized-replica-selection; @nadler-hansmann:pre:2007:generalized-ensemble; @gront-kolinski:j-phys-condens-matter:2007:optimized-replica-selection; @park-pande:pre:2007:choosing-weights-simulated-tempering; @shenfeld-xu:pre:2009:thermodynamic-length], but are beyond the scope of this work to discuss here.
If the second-largest eigenvalue $\mu_2$ is such that $0 < \mu_2 < 1$ we can estimate a corresponding [*relaxation time*]{} $\tau_2$ as $$\begin{aligned}
\tau_2 &=& \frac{\tau}{1 - \mu_2}\end{aligned}$$ where $\tau$ is the effective time between exchange attempts. $\tau_2$ then provides an estimate of the total simulation time required for the autocorrelation function in the state index $k^{(n)}$ of a replica at iteration $n$ of the simulation to decay to $1/e$ of the initial value. This estimate holds if the time scale of decorrelation in the configurational coordinate $x$ is fast compared to the decorrelation of the state index $k$; that is, if essentially uncorrelated samples could be drawn from $\pi(x | k)$ for each update of $x^{(n+1)} | k^{(n)}$. Because configuration updates for useful molecular problems generally have long correlation times, this $\tau_2$ time represents a lower bound on the observed correlation time for both the state index $k^{(n)}$ and the configuration $x^{(n)}$.
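The $\tau_2$ estimate can be sketched directly from a state-index trajectory: accumulate the symmetrized transition counts of Eq. \[equation:empirical-state-transition-matrix\], form the row-stochastic matrix, and take its second-largest eigenvalue. The short trajectory below is a synthetic example, not simulation data:

```python
# Estimate tau_2 from the empirical (symmetrized) transition matrix.
import numpy as np

K = 3
s = [0, 0, 1, 0, 1, 2, 1, 1, 2, 2, 1, 0] * 50  # toy state-index trajectory

N = np.zeros((K, K))
for a, b in zip(s[:-1], s[1:]):
    N[a, b] += 1
C = N + N.T                                    # symmetrized counts N_ij + N_ji
T = C / C.sum(axis=1, keepdims=True)           # row-stochastic matrix

# Eigenvalues are real for this reversible chain; sort descending.
mu = np.sort(np.linalg.eigvals(T).real)[::-1]
tau = 1.0                                      # time between exchange attempts
tau_2 = tau / (1.0 - mu[1])
```

If $\mu_2$ were exactly 1, the division would blow up, flagging a decomposable chain as discussed above.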
### Correlation time of the replica state index, $\tau_\mathrm{ac}$ {#section:applications:metrics-of-efficiency:integrated-autocorrelation-time}
As a more realistic estimate of how quickly correlations in the state index $k^{(n)}$ decay in a replica trajectory, we also directly compute the correlation time of the state index history using the efficiency computation scheme described in Section 5.2 of [@chodera:jctc:2007:parallel-tempering-wham], where $\tau_\mathrm{ac}$ is equal to the integrated area under the autocorrelation function. For replica exchange simulations, where all replicas execute an equivalent walk in state space, the unnormalized autocorrelation functions were averaged over all replicas before computing the autocorrelation time by integrating the area under the autocorrelation function. This time, $\tau_\mathrm{ac}$, gives a practical estimate of how much simulation time must elapse for correlations in the state index to fall to $1/e$. The [*statistical inefficiency*]{} is the number of samples required to collect each uncorrelated sample, and can be estimated for a Markovian process by $2\tau_\mathrm{ac}+1$, with $\tau_\mathrm{ac}$ in units of time between samples.
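A minimal estimator in this spirit sums the normalized autocorrelation function of the state-index history until its first zero crossing; this truncation heuristic is a common simplification, not the exact procedure of the cited reference, and the sticky two-state trajectory below is synthetic:

```python
# Integrated autocorrelation time of a state-index trajectory.
import numpy as np

rng = np.random.default_rng(7)
# toy "sticky" two-state index sequence with flip probability 0.05
s = np.zeros(20000, dtype=int)
for t in range(1, len(s)):
    s[t] = s[t - 1] if rng.random() < 0.95 else 1 - s[t - 1]

ds = s - s.mean()
var = np.dot(ds, ds) / len(s)
tau_ac = 0.0
for t in range(1, len(s) // 2):
    c = np.dot(ds[:-t], ds[t:]) / (len(s) - t) / var
    if c <= 0.0:                               # truncate at first zero crossing
        break
    tau_ac += c
stat_ineff = 2.0 * tau_ac + 1.0                # statistical inefficiency
```

For this Markov chain the true normalized autocorrelation is $(1-2p)^t$ with $p = 0.05$, so $\tau_\mathrm{ac}$ should come out near $\sum_{t\ge1} 0.9^t = 9$.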
### Average end-to-end transit time of the replica state index, $\tau_\mathrm{end}$ {#section:applications:metrics-of-efficiency:transit-time}
As an additional estimate of practical efficiency, we measure the average end-to-end transition time for the state index, $\tau_\mathrm{end}$. This is the average of the time elapsed between the first visit of the state index $k^{(n)}$ to one end point ($k=1$ or $k=K$) after visiting the opposite end point ($k=K$ or $k=1$, respectively). This metric of efficiency, or the related “round-trip” time, has seen common use in diagnosing efficiency for simulated-tempering and replica exchange simulations [@trebst-troyer-hansmann:jcp:2006:optimized-replica-selection; @nadler-hansmann:pre:2007:optimized-replica-exchange-moves; @escobedo-martinez-veracoechea:jcp:2008:optimization-of-expanded-ensemble; @denschlag-lingenheil-tavan:cpl:2009:optimal-temperature-ladders].
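The $\tau_\mathrm{end}$ bookkeeping can be sketched as a single pass over the state-index history, tracking which end point must be reached next; the short path below is fabricated for illustration:

```python
# Average end-to-end transit time of a state-index trajectory.
K = 4
s = [0, 1, 2, 3, 2, 1, 0, 0, 1, 2, 3, 3, 2, 1, 0]  # toy state-index path

transits = []
target, t_start = None, None
for t, k in enumerate(s):
    if k in (0, K - 1):                        # at an end state
        if target is None:                     # first end point seen
            target, t_start = (K - 1 if k == 0 else 0), t
        elif k == target:                      # reached the opposite end
            transits.append(t - t_start)
            target, t_start = (K - 1 if k == 0 else 0), t
tau_end = sum(transits) / len(transits)
```

Revisits of the same end point (e.g. the repeated `0` at steps 6 and 7) do not reset the clock; only arrival at the opposite end completes a transit.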
Model Illustration {#section:illustration}
==================
To illustrate the motivation behind the idea that speeding up sampling in one coordinate—the state index or permutation—will enhance sampling of the overall Markov chain of $(x,k)$ or $(X,S)$, we consider a simulated tempering simulation in a one-dimensional model potential, $$\begin{aligned}
U(x) &=& 10 (x-1)^2 (x+1)^2 ,\end{aligned}$$ shown in the top panel of Figure \[figure:model-example-trajectory\], along with the corresponding stationary distribution $\pi(x)$ at several temperatures from $k_B T = 1$ to $k_B T = 10$. To simplify our illustration, we directly numerically compute the log-weight factors $$\begin{aligned}
g_k &=& - \ln \int_{-\infty}^{+\infty} dx \, e^{-\beta_k U(x)}\end{aligned}$$ so that the simulation has an equal probability of being in each of the $K$ states.
The $K$ inverse temperatures $\beta_k$ that can be visited during the simulated tempering simulation are chosen to be geometrically spaced, $$\begin{aligned}
\beta_k &=& 10^{-(k-1)/(K-1)} \:\: \mathrm{for} \:\: k = 1,\ldots,K . \label{equation:geometric-temperature-spacing}\end{aligned}$$
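This setup can be reproduced numerically: geometrically spaced inverse temperatures per Eq. \[equation:geometric-temperature-spacing\], and log-weights $g_k = -\ln Z_k$ obtained by direct quadrature of the one-dimensional partition function (the grid and integration range below are illustrative choices):

```python
# Model setup: geometric beta ladder and log-weights by quadrature.
import numpy as np

K = 16
U = lambda x: 10.0 * (x - 1.0) ** 2 * (x + 1.0) ** 2
betas = 10.0 ** (-(np.arange(1, K + 1) - 1) / (K - 1))   # beta_1=1 ... beta_K=0.1

x = np.linspace(-4.0, 4.0, 8001)               # quadrature grid
dx = x[1] - x[0]
# Z_k = integral of exp(-beta_k U(x)) dx, by a simple Riemann sum
Z = np.array([np.exp(-b * U(x)).sum() * dx for b in betas])
g = -np.log(Z)                                 # g_k = -ln Z_k
```

The integrand decays like $e^{-(x^2-1)^2}$ even at the hottest state, so truncating the integral at $|x| = 4$ loses a negligible amount of mass.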
Each iteration of the simulation consists of an update of the temperature index $k$ using either neighbor exchange (Section \[section:algorithms:expanded-ensemble:neighbor-exchange\]) or independence sampling updates (Section \[section:algorithms:expanded-ensemble:gibbs-sampling\]), followed by 100 steps of Metropolis Monte Carlo [@metropolis:jcp:1953:metropolis-monte-carlo; @hastings:biometrika:1970:metropolis-hastings] using a Gaussian proposal with zero mean and standard deviation of 0.1 in the $x$-coordinate. Simulations are initiated from $(x_0,k_0) = (-1,1)$.
Illustrative trajectories for $K = 16$ are shown in the second and third panels of Figure \[figure:model-example-trajectory\], along with the correlation times $\tau_k$ and $\tau_x$ computed for the temperature index $k$ and the configurational coordinate $x$, respectively, from a long trajectory of $10^6$ iterations. Independence sampling in state space $k$ greatly reduces the correlation time, and hence statistical inefficiency, in $k$ compared to neighbor sampling. Importantly, because $k$ and $x$ are coupled, we clearly see that increasing the mixing in the index $k$ also substantially reduces the correlation time in the configurational coordinate $x$. We find that $\tau_x = 9.6 \pm 0.2$ for independence sampling, compared to $24.1 \pm 0.9$ for neighbor moves.
Figure \[figure:model-g-vs-ntemps\] compares the correlation times for $k$ and $x$ estimated from simulations of length $10^6$ for different numbers of temperatures spanning the same range of $k_B T \in [1,10]$, with temperatures again geometrically spaced according to Eq. \[equation:geometric-temperature-spacing\]. As the number of temperatures spanning this range increases, the correlation time in the temperature coordinate $k$ increases, as one would expect for a random walk on domains of increasing size. Notably, increasing the number of temperatures *also* has the effect of increasing the correlation time of the configuration coordinate $x$. When independence sampling is used to update the temperature index $k$ instead, the mixing time in $k$ is greatly reduced, and both correlation times $\tau_k$ and $\tau_x$ remain small even as the number of temperatures is increased.
Applications {#section:applications}
============
To demonstrate that the simple state update modifications we describe in Section \[section:algorithms\] lead to real efficiency improvements in practical simulation problems, we consider three typical simulation problems: An alchemical expanded ensemble simulation of united atom (UA) methane in water to compute the free energy of transfer from gas to water; a parallel tempering simulation of terminally-blocked alanine dipeptide in implicit solvent; and a two-dimensional replica exchange umbrella sampling simulation of alanine dipeptide in implicit solvent to compute the potential of mean force. These systems are small compared to modern applications of biophysical and biochemical interest. However, they are realistic enough to demonstrate the fundamental issues in multiensemble simulations, but still sufficiently tractable that a large quantity of data can be collected to prove that the differences in efficiency of our proposed mixing schemes are highly significant.
Expanded ensemble alchemical simulations of Lennard-Jones spheres in water
--------------------------------------------------------------------------
### United atom methane
--------------------------------------------------------- ------------------- -------------------- --------------------- -------------------- ----------------- -------------------- --------------------- -------------------
                                                          correlation times                                                                    speedup relative to neighbor exchange
                                                          $\tau_2$            $\tau_\mathrm{ac}$   $\tau_\mathrm{end}$   $\tau_\mathrm{N}$    $\tau_2$          $\tau_\mathrm{ac}$   $\tau_\mathrm{end}$   $\tau_\mathrm{N}$
*frequent (0.1 ps) state updates*
neighbor exchange                                         1.693 $\pm$ 0.008   6.7 $\pm$ 0.4        11.9 $\pm$ 0.2        5.9 $\pm$ 0.4        1.0               1.0                  1.0                   1.0
independence sampling                                     0.771 $\pm$ 0.004   6.2 $\pm$ 0.2        7.2 $\pm$ 0.1         5.4 $\pm$ 0.2        2.20 $\pm$ 0.02   1.08 $\pm$ 0.08      1.65 $\pm$ 0.04       1.10 $\pm$ 0.09
Metropolized indep.                                       0.645 $\pm$ 0.003   4.6 $\pm$ 0.2        6.6 $\pm$ 0.1         4.4 $\pm$ 0.2        2.62 $\pm$ 0.02   1.5 $\pm$ 0.1        1.81 $\pm$ 0.04       1.3 $\pm$ 0.1
*frequent (0.1 ps) updates, 1 000 attempts per interval*
neighbor exchange                                         0.764 $\pm$ 0.006   4.9 $\pm$ 0.3        7.2 $\pm$ 0.1         4.6 $\pm$ 0.3        1.0               1.0                  1.0                   1.0
independence sampling                                     0.769 $\pm$ 0.005   4.8 $\pm$ 0.2        7.2 $\pm$ 0.1         4.5 $\pm$ 0.2        0.99 $\pm$ 0.01   1.01 $\pm$ 0.08      1.00 $\pm$ 0.02       1.01 $\pm$ 0.07
Metropolized indep.                                       0.774 $\pm$ 0.005   5.0 $\pm$ 0.2        7.5 $\pm$ 0.1         4.7 $\pm$ 0.2        0.99 $\pm$ 0.01   0.98 $\pm$ 0.08      0.96 $\pm$ 0.02       0.98 $\pm$ 0.07
*infrequent (5 ps) state updates*
neighbor exchange                                         85.8 $\pm$ 2.3      177.7 $\pm$ 17.6     330.0 $\pm$ 16.1      105.3 $\pm$ 12.1     1.0               1.0                  1.0                   1.0
independence sampling                                     39.0 $\pm$ 0.9      69.2 $\pm$ 6.1       141.1 $\pm$ 4.7       49.1 $\pm$ 3.8       2.20 $\pm$ 0.08   2.6 $\pm$ 0.3        2.3 $\pm$ 0.1         2.1 $\pm$ 0.3
Metropolized indep.                                       31.8 $\pm$ 0.4      51.4 $\pm$ 1.9       115.7 $\pm$ 3.4       37.4 $\pm$ 1.4       2.70 $\pm$ 0.08   3.5 $\pm$ 0.4        2.9 $\pm$ 0.2         2.8 $\pm$ 0.3
--------------------------------------------------------- ------------------- -------------------- --------------------- -------------------- ----------------- -------------------- --------------------- -------------------
We first compare different types of Gibbs sampling state space updates in an expanded ensemble alchemical simulation of the kind commonly used to compute the free energy of hydration of small molecules [@fenwick-escobedo:jcp:2003:replica-exchange-expanded-ensembles; @escobedo-martinez-veracoechea:jcp:2008:optimization-of-expanded-ensemble]. If the state mixing schemes proposed here lead to more efficient sampling among alchemical states, a larger number of effectively uncorrelated samples will be generated for a simulation of a given duration, requiring less computational effort to reach the desired degree of statistical precision.
An OPLS-UA united atom methane particle ($\sigma = 0.373$ nm, $\epsilon=1.230096$ kJ/mol) was solvated in a cubic simulation cell containing 893 TIP3P [@jorgensen:jcp:1983:tip3p] waters. For all simulations, a modified version of [GROMACS]{} 4.5.2 [@gromacs4] was used [@footnote6]. A velocity Verlet integrator [@swope:jcp:1982:velocity-verlet] was used to propagate dynamics with a timestep of 2 fs. A Nosé-Hoover chain of length 10 [@martyna_nose-hoover_1992] and time constant $\tau_T = 10.0$ ps was used to thermostat the system to 298 K. A measure-preserving barostat was used according to Tuckerman et al. [@yu_measure-preserving_2010; @tuckerman_liouville-operator_2006] to maintain the average system pressure at 1 atm, with $\tau_p = 10.0$ ps and compressibility $4.5\times10^{-5}$ bar$^{-1}$. Rigid geometry was maintained for all waters using the analytical SETTLE scheme [@kollman:1992:j-comput-chem:settle]. A neighborlist and PME cutoff of 0.9 nm were used, with a PME order of 6, spacing of 0.1 nm and a relative tolerance of $10^{-6}$ at the cutoff. The Lennard-Jones potential was switched off, with the switch beginning at 0.85 nm and terminating at the cutoff of 0.9 nm. An analytical dispersion correction was applied beyond the Lennard-Jones cutoff to correct the energy and pressure computation [@shirts_accurate_2007]. The neighborlist was updated every 10 steps.
A set of $K = 6$ alchemically-modified thermodynamic states was used in which the Lennard-Jones interactions between the methane and solvent were eliminated using a soft-core Lennard-Jones potential [@shirts_solvation_2005], $$\begin{aligned}
U_{ij}(r;\lambda) &=& 4\epsilon_{ij}\lambda \, f(r;\lambda) [f(r;\lambda) - 1] \nonumber \\
f(r;\lambda) &\equiv& [\alpha(1-\lambda) + (r/\sigma_{ij})^6]^{-1}\end{aligned}$$ with values of the alchemical coupling parameter $\lambda_k$ chosen to be $\{0.0,0.3,0.6,0.7,0.8,1.0\}$.
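As an illustration, this soft-core potential can be evaluated directly. The sketch below is ours, not the simulation code: the soft-core parameter `alpha` is not given in this excerpt, so a common value of 0.5 is assumed, and the sign convention is chosen so that $\lambda = 1$ recovers the standard Lennard-Jones potential while $\lambda = 0$ switches the interaction off entirely.

```python
import numpy as np

def softcore_lj(r, lam, epsilon, sigma, alpha=0.5):
    """Soft-core Lennard-Jones potential U_ij(r; lambda).

    f(r; lambda) = [alpha*(1 - lambda) + (r/sigma)^6]^(-1)
    U            = 4*epsilon*lambda * f * (f - 1)

    At lambda = 1, f = (sigma/r)^6 and U reduces to the standard
    4*epsilon*[(sigma/r)^12 - (sigma/r)^6]; at r = 0 with lambda < 1,
    U stays finite (the "soft core").
    """
    f = 1.0 / (alpha * (1.0 - lam) + (r / sigma) ** 6)
    return 4.0 * epsilon * lam * f * (f - 1.0)
```

Because the potential is linear in $\lambda$ outside of $f$, switching between the $K = 6$ states only rescales and softens the same pairwise interaction.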
To simplify our analysis of efficiency, we fix the log-weights $g_k$ to “perfect weights,” where all states are visited with equal probability. This also decouples the efficiency of state updates from the efficiency of different weight update schemes, of which many have been proposed [@lyubartsev:jcp:1992:expanded-ensembles; @marinari-parisi:europhys-lett:1992:simulated-tempering; @wang-landau:prl:2001:wang-landau; @park-ensign-pande:pre:2006:bayesian-weight-update; @park-pande:pre:2007:choosing-weights-simulated-tempering; @li-fajer-yang:jcp:2007:simulated-scaling; @chelli:jctc:2010:optimal-weights-expanded-ensembles]. The “perfect” log-weights were estimated for this system as follows: A 1 ns expanded ensemble simulation using independence sampling was run, with weights $g_k$ initialized to zero, then adjusted using a Wang-Landau scheme [@li-fajer-yang:jcp:2007:simulated-scaling], until occupancy of each state was roughly even to within statistical noise. With these approximate weights, a 2 ns expanded ensemble simulation using independence sampling with fixed weights was run, and the free energy of each state was estimated using MBAR [@shirts-chodera:jcp:2008:mbar]. The log-weights $g_k$ were set to these estimated free energies, which were $\{0.0, 0.32, -0.46, -1.67, -2.83, -3.66\}$, in units of $k_{B}T$. Simulations using these weights deviated by an average of 5% from flat histogram occupancy in states, with an average maximum deviation over all simulations of less than 10%.
The state update procedure was carried out either every 0.1 ps (frequent update) or 5 ps (infrequent update), in order to test the effect of state updates that were much faster than, or on the order of, the conformational correlation times of molecular dynamics, as water orientational correlation times are a few picoseconds [@shirts_solvation_2003]. Production simulations with fixed log-weights were run for 25 ns (250 000 state updates) for frequent updates, or 100 ns (20 000 state updates) for infrequent updates. Three types of state moves were attempted: (1) neighbor exchange moves (described in Section \[section:algorithms:expanded-ensemble:metropolized-gibbs\]), (2) independence sampling (Section \[section:algorithms:expanded-ensemble:gibbs-sampling\]), and (3) Metropolized independence sampling (Section \[section:algorithms:expanded-ensemble:metropolized-gibbs\]). In the case of frequent updates, we additionally performed 1 000 trials of the state update every 0.1 ps, instead of a single update, before returning to coordinate update moves with molecular dynamics.
Statistics of the observed replica trajectories are shown in Table \[table:expanded-ensemble-methane\]. All three mixing efficiency measures of the state index trajectories described in Section \[section:algorithms:metrics-of-efficiency\] were computed: relaxation time of the empirical state transition matrix ($\tau_2$), autocorrelation of the state function ($\tau_{ac}$), and average end-to-end distance ($\tau_{\mathrm{end}}$).
We additionally look at a measure of correlation in the coordinate direction. For each configuration, we examine the number of O atoms of the water molecules $N$ that are found in the interior of the united atom methane, i.e. within 0.3 nm (87.5% of the Lennard-Jones $\sigma_{ij}=0.3428$ nm) of the center. We then compute the autocorrelation time $\tau_{N}$ of this variable, which is affected both by the dynamics of the state and by the dynamical response of the system to changes in state. Uncertainties in these autocorrelation times are computed by subdividing the trajectories into $N_S=10$ subtrajectories, computing the standard deviation of the estimates across subtrajectories, and then dividing by $\sqrt{N_S}$ to obtain the standard error for the full, $N_S\times$ longer trajectory. Uncertainties changed by less than 5% when computed with $N_S=20$ for frequent update simulations, and less than 10% for infrequent update simulations.
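The two error estimates used here, an integrated autocorrelation time and a blocking standard error, can be sketched as follows. This is a minimal illustration using a simple truncate-at-first-zero-crossing heuristic for the autocorrelation sum; the exact estimator behind the reported numbers may differ in its truncation rule.

```python
import numpy as np

def integrated_act(a):
    """Integrated autocorrelation time of a 1-D time series, in units of
    the sampling interval (0.5 for uncorrelated data in this convention).

    The normalized autocorrelation function is summed until it first
    crosses zero, a simple and common truncation heuristic.
    """
    a = np.asarray(a, dtype=float)
    da = a - a.mean()
    var = np.dot(da, da) / len(a)
    tau = 0.5
    for t in range(1, len(a)):
        c = np.dot(da[:-t], da[t:]) / (len(a) - t) / var
        if c <= 0.0:
            break
        tau += c
    return tau

def block_error(a, n_blocks=10):
    """Standard error of the mean from n_blocks contiguous sub-trajectories."""
    a = np.asarray(a, dtype=float)
    means = np.array([b.mean() for b in np.array_split(a, n_blocks)])
    return means.std(ddof=1) / np.sqrt(n_blocks)
```

For an AR(1)-like correlated series with lag-1 correlation $\rho$, `integrated_act` should return roughly $1/2 + \rho/(1-\rho)$, which is a useful sanity check before applying it to simulation output.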
The relaxation time $\tau_2$ estimated from the second eigenvalue of the empirical state transition matrix (Section \[section:applications:metrics-of-efficiency:second-eigenvalue\]) does appear to provide a lower bound for the other estimated mixing times. For the infrequent state updates, it is only about 25% smaller than $\tau_{N}$. This suggests that when transition times in state space are of the same order of magnitude as conformational rearrangements, $\tau_2$ is not only a lower bound, but is characteristic of sampling through the joint state-configuration space. We additionally note that the mixing time $\tau_2$ is empirically proportional to the interval between state update attempts; the mixing times for the infrequent update scheme are exactly (5 ps/0.1 ps) = 50 times longer than the frequent state mixing times, a direct consequence of the fact that the rate of successful state transitions is directly proportional to the rate of attempted transitions.
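A minimal sketch of the $\tau_2$ estimate follows. The helper name and details (e.g. whether the count matrix is symmetrized before normalization) are ours; the published estimator may differ in such details.

```python
import numpy as np

def relaxation_time(traj, K):
    """Estimate tau_2 = -1/ln|mu_2| from the empirical state transition matrix.

    traj : sequence of visited state indices in 0..K-1
    """
    T = np.zeros((K, K))
    for a, b in zip(traj[:-1], traj[1:]):
        T[a, b] += 1.0
    # Row-normalize to a stochastic matrix (guard against unvisited states).
    T /= np.maximum(T.sum(axis=1, keepdims=True), 1.0)
    mu = np.sort(np.abs(np.linalg.eigvals(T)))[::-1]
    return -1.0 / np.log(mu[1])
```

For a two-state chain that switches with probability $p$ per step, the exact answer is $-1/\ln(1-2p)$, which provides an easy correctness check.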
For both the frequent and infrequent state updates, independence sampling and Metropolized independence sampling yield a clear, statistically significant speedup by all sampling metrics. This speedup is accentuated for infrequent updates. For frequent updates, the speedup is between 1.3 and 2.6 for Metropolized independence sampling, while for infrequent updates, it ranges between 2.7 and 3.5, as seen in Table \[table:expanded-ensemble-methane\]. As expected, attempting many state updates in a row (1 000 state moves) using any of the state update schemes effectively recapitulates the independence sampling scheme. Repeated application of any method that obeys the balance condition will eventually converge to the same independent sampling distribution. If state updates are relatively inexpensive, then any state update scheme that ensures the correct distribution is sampled can be iterated many times, effectively resulting in an independence sampling scheme. Interestingly, this means that Metropolized independence sampling becomes worse when repeated several times, as it eventually turns into simple independence sampling.
Although the acceleration of independence sampling over neighbor exchange is more dramatic with longer intervals between state updates, more frequent state updates appear to always be better than less frequent updates. For example, neighbor exchange with more frequent updates achieves shorter correlation times than either independence sampling scheme with infrequent updates. Increased sampling frequency in state space therefore seems to be a good idea [@sindhikara-meng-roitberg:jcp:2008:exchange-frequency; @sindhikara-emerson-roitberg:jctc:2010:exchange-often-and-properly]. It is possible that there are conditions under which this conclusion might not hold; collective moves like long molecular dynamics trajectories of polymers might become disrupted by too frequent changes in state space. Additional study is required to understand this phenomenon. We finally note that for this particular system, Metropolized independence sampling is slightly but clearly better than independence sampling in all sampling measures, providing a strong incentive to use Metropolized independence sampling where convenient.
### Larger Lennard-Jones spheres
--------------------------------------- ------------------- -------------------- --------------------- -------------------- ----------------- -------------------- --------------------- -------------------
                                        correlation times                                                                    speedup relative to neighbor exchange
                                        $\tau_2$            $\tau_\mathrm{ac}$   $\tau_\mathrm{end}$   $\tau_\mathrm{N}$    $\tau_2$          $\tau_\mathrm{ac}$   $\tau_\mathrm{end}$   $\tau_\mathrm{N}$
*frequent (0.1 ps) state updates*
neighbor exchange                       9.51 $\pm$ 0.01     65.8 $\pm$ 4.2       126.3 $\pm$ 4.2       58.1 $\pm$ 4.3       1.0               1.0                  1.0                   1.0
independence sampling                   2.586 $\pm$ 0.009   42.9 $\pm$ 2.4       88.4 $\pm$ 2.7        41.5 $\pm$ 2.0       3.68 $\pm$ 0.01   1.5 $\pm$ 0.1        1.43 $\pm$ 0.06       1.4 $\pm$ 0.1
Metropolized indep.                     2.181 $\pm$ 0.006   48.6 $\pm$ 4.0       88.3 $\pm$ 3.0        46.7 $\pm$ 3.4       4.36 $\pm$ 0.01   1.4 $\pm$ 0.1        1.43 $\pm$ 0.07       1.2 $\pm$ 0.1
*infrequent (1 ps) state updates*
neighbor exchange                       95.0 $\pm$ 0.2      211.1 $\pm$ 58.9     507.6 $\pm$ 19.3      167.6 $\pm$ 16.0     1.0               1.0                  1.0                   1.0
independence sampling                   25.8 $\pm$ 0.1      67.3 $\pm$ 3.6       196.0 $\pm$ 5.8       63.1 $\pm$ 3.3       3.69 $\pm$ 0.02   3.1 $\pm$ 0.9        2.6 $\pm$ 0.1         2.7 $\pm$ 0.3
Metropolized indep.                     21.6 $\pm$ 0.1      66.8 $\pm$ 2.4       169.2 $\pm$ 4.7       62.1 $\pm$ 2.5       4.40 $\pm$ 0.02   3.2 $\pm$ 0.9        3.0 $\pm$ 0.1         2.7 $\pm$ 0.3
--------------------------------------- ------------------- -------------------- --------------------- -------------------- ----------------- -------------------- --------------------- -------------------
As united atom methane is much smaller than typical biomolecules of interest, we additionally examined an alchemical expanded ensemble simulation of a much larger Lennard-Jones sphere. In this case, the sphere has $\sigma_{ii} = 1.09$ nm and $\epsilon_{ii}=1.230096$ kJ/mol, again solvated in a cubic simulation cell containing 893 TIP3P [@jorgensen:jcp:1983:tip3p] waters. These parameters result in a sphere-water $\sigma_{ij}=0.561$ nm, and therefore a particle $5.0$ times as large in volume as the UA methane sphere. Because of the larger volume of the solute, $K = 18$ alchemically-modified thermodynamic states were required, with $\lambda$ = \[$0$, $0.15$, $0.3$, $0.45$, $0.55$, $0.6$, $0.64$, $0.66$, $0.68$, $0.70$, $0.72$, $0.75$, $0.78$, $0.81$, $0.84$, $0.87$, $0.90$, $1.0$\]. All other simulation parameters (other than simulation length) were the same as the UA methane simulations. Log-weights $g_k$ for the equilibrium expanded ensemble simulation were determined in the same manner as for united atom methane, except that a 15 ns simulation was used to generate the data for MBAR, yielding weights $g_k$ = $\{0.0$, $1.74$, $2.96$, $3.39$, $2.84$, $2.01$, $0.73$, $-0.34$, $-1.75$, $-3.35$, $-4.96$, $-7.19$, $-9.11$, $-10.70$, $-11.98$, $-12.98$, $-13.72$, $-14.65\}$. Frequent state updates were performed every 0.1 ps, but infrequent state moves were performed every 1 ps rather than 5 ps to obtain better statistics for the larger molecule. The production expanded ensemble simulations were run for a total of 100 ns for frequent exchange, and 250 ns for infrequent exchange. The same three types of moves in state space were attempted as with UA methane.
Statistics of the observed replica trajectories are shown in Table \[table:expanded-ensemble-bigLJ\]. All three convergence rate diagnostics of the state index trajectories described in Section \[section:algorithms:metrics-of-efficiency\] were computed. In general, the relaxation time estimated from the second eigenvalue of the empirical state transition matrix (Section \[section:applications:metrics-of-efficiency:second-eigenvalue\]) again provides a lower bound for the other computed relaxation times. For the infrequent sampling interval, $\tau_2$ is of the same order of magnitude as (2 to 5 times smaller than) the other sampling measures. Again, for both the frequent (0.1 ps) and infrequent (1 ps) state update intervals, independence sampling and Metropolized independence sampling yield a clear speedup over neighbor exchange. The improvement in sampling efficiency appears to hold for both small and large particles.
Parallel tempering simulations of terminally-blocked alanine peptide in implicit solvent
----------------------------------------------------------------------------------------
----------------------- ----------------- -------------------- --------------------- -------------------- -------------------- -------------------- --------------------
$\tau_2$ $\tau_\mathrm{ac}$ $\tau_\mathrm{end}$ $\tau_{\cos \phi}$ $\tau_{\sin \phi}$ $\tau_{\cos \psi}$ $\tau_{\sin \psi}$
neighbor exchange 91.8 $\pm$ 0.6 80 $\pm$ 2 360 $\pm$ 30 25 $\pm$ 2 110 $\pm$ 9 25 $\pm$ 2 66 $\pm$ 6
independence sampling 2.62 $\pm$ 0.01 1.60 $\pm$ 0.06 28.7 $\pm$ 0.7 12.4 $\pm$ 0.5 8.7 $\pm$ 0.4 11.8 $\pm$ 0.6 9.1 $\pm$ 0.5
----------------------- ----------------- -------------------- --------------------- -------------------- -------------------- -------------------- --------------------
We next consider a parallel tempering simulation, a form of replica exchange in which the thermodynamic states differ only in inverse temperature $\beta_k$. A system containing terminally-blocked alanine (sequence Ace-Ala-Nme) was constructed using the [LEaP]{} program [@ambertools10-leap] from the [AmberTools]{} 1.2 package with bugfixes 1–4 applied. The Amber parm96 forcefield was used [@AMBER-parm96] along with the Onufriev-Bashford-Case generalized Born-surface area (OBC GBSA) implicit solvent model (corresponding to model I of [@onufriev-bashford-case:2004:proteins:obc-gbsa], equivalent to [igb=2]{} in Amber’s [sander]{} program and using the [mbondi2]{} radii selected within [LEaP]{}).
A custom Python code making use of the GPU-accelerated [OpenMM]{} package [@friedrichs:2009:j-comput-chem:openmm; @eastman:2010:comp-sci-eng:openmm; @eastman:2010:j-comput-chem:openmm] and the [PyOpenMM]{} Python wrapper [@pyopenmm] was used to conduct the simulations. All forcefield terms are identical to those used in AMBER except for the surface area term, which was left as default in the OpenMM implementation through a GBSAOBCForce term. Parallel tempering simulations of 2 000 iterations were run, with dynamics propagated by 500 steps each iteration using a 2 fs timestep and the leapfrog Verlet integrator [@verlet:1967:phys-rev:verlet-integrator-1; @verlet:1967:phys-rev:verlet-integrator-2]. Velocities were reassigned from the Maxwell-Boltzmann distribution each iteration. The Python scripts for simulation and data analysis used here are available online at <http://simtk.org/home/gibbs>.
For the replica-mixing phase, the simulation employed either neighbor exchange (Section \[section:algorithms:replica-exchange:neighbor-exchange\]) or independence sampling (Section \[section:algorithms:replica-exchange:gibbs-sampling\]), with $K^3$ attempted swaps of replica pairs selected at random. The efficiency was measured in several ways, shown in Table \[table:alanine-dipeptide-parallel-tempering\]. In addition to the standard mixing metrics described in Section \[section:algorithms:metrics-of-efficiency\], an estimate of the configurational relaxation times was also made; due to the circular nature of the torsional coordinates $\phi$ and $\psi$ known to be slow degrees of freedom for this system [@chodera:mms:2006:long-time-dynamics], we instead computed the autocorrelation times for $\sin\phi$, $\cos\phi$, $\sin\psi$, and $\cos\psi$. All replicas were treated as equivalent, and their raw statistics (e.g. autocorrelation functions before normalization) were averaged to produce these estimates. Statistical error was again estimated by blocking.
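The replica-mixing phase with many randomly selected pair swaps can be sketched as follows. This is illustrative code under our own naming conventions: `s[i]` is the state index currently assigned to replica `i`, and `u[i, k]` is the reduced potential of replica `i`'s configuration evaluated in state `k`. Repeating Metropolized pair swaps many times (here $K^3$ attempts, as in the text) approaches independence sampling over replica permutations.

```python
import numpy as np

def mix_replicas(s, u, n_attempts, rng):
    """Attempt n_attempts Metropolized swaps of randomly chosen replica pairs.

    A swap of states between replicas i and j is accepted with probability
    min(1, exp[(u[i,s_i] + u[j,s_j]) - (u[i,s_j] + u[j,s_i])]).
    """
    K = len(s)
    for _ in range(n_attempts):
        i, j = rng.integers(K, size=2)          # i == j is a harmless no-op
        log_accept = (u[i, s[i]] + u[j, s[j]]) - (u[i, s[j]] + u[j, s[i]])
        if np.log(rng.random()) < log_accept:
            s[i], s[j] = s[j], s[i]
    return s
```

Because swaps only permute the existing state assignments, the state vector remains a permutation of $0,\dots,K-1$ throughout, and no configuration is ever left without a state.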
As expected, the various metrics indicate that the parallel tempering replicas mix in state space much more rapidly with independence sampling than when only neighbor exchanges are attempted. The amount by which mixing is accelerated depends on the metric used to quantify this, but it is roughly one to two orders of magnitude. The structural relaxation times also reflect a speedup, though much more modest than the acceleration in state space sampling—roughly a factor of two to ten, depending on the metric examined.
Two-dimensional replica exchange umbrella sampling of terminally-blocked alanine peptide in implicit solvent
------------------------------------------------------------------------------------------------------------
Finally, we consider a two-dimensional replica exchange umbrella sampling situation, commonly used to compute potentials of mean force along two coordinates of interest. We again consider the alanine dipeptide in implicit solvent, and employ umbrella potentials to restrain the $\phi$ and $\psi$ torsions near reference values $(\phi^0_k, \psi^0_k)$ for $K = 101$ replicas: 100 spaced evenly on a $10 \times 10$ toroidal grid, plus one replica without any bias potential for ease of post-simulation analysis.
Because harmonic restraints are not periodic, we employ a periodic bias potential based on the von Mises circular normal distribution, $$\begin{aligned}
U'_k(x) &\equiv& - \kappa \left[ \cos(\phi - \phi^0_k) + \cos(\psi - \psi^0_k) \right]\end{aligned}$$ where $\kappa$ has units of energy. For sufficiently large values of $\kappa$, this will localize the torsion angles in an approximately Gaussian distribution near the reference torsions $(\phi^0_k,\psi^0_k)$ with a standard deviation of $\sigma \equiv (\beta \kappa)^{-1/2}$.
Here, we employ a $\kappa$ of $(2 \pi / 30)^{-2} \beta^{-1}$ so that neighboring bias potentials are separated by $3 \sigma$. This was sufficient to localize sampling near the reference torsion values for most sterically unhindered regions. The simulation was run at 300 K, using a 2 fs timestep with 5 ps between replica exchange attempts. A total of 2 000 iterations were conducted, with each iteration consisting of mixing the replica state assignments via a state update phase, a new velocity assignment from the Maxwell-Boltzmann distribution, propagation of dynamics, and writing out the resulting configuration data. The first 100 iterations were discarded as equilibration.
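The bias potential and its width argument can be checked numerically. In the sketch below (ours, with $\beta$ set to 1 in reduced units), expanding $-\kappa\cos(\Delta\phi) \approx -\kappa + \kappa\,\Delta\phi^2/2$ near the minimum shows the sampled torsion is approximately Gaussian with $\sigma = (\beta\kappa)^{-1/2}$, so the $\kappa$ above makes the $2\pi/10$ grid spacing equal to $3\sigma$.

```python
import numpy as np

def umbrella_bias(phi, psi, phi0, psi0, kappa):
    """Periodic (von Mises) umbrella bias U'_k from the text."""
    return -kappa * (np.cos(phi - phi0) + np.cos(psi - psi0))

beta = 1.0                                  # reduced units
kappa = (2 * np.pi / 30) ** -2 / beta       # the value used in the text
sigma = (beta * kappa) ** -0.5              # = 2*pi/30 by construction
grid_spacing = 2 * np.pi / 10               # 10 x 10 toroidal grid
```

With these values, `grid_spacing / sigma` is exactly 3, matching the stated $3\sigma$ separation between neighboring umbrellas.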
The same mixing schemes examined in the parallel tempering simulation were evaluated here, and the results of the efficiency metrics are summarized in Table \[table:alanine-dipeptide-2d-umbrella-sampling\]. Note that the end-to-end time does not have a clear interpretation in terms of the average transit time between a maximum and minimum thermodynamic parameter here—it simply reflects the average time between exchanges between a particular localized umbrella and the unbiased state.
As in the parallel tempering case, we find that both mixing times in state space and the structural correlation times are reduced by use of Gibbs sampling, albeit to a lesser degree than in the parallel tempering case. Here, state relaxation times are reduced by a factor of two to six, depending on the metric considered, while structural correlation times are reduced by a factor of four or five.
----------------------- ---------------- -------------------- --------------------- -------------------- -------------------- -------------------- --------------------
$\tau_2$ $\tau_\mathrm{ac}$ $\tau_\mathrm{end}$ $\tau_{\cos \phi}$ $\tau_{\sin \phi}$ $\tau_{\cos \psi}$ $\tau_{\sin \psi}$
neighbor exchange 82 $\pm$ 4 31.0 $\pm$ 0.9 350 $\pm$ 30 47 $\pm$ 2 57 $\pm$ 2 26.4 $\pm$ 0.8 27.1 $\pm$ 0.9
independence sampling 24.2 $\pm$ 0.3 5.45 $\pm$ 0.06 175 $\pm$ 6 8.92 $\pm$ 0.09 9.9 $\pm$ 0.1 5.63 $\pm$ 0.04 6.09 $\pm$ 0.04
----------------------- ---------------- -------------------- --------------------- -------------------- -------------------- -------------------- --------------------
Discussion {#section:discussion}
==========
We have presented the framework of Gibbs sampling on the joint set of state and coordinate variables to better understand different expanded ensemble and replica exchange schemes, and demonstrated how this framework can identify simple ways to enhance the efficiency of expanded ensemble and replica exchange simulations by modifying the thermodynamic state update phase of the algorithms. While the actual efficiency improvement will depend on the system and simulation details, we believe there is likely little, if any, drawback to using these improvements in a broad range of situations.
For simulated and parallel tempering simulations, in which only the temperature is varied among the thermodynamic states, the recommended scheme (independence sampling updates, Sections \[section:algorithms:expanded-ensemble:gibbs-sampling\] and \[section:algorithms:replica-exchange:gibbs-sampling\]) is simple and inexpensive enough to be easily adopted by simulated and parallel tempering codes. Because calculation of exchange probability requires no additional energy evaluations, it is effectively free. Other expanded ensemble or replica exchange simulations where the potential does not vary between states (such as exchange among temperatures and pressures [@paschek-garcia:prl:2004:temperature-pressure-replica-exchange] or pH values [@meng-roitberg:jctc:2010:constant-pH-replica-exchange]) are also effectively free, as no additional energy evaluations are required in these cases either. As long as state space evaluations are cheap compared to configuration updates, independence sampling will mix more rapidly than neighbor updates, though this advantage will be reduced as the time spent on coordinate updates (by molecular dynamics or Monte Carlo simulation) between successive state updates becomes very small.
In some cases, exchange of information between processors during replica exchange in tightly coupled parallel codes may incur some cost, mainly in the form of latency. In many cases, however, the decrease in mixing times could more than offset any loss in parallel efficiency. If the recommended independence sampling schemes would consume a substantial fraction of the iteration time, or where the parallel implementation of state updates is already complex, it may still be relatively inexpensive to simply perform the same state update scheme *several times*, achieving enhanced mixing with little extra coding or computational overhead. Alternatively, the Gibbs sampling formalism could be used to design some other scheme that performs frequent state space sampling only on replicas that are local in the topology of the code.
For simulated scaling [@li-fajer-yang:jcp:2007:simulated-scaling] or Hamiltonian exchange simulations [@sugita-kitao-okamoto:jcp:2000:hamiltonian-exchange; @fukunishi-watanabe-takada:jcp:2002:hamiltonian-exchange; @jang-shin-pak:prl:2003:hamiltonian-exchange; @kwak-hansmann:prl:2005:hamiltonian-exchange], independence sampling updates of the state permutation vector $S$ require evaluation of the reduced potential $u_k(x)$ at all $K$ states for the current configuration (in simulated scaling) or all replica configurations $x_k$ (for Hamiltonian exchange), which requires more energy evaluations than the neighbor exchange scheme. However, if the intent is to make use of the multistate Bennett acceptance ratio (MBAR) estimator [@shirts-chodera:jcp:2008:mbar], which produces optimal estimates of free energy differences and expectations, all of these energies are required for analysis anyway, and so the computational impact on simulation time is negligible. It is more computationally efficient to evaluate these additional reduced potentials *during* the simulation than when post-processing simulation data, which is especially true if the additional reduced potential evaluations are done in parallel. Alternatively, if a simulated scaling simulation is run and one does not wish to use MBAR, restricted range state updates (Section \[section:algorithms:expanded-ensemble:restricted-range-gibbs\]) offer improved mixing behavior with a minimal number of additional energy evaluations.
We have found that examining the exchange statistics, the empirical state transition matrix and its dominant eigenvalues, is extremely useful in diagnosing equilibration and convergence, as well as poor choices of thermodynamic states. It is often very easy to see, from the diagonally dominant structure of this matrix, where regions of poor state overlap occur. Poor overlap among sets of thermodynamic states observed early in simulations from the empirical state transition matrix are likely to also frustrate post-simulation analysis with techniques like MBAR and histogram reweighting methods [@shirts-chodera:jcp:2008:mbar; @chodera:jctc:2007:parallel-tempering-wham; @kumars:WHAM], making such metrics useful diagnostic tools.
For more complex state topologies in expanded ensemble or replica exchange simulations, where for example several different pressures or temperatures are included simultaneously, there may not exist a simple grid of values, or it may not be easy to identify which states are the most efficient neighbors. Using independence sampling eliminates the need to plan efficient exchange schemes among neighbors, or even to determine which states are neighbors. This may encourage the addition of states that aid in reducing the correlation time of the overall Markov chain solely by speeding decorrelation of conformational degrees of freedom, since they will automatically couple to states with reasonable phase space overlap.
It is important to stress, however, that expanded ensemble and replica exchange simulations are not a cure-all for all systems with poor sampling. In the presence of a first-order or pseudo-first-order phase transition, phase space mixing may still take an exponentially long time even when simulated or parallel tempering algorithms are used [@bhatnagar-randall:acm:2004:torpid-mixing]. Optimization of the state exchange scheme, as described here, can only help so much; further efficiency gains would require design of intermediate states that abolish the first-order phase transition. Schemes for optimal state selection are an area of active research [@kofke:2002:jcp:acceptance-probability; @katzberger-trebst-huse-troyer:j-stat-mech:2006:feedback-optimized-parallel-tempering; @trebst-troyer-hansmann:jcp:2006:optimized-replica-selection; @nadler-hansmann:pre:2007:generalized-ensemble; @gront-kolinski:j-phys-condens-matter:2007:optimized-replica-selection; @park-pande:pre:2007:choosing-weights-simulated-tempering; @shenfeld-xu:pre:2009:thermodynamic-length].
Finally, we observe that the independence sampling scheme for a simulated tempering simulation or any simulation where the contribution to the reduced potential is a thermodynamic parameter $\lambda$ multiplying a conjugate configuration-dependent variable $h(x)$ naturally generalizes to a continuous limit. As the number $K$ of thermodynamic states $\lambda_k$ is increased between some fixed lower and upper limits, this process eventually results in the thermodynamic state index $k$ effectively becoming a continuous variable $\lambda$ [@iba:intl-j-mod-phys-c:2001:extended-ensemble]. Such a *continuous tempering* simulation would sample from the joint distribution $\pi(x,\lambda) \propto \exp[-\lambda h(x) + g(\lambda)]$, with the continuous log weighting function $g(\lambda)$ replacing the discrete $g_k$ in simulated tempering simulations.
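A Gibbs update for such a continuous tempering parameter can be sketched by discretizing $\lambda$ on a fine grid. This is an illustrative sketch only: `g` is supplied as a callable log-weight function $g(\lambda)$, and `h_x` is the value of the conjugate variable $h(x)$ for the current configuration.

```python
import numpy as np

def sample_lambda(h_x, g, lam_grid, rng):
    """One Gibbs update of a finely discretized continuous tempering parameter.

    Draws lambda from p(lambda | x) proportional to exp[-lambda*h(x) + g(lambda)],
    evaluated on lam_grid.
    """
    logp = -lam_grid * h_x + g(lam_grid)
    logp -= logp.max()            # stabilize the exponentials
    p = np.exp(logp)
    p /= p.sum()
    return rng.choice(lam_grid, p=p)
```

As the grid is refined, this update approaches a true continuous-$\lambda$ Gibbs step, with the grid playing the role of the $K \to \infty$ limit of discrete thermodynamic states.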
The Gibbs sampler and variations on it remain exciting areas for future exploration, and we hope that our conditional state space sampling formulation will make it much easier for other researchers to envision, develop, and implement new schemes for sampling from multiple thermodynamics states. We also hope it encourages exploration of further connections between the two deeply interrelated fields of statistical mechanics and statistical inference.
The authors thank Sergio Bacallado (Stanford University), Jed Pitera (IBM Almaden), Adrian Roitberg (University of Florida), Scott Schmidler (Duke University), and William Swope (IBM Almaden) for insightful discussions on this topic, and Imran Haque (Stanford University), David Minh (Argonne National Labs), Victor Martin-Mayor (Universidad Complutense de Madrid), and Anna Schnider (UC-Berkeley) for a critical reading of the manuscript, and especially David M. Rogers (Sandia National Labs) who recognized a key error in an early version of the restricted range sampling method. JDC acknowledges support from a QB3-Berkeley Distinguished Postdoctoral Fellowship. Additionally, the authors are grateful to [OpenMM]{} developers Peter Eastman, Mark Friedrichs, Randy Radmer, and Christopher Bruns and project leader Vijay Pande (Stanford University and SimBios) for their generous help with the [OpenMM]{} GPU-accelerated computing platform and associated [PyOpenMM]{} Python wrappers.
[^1]: Corresponding author
---
abstract: 'We study how to learn a semantic parser of state-of-the-art accuracy with less supervised training data. We conduct our study on WikiSQL, the largest hand-annotated semantic parsing dataset to date. First, we demonstrate that question generation is an effective method that empowers us to learn a state-of-the-art neural network based semantic parser with thirty percent of the supervised training data. Second, we show that applying question generation to the full supervised training data further improves the state-of-the-art model. In addition, we observe that there is a logarithmic relationship between the accuracy of a semantic parser and the amount of training data.'
author:
- |
Daya Guo$^1$[^1] , Yibo Sun$^3$$^*$, Duyu Tang$^2$, Nan Duan$^2$, Jian Yin$^1$,\
**Hong Chi$^2$, James Cao$^2$, Peng Chen$^2$, and Ming Zhou$^2$\
$^1$ The School of Data and Computer Science, Sun Yat-sen University.\
Guangdong Key Laboratory of Big Data Analysis and Processing, Guangzhou, P.R.China\
$^2$ Microsoft Research $^3$ Harbin Institute of Technology\
[{guody5@mail2,issjyin@mail}.sysu.edu.cn]{}\
[{dutang,nanduan,hongchi,jcao,peche,mingzhou}@microsoft.com]{}\
[ybsun@ir.hit.edu.cn]{}\
**
bibliography:
- 'emnlp2018.bib'
title: |
Question Generation from SQL Queries Improves\
Neural Semantic Parsing
---
Introduction
============
Semantic parsing aims to map a natural language utterance to an executable program (logical form) [@zelle1996learning; @wong2007learning; @zettlemoyer2007online]. Recently, neural network based approaches [@dong-lapata:2016:P16-1; @jia-liang:2016:P16-1; @xiao-dymetman-gardent:2016:P16-1; @guu-EtAl:2017:Long; @P18-1069] have achieved promising performance in semantic parsing. However, neural network approaches are data-hungry: their performance closely correlates with the volume of training data. In this work, we study the influence of training data on the accuracy of neural semantic parsing, and how to train a state-of-the-art model with less training data.
We conduct the study on WikiSQL [@zhong2017seq2sql], the largest hand-annotated semantic parsing dataset, which is larger than other datasets in terms of both the number of logical forms and the number of schemata. The task is to map a natural language question to a SQL query. We use a state-of-the-art end-to-end neural semantic parser (detailed in Section \[section:semantic-parsing\]), and vary the number of supervised training instances. Results show that there is a logarithmic relationship between accuracy and the amount of training data, which is consistent with the observations in computer vision tasks [@sun2017revisiting].
We further study how to achieve state-of-the-art parsing accuracy with less supervised data, since annotating a large scale semantic parsing dataset requires funds and domain expertise. We achieve this through question generation, which generates natural language questions from SQL queries. Our question generation model is based on sequence-to-sequence learning. Latent variables [@cao2017latent] are introduced to increase the diversity of generated questions. The artificially generated question-SQL pairs can be viewed as pseudo-labeled data, which can be combined with a small amount of human-labeled data to train the semantic parser.
Results on WikiSQL show that the state-of-the-art logical form accuracy drops from 60.7% to 53.7% with only thirty percent of the training data, while increasing to 61.0% when we add the pseudo-labeled data generated by the question generation model. Applying the question generation model to the full training data brings a further improvement of 3.0% absolute. We further conduct a transfer learning experiment that applies our approach trained on WikiSQL to WikiTableQuestions [@pasupat-liang:2015:ACL-IJCNLP]. Results show that incorporating generated instances improves the state-of-the-art neural semantic parser [@krishnamurthy-dasigi-gardner:2017:EMNLP2017].
Overview of the Approach
========================
Our task aims to map a question to a SQL query, which is executable over a table to yield the answer. Formally, the task takes a question $q$ and a table $t$ consisting of $n$ column names and $n \times m$ cells as the input, and outputs a SQL query $y$. In this section, we give an overview of our approach, which is composed of several components.
![An overview of our approach that improves semantic parsing with question generation.[]{data-label="fig:workflow"}](workflow.pdf){width=".46\textwidth"}
Figure \[fig:workflow\] gives an overview of our approach. First, given a table, a SQL query sampler is used to sample valid, realistic, and representative SQL queries. Second, a question generation component takes SQL queries as inputs to obtain natural language questions. Here, the question generation model is learnt from small-scale supervised training data that consists of SQL-question pairs. Lastly, the generated question-SQL pairs are viewed as pseudo-labeled data, which are combined with the supervised training data to train the semantic parser.
Since we conduct the experiment on the WikiSQL dataset, we follow and use the same template-based SQL sampler, as summarized in Table \[table:sql-sampler\]. The details of the semantic parser and the question generation model are introduced in Sections \[section:semantic-parsing\] and \[section:qg\], respectively.
Variable                    Description
--------------------------- ---------------------------------------------------------------------
$agg\_col$ or $cond\_col$   The aggregation column $agg\_col$ and the condition column $cond\_col$ can be one of the columns in the table.
$agg\_op$                   The aggregation operator $agg\_op$ can be empty or *COUNT*. If the type of $agg\_col$ is numeric, $agg\_op$ can additionally be one of *MAX* and *MIN*.
$cond\_op$                  The condition operator $cond\_op$ is $=$. If the type of $cond\_col$ is numeric, $cond\_op$ can additionally be one of $>$ and $<$.
$cond$                      The condition value $cond$ can be any cell value under the $cond\_col$. If the type of $cond\_col$ is numeric, $cond$ can be a numerical value sampled between the minimum and maximum values in the $cond\_col$.
--------------------------- ---------------------------------------------------------------------

: Rules of the template-based SQL query sampler.[]{data-label="table:sql-sampler"}
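As a concrete illustration, the sampling rules in the table above can be sketched in a few lines of Python. This is a minimal sketch: the schema representation, the function name, and the rounding of sampled numeric values are our assumptions, not from the paper.

```python
import random

def sample_sql(table):
    """Sample one SQL query following the template rules above.

    `table` maps column names to {"type": ..., "cells": [...]}.
    """
    columns = list(table)
    agg_col = random.choice(columns)
    cond_col = random.choice(columns)

    # Aggregator: empty or COUNT; numeric columns also allow MAX/MIN.
    agg_ops = ["", "COUNT"]
    if table[agg_col]["type"] == "numeric":
        agg_ops += ["MAX", "MIN"]
    agg_op = random.choice(agg_ops)

    # Condition operator: "=" always; numeric columns also allow ">" and "<".
    cond_ops = ["="]
    if table[cond_col]["type"] == "numeric":
        cond_ops += [">", "<"]
    cond_op = random.choice(cond_ops)

    # Condition value: a cell under cond_col; numeric columns may also
    # use any value between the column's minimum and maximum.
    cells = table[cond_col]["cells"]
    if table[cond_col]["type"] == "numeric" and random.random() < 0.5:
        cond = round(random.uniform(min(cells), max(cells)), 1)
    else:
        cond = random.choice(cells)

    sel = f"{agg_op}({agg_col})" if agg_op else agg_col
    return f"SELECT {sel} WHERE {cond_col} {cond_op} {cond!r}"

table = {"name": {"type": "text", "cells": ["superb", "ajax"]},
         "built": {"type": "numeric", "cells": [1899, 1917, 1943]}}
print(sample_sql(table))
```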
Semantic Parsing Model {#section:semantic-parsing}
======================
We use a state-of-the-art end-to-end semantic parser [@sun2018semantic] that takes a natural language question as the input and outputs a SQL query, which is executed on a table to obtain the answer. To make the paper self-contained, we briefly describe the approach in this section.
The semantic parser is abbreviated as STAMP, which is short for Syntax- and Table-Aware seMantic Parser. Based on the encoder-decoder framework, STAMP takes a question as the input and generates a SQL query. It extends pointer networks [@zhong2017seq2sql; @vinyals2015pointer] by incorporating three “channels” in the decoder: the column channel predicts column names, the value channel predicts table cells, and the SQL channel predicts SQL keywords. An additional switching gate selects which channel is used for generation. In STAMP, the probability of generating a token is calculated as Equation \[equa:our\], where $p_z(\cdot)$ is the probability of the channel $z_t$ being chosen, and $p_w(\cdot)$ is the probability distribution of generating a word $y_t$ from the selected channel. $$\label{equa:our}
p(y_t| y_{<t}, x) = \sum_{z_t} p_w(y_t | z_t, y_{<t}, x) p_z(z_t| y_{<t}, x)$$
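The marginalization over channels in the equation above is a simple mixture; the toy example below illustrates it (the gate probabilities and per-channel distributions are made-up numbers, not model outputs):

```python
import numpy as np

# Gate probabilities p_z over the (SQL, column, value) channels:
p_z = np.array([0.2, 0.5, 0.3])

# Per-channel distributions p_w over a shared output space of 4 tokens.
# A channel puts zero mass on tokens outside its own inventory.
p_w = np.array([
    [1.0, 0.0, 0.0, 0.0],   # SQL channel: one keyword
    [0.0, 0.7, 0.3, 0.0],   # column channel: two column names
    [0.0, 0.0, 0.0, 1.0],   # value channel: one cell
])

# p(y_t | y_<t, x) = sum_z p_w(y_t | z_t) * p_z(z_t)
p_y = p_z @ p_w
assert np.isclose(p_y.sum(), 1.0)   # still a valid distribution
```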
Specifically, the encoder takes a question as the input, uses bidirectional RNN with GRU cells to compute the hidden states, and feeds the concatenation of both ends as the initial state of the decoder. The decoder has another GRU to calculate the hidden states.
Each channel is implemented with an attentional neural network. In the SQL channel, the input of the attention module includes the decoder hidden state and the embedding of the SQL keyword to be calculated (i.e. $e^{sql}_i$). $$\label{equa:sql-channel}
p_w^{sql}(i) \propto exp(W_{sql} [h^{dec}_t;e^{sql}_i])$$
In the column channel, the vector of a column name includes two parts, as given in Equation \[equa:column-channel\]. The first vector ($h^{col}_i$) is calculated with a bidirectional GRU because a column name might contain multiple words. The second vector is a question-aware cell vector, which is weighted averaged over the cell vectors belonging to the column. Cell vectors ($h^{cell}_i$) are also obtained by a bidirectional GRU. The importance of a cell is measured by the number of co-occurred question words, which is further normalized through a $softmax$ function to yield the final weight $\alpha^{cell}_j \in [0,1]$. $$\label{equa:column-channel}
p_w^{col}(i)\propto exp(W_{col} [h^{dec}_t;h^{col}_i;\sum_{j \in col_i} \alpha^{cell}_j h^{cell}_j])$$
In the value channel, the model computes a weighted average of two distributions, as in Equation \[equa:cell-channel\]. Similar to $p^{sql}(\cdot)$, a standard cell distribution $\hat{p}_w^{cell}(\cdot)$ is calculated over the cells belonging to the last predicted column name. The model incorporates an additional probability distribution $\alpha^{cell}(\cdot)$ based on the aforementioned word co-occurrence. The hyper parameter $\lambda$ is tuned on the dev set. $$\label{equa:cell-channel}
p_w^{cell}(j) = \lambda \hat{p}_w^{cell}(j) + (1-\lambda)\alpha^{cell}_j$$
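The value-channel interpolation is a convex combination of the two distributions; a minimal sketch with invented attention and co-occurrence scores:

```python
import numpy as np

lam = 0.6  # the hyper parameter lambda, tuned on the dev set in the paper

# Attention-based cell distribution over 3 cells of the last predicted column:
p_hat = np.array([0.1, 0.7, 0.2])

# Co-occurrence-based distribution: softmax over question-word overlap counts.
counts = np.array([0.0, 2.0, 1.0])
alpha = np.exp(counts) / np.exp(counts).sum()

# p_w^cell(j) = lambda * p_hat(j) + (1 - lambda) * alpha_j
p_cell = lam * p_hat + (1 - lam) * alpha
assert np.isclose(p_cell.sum(), 1.0)  # convex combination stays normalized
```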
Please see more details on model training and inference in .
Question Generation Model {#section:qg}
=========================
In this section, we present our SQL-to-question generation approach, which takes a SQL query as the input and outputs a natural language question. Our approach is based on sequence-to-sequence learning [@sutskever2014sequence; @Bahdanau2015]. In order to replicate rare words from SQL queries, we adopt the copying mechanism. In addition, we incorporate latent variables to increase the diversity of generated questions.
Encoder-Decoder
---------------
#### Encoder:
A bidirectional RNN with gated recurrent unit (GRU) [@cho-EtAl:2014:EMNLP2014] is used as the encoder to read a SQL query $x=(x_1,...,x_T)$. The forward RNN reads a SQL query in a left-to-right direction, obtaining hidden states $(\overrightarrow{h_1},...,\overrightarrow{h_T})$. The backward RNN reads reversely and outputs $(\overleftarrow{h_1},...,\overleftarrow{h_T})$. We then get the final representation $(h_1,...,h_T)$ for each word in the query, where $h_j=[\overrightarrow{h_j};\overleftarrow{h_j}]$. The representation of the source sentence $h_x$ $=$ ($[\overrightarrow{h_T};\overleftarrow{h_1}]$) is used as initial hidden state of the decoder.
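For illustration, the bidirectional encoding and the construction of $h_j$ and $h_x$ can be sketched with a hand-rolled GRU in NumPy. Toy sizes and random parameters are used, and for brevity the two directions share parameters here, whereas a real bidirectional GRU learns separate parameters per direction:

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_h, T = 4, 3, 5   # embedding size, hidden size, sequence length

def gru_cell(x, h, W, U, b):
    """One GRU step; W, U, b stack the update/reset/candidate gates."""
    z = 1 / (1 + np.exp(-(W[0] @ x + U[0] @ h + b[0])))   # update gate
    r = 1 / (1 + np.exp(-(W[1] @ x + U[1] @ h + b[1])))   # reset gate
    n = np.tanh(W[2] @ x + U[2] @ (r * h) + b[2])         # candidate state
    return (1 - z) * h + z * n

W = rng.normal(size=(3, d_h, d_in))
U = rng.normal(size=(3, d_h, d_h))
b = np.zeros((3, d_h))
xs = rng.normal(size=(T, d_in))   # embedded SQL tokens x_1 .. x_T

def run(seq):
    h, states = np.zeros(d_h), []
    for x in seq:
        h = gru_cell(x, h, W, U, b)
        states.append(h)
    return states

fwd = run(xs)                 # left-to-right pass
bwd = run(xs[::-1])[::-1]     # right-to-left pass, re-aligned to positions

# Per-token representation h_j = [forward h_j ; backward h_j]
h_tok = [np.concatenate([f, g]) for f, g in zip(fwd, bwd)]
# Sentence representation h_x = [forward h_T ; backward h_1],
# used as the initial hidden state of the decoder.
h_x = np.concatenate([fwd[-1], bwd[0]])
assert h_x.shape == (2 * d_h,)
```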
#### Decoder:
We use a GRU with an attention mechanism as the decoder. At each time-step $t$, the attention mechanism obtains the context vector $c_{t}$, which is computed in the same way as multiplicative attention [@luong2015effective]. Afterwards, the concatenation of the context vector, the embedding of the previously predicted word $y_{t-1}$, and the last hidden state $s_{t-1}$ is fed to the next step. $$\label{equa:update}
s_t= GRU(s_{t-1},y_{t-1},c_t)$$ After obtaining hidden states $s_{t}$, we adopt the copying mechanism that predicts a word from the target vocabulary or from the source sentence (detailed in Subsection \[subsec:copy mechanism\]).
Incorporating Copying Mechanism {#subsec:copy mechanism}
-------------------------------
In our task, the generated question utterances typically include informative yet low-frequency words such as named entities or numbers. Usually, these words are not included in the target vocabulary but come from SQL queries. To address this, we follow [CopyNet]{} [@gu-EtAl:2016:P16-1] and incorporate a copying mechanism to select whether to generate from the vocabulary or copy from SQL queries.
The probability distribution of generating the $t$-th word is calculated as Equation \[equa:copy distribution\], where $\psi_g(\cdot)$ and $\psi_c(\cdot)$ are scoring functions for generating from the vocabulary $\boldsymbol{\nu}$ and copying from the source sentence x, respectively. $$\label{equa:copy distribution}
p(y_t|y_{<t},x)=\frac{e^{\psi_g(y_t)}+e^{\psi_c(y_t)}}{\sum_{v\in\boldsymbol{\nu}}e^{\psi_g(v)}+\sum_{w\in{\textbf{x}}}e^{\psi_c(w)}}$$
The two scoring functions are calculated as follows, where $W_g$ and $W_c$ are model parameters, $v_i$ is the one-hot indicator vector for $y_i$ and $h_i$ is the hidden state of word $y_i$ in the source sentence. $$\label{equa:generate}
\begin{split}
&\psi_g(y_i)=v_i^TW_gs_t \\
&\psi_c(y_i)=tanh({h_i}^TW_c)s_t
\end{split}$$
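The shared normalization over generate and copy scores can be sketched as follows. Toy random parameters are used, and for clarity vocabulary entries and source positions are kept separate, whereas the equation above additionally sums the two exponentiated scores for a word that appears in both:

```python
import numpy as np

rng = np.random.default_rng(1)
V, d = 6, 4                      # target vocabulary size, hidden size

s_t = rng.normal(size=d)         # decoder hidden state at step t
W_g = rng.normal(size=(V, d))    # generate-mode parameters
W_c = rng.normal(size=(d, d))    # copy-mode parameters
H_src = rng.normal(size=(3, d))  # encoder states of the 3 source tokens

psi_g = W_g @ s_t                     # psi_g(v) = v^T W_g s_t for all v
psi_c = np.tanh(H_src @ W_c) @ s_t    # psi_c(w) = tanh(h_w^T W_c) s_t

# One softmax over the union of the vocabulary and the source tokens:
scores = np.concatenate([psi_g, psi_c])
probs = np.exp(scores) / np.exp(scores).sum()
p_generate, p_copy = probs[:V], probs[V:]
assert np.isclose(p_generate.sum() + p_copy.sum(), 1.0)
```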
Incorporating Latent Variable
-----------------------------
Increasing the diversity of generated questions is important for improving the accuracy, generalization, and stability of the semantic parser, since it increases the amount of training data and produces more diverse questions for the same intent. In this work, we incorporate stochastic latent variables [@cao2017latent; @serban2017hierarchical] into the sequence-to-sequence model in order to increase question diversity.
Specifically, we introduce a latent variable $z\sim p(z)$, which is a standard Gaussian distribution $\mathcal{N}(0, I_n)$ in our case, and calculate the likelihood of a target sentence $y$ as follows: $$\label{equa:likelihood}
p(y|x)= \int_{z} p(y|z,x)p(z)\, dz$$
We maximize the evidence lower bound (ELBO), which decomposes the loss into two parts, including (1) the KL divergence between the posterior distribution and the prior distribution, and (2) a cross-entropy loss between the generated question and the ground truth. $$\begin{aligned}
\label{equa:KL}
\notag
logp(y|x)\geq -D_{KL}(Q(z|x,y)||p(z)) \\
+ E_{z\sim Q}logp(y|z,x)\end{aligned}$$ The KL divergence in Equation \[equa:KL\] is calculated as follows, where $n$ is the dimensionality of $z$. $$\label{equa:d-kl}
\begin{split}
&D_{KL}(Q(z|x,y)||p(z))= \\
&-\frac{1}{2}\sum_{j=1}^n(1+log(\sigma_j^2)-\mu_j^2-\sigma_j^2)
\end{split}$$ $Q(z|x,y)$ is a Gaussian posterior distribution. The mean $\mu$ and standard deviation $\sigma$ are calculated as follows, where $h_x$ and $h_y$ are representations of the source and target sentences in the encoder, respectively. Similar to $h_x$, $h_y$ is obtained by encoding the target sentence. $$\label{equa:mean_variance}
\begin{split}
&\mu=W_\mu[h_x;h_y]+b_\mu \\
&log(\sigma^2)=W_\sigma[h_x;h_y]+b_\sigma
\end{split}$$
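The posterior parameterization and the closed-form KL term above can be checked numerically. The dimensions and weights below are toy values for illustration only:

```python
import numpy as np

rng = np.random.default_rng(3)
d, n = 4, 2   # encoder hidden size and latent dimensionality (toy values)

# Posterior parameters from the source/target encodings, as above:
h_x, h_y = rng.normal(size=d), rng.normal(size=d)
W_mu, b_mu = rng.normal(size=(n, 2 * d)), np.zeros(n)
W_s,  b_s  = rng.normal(size=(n, 2 * d)), np.zeros(n)
h = np.concatenate([h_x, h_y])
mu, log_var = W_mu @ h + b_mu, W_s @ h + b_s

def kl_gaussian(mu, log_var):
    """Closed-form D_KL( N(mu, diag(sigma^2)) || N(0, I_n) ), as above."""
    return -0.5 * np.sum(1 + log_var - mu**2 - np.exp(log_var))

assert kl_gaussian(np.zeros(n), np.zeros(n)) == 0.0  # posterior == prior
assert kl_gaussian(mu, log_var) >= 0.0               # KL is non-negative
```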
Training and Inference
----------------------
At the training phase, we sample $z$ from $Q(z|x,y)$ using the re-parametrization trick [@kingma2014auto-encoding], and concatenate the source last hidden state $h_x$ and $z$ as the initial state of the decoder. Since the model tends to ignore the latent variables by forcing the KL divergence to 0 [@bowman2016generating], we add a variable weight to the KL term during training. At the inference phase, the model will generate different questions by first sampling $z$ from $p(z)$, concatenating $h_x$ and $z$ as the initial state of the decoder, and then decoding deterministically for each sample.
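A minimal sketch of the reparametrization step and the variable KL weight described above. The linear warm-up schedule is one common choice for KL annealing and is our assumption, not necessarily the schedule used in the paper:

```python
import numpy as np

rng = np.random.default_rng(2)

def sample_z(mu, log_var):
    """Reparametrization trick: z = mu + sigma * eps, eps ~ N(0, I)."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def kl_weight(step, warmup=10000):
    """Linear annealing of the KL term, which keeps the model from
    collapsing the latent variable early in training."""
    return min(1.0, step / warmup)

mu, log_var = np.zeros(64), np.zeros(64)
z = sample_z(mu, log_var)
# total loss = reconstruction + kl_weight(step) * D_KL(Q || p)   (sketch)
assert z.shape == (64,)
assert kl_weight(0) == 0.0 and kl_weight(20000) == 1.0
```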
Here, we list our training details. We set the dimension of the encoder hidden state as 300, and the dimension of the latent variable $z$ as 64. We use dropout with a rate of 0.5, which is applied to the inputs of RNN. Model parameters are initialized with uniform distribution, and updated with stochastic gradient descent. Word embedding values are initialized with Glove vectors [@pennington-socher-manning:2014:EMNLP2014]. We set the learning rate as 0.1 and the batch size as 32. We tune hyper parameters on the development set, and use beam search in the inference process.
Experiment
==========
----------------------------------- --------------- ---------------- ---------------- ---------------- ----------------
Methods                             Training Data   Dev              Dev              Test             Test
                                                    [Acc$_{lf}$]{}   [Acc$_{ex}$]{}   [Acc$_{lf}$]{}   [Acc$_{ex}$]{}
Attentional Seq2Seq                 100%            23.3%            37.0%            23.4%            35.9%
Aug.PntNet [@zhong2017seq2sql]      100%            44.1%            53.8%            43.3%            53.3%
Aug.PntNet (re-implemented by us)   100%            51.5%            58.9%            52.1%            59.2%
Seq2SQL [@zhong2017seq2sql]         100%            49.5%            60.8%            48.3%            59.4%
SQLNet [@xu2017sqlnet]              100%            –                69.8%            –                68.0%
STAMP                               30%             54.6%            69.7%            53.7%            68.9%
STAMP + QG                          30%             61.6%            74.4%            61.2%            73.9%
STAMP                               100%            61.5%            74.8%            60.7%            74.4%
STAMP + QG                          100%            64.3%            76.5%            63.7%            75.5%
----------------------------------- --------------- ---------------- ---------------- ---------------- ----------------

: Performances of different approaches on the WikiSQL dev and test sets.[]{data-label="table:compare-to-other-alg"}
We conduct experiments on the WikiSQL dataset[^2] [@zhong2017seq2sql]. WikiSQL is the largest hand-annotated semantic parsing dataset which is an order of magnitude larger than other datasets in terms of both the number of logical forms and the number of schemata (tables). WikiSQL is built by crowd-sourcing on Amazon Mechanical Turk, including 61,297 examples for training, and 9,145/17,284 examples for development/testing. Each instance consists of a natural language question, a SQL query, a table and a result. Here, we follow to use two evaluation metrics. One is logical form accuracy (Acc$_{lf}$), which measures the percentage of exact string match between the generated SQL queries and the ground truth SQL queries. Since different logical forms might obtain the same result, another metric is execution accuracy (Acc$_{ex}$), which is the percentage of the generated SQL queries that result in the correct answer.
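The two metrics can be sketched as follows. The `execute` callback, standing in for running a query against the table, is our abstraction:

```python
def acc_lf(pred_sqls, gold_sqls):
    """Logical form accuracy: exact string match of SQL queries."""
    return sum(p == g for p, g in zip(pred_sqls, gold_sqls)) / len(gold_sqls)

def acc_ex(pred_sqls, gold_sqls, execute):
    """Execution accuracy: the executed results agree."""
    return sum(execute(p) == execute(g)
               for p, g in zip(pred_sqls, gold_sqls)) / len(gold_sqls)

# Two queries that differ as strings but return the same answer:
answers = {"SELECT a WHERE b = 1": 7, "SELECT a WHERE 1 = b": 7}
pred = ["SELECT a WHERE 1 = b"]
gold = ["SELECT a WHERE b = 1"]
assert acc_lf(pred, gold) == 0.0                # strings differ
assert acc_ex(pred, gold, answers.get) == 1.0   # same execution result
```

This example also shows why Acc$_{ex}$ is never lower than Acc$_{lf}$: every exact string match also executes to the same result.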
Impact of Data Size
-------------------
We study how the number of training instances affects the accuracy of semantic parsing.
![Semantic parsing accuracies of the model on WikiSQL. The $x$-axis is the training data size in log-scale, and the $y$-axis includes two evaluation metrics Acc$_{lf}$ and Acc$_{ex}$.[]{data-label="fig:log"}](relationship.pdf){width=".48\textwidth"}
In this experiment, we randomly sample 20 subsets of examples from the WikiSQL training data, incrementally increased by 3K examples (about 1/20 of the full WikiSQL training data). We use the same training protocol and report the accuracy of the STAMP model on the dev set. Results are given in Figure \[fig:log\]. It is not surprising that more training examples bring higher accuracy. Interestingly, we observe that both accuracies of the neural network based semantic parser grow logarithmically as training data expands, which is consistent with the observations in computer vision tasks [@sun2017revisiting].
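The logarithmic trend can be checked by fitting a line in $\log n$. The sketch below uses synthetic points that follow such a trend exactly; the numbers are illustrative, not the measured accuracies:

```python
import numpy as np

# Accuracy grows roughly as a + b*log(n); these synthetic points gain a
# fixed 0.06 per doubling of the data, i.e. they lie exactly on a log line.
n = np.array([3e3, 6e3, 12e3, 24e3, 48e3])
acc = np.array([0.40, 0.46, 0.52, 0.58, 0.64])

b, a = np.polyfit(np.log(n), acc, 1)   # least-squares line in log(n)
pred = a + b * np.log(n)
assert np.max(np.abs(pred - acc)) < 1e-6   # perfect logarithmic fit
```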
Model Comparisons
-----------------
We report the results of existing methods on WikiSQL, and demonstrate that question generation is an effective way to improve the accuracy of semantic parsing. implement several methods, including **Attentional Seq2Seq**, which is a basic attentional sequence-to-sequence learning baseline; **Aug.PntNet**, which is an augmented pointer network in which words of the target sequence come from the source sequence; and **Seq2SQL** which extends Aug.PntNet by further learning two separate classifiers for SELECT aggregator and SELECT column. develop **SQLNet**, which uses two separate models to predict SELECT and WHERE clauses, respectively, and introduce a sequence-to-set neural network to predict the WHERE clause. **STAMP** stands for the semantic parser which has been described in Section \[section:semantic-parsing\].
From Table \[table:compare-to-other-alg\], we can see that STAMP performs better than existing systems when trained on the full WikiSQL training dataset, achieving state-of-the-art execution accuracy and logical form accuracy on WikiSQL. We further conduct experiments to demonstrate the effectiveness of our question generation driven approach. We run the entire pipeline (STAMP+QG) with different percentages of training data. The second column “Training Data” in Table \[table:compare-to-other-alg\] and the $x$-axis in Figure \[fig:ac\] represent the proportion of WikiSQL training data we use for training the QG model and the semantic parser. That is to say, STAMP+QG with 30% means that we sample 30% of the WikiSQL training data to train the QG model, and then combine the QG generated data with exactly the same 30% of the WikiSQL training data to train the semantic parser. In this experiment, we sample five SQL queries for each table in the training data, resulting in 43.5K SQL queries. Applying the QG model to these SQL queries, we get 92.8K SQL-question pairs. From Figure \[fig:ac\], we see that accuracy increases as the amount of supervised training data expands. Results show that QG empowers the STAMP model to achieve, with only 30% of the training data, accuracy comparable to training on the full WikiSQL dataset. Applying QG to the STAMP model under the full setting brings further improvements, resulting in new state-of-the-art accuracies.
----------------------------------- ----------------- ----------------- --------------- ----------------- ----------------- ---------------
Methods                             Dev                                                 Test
                                    [Acc$_{sel}$]{}   [Acc$_{agg}$]{}   Acc$_{where}$   [Acc$_{sel}$]{}   [Acc$_{agg}$]{}   Acc$_{where}$
Aug.PntNet (re-implemented by us)   80.9%             89.3%             62.1%           81.3%             89.7%             62.1%
Seq2SQL [@zhong2017seq2sql]         89.6%             90.0%             62.1%           88.9%             90.1%             60.2%
SQLNet [@xu2017sqlnet]              91.5%             90.1%             74.1%           90.9%             90.3%             71.9%
STAMP                               89.4%             89.5%             77.1%           88.9%             89.7%             76.0%
STAMP+QG                            89.7%             90.1%             79.8%           89.1%             90.2%             79.0%
----------------------------------- ----------------- ----------------- --------------- ----------------- ----------------- ---------------

: Fine-grained accuracies on the WikiSQL dev and test sets.[]{data-label="table:fine-grained-results"}
![Accuracies of STAMP+QG with different portions of supervised data. Dashed lines are Acc$_{lf}$ and Acc$_{ex}$ of STAMP on the full training data.[]{data-label="fig:ac"}](ac_relationship.pdf){width=".48\textwidth"}
Fine-grained Accuracies
-----------------------
Since SQL queries in WikiSQL consist of SELECT column, SELECT aggregator, and WHERE clause, we report fine-grained accuracies with regard to these aspects, respectively.
From Table \[table:fine-grained-results\], we observe that the main advantage of STAMP+QG over STAMP comes from the prediction of the WHERE clause, which is also the main challenge of the WikiSQL dataset. We further analyze STAMP and STAMP+QG on the WHERE clause by splitting the dev and test sets into three groups according to the number of conditions in the WHERE clause. From Table \[table:difficulty\], we see that combining QG is helpful when the number of WHERE conditions is more than one. The main reason is that the dominant instances in the WikiSQL training set have only one WHERE condition, as shown in Table \[table:distribution\], so the model might not have memorized enough patterns for the other two limited-data groups. Therefore, the pseudo-labeled instances generated by our SQL sampler and QG approach are more valuable for the limited-data groups (i.e. \#where $=$ 2 and \#where $\geq$ 3).
---------- --------- ---------- --------- ----------
\#where    STAMP                STAMP+QG
           [dev]{}   [test]{}   [dev]{}   [test]{}
$=$ 1      80.9%     80.2%      81.5%     80.9%
$=$ 2      65.1%     65.4%      68.3%     66.9%
$\geq$ 3   44.1%     48.2%      53.4%     51.9%
---------- --------- ---------- --------- ----------
: Execution accuracy (Acc$_{ex}$) on different groups of WikiSQL dev and test sets.[]{data-label="table:difficulty"}
\#where supervised data generated data
---------- ----------------- ----------------
$=$ 1 69.1% 55.4%
$=$ 2 24.1% 33.0%
$\geq$ 3 6.1% 11.4%
: Distribution of the number of WHERE conditions in supervised and generated data. []{data-label="table:distribution"}
Influences of Different QG Variations
-------------------------------------
To better understand how various components of our QG model impact the overall performance, we study different QG model variations. We use three evaluation metrics, including the two accuracies and the BLEU score [@papineni2002bleu], which evaluates the quality of the generated questions.
Methods Scale [BLEU]{} [Acc$_{lf}$]{} Acc$_{ex}$
----------- ------- ---------- ---------------- ------------
s2s 30% 20.6 59.0% 72.1%
s2s+lv 30% 22.1 60.0% 72.3%
s2s+cp 30% 29.6 60.8% 73.5%
s2s+cp+lv 30% 29.5 61.2% 73.9%
s2s 100% 26.0 62.6% 74.9%
s2s+lv 100% 26.3 63.0% 75.3%
s2s+cp 100% 31.5 63.2% 75.6%
s2s+cp+lv 100% 31.6 63.7% 75.5%
: Performances of different question generation variations.[]{data-label="table:QG ablation studies"}
------------------------- ----------------------------------------------------------------------
SQL                       SELECT COUNT 2nd leg WHERE aggregate = 7-2
Question (ground truth)   what is the total number of 2nd leg where aggregate is 7-2
Question (s2s + cp)       how many 2nd leg with aggregate being 7-2
Question (s2s + cp + lv)  \(1) what is the total number of 2nd leg when the aggregate is 7-2 ?
                          \(2) how many 2nd leg with aggregate being 7-2
                          \(3) name the number of 2nd leg for 7-2
------------------------- ----------------------------------------------------------------------

: Questions generated by different QG variations for the same SQL query.[]{data-label="table:QG example"}
Results are shown in Table \[table:QG ablation studies\], in which **s2s** represents the basic attentional sequence-to-sequence learning model [@luong2015effective], **cp** means the copying mechanism, and **lv** stands for the latent variable. We can see that incorporating a latent variable improves QG model performance, especially in limited-supervision scenarios. Incorporating the copying mechanism also clearly improves performance, which is consistent with our intuition, since rare words of great importance mainly come from the input sequence.
To better understand the impact of incorporating a latent variable, we show examples generated by different QG variations in Table \[table:QG example\]. We can see that incorporating a latent variable empowers the model to generate diverse questions for the same intent.
Transfer Learning on WikiTableQuestions
---------------------------------------
In this part, we conduct an additional experiment on WikiTableQuestions[^3] [@pasupat-liang:2015:ACL-IJCNLP] in a transfer learning scenario to verify the effectiveness of our approach. WikiTableQuestions contains 22,033 complex questions on 2,108 Wikipedia tables. Each instance consists of a natural language question, a table and an answer. Following , we report development accuracy averaged over the first three 80-20 training data splits. Test accuracy is reported on the standard train-test split.
In this experiment, we apply the QG model learnt from WikiSQL to improve the state-of-the-art semantic parser [@krishnamurthy-dasigi-gardner:2017:EMNLP2017] on this dataset. Different from WikiSQL, this dataset requires question-answer pairs for training. Thus, we generate question-answer pairs as follows. We first sample SQL queries on the tables from WikiTableQuestions, and then use our QG model to generate question-SQL pairs. Afterwards, we obtain question-answer pairs by executing the SQL queries. The generated question-answer pairs are combined with the original WikiTableQuestions training data to train the model.
[Dev]{} [Test]{}
---------------------- --------- ----------
37.0% 37.1%
37.5% 37.7%
- 38.7%
40.4% 43.7%
STAMP (WikiSQL) - 14.5%
STAMP (WikiSQL) + QG - 15.2%
NSP 41.9% 43.8%
NSP + QG 42.2% 44.2%
: Accuracy (Acc$_{ex}$) of different approaches on WikiTableQuestion dev and test sets.[]{data-label="table:wikitablequestion"}
Results are shown in Table \[table:wikitablequestion\], in which **NSP** is short for the state-of-the-art neural semantic parser [@krishnamurthy-dasigi-gardner:2017:EMNLP2017]. Since the train-test data used in [NSP]{} is different from others, we retrain the [NSP]{} model under the same protocol. **STAMP (WikiSQL)** means that the [STAMP]{} model trained on WikiSQL is directly tested on WikiTableQuestions. Although applying QG slightly improves STAMP in this setting, the low accuracy reflects the different question distributions of the two datasets. In the supervised learning setting, we can see that incorporating QG further improves the accuracy of [NSP]{} from 43.8% to 44.2%.
Discussion
----------
To better understand the limitations of our QG model, we analyze a randomly selected set of 100 questions. We observe that 27% examples do not correctly express the meanings of SQL queries, among which the majority of them miss information from the WHERE clause. This problem might be mitigated by incorporating a dedicated encoder/decoder that takes into account the SQL structure. Among the other 73% of examples that correctly express SQL queries, there are two potential directions to make further improvements. The first direction is to leverage table information such as the type of a column name or column-cell correlations. For instance, without knowing that under the column name “*built*” are all building years, the model hardly predicts a question “*what is the average building year for superb?*” for “ *AVG built WHERE name = superb*”. The second direction is to incorporate common knowledge, which would help the model to predict *the earliest week* rather than *the lowest week*.
Related Work
============
#### Semantic Parsing.
Semantic parsing is a fundamental problem in NLP that maps natural language utterances to logical forms, which could be executed to obtain the answer (denotation) [@Zettlemoyer05; @liang2011learning; @berant2013semantic; @krishnamurthy2013jointly; @pasupat-liang:2016:P16-1; @iyer-EtAl:2017:Long]. Existing works can be classified into three areas, including (1) the language of the logical form, e.g. first-order logic, lambda calculus, lambda dependency-based compositional semantics (lambda DCS) and structured query language (SQL); (2) the form of the knowledge base, e.g. facts from large collaborative knowledge bases, semi-structured tables and images; and (3) the supervision used for learning the semantic parser, e.g. question-denotation pairs and question-logical form pairs. In this work, we regard the table as the knowledge base, which is critical for accessing relational databases with natural language, and also for serving information retrieval for structured data. We use SQL as the logical form, which has a broad acceptance to the public. In terms of supervision, this work uses a small portion of question-logical form pairs to initialize the QA model and train the QG model, and incorporate more generated question-logical form pairs to further improve the QA model.
#### Question Generation
Our work also relates to the area of question generation, which has drawn plenty of attention recently, partly influenced by the remarkable success of neural networks in text generation. Studies in this area are classified based on the definition of the answer, including a sentence [@heilman2011automatic], a topic word [@chali2015towards], a fact (including a subject, a relation phrase and an object) from knowledge bases [@serban-EtAl:2016:P16-1], an image [@mostafazadeh2016generating], etc. Recent studies in machine reading comprehension generate questions from an answer span and its context from the document [@du-shao-cardie:2017:Long; @golub-EtAl:2017:EMNLP2017]. first generate logical forms, and then use AMTurkers to paraphrase them to get natural language questions. use a template-based approach based on the Paraphrase Database [@ganitkevitch2013ppdb] to generate questions from SQL. In this work, we generate questions from logical forms, where the amounts of information on the two sides are almost identical. This differs from the majority of existing studies, in which the source (e.g. an answer) typically conveys less semantic information than the target question.
#### Improving QA with QG
This work also relates to recent studies that use a QG model to improve the performance of a discriminative QA model [@wang2017irgan; @yang2017semi; @duan-EtAl:2017:EMNLP2017; @konstas-EtAl:2017:Long]. The majority of these works generate a question from an answer, while there also exists a recent work [@dong2017learning] that generates a question from a question through paraphrasing. In addition, consider QA and QG as dual tasks, and further improve the QG model in a dual learning framework. These works fall into three categories: (1) regarding the artificially generated results as additional training instances [@yang2017semi; @golub-EtAl:2017:EMNLP2017]; (2) using generated questions to calculate additional features [@duan-EtAl:2017:EMNLP2017; @dong2017learning]; and (3) using the QG results as additional constraints in the training objectives [@tang2017question]. This work belongs to the first category. Our QG approach takes a logical form as the input, and increases the diversity of questions by incorporating latent variables.
Conclusion
==========
In this paper, we observe the logarithmic relationship between the accuracy of a semantic parser and the amount of training data, and present an approach that improves neural semantic parsing with question generation. We show that question generation helps us obtain a state-of-the-art neural semantic parser with less supervised data, and further improves the state-of-the-art model with annotated data on the WikiSQL and WikiTableQuestions datasets. In future work, we would like to make use of table information and external knowledge to improve our QG model. We also plan to apply the approach to other tasks.
Acknowledgments {#acknowledgments .unnumbered}
===============
This work is supported by the National Natural Science Foundation of China (61472453, U1401256, U1501252, U1611264,U1711261,U1711262). Thanks to the anonymous reviewers for their helpful comments and suggestions.
[^1]: Work done while this author was an intern at Microsoft Research.
[^2]: <https://github.com/salesforce/WikiSQL>
[^3]: <https://nlp.stanford.edu/software/sempre/wikitable/>
---
abstract: 'Dark matter decays or annihilations that produce line-like spectra may be smoking-gun signals. However, even such distinctive signatures can be mimicked by astrophysical or instrumental causes. We show that velocity spectroscopy—the measurement of energy shifts induced by relative motion of source and observer—can separate these three causes with minimal theoretical uncertainties. The principal obstacle has been energy resolution, but upcoming experiments will reach the required $0.1\%$ level. As an example, we show that the imminent Astro-H mission can use Milky Way observations to separate possible causes of the 3.5-keV line. We discuss other applications.'
author:
- 'Eric G. Speckhard'
- 'Kenny C. Y. Ng'
- 'John F. Beacom'
- Ranjan Laha
bibliography:
- 'dmvsbib.bib'
date: 'July 31, 2015'
---
Introduction {#sec:Introduction}
============
What is the dark matter? Identification depends upon more than just observation of its bulk gravitational effects; distinct particle signatures are needed. Backgrounds make it difficult to pick out these signals, which are constrained to be faint. Among possible decay or annihilation signals, those with sharp spectral features, such as a line, are especially valuable.
Given that the stakes and difficulties are so profound, even such a “smoking-gun” signal may not be conclusive. A line could have other causes: astrophysical (baryonic) emission or detector backgrounds (or response effects). For example, the cause of the recently discovered 3.5-keV line is disputed [@Bulbul:2014sua; @Boyarsky:2014jta; @Riemer-Sorensen:2014yda; @Boyarsky:2014ska; @Anderson:2014tza; @Malyshev:2014xqa; @Jeltema:2014qfa; @Urban:2014yda]. This problem is more general [@Loewenstein:2009cm; @Prokhorov:2010us; @Weniger:2012tx; @Finkbeiner:2012ez; @2012arXiv1206.1616S; @Aharonian:2012cs; @Tempel:2012ey; @Ackermann:2013uma; @Weniger:2013tza; @Ackermann:2015lka] and will surely arise again. We need better evidence than just a smoking gun—we need to see it in motion.
Premise and Motivation
======================
We propose a general method for distinguishing the possible causes of a sharp spectral feature. Consider a line of unknown cause—dark matter (DM), astrophysical, or detector—observed in the Milky Way (MW). Relative motion between source and observer leads to distinctive energy shifts as a function of line of sight (LOS) direction. Figure \[fig:toon\] illustrates this schematically. Because typical Galactic virial velocities are $\sim 10^{-3}c$, the Doppler shifts are only $\sim 0.1\%$.
A potential target for velocity spectroscopy is the 3.5-keV line recently observed in MW, M31, and galaxy cluster spectra [@Bulbul:2014sua; @Boyarsky:2014jta; @Boyarsky:2014ska]. The line energy and flux can naturally be explained by sterile neutrino DM [@Dodelson:1993je; @Shi:1998km; @Abazajian:2014gza; @Abazajian:2001nj; @Shaposhnikov:2006xi; @Kusenko:2006rh; @Merle:2013gea; @Merle:2013wta; @Patwardhan:2015kga; @Venumadhav:2015pla] (or other candidates [@Finkbeiner:2014sja; @Higaki:2014zua; @Lee:2014xua; @Cicoli:2014bfa; @Choi:2014tva; @Frandsen:2014lfa; @Dudas:2014ixa; @Babu:2014pxa; @Roland:2015yoa]). However, the significance of the line is disputed [@Riemer-Sorensen:2014yda; @Anderson:2014tza; @Malyshev:2014xqa], and it has been argued that it can be explained by astrophysical emission [@Jeltema:2014qfa; @Urban:2014yda].
![**Top:** How DM, astrophysical, and detector lines shift with Galactic longitude is starkly different. **Bottom:** For DM signals at positive longitude, our motion through the non-rotating DM halo yields a negative LOS velocity and thus a blue shift. In contrast, for astrophysical lines (e.g., from gas), co-rotation in the disk leads to a positive LOS velocity and thus a red shift. These signs reverse at negative longitude. Detector lines have zero shift.[]{data-label="fig:toon"}](Fig1.pdf){width="\columnwidth"}
With present detectors, velocity spectroscopy of this line is impossible. Excitingly, the Soft X-Ray Spectrometer (SXS) on Astro-H (launch date early 2016) has a goal energy resolution of $\sigma_{\rm{AH}} = 1.7 \, \rm{eV}$ (4 eV FWHM) [@Takahashi:2012jn; @2014SPIE.9144E..25T], which is at the required $0.1\%$ scale. We show that if this goal resolution is achieved, Astro-H can identify the cause of the 3.5-keV line. We also discuss prospects if the performance is worse.
We emphasize that the applicability of DM velocity spectroscopy is much more general. The purpose of this paper is to introduce a new concept to increase the power of DM searches and to spur innovation in detector design. We conclude by discussing several generalizations.
Usual DM Decay Signal {#sec:Spectra}
=====================
The differential intensity (flux per solid angle) from DM with mass $m_{\chi}$ and lifetime $\tau = 1/\Gamma$, decaying within the MW, is $$\label{DiffI}
\frac{dI(\psi,E)}{dE} = \frac{\Gamma}{4\pi m_\chi} R_{\odot} \rho_{\odot} \, \mathcal{J}(\psi)\frac{dN(E)}{dE} \, ,$$ where $R_{\odot} \simeq 8 \, \rm{kpc}$ and $\rho_{\odot} \simeq 0.4 \, \rm{GeV \, cm^{-3}}$ [@Catena:2009mf; @2012ApJ...756...89B; @2015arXiv150406324P] are the distance to the Galactic center (GC) and local DM density. (We neglect the cosmologically broadened extra-galactic signal, which contributes negligibly in Astro-H’s narrow energy bins.) $\mathcal{J}(\psi)$ is the dimensionless, astrophysical J-factor defined by the LOS integral $$\mathcal{J}(\psi) \equiv \frac{1}{R_{\odot}\, \rho_{\odot}} \int ds \, \rho_{\chi}(r[s,\psi]) \,,$$ where $\psi$ is the angle relative to the GC and is related to Galactic longitude and latitude via $\cos\psi = \cos l\cos b$. $dN(E)/dE$ is the photon spectrum.
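As a rough numerical illustration of the LOS integral defining $\mathcal{J}(\psi)$, the sketch below uses a plain (uncontracted) NFW profile normalized to $\rho_{\odot}$; the scale radius, integration range, and resulting values are illustrative assumptions, not the contracted model of Ref. [@Klypin:2001xu] used in the text.

```python
import math

# Toy NFW profile and LOS integral for the dimensionless J-factor.
# All parameter choices below are illustrative assumptions.
R_SUN = 8.0     # kpc, distance to the Galactic center
RHO_SUN = 0.4   # GeV/cm^3, local DM density
R_S = 21.5      # kpc, assumed NFW scale radius

# Fix the NFW normalization so that rho(R_SUN) = RHO_SUN.
RHO_S = RHO_SUN * (R_SUN / R_S) * (1.0 + R_SUN / R_S) ** 2

def rho_nfw(r):
    x = r / R_S
    return RHO_S / (x * (1.0 + x) ** 2)

def j_factor(psi_deg, s_max=100.0, n=20000):
    """J(psi) = (1/(R_sun rho_sun)) * int ds rho(r[s, psi]), trapezoid rule."""
    psi = math.radians(psi_deg)
    ds = s_max / n
    total = 0.0
    for i in range(n + 1):
        s = i * ds
        # Galactocentric radius along the line of sight (law of cosines)
        r = math.sqrt(R_SUN**2 + s**2 - 2.0 * R_SUN * s * math.cos(psi))
        w = 0.5 if i in (0, n) else 1.0
        total += w * rho_nfw(max(r, 1e-3)) * ds
    return total / (R_SUN * RHO_SUN)
```

For $\psi \simeq 20^{\circ}$ this uncontracted profile gives $\mathcal{J}$ of order a few, somewhat below the value 7.5 quoted later for the contracted model, as expected.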
The above treatment assumes that the astrophysical term, $ \mathcal{J}(\psi)$, and the photon spectrum, $ dN(E)/dE$, are separable. However, for detectors with energy resolution $\lesssim 0.1\%$, this approximation is not valid because relative velocities between source and observer, and therefore the spectral shape, vary along the LOS.
Modified DM Spectrum
====================
We first account for how the signal is broadened by DM velocity dispersion and second for how it is shifted due to bulk relative motion.
We take the DM halo of the MW to be spherically symmetric, in steady state, and to have no appreciable rotation. The last is expected from angular momentum conservation, as the baryons from the proto-halo have collapsed significantly, while the DM has not; this is confirmed by simulations [@Bullock:2000ry; @Vitvitska:2001vw]. Thus, $\langle \vec{v}_{\chi} \rangle =0$.
DM particles do have non-zero velocity dispersion, determined by the total gravitational potential of the halo [@Binney:1987gd; @Robertson:2009bh]. Assuming an isotropic velocity distribution ($\sigma_{v,r} =\sigma_{v,\phi} =\sigma_{v,\theta}$, so the total dispersion is $\sqrt{3}\sigma_{v,r}$), the radial velocity dispersion of DM is [@Binney:1987gd] $$\sigma_{v,r}^{2}(r)=\frac{G}{\rho_{\chi}(r)} \int_{r}^{R_{vir}} \! dr' \, \rho_\chi(r') \frac{M_{\rm{tot}}(r')}{r'^2} \, ,$$ where $M_{\rm{tot}}(r)$ is the total mass within a radius $r$. Typical values at $r \sim$ few kpc are $\sigma_{v,r} \simeq 125 \, \mathrm{km \, s^{-1}}$.
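The dispersion integral above can be evaluated numerically. The sketch below assumes an NFW-only halo (baryons neglected) with an illustrative normalization and virial radius, so the resulting dispersions are indicative only; the text instead uses the contracted mass model of Ref. [@Klypin:2001xu].

```python
import math

# Illustrative Jeans-integral evaluation of sigma_{v,r}(r) for an NFW halo.
G_KPC = 4.30091e-6   # G in kpc (km/s)^2 / M_sun
R_S = 21.5           # kpc, assumed NFW scale radius
RHO_S = 7.4e6        # M_sun/kpc^3; gives ~0.4 GeV/cm^3 at r = 8 kpc
R_VIR = 260.0        # kpc, assumed virial radius

def rho(r):
    x = r / R_S
    return RHO_S / (x * (1.0 + x) ** 2)

def m_enclosed(r):
    # Analytic NFW enclosed mass
    x = r / R_S
    return 4.0 * math.pi * RHO_S * R_S**3 * (math.log(1.0 + x) - x / (1.0 + x))

def sigma_r(r, n=4000):
    """Radial velocity dispersion in km/s from the isotropic Jeans integral."""
    dr = (R_VIR - r) / n
    total = 0.0
    for i in range(n + 1):
        rp = r + i * dr
        w = 0.5 if i in (0, n) else 1.0
        total += w * rho(rp) * m_enclosed(rp) / rp**2 * dr
    return math.sqrt(G_KPC * total / rho(r))
```

Even without the baryonic contribution, this yields dispersions of order $100\,\mathrm{km \, s^{-1}}$ at $r \sim$ few kpc, consistent with the scale quoted above.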
To calculate $\sigma_{v,r}(r)$, we adopt the mass model of Ref. [@Klypin:2001xu], which fits a contracted DM and three-component baryon mass profile to MW rotation curve data; for more details see Supplemental Materials. The choice of mass model is not critical; kinematic results from other models agree within $\mathcal{O}(10\%)$ [@Catena:2009mf; @McMillan:2011wd].
The spectrum from a point along the LOS is the convolution of the intrinsic spectrum with the DM velocity distribution at that point. We assume a Maxwellian velocity distribution throughout the halo, which, at each point, yields a Gaussian distribution of the LOS velocity component. The modified spectrum from each point is $$\frac{d\widetilde{N}(E,r[s,\psi])}{dE} = \int dE' \, \frac{dN(E')}{dE'} \, G(E-E';\sigma_{E'}) \, ,$$ where $G(E;\sigma_E)$ is a Gaussian of width $\sigma_{E} = (E/c)\sigma_{v_{\text{\tiny LOS}}}$. Based upon observations of the LOS velocity distribution of MW halo stars reported in [@Xue:2008se], we take $\sigma_{v_{\text{\tiny LOS}}}(r) \simeq \sigma_{v,r}(r)$ which implies $\sigma_{E} = (E/c) \, \sigma_{v,r}(r[s,\psi])$.
The line shift follows from the LOS velocity, $v_{\text{\tiny LOS}} \equiv (\langle\vec{v}_{\chi}\rangle - \vec{v}_{\odot}) \cdot \hat{r}_{\text{\tiny LOS}}$, where positive $v_{\text{\tiny LOS}}$ indicates receding motion. For $v_{\text {\tiny LOS}} \ll c$, the resultant energy shift is $\delta E_{\rm{MW}}/E = {-v}_{\text{\tiny LOS}}/c$.
The Sun follows a roughly circular orbit about the GC in the direction toward positive Galactic longitude at a speed $v_{\odot} \simeq 220 \, \mathrm{km \, s^{-1}}$ [@Kerr:1986hz]. (Recent work suggests $v_{\odot} \gtrsim 240 \, \rm{km \, s^{-1}}$ [@Schonrich:2012qz; @2012ApJ...759..131B], which would strengthen our results.) The spectrum is therefore shifted by $\delta E_{\rm{MW}}(l,b)/E = +(v_{\odot}/c) \sin l\cos b$, which changes sign with $l$. We neglect the solar peculiar velocity as well as Earth and satellite motions, all of which are $\lesssim 10 \, \mathrm{km \, s^{-1}}$ [@McMillan:2009yr; @Lee:2013xxa; @Peter:2013aha].
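The shift formula $\delta E_{\rm{MW}}(l,b)/E = +(v_{\odot}/c)\sin l \cos b$ is simple enough to check directly. The short sketch below evaluates it for a 3.5-keV line, using only quantities quoted in the text.

```python
import math

V_SUN = 220.0    # km/s, solar circular speed
C = 2.998e5      # km/s, speed of light
E_LINE = 3500.0  # eV, line energy

def shift_ev(l_deg, b_deg):
    """DM-line energy shift delta E = +(v_sun/c) sin(l) cos(b) * E, in eV."""
    return E_LINE * (V_SUN / C) * math.sin(math.radians(l_deg)) \
        * math.cos(math.radians(b_deg))
```

At $(l, |b|) = (20^{\circ}, 5^{\circ})$ the blue shift is just under 1 eV, which is why the $\sim$eV-scale resolution of Astro-H is essential; the sign flips at negative longitude, as in Fig. 1.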
The final expression for the modified spectrum, including broadening and shifts, is therefore $$\frac{d\mathcal{J}}{dE} = \frac{1}{R_{\odot}\rho_{\odot}} \int ds \, \rho_{\chi}(r[s,\psi]) \frac{d\widetilde{N}(E-\delta E_{\rm{MW}},r[s,\psi])}{dE} \,,$$ so that Eq. (\[DiffI\]) is altered by $\mathcal{J}(\psi) \, dN(E)/dE \rightarrow d\mathcal{J}(\psi,E)/dE$. The observed signal, which is the convolution of $d\mathcal{J}/dE$ with the detector response, is nearly Gaussian and has an effective width $\sigma_{\rm{eff}}$.
Modified Astrophysical Spectrum {#Systems}
===============================
The details are slightly different for astrophysical lines.
The widths of astrophysical lines are primarily determined by the mass of the emitting atom and by the gas temperature; turbulent broadening is negligible [@Redfield:2004wb]. For potassium at $T = 2 \, \rm{keV}$, the intrinsic line width is $\sigma_{\rm{gas}}\simeq 0.8 \, \rm{eV}$, comparable to Astro-H’s goal resolution, $\sigma_{\rm{AH}} \simeq 1.7 \, \rm{eV}$. The intrinsic width is weakly sensitive to the gas temperature and mass ($\propto \sqrt{T/m}$); any reasonable values of $T$ and $m$ give similar results.
For the shift of an astrophysical signal, we must account for co-rotation within the MW disc. (While there is a non-rotating, gaseous halo at the outskirts of the MW, it is not hot enough to produce significant emission at 3.5 keV [@Dai:2011yn; @Anderson:2011ih; @Anderson:2014tza]). For simplicity, we assume all baryons follow circular orbits about the GC with speed $v_{\rm{circ}}(r) = \sqrt{G M_{\rm{tot}}(r)/r}$. With this circular speed and the hot gas distribution of Ref. [@Ferriere:1998gm], we compute the spectral shift by integrating the signal along the LOS with the contribution from each point weighted by the gas density. We call this fiducial model G2.
Because the spatial and speed distributions of MW X-ray gas are uncertain, we compare to models in Ref. [@Kretschmer:2013naa] with smaller and larger line shifts. G1 is based on the distribution of free $e^-$ [@Cordes:2002wz] and the MW rotation curve [@Sofue:2008wt]. G3 is based on the observed distribution of $^{26}$Al gamma rays [@Kretschmer:2013naa]. G1 and G2 are in good agreement with MW HI and CO data. Peak LOS velocities for G1, G2, and G3 are $\simeq 50, 75,$ and $250 \, \rm{km \, s^{-1}}$.
![Comparison of received spectra for DM and gas (G2). The emitted spectra are taken to have equal flux and to be centered at 3.5 keV before velocity effects. The line profiles include velocity dispersion and shift effects, as well as the energy resolution of Astro-H. Vertical bands indicate the 1-$\sigma$ centroid uncertainties after 2-Ms observations. For contrast, the brown line in the figure and inset shows the same signal if Astro-H had the energy resolution of XMM.[]{data-label="fig:profiles"}](Fig2.pdf){width="\columnwidth"}
![LOS velocity for DM and various gas models (the realistic version of Fig. \[fig:toon\]). Uncertainties are computed assuming 2-Ms Astro-H exposures on each point.[]{data-label="fig:lvmap"}](Fig3.pdf){width="\columnwidth"}
Line Flux Detection {#sec:Prospects}
===================
One prerequisite to detecting a spectral shift is that the number of signal events be non-zero. Another is that the background fluctuations be small in comparison. Though Astro-H has a small field of view (FOV), its excellent energy resolution strongly suppresses backgrounds for a line signal, so that even a small number of signal events can be significant.
Viewing directions $l \simeq 10^{\circ}-40^{\circ}$ have advantages. First, the balance between decreasing signal flux and increasing energy shift at large $l$ is optimized. Second, theoretical uncertainties are minimized, as the DM density profile at $r \gtrsim \rm{few \, kpc}$ is fixed by rotation curve data. Third, continuum astrophysical backgrounds are reduced; we reduce these further by going slightly off the Galactic plane, which minimally affects the DM signal.
The expected signal intensity is calculated from Eq. (\[DiffI\]). For our DM example, this is $$\begin{aligned}
I(\psi) &=& 1.2 \times 10^{-8} \, \mathrm{cm^{-2} \, s^{-1} \, arcmin^{-2}} \\
&& \times \left(\frac{\sin^{2}2\theta}{7 \times 10^{-11}}\right) \left(\frac{m_{\chi}}{7 \, \mathrm{keV}}\right)^{4} \left(\frac{\mathcal{J}(\psi)}{\mathcal{J}(l =20^{\circ}, |b| = 5^{\circ})}\right) \nonumber \, , \end{aligned}$$ where we have integrated over energy in the line profile, calculated $\mathcal{J}(l =20^{\circ}, |b| = 5^{\circ}) = 7.5$ using Ref. [@Klypin:2001xu], and taken the DM parameters from Ref. [@Bulbul:2014sua]. For Astro-H, $\Omega_{\text{\tiny FOV}} = 9 \, \mathrm{arcmin}^{2}$ and $\mathrm{A_{eff}} = 200 \, \mathrm{cm}^{2}$ [@Takahashi:2012jn; @2014SPIE.9144E..25T], so the expected number of events is $$N_{s}(\psi) \simeq 43 \, \left(\frac{\mathcal{J}(\psi)}{\mathcal{J}(l =20^{\circ}, |b| = 5^{\circ})}\right) \left(\frac{t}{2 \, \mathrm{Ms}}\right) \, .$$ This assumed exposure is large, but appropriate to the stakes (a potential discovery of DM) and the difficulties (the total exposure of XMM, Chandra, and Suzaku used in the 3.5-keV analyses is $\gtrsim 40$ Ms [@Bulbul:2014sua; @Boyarsky:2014jta; @Boyarsky:2014ska; @Tamura:2014mta; @Sekiya:2015jsa]). Furthermore, due to Astro-H’s excellent energy resolution, all pointings in a substantial fraction of the sky will help test the 3.5-keV line.
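The expected event count follows from multiplying the intensity by the field of view, effective area, and exposure; a back-of-envelope check, using only the numbers quoted above:

```python
# N_s = intensity x FOV x effective area x exposure, all values from the text.
I = 1.2e-8      # cm^-2 s^-1 arcmin^-2, line intensity at the fiducial J-factor
FOV = 9.0       # arcmin^2, Astro-H SXS field of view
A_EFF = 200.0   # cm^2, effective area
T_EXP = 2.0e6   # s, 2-Ms exposure

n_s = I * FOV * A_EFF * T_EXP   # expected signal events, ~43
```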
For continuum backgrounds, we consider only the contribution over the narrow energy range $\pm 2\sigma_{\rm{eff}}$ centered at 3.5 keV. (We do not need to include the tails of nearby astrophysical lines, as they will be well-resolved, unlike in XMM.) One component of the background is due to the isotropic cosmic X-ray background (CXB) [@Kushino:2002vk; @Deluca:2003eu; @Hickox:2005dz]. We conservatively adopt the total CXB flux (unresolved + resolved sources) $ E \, d\Phi_{\rm{CXB}}/dE = 9.2 \times 10^{-7} (\rm{E/keV})^{-0.4} \, \rm{cm^{-2} \, s^{-1} \, arcmin^{-2}}$ [@Hickox:2005dz]. Another background, due to hot gas in the MW, varies strongly with direction [@Uchiyama:2012nw]. Finally, there are detector backgrounds due to intrinsic and induced radioactivities as well as cosmic-ray interactions; their intensity is expected to be comparable to that of the CXB [@Kitayama:2014fda]. For $\psi(l=20^{\circ}, |b| = 5^{\circ})$, backgrounds contribute $N_b \simeq 5.2+5.4+5.4 = 16$ events per 2 Ms within the $\pm 2\sigma_{\rm{eff}} \simeq \pm 4.8 \, \rm{eV}$ band centered at 3.5 keV, compared to $N_s \simeq 41$.
We estimate the detection significance by the Poisson probability $P(n\geq 57 | \, \mu = 16)$, which corresponds to a one-sided Gaussian probability $> 7\sigma$.
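This Poisson tail probability can be verified with a direct sum; a one-sided $7\sigma$ Gaussian probability corresponds to $p \simeq 1.3 \times 10^{-12}$, and the tail below is far smaller.

```python
import math

def poisson_sf(n_min, mu, n_terms=500):
    """P(n >= n_min) for a Poisson distribution with mean mu.

    Terms are computed in log space via lgamma for numerical stability.
    """
    total = 0.0
    for n in range(n_min, n_min + n_terms):
        total += math.exp(-mu + n * math.log(mu) - math.lgamma(n + 1))
    return total

p = poisson_sf(57, 16.0)   # signal + background vs. background-only
```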
Line Shift Detection
====================
Detecting a line shift depends on how well the centroid of the line profile is determined. Backgrounds decrease the precision, but, as above, the energy resolution of Astro-H plays a critical role.
When backgrounds are absent, the uncertainty on the centroid is $\sigma_{\rm{eff}}/\sqrt{N_s}$. When they are present, the uncertainty becomes $\delta E = C(R) \, \sigma_{\rm{eff}} / \sqrt{N_s}$, where $C(R)$ is a correction factor and $R$ is the background-to-signal ratio. We calculate the optimal $C(R)$ using the Cramer-Rao theorem [@PhysRev.88.775; @Beacom:1998fj; @James:2006zz]. For $\psi(l=20^{\circ}, |b| = 5^{\circ})$, $C(R) \simeq 1.6$, so that the uncertainty in the LOS velocity is $\delta_{v_{\text{\tiny LOS}}} \simeq 50 \, \rm{km \, s^{-1}}$.
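Plugging in the quoted values shows how the $\simeq 50 \, \rm{km \, s^{-1}}$ velocity uncertainty arises:

```python
import math

# Centroid uncertainty delta_v = C(R) * sigma_eff / sqrt(N_s),
# with the values quoted in the text for psi(l=20 deg, |b|=5 deg).
C_R = 1.6          # Cramer-Rao correction factor
SIGMA_EFF = 200.0  # km/s, effective DM line width
N_S = 41           # signal events in 2 Ms

delta_v = C_R * SIGMA_EFF / math.sqrt(N_S)   # km/s, ~50
```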
Figure \[fig:profiles\] shows the line profiles at $\psi(l=20^{\circ}, |b| = 5^{\circ})$ for a 3.5-keV emission line, due either to DM or gas. (A detector line would have zero shift). These profiles show how the energy spectra are shifted due to relative motion as well as broadened due to intrinsic dispersion and detector resolution. We show the uncertainties on the centroids, which are separated from each other and from zero in a 2-Ms exposure. With the energy resolution of XMM [@Turner:2000jy] ($\sigma_{\rm{XMM}} \simeq 47 \, \rm{eV}$ vs. $\sigma_{\rm{AH}} \simeq 1.7 \, \rm{eV}$), the profiles are indistinguishable.
Figure \[fig:lvmap\] shows how the expected shifts vary with Galactic longitude, along with their uncertainties, assuming 2-Ms observations for each point. We show the DM signal uncertainties; for an astrophysical line of the same flux, the uncertainties are comparable because the effective widths are comparable ($\sigma_{\rm{eff}}^{\rm{gas}} \simeq \, 160 \, \rm{km \, s^{-1}}, \, \sigma_{\rm{eff}}^{\text{\tiny DM}} \simeq \, 200 \, \rm{km \, s^{-1}}$); see Fig. \[fig:profiles\]. For a detector line with zero intrinsic width, the effective width is $ \sigma_{\rm{eff}}^{\rm{det}} \simeq 150 \, \rm{km \, s^{-1}}$, approximately a factor of $\sqrt{2}$ less than $\sigma_{\rm{eff}}^{\text{\tiny DM}}$.
For each point in Fig. \[fig:lvmap\], it is easy to assess the probability that the expected DM signal could fluctuate to match that expected for an astrophysical or detector line, i.e., that a true DM signal could remain hidden. With two observations, at $l = \pm 20^{\circ}$, this scenario can be ruled out, relative to G2, at $\simeq 3.6 \sigma$. This establishes that this technique has interesting sensitivity. Once there is data, one can assess the probability that an astrophysical or detector line could mimic a DM signal (for the same flux, $ \delta_{v_{\text{\tiny LOS}}}^{\text{gas}} \simeq \delta_{v_{\text{\tiny LOS}}}^{\text{det}} \simeq \delta_{v_{\text{\tiny LOS}}}^{\text{\tiny DM}} / \sqrt{2} $).
If the energy resolution is worse than the design goal, e.g., $\sigma_{\rm{AH}} \simeq$ 2.1, 2.5, or 3 eV, then the line shift significance is $\simeq$ 3.0, 2.4, or 1.9$\sigma$ (the line flux significance is always $> 5\sigma$). This could be improved as $\sqrt{t}$ with more exposure (including non-dedicated pointings). We have not included the systematic uncertainty due to detector gain calibration, for which the goal is 0.4 eV [@Kitayama:2014fda]. This can be mitigated by comparing the energies of nearby astrophysical lines, especially at opposite longitudes.
Related Searches {#sec: Gen}
================
Astro-H may be able to resolve the intrinsic width of a MW DM line. This would provide the first information on the large-scale DM velocity distribution, which is sensitive to DM particle properties [@Rocha:2012jg] and to the presence of substructure [@Ghigna:1999sn; @2015ApJ...807...14L] (see Suppl. Mat.).
The 3.5-keV line has been detected in M31. Due to the relative motion between the Sun and M31, DM or astrophysical lines from the center of M31 will have LOS shifts of $\simeq -300 \, \rm{km \, s^{-1}}$ [@Chemin:2009wd]. We estimate that this blue shift could be detected with $> 5\sigma$ significance, making this an attractive way to test detector causes. Due to M31’s rotation, astrophysical lines are separated from DM lines by $\pm 200 \, \rm{km \, s^{-1}}$ around $\pm 1^{\circ}$, but, because the statistical uncertainties are large, they cannot be cleanly distinguished in 2 Ms; see Suppl. Mat. and references therein. The LMC [@vanderMarel:2002kq] may also be an attractive target.
More speculatively, it may be possible to see the line in the extragalactic DM signal, if more astrophysical sources in the CXB are resolved, e.g., with eRosita [@Merloni:2012uf; @Zandanel:2015xca]. Furthermore, because we move at $\simeq 400 \, \rm{km \, s^{-1}}$ with respect to the CMB, it may be possible to detect a dipole signature in DM line signal. Far-future observations may even detect a forest of sources in each LOS spectrum.
Conclusions
===========
Even for a supposedly smoking-gun signal, such as a line, it may be difficult to distinguish between DM, astrophysical, or detector causes. We have shown that detectors with energy resolution $\lesssim 0.1\%$ can break this degeneracy using velocity spectroscopy, which has minimal theoretical uncertainties. We emphasize that our main goal is to point out this new and robust method for testing DM signals, which can be applied to any sharp feature, such as an edge or box [@Ibarra:2015tya; @Boddy:2015efa].
To demonstrate the potential of this technique, we have shown that Astro-H will be able to test the origin of the 3.5-keV line. In the future, other lines may be discovered. For lines at higher energy, the relative energy resolution of Astro-H improves. This unprecedented resolution will allow Astro-H to dramatically improve on existing sterile neutrino limits [@Abazajian:2001vt; @Boyarsky:2006fg; @Watson:2006qb; @Abazajian:2006jc; @Abazajian:2006yn; @Pullen:2006sy; @Yuksel:2007xh; @Boyarsky:2007ay; @Boyarsky:2007ge; @Loewenstein:2008yi; @Boyarsky:2009ix; @Abazajian:2011tk; @Loewenstein:2012px; @Jackson:2013pjq; @Horiuchi:2013noa; @Boddy:2014qxa; @Ng:2015gfa; @Figueroa-Feliciano:2015gwa; @Riemer-Sorensen:2015kqa]. We encourage a dedicated study by the Astro-H Collaboration, once post-launch parameters are known, to give definitive answers on DM sensitivity over their full energy range.
We are encouraged by the expected $0.1 \%$ resolution of Astro-H in the range $0.3\!-\!12 \, \rm{keV}$, and the demonstrated $0.1 \%$ resolution of INTEGRAL-SPI in the range $20 \, \rm{keV}$ to $8 \, \rm{MeV}$ (including velocity spectroscopy of the 1.809-MeV line from $^{26}$Al [@Kretschmer:2003ak; @Diehl:2006cf; @Kretschmer:2013naa]). Excitingly, the proposed X-ray mission ATHENA [@Nandra:2013jka] and GeV gamma-ray mission HERD [@Zhang:2014qga] have made achieving similar energy resolution a priority, which will improve existing limits [@PhysRevLett.56.263; @PhysRevD.37.3737; @PhysRevD.40.3168; @Bergstrom:1997fj; @Hisano:2002fk; @Gustafsson:2007pc; @Mack:2008wu; @2009PhRvD..80b3512B; @Essig:2013goa; @Ng:2013xha; @Albert:2014hwa; @TheFermi-LAT:2015gja]. We encourage other missions to pursue this aggressively.
Acknowledgments {#acknowledgments .unnumbered}
===============
We are grateful to Yoshiyuki Inoue, Matthew Kistler, Greg Madejski, Phillip Mertsch, Annika Peter, and Randall Smith for discussions. EGS is supported by a Fowler Fellowship, KCYN and JFB by NSF grant PHY-1404311 to JFB, and RL by KIPAC.
**Supplemental Materials**
Outline
=======
We first briefly discuss the mass models and dispersion profiles used to derive the results presented in the main text. We then provide an expanded discussion of two additional applications of DM velocity spectroscopy, namely: probing the intrinsic DM dispersion profile using LOS observations and using velocity spectroscopy of M31 to test detector causes.
Radial Velocity Dispersion
==========================
To calculate the intrinsic broadening of a DM line, a galactic mass model must be adopted to determine the velocity dispersion profile. Below, we describe the mass models used in our analysis of the MW and M31.
Milky Way Mass Profile
----------------------
We use model A1 of Ref. [@Klypin:2001xu], which utilizes a DM halo determined by adiabatically contracting an initial NFW profile in the presence of baryons. We summarize key aspects of the model.
Before contraction, an NFW profile with scale radius $r_s = 21.5$ kpc is assumed to coexist with three axisymmetric baryonic profiles roughly associated with the nucleus, bulge/bar, and disc of the galaxy. The total baryonic mass within a given radius is determined by the integration of the density profiles, with the addition of a central black hole of mass $m_{\rm{BH}}=2.6\times 10^6 M_{\odot}$. The enclosed baryonic mass is $$M_{\rm{b}}(r)=m_{\rm{BH}}+\int_{0}^{r}\int_{4\pi} dr' \, d\Omega \, \rho_{\rm{b}}(r') \, r'^2 \, .$$ The final DM profile is determined by contracting the initial NFW profile in the presence of this baryonic mass distribution. The baryonic profiles are adiabatically contracted under the assumption that spherical shells of matter do not cross and that the DM particles follow circular orbits. This deepens the potential well and causes the DM to contract. Angular momentum conservation then dictates the following equations: $$\begin{aligned}
G \, [M_{\rm{b}}(r_f)+M_{\rm{dm}}(r_f)]\, r_f &=& G \, M_{\rm{halo}}(r_i) \, r_i\\
M_{\rm{halo}}(r_i) &=& M_{\rm{dm}}(r_f) \, \frac{(\Omega_{\rm{b}} + \Omega_{\rm{dm}})}{\Omega_{\rm{dm}}} \nonumber \, ,\end{aligned}$$ where $M_{\rm{halo}}(r_i)$ is the halo mass before contraction and $\Omega_{\rm{dm}}$ and $\Omega_{\rm{b}}$ are the dark and baryonic matter densities, taken to be in the ratio $\Omega_{\rm{dm}}/(\Omega_{\rm{b}} + \Omega_{\rm{dm}}) = 0.9$; more recent observations give $\Omega_{\rm{dm}}/(\Omega_{\rm{b}} + \Omega_{\rm{dm}}) = 0.84$ [@Ade:2015xua], which gives identical results.
These equations are solved numerically to give a final radius, $r_f$, corresponding to a given initial radius, $r_i$. The contracted profile has a normalization $\rho_{\chi}(r = 8 \, \rm{kpc}) \simeq 0.4$ GeV $\rm{cm^{-3}}$.
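The numerical solution amounts to a one-dimensional root find for $r_f$ at each $r_i$. The bisection sketch below uses toy inputs: a Hernquist-like enclosed baryonic mass and a fixed initial halo mass, neither of which is a component of model A1.

```python
# Toy solver for [M_b(r_f) + f_dm * M_halo(r_i)] r_f = M_halo(r_i) r_i,
# with f_dm = Omega_dm / (Omega_b + Omega_dm) = 0.9 as in the text.
F_DM = 0.9

def m_baryon(r, m_tot=5.0e10, a=2.0):
    """Illustrative Hernquist-like enclosed baryonic mass (M_sun); r, a in kpc."""
    return m_tot * r**2 / (r + a) ** 2

def contracted_radius(r_i, m_halo_i, tol=1e-8):
    """Bisect for the final radius r_f corresponding to initial radius r_i.

    g(r_f) is monotonically increasing, negative at r_f -> 0 and positive
    at r_f = r_i whenever M_b(r_i) > 0.1 * M_halo(r_i).
    """
    def g(r_f):
        return (m_baryon(r_f) + F_DM * m_halo_i) * r_f - m_halo_i * r_i
    lo, hi = 1e-6, r_i
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(mid) > 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)
```

For these toy inputs the shell initially at $r_i = 8$ kpc contracts to $r_f \simeq 5$ kpc, illustrating the deepening of the potential well.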
The combined baryonic and contracted DM profiles are integrated to give the total mass enclosed within a given radius, $M_{\rm{tot}}(r)$: $$M_{\rm{tot}}(r) = M_b(r)+M_{\rm{dm}}(r) \, ,
\label{Mtot}$$ where $M_{\rm{dm}}(r)$ is the dark matter mass within a radius $r$.
The velocity dispersion is determined by the potential well of the galaxy, which is, in general, non-spherical. We approximate the true mass distribution by the spherically averaged mass profile given above. This approximation has little impact outside of $r \sim$ few kpc (where the DM becomes the dominant mass component), but greatly simplifies the calculation of the dispersion profile. Spherical symmetry allows for a simpler treatment of the Jeans equations [@Binney:1987gd] and, together with equilibrium and an isotropic velocity distribution, yields the expression for the radial velocity dispersion given in the main text.
This mass profile (Eq. \[Mtot\]) generates a rotation curve, $v_{\rm{circ}}(r) = \sqrt{G M_{\rm{tot}}(r)/r}$, which is in good agreement with observations and a dispersion profile which agrees with results of previous papers [@Klypin:2001xu; @Robertson:2009bh].
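The rotation-curve check is a one-liner. In the sketch below, the enclosed mass within the solar circle is an illustrative round number (not the A1 model value) chosen to show that it reproduces $v_{\odot} \simeq 220 \, \rm{km \, s^{-1}}$.

```python
import math

G_KPC = 4.30091e-6   # G in kpc (km/s)^2 / M_sun

def v_circ(m_enclosed, r_kpc):
    """Circular speed in km/s for an enclosed mass (M_sun) at radius r (kpc)."""
    return math.sqrt(G_KPC * m_enclosed / r_kpc)

# ~9e10 M_sun within 8 kpc gives the observed solar circular speed.
v = v_circ(9.0e10, 8.0)
```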
M31 Mass Profile
----------------
We use the mass model of Ref. [@Tamm:2012hw]. Generalized Einasto profiles (given below) are used to describe the baryonic components $$\rho_b(a) = \rho_c \, \exp\left(-d_N \, \left[\left(\frac{a}{a_c}\right)^{1/N}-1\right]\right) \, ,$$ with $\rho_c$, $d_N$, $a_c$, and $N$ adjusted to match data. The baryonic mass model includes five components (nucleus, bulge, disc, young disc, and stellar halo). Together with the adopted NFW DM profile, the measured M31 rotation curve is reproduced well [@Tamm:2012hw].
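The generalized Einasto form is straightforward to evaluate; by construction $\rho_b(a_c) = \rho_c$, since the exponent vanishes at $a = a_c$. The parameter values in the check below are arbitrary placeholders, not the fitted values of Ref. [@Tamm:2012hw].

```python
import math

def einasto(a, rho_c, d_n, a_c, n):
    """Generalized Einasto density: rho_c * exp(-d_n * ((a/a_c)^(1/n) - 1))."""
    return rho_c * math.exp(-d_n * ((a / a_c) ** (1.0 / n) - 1.0))
```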
We also include a black hole of mass $m_{\rm{BH}}=3.5\times 10^7 M_{\odot}$ [@Klypin:2001xu]; more recent observations suggest a slightly larger mass of $1.4 \times 10^8 M_{\odot}$ [@Bender:2005rq]. The inclusion of a central black hole yields larger velocity dispersions at small radii ($\lesssim 10 \, \rm{pc}$), which increases the intrinsic width of DM lines arising from small angle LOS directions. However, because we focus on large angles ($l \simeq 10^{\circ}-40^{\circ}$ in the MW and $\psi \simeq 0.5^{\circ}- 1.5^{\circ}$ in M31), we do not probe the region affected by the black hole, so its effect is negligible; we verified that our results were unmodified by this addition. Dispersions in M31 are comparable to those in the MW, but are systematically higher because of its larger mass and concentration.
Figure \[fig:RDisp\] shows the radial DM velocity dispersion profiles for the MW and M31. Vertical bands represent the range of radii that contribute $90\%$ to the signal along $\psi(l = 20^{\circ}, |b| = 5^{\circ})$ in the MW and $\psi = 1^{\circ}$ in M31.
![Radial velocity dispersion profiles for the MW and M31. Shaded vertical bands indicate the range of radii that contribute $90\%$ of the signal along $\psi(l = 20^{\circ},|b| = 5^{\circ})$ in the MW and $\psi = 1^{\circ}$ in M31; the radius ranges for the other directions discussed in the text are similar. Note that the lower bounds of these ranges are the smallest $r$ probed by these directions.[]{data-label="fig:RDisp"}](FigA1.pdf){width="\columnwidth"}
![Intrinsic ($\sigma_{\rm{DM}}$) and effective ($\sigma_{\rm{eff}}$) LOS velocity dispersion profiles for the MW and M31 as a function of $\psi/\psi_S$, the scaled angle relative to the center of each system. For the MW, $\psi_S = 50^{\circ}$, while for M31, $\psi_S = 2.5^{\circ}$; these scalings were chosen for display purposes. Intrinsic widths are determined by integrating the spectrum along the LOS using the radial velocity dispersion profiles given in the previous section. Effective widths include detector energy resolution. The increase in the LOS dispersion at small angles in M31 is due to the rising radial dispersions shown in Fig. \[fig:RDisp\]; for equally small (scaled) angles in the MW, the LOS dispersion decreases because only radii $< 1$ kpc, where the radial dispersion is decreasing, contribute.[]{data-label="fig:LOSDisp"}](FigA2.pdf){width="\columnwidth"}
![LOS velocity profiles for DM and HI gas [@Chemin:2009wd] in M31. DM error bars are calculated assuming 2-Ms exposures with Astro-H and only CXB and detector backgrounds.[]{data-label="fig:M31LV"}](FigA3.pdf){width="\columnwidth"}
LOS Velocity Dispersion
=======================
The velocity distribution of DM is of great interest both for the information it contains about the particle nature of DM and for its implications for direct and indirect detection experiments [@Peter:2013aha]. For example, models of self-interacting DM (SIDM) predict higher velocity dispersions near the centers of DM halos. By measuring the LOS velocity dispersion, it may be possible to constrain SIDM interaction cross-sections, particularly in clusters where deviations between SIDM and CDM dispersions are large [@Rocha:2012jg; @Kaplinghat:2013xca]. Additionally, because sub-halos generate smaller velocity dispersions, variations in line width along different LOS could help to constrain the size and distribution of DM substructure.
Because an observed DM signal will contain contributions from the entire LOS, and therefore a range of galactic radii, the full radial velocity dispersion cannot be probed directly. However, the observed LOS dispersion may still contain useful information. It is natural to ask how well Astro-H may be able to reconstruct the intrinsic DM LOS dispersion, given the observed signal.
Figure \[fig:LOSDisp\] shows both the intrinsic and observed (assuming $\sigma_{\rm{AH}} \simeq 1.7 \, \rm{eV}$) LOS velocity width for a DM line in the MW and M31. Because the detector resolution is comparable to the intrinsic width, the detector response broadens the signal by a factor of $\simeq \sqrt{2}$.
In principle, if the energy resolution of Astro-H were known exactly, the intrinsic width of the DM line could be reconstructed precisely; assuming the signal and detector response are both Gaussian, the effective width is $\sigma_{\rm{eff}}^2 = \sigma_{\rm{AH}}^2 + \sigma_{\rm{DM}}^2$, so that $\sigma_{\rm{DM}}$ can be determined simply.
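This quadrature relation can be sketched numerically; the widths below are illustrative values (comparable intrinsic and detector widths, as for a 3.5-keV line seen by Astro-H), not measurements:

```python
import math

def effective_width(sigma_dm, sigma_det):
    """Observed width of a Gaussian line convolved with a Gaussian response."""
    return math.hypot(sigma_dm, sigma_det)

def intrinsic_width(sigma_eff, sigma_det):
    """Recover sigma_DM by subtracting the detector response in quadrature."""
    return math.sqrt(sigma_eff**2 - sigma_det**2)

# When the intrinsic and detector widths are comparable (both ~1.7 eV here),
# the observed line is broadened by a factor of ~sqrt(2).
sigma_det = 1.7   # eV, assumed detector resolution
sigma_dm = 1.7    # eV, assumed intrinsic DM width
sigma_eff = effective_width(sigma_dm, sigma_det)
print(sigma_eff / sigma_dm)                    # ~1.414
print(intrinsic_width(sigma_eff, sigma_det))   # recovers the 1.7 eV input
```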
Of course, in practice, the resolution can never be known exactly. Assuming the design-goal uncertainty of 1 eV on the energy resolution (the actual uncertainty is expected to be $\lesssim 2$ eV [@Kitayama:2014fda]), we estimate that the intrinsic width of a 3.5-keV DM line can be reconstructed with an uncertainty of $\simeq 40 \, \rm{km \, s^{-1}}$; for higher energies the uncertainty in the width is smaller, scaling as $E^{-1}$. See the Appendix of Ref. [@Kitayama:2014fda] for more details regarding uncertainty in the detector energy resolution and intrinsic line-width reconstruction.
More speculatively, using information about the strength of the signal along the LOS, it may be possible to construct a coarse-grained radial velocity dispersion profile from the LOS dispersion. For example, we see from the vertical bands in Fig. \[fig:RDisp\] that the range of radii that contributes to the $\psi(l = 20^{\circ}, |b| = 5^{\circ})$ signal is narrow and that the dispersion of these points is directly reflected in the intrinsic LOS dispersion shown in Fig. \[fig:LOSDisp\]. With additional pointings that probe different radii, it may be possible to constrain the radial dispersion profile using the measured line widths. This method would be most effective for small angles where the range of contributing radii is narrowest, although increased backgrounds would have to be overcome.
Velocity Spectroscopy of M31
============================
DM velocity spectroscopy can also be applied to a signal observed from M31. Relative motion between the Sun and M31 produces a DM LOS velocity shift of $\simeq -300 \, \rm{km \, s^{-1}}$ that is essentially independent of viewing angle. For astrophysical lines, one must also consider the rotation of the M31 disc. This produces an additional LOS velocity shift that varies strongly with viewing angle, separating the DM and astrophysical lines by $\simeq \pm 200 \, \rm{km \, s^{-1}}$ at offsets of $\pm 1^{\circ}$ [@Chemin:2009wd]. Detector lines are unshifted.
The large differences in LOS velocities between DM, astrophysical, and detector lines make M31 a potentially powerful tool to probe the origin of spectral lines. However, large LOS velocities are not by themselves sufficient to distinguish between these three causes; it is also necessary that the uncertainty in the profile centroid be small in comparison to the expected centroid separations.
As discussed in the main text, the uncertainty in the centroid is given by $\delta E = C(R) \, \sigma_{\rm{eff}} / \sqrt{N_s}$, where $\sigma_{\rm{eff}}$ is the observed line width, $N_s$ is the number of signal events, and $C(R)$ is a correction factor that accounts for the presence of backgrounds. As can be seen in Fig. \[fig:LOSDisp\], the observed widths of DM signals arising from M31 and the MW are expected to be quite similar. However, the number of DM signal events in M31 is considerably smaller, increasing the centroid uncertainty substantially.
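A minimal sketch of how the count statistics drive the centroid error; the width and counts below are hypothetical, and $C(R)$ is set to 1 as in a background-free case:

```python
import math

def centroid_uncertainty(sigma_eff, n_signal, c_of_r=1.0):
    """delta_E = C(R) * sigma_eff / sqrt(N_s); C(R) >= 1 when backgrounds are present."""
    return c_of_r * sigma_eff / math.sqrt(n_signal)

# Hypothetical numbers: equal observed widths, but 4x fewer signal counts
# from M31 double the centroid uncertainty relative to the MW.
sigma_eff = 1.2  # eV, assumed observed width
print(centroid_uncertainty(sigma_eff, 400))   # MW-like counts
print(centroid_uncertainty(sigma_eff, 100))   # M31-like counts: 2x larger error
```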
Figure \[fig:M31LV\] shows the LOS velocities for DM, astrophysical and detector lines as a function of the angular offset $\psi$ from the center of M31. We show the error bars on a DM signal assuming 2-Ms observations and only CXB and detector backgrounds. Astrophysical X-ray emission in M31 is not well studied outside of $\sim 0.5^{\circ}$, but is expected to be small. Even without including this background, it is clear that the significance ($\propto \sqrt{t}$) with which DM and astrophysical signals can be differentiated is considerably smaller than for the MW. However, the larger separations between the DM, astrophysical and detector line shifts could allow for a cleaner identification of these causes if uncertainties were reduced. If MW observations of a line suggest a DM origin, several Ms would be well spent on M31 observations.
Perhaps the greatest utility of observing M31 is in its power to test detector causes of a signal. This can be done most easily by looking directly at the center of M31. If the line is DM or astrophysical in nature, the signal strength should be strong and the centroid uncertainty correspondingly small, so that detector causes can be easily tested (Fig. \[fig:M31LV\]). Though we have shown error bars assuming 2-Ms observations, for this purpose, shorter exposures will clearly suffice.
|
---
author:
- |
$^{,a}$, Leonid I. Gurvits$^{b,c}$, Zsolt Paragi$^{b}$, Krisztina É. Gabányi$^{d}$[^1]\
FÖMI Satellite Geodetic Observatory, P.O. Box 585, H-1592 Budapest, Hungary\
Joint Institute for VLBI in Europe, Postbus 2, 7990 AA Dwingeloo, The Netherlands\
Department of Astrodynamics and Space Missions, Delft University of Technology, 2629 HS Delft, The Netherlands\
Konkoly Observatory, Research Centre for Astronomy and Earth Sciences, Hungarian Academy of Sciences, P.O. Box 67, H-1525 Budapest, Hungary\
title: 'Redshift, Time, Spectrum – the most distant radio quasars with VLBI'
---
Introduction
============
R for redshift – and for radio
------------------------------
Quasars at [*redshift*]{} $z$$\sim$6 have been spectroscopically identified for somewhat more than a decade [@Fan00; @Fan01], the first candidates being selected by their extremely red colours in the Sloan Digital Sky Survey (SDSS). These $i$-dropout objects, for which the absorption on the short-wavelength side of the Lyman-$\alpha$ emission line falls in the $i$ photometric band while the emission appears only at longer wavelengths, first in the $z$ band, are located in the approximate redshift range of 5.7$<$$z$$<$6.5. This technique was successfully applied for discovering the bulk of the $z$$\sim$6 quasars known to date, over 50 objects – not only in the SDSS but also in the Canada–France High-z Quasar Survey (CFHQS) (see e.g. [@Will10] for a review). As of today, the redshift record holder among quasars is J1120+0641 at $z$=7.085 [@Mort11]. It was discovered in the United Kingdom Infrared Telescope (UKIRT) Infrared Deep Sky Survey (UKIDSS). The $z$-dropout objects like this one can have redshifts as high as $\sim$7.5. The search for quasars even more distant than the current record moves from the optical to the near-infrared regime, e.g. with the Panoramic Survey Telescope and Rapid Response System (Pan-STARRS) where the first $i$-dropout quasar at $z$=5.73 has recently been discovered [@Morg12], or in the Visible and Infrared Survey Telescope for Astronomy (VISTA) Kilo-degree Infrared Galaxy (VIKING) public survey [@Find12].
Only four of the known $z$$>$5.7 quasars (in the order of their discovery: J0836+0054 [@Fan01] at $z$=5.77; J1427+3312 [@McGr06] at $z$=6.12; J1429+5447 [@Will10] at $z$=6.21; J2228+0110 [@Zeim11] at $z$=5.95) show detectable continuum [*radio*]{} emission. The total 1.4-GHz flux density of the first three quasars is just of the order of 1 mJy. The most recently found source, J2228+0110, the second $z$$\sim$6 quasar after J1427+3312 selected by its radio emission, is somewhat weaker. Although the sample is still small, the radio-loud fraction among the most distant known quasars ($\sim$7%) is remarkably close to the 8%$\pm$1% found by matching the bright ($i$$<$18.5) SDSS quasars at any redshift with the radio detections in the 1.4-GHz Faint Images of the Radio Sky at Twenty-centimeters (FIRST) survey [@Ivez02]. The rare radio-emitting high-redshift quasars are particularly valuable, for a variety of reasons. The ultimate evidence for synchrotron jets produced by accretion of the surrounding material onto supermassive black holes (SMBHs) in active galactic nuclei (AGN) can be found in the radio by high-resolution Very Long Baseline Interferometry (VLBI) imaging observations. If the radio emission is compact on scales probed by VLBI, it should come from an AGN. Indeed, as we will review in Sect. \[VLBI\], compact radio structures in all four known radio quasars at $z$$\sim$6 have successfully been detected with VLBI. Moreover, compact radio sources that existed at around the epoch of reionization could serve as “beacons”, illuminating the intergalactic gas in their line of sight. This offers a good perspective to use them for studying the absorption spectrum of the neutral hydrogen with sensitive next-generation radio instruments like the Square Kilometre Array (SKA) [@Cari04].
T for time
----------
The lookback [*time*]{} is $\sim$12.5 Gyr for $z$=6, and $\sim$12.7 Gyr for $z$=7. (We assume a flat cosmological model with $H_{\rm{0}}=70$ km s$^{-1}$ Mpc$^{-1}$, $\Omega_{\rm m}=0.3$, and $\Omega_{\Lambda}=0.7$ throughout this paper.) The existence of $z$$\sim$6 quasars proves that accreting SMBHs with masses up to $\sim$$10^9$ $M_{\odot}$ have already assembled within several hundred million years after the Big Bang. Observing the earliest quasars of the Universe can constrain models of their birth and early cosmological evolution, the growth of the central SMBHs of active galactic nuclei, and their link to the host galaxy evolution via feedback mechanisms. Intriguingly, many of the intrinsic properties observed in the infrared, optical, and X-ray wavebands make the highest-redshift quasars very similar to their lower-redshift cousins, suggesting that they are already “evolved” objects even within 1 Gyr after the beginning of the Universe. Thus there are observational efforts going on to identify the “real firsts”. For example, based on the lack of the infrared emission originating from hot dust, two $z$$\sim$6 quasars in a sample of 21 seem less evolved, as the amount of hot dust in the quasar host may increase in parallel with the growth of the central SMBH [@Jian10]. As we will see in Sect. \[general\], the results of our high-resolution interferometric observations of radio-emitting sources also point to young objects, at least in terms of their radio jet activity.
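The quoted lookback times follow from a one-dimensional integral over the assumed cosmology; a minimal numerical sketch with no external libraries (trapezoid rule):

```python
import math

H0 = 70.0                 # km s^-1 Mpc^-1
OMEGA_M, OMEGA_L = 0.3, 0.7
T_HUBBLE = 977.8 / H0     # Hubble time in Gyr (1 Mpc/(km/s) = 977.8 Gyr)

def efunc(z):
    """Dimensionless Hubble parameter E(z) for a flat LCDM model."""
    return math.sqrt(OMEGA_M * (1.0 + z)**3 + OMEGA_L)

def lookback_time(z, steps=10000):
    """t_L(z) = t_H * int_0^z dz' / [(1+z') E(z')], via the trapezoid rule."""
    dz = z / steps
    f = lambda zp: 1.0 / ((1.0 + zp) * efunc(zp))
    s = 0.5 * (f(0.0) + f(z)) + sum(f(i * dz) for i in range(1, steps))
    return T_HUBBLE * s * dz

print(lookback_time(6.0))   # ~12.5 Gyr
print(lookback_time(7.0))   # ~12.7 Gyr
```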
S for spectra
-------------
In an attempt to offer yet another tribute to Prof. Richard T. Schilizzi (RTS), and as an addition to the ingenious title of this conference, we choose [*spectra*]{} to represent S. In the past, Richard participated with us in numerous VLBI studies of radio quasars known as the most distant ones at that time, e.g. [@Gurv92; @Gurv94; @Frey97; @Para99; @Gurv00]. Another one of his major research interests was the study of young radio-loud AGN – Gigahertz-Peaked Spectrum (GPS) and Compact Steep Spectrum (CSS) sources – and their evolution, e.g. [@Snel00] and references therein. It is no surprise that in the highest-redshift Universe the two topics eventually converge: the earliest radio AGN, right after their ignition, should necessarily be young. Observations indicate that the spectral slope of the radio continuum is steep for the most distant quasars (Sect. \[VLBI\] and \[general\]; Fig. \[spectra\]) in the observed $\sim$1–5 GHz frequency range, which corresponds to $\sim$10–40 GHz in the rest frame of the sources. According to a plausible model [@Falc04], the high-redshift steep-spectrum objects may represent GPS sources at early cosmological epochs. The first generation of supermassive black holes could have had powerful jets that developed hot spots well inside their forming host galaxy, on linear scales of 0.1–10 kpc. Adopting the relation between the source size and the turnover frequency observed in GPS sources for our “typical” high-redshift quasars, the angular size of the smallest ($\sim$$100$ pc) of these early radio-jet objects would be of the order of 10 milli-arcseconds (mas), and the observed turnover frequency in their radio spectra would be around 500 MHz in the observer’s frame [@Falc04]. This spectral turnover has not been detected yet, but the structural and limited spectral information available for the $z$$\sim$6 radio quasars known to date fit well in the picture.
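The observed-to-rest-frame conversion used above is simply a $(1+z)$ scaling of the frequency; a trivial sketch:

```python
def rest_frame_ghz(nu_obs_ghz, z):
    """nu_rest = nu_obs * (1 + z): redshift shifts rest-frame emission to lower observed frequencies."""
    return nu_obs_ghz * (1.0 + z)

# At z ~ 6 the observed ~1-5 GHz range probes roughly 10-40 GHz in the
# source rest frame, as quoted in the text.
print(rest_frame_ghz(1.6, 6.0))   # 11.2 GHz
print(rest_frame_ghz(5.0, 6.0))   # 35.0 GHz
```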
The highest-redshift radio quasars with VLBI {#VLBI}
============================================
Here we briefly summarise our VLBI imaging results obtained for the $z$$\sim$6 radio quasars known to date. These results came from a series of experiments performed with the European VLBI Network (EVN) starting in 2002, shortly after the discovery of J0836+0054 [@Fan01], the first quasar in this category. All the VLBI experiments were conducted in phase-reference mode, involving regular observations of nearby bright, compact reference radio sources. This technique allowed us to detect the weak target quasars, and to determine their astrometric positions with mas-scale accuracy. For all but one quasar, VLBI observations were made at both 1.6 GHz and 5 GHz frequencies. For the weakest quasar in the sample that was discovered most recently (J2228+0110 [@Zeim11]), work is still in progress, and as of now, only 1.6-GHz EVN data have been collected. In the following subsections, we list the individual sources in the order of their discovery and of the date of their VLBI observations.
J0836+0054
----------
J0836+0054 was found in the SDSS data [@Fan01] as the first quasar at $z$$>$5.7 with a radio counterpart in the FIRST survey catalogue [@Whit97], with 1.4-GHz flux density $S_{1.4}$=1.11$\pm$0.15 mJy. Its accurate redshift was later measured as $z$=5.77 [@Ster03]. Our first experimental EVN observations were conducted on 2002 June 8 at 1.6 GHz. We found that essentially all radio emission comes from a compact but slightly resolved source within $\sim$10 mas angular extent which corresponds to $\sim$60 pc linear size at the distance of the quasar. We could rule out that the quasar’s image is multiplied by strong gravitational lensing [@Frey03]. (It turned out later that this is true for the other $z$$\sim$6 quasars as well, in contrast to earlier predictions [@Wyit02].) Upon the successful detection at 1.6 GHz, we initiated 5-GHz EVN observations of J0836+0054. The data from 2003 November 4 verified that the source is compact ($<$40 pc) with a flux density $S_{5}$=0.34 mJy. Thus the spectrum of the source is steep; variability as the cause of the difference in flux densities is excluded by lower-resolution Very Large Array (VLA) observations performed nearly at the same time [@Frey05]. The two spectral points as a function of the rest-frame frequency are plotted in Fig. \[spectra\], along with the measurements for the other $z$$\sim$6 radio quasars and a low-redshift object for comparison.
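As a sketch, the two-point spectral index implied by the flux densities quoted above can be computed directly, using the FIRST 1.4-GHz value as a proxy for the low-frequency point (the precise index depends on which low-frequency measurement is adopted):

```python
import math

def spectral_index(s1, nu1, s2, nu2):
    """alpha in S ∝ nu^alpha, from two flux-density measurements."""
    return math.log(s2 / s1) / math.log(nu2 / nu1)

# Quoted values for J0836+0054: S(1.4 GHz) = 1.11 mJy (FIRST),
# S(5 GHz) = 0.34 mJy (EVN).
alpha = spectral_index(1.11, 1.4, 0.34, 5.0)
print(round(alpha, 2))   # ~ -0.93: a steep spectrum
```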
J1427+3312
----------
J1427+3312 ($z$=6.12) was identified as the first radio quasar above redshift 6 [@McGr06; @Ster07]. Our 1.6-GHz and 5-GHz EVN imaging observations were conducted on 2007 March 11 and 2007 March 3, respectively. The source was clearly detected at both frequencies. Quite remarkably, there are two distinct radio components seen in the 1.6-GHz image of J1427+3312, separated by 28.3 mas, corresponding to a projected linear distance of $\sim$160 pc [@Frey08]. A similar result was published from an independent 1.4-GHz experiment conducted with the US Very Long Baseline Array (VLBA) [@Momj08]. Both radio components with sub-mJy flux densities appear resolved. 5-GHz radio emission on mas-scale was only detected for the brighter of the two, indicating again a steep radio spectrum (Fig. \[spectra\]), which is presumably the case for the other component which was too weak to be detected at the higher frequency. The double structure, the steep spectrum, and the separation of the components remind us of the Compact Symmetric Objects (CSOs), extremely young radio sources known in the more nearby Universe [@Wilk94; @Owsi98]. If this analogy holds, the kinematic age of J1427+3312 could be of the order of 10$^3$ years. The motion of the components could in principle be detected and the expansion speed measured with repeated VLBI imaging in the future. Nature, however, is not very cooperative in this case: due to the time dilation caused by the extremely large cosmological redshift, the expansion would appear very slow, and one must wait at least an astronomer’s lifetime between the subsequent epochs of such a monitoring experiment.
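The order-of-magnitude age quoted above follows from simple light-travel arithmetic; a sketch assuming a hypothetical hot-spot advance speed of $0.2c$ (the speed is not measured for this source, and typical CSO values span roughly $0.1$–$0.3c$):

```python
# Rough CSO-style age estimate for J1427+3312, assuming each hot spot has
# advanced ~80 pc (half the ~160-pc component separation) at a constant,
# assumed speed of 0.2c.
PC_TO_LY = 3.2616        # light-years per parsec

def kinematic_age_yr(half_separation_pc, speed_c):
    """Time for a hot spot to reach its current distance at constant speed (in units of c)."""
    return half_separation_pc * PC_TO_LY / speed_c

age = kinematic_age_yr(80.0, 0.2)
print(round(age))   # ~1300 yr, i.e. of the order of 10^3 years
```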
J1429+5447
----------
J1429+5447 ($z$=6.21) is the most distant radio quasar known to date, found in the CFHQS [@Will10]. Our EVN images made on 2010 June 8 (at 1.6 GHz) and 2010 May 27 (at 5 GHz) show compact but somewhat resolved structures in the case of this source as well [@Frey11]. The steep radio spectrum of the VLBI-detected quasar (Fig. \[spectra\]) is similar to that of the previous two sources which have dual-frequency VLBI data available.
J2228+0110
----------
J2228+0110 ($z$=5.95) was found by matching the optical detections of the deep SDSS Stripe 82 with the radio sources detected in the 1.4-GHz VLA A-array survey covering the same area [@Zeim11]. This quasar is different from the previous three in the sense that it falls below the detection threshold of the FIRST survey. Its peak brightness is 0.31 mJy/beam in the VLA Stripe 82 survey catalogue [@Hodg11]. A cautious approach to the VLBI detection led us to initiate 1.6-GHz observations first. The EVN experiment was conducted on 2011 November 1. The analysis of the data has not been completed yet, but according to preliminary results, J2228+0110 appears to be detected as a compact source, with a flux density very similar to the VLA value (L.I. Gurvits et al. 2012, in preparation).
Summary of the general properties {#general}
=================================
By observing the sample of the four known $z$$\sim$6 radio quasars with the highest angular resolution provided by VLBI, we found that these are all compact sources. The bulk of their radio emission originates from regions well within 100 pc, clearly suggesting an AGN origin. The quasar J1427+3312 shows a double structure with components separated by about 160 pc, reminiscent of the structure of CSOs. It is possible that we see very young radio sources, like the GPS and CSS sources known in the less distant Universe. The measured moderate brightness temperatures ($\sim$10$^7$–10$^9$ K, substantially lower than the intrinsic equipartition limit, $\sim$5$\times$10$^{10}$ K, for powerful compact extragalactic radio sources [@Read94]) and the steep radio spectra in the rest-frame $\sim$10–40-GHz frequency range (Fig. \[spectra\]) can be considered as circumstantial evidence for the youth of these sources. The spectral indices are $\alpha$$\approx$$-0.6$...$-1.0$ for the three $z$$\sim$6 quasars where dual-frequency data are available. (The spectral index $\alpha$ is defined as $S\propto\nu^{\alpha}$, where $S$ is the flux density and $\nu$ the frequency.) In Fig. \[spectra\], the broad-band spectrum of J0713+4349, a well-known CSO [@Owsi98] is also compiled from the total flux densities from the literature and plotted as a visual aid for comparison. The flux densities are scaled down to match the distance of the $z$$\sim$6 quasars. The spectral slope at the high-frequency end is quite similar to that of the three distant quasars. Obviously, additional lower-frequency observations would be needed to find the suspected spectral turnover for the $z$$\sim$6 objects – a task very challenging with the current radio interferometric instruments due to the required spectral coverage, high sensitivity, and fine angular resolution.
A recent census of somewhat less distant VLBI-imaged radio quasars at $z>4.5$ [@Frey11] also suggests that the highest-redshift sample of compact radio sources is dominated by objects that do not resemble blazars, which are characterised by highly Doppler-boosted, compact, flat-spectrum radio emission. Note that the continuum radio spectrum of bright blazars continues to be flat at much higher frequencies, in many cases up to several hundred GHz, e.g. [@Planck; @Gere11]. If they exist, blazar-type compact flat-spectrum AGN remain to be discovered at $z$$\sim$6. Certainly, the case of the extremely distant radio quasars is far from being closed, as new discoveries are expected from on-going surveys, e.g. [@Morg12; @Find12], perhaps breaking the $z$=7 barrier soon.
[99]{}
Carilli C.L., Furlanetto S., Briggs F., et al. 2004, New Astron. Rev., 48, 1029
Falcke H., Körding E., Nagar N.M. 2004, New Astron. Rev., 48, 1157
Fan X., White R.L., Davis M., et al. 2000, AJ, 120, 1167
Fan X., Narayanan V.K., Lupton R.H., et al. 2001, AJ, 122, 2833
Findlay J.R., Sutherland W.J., Venemans B.P., et al. 2012, MNRAS, 419, 3354
Frey S., Gurvits L.I., Kellermann K.I., Schilizzi R.T., Pauliny-Toth I.I.K. 1997, A&A, 325, 511
Frey S., Mosoni L., Paragi Z., Gurvits L.I. 2003, MNRAS, 343, L20
Frey S., Paragi Z., Mosoni L., Gurvits L.I. 2005, A&A, 436, L13
Frey S., Gurvits L.I., Paragi Z., Gabányi K.É. 2008, A&A, 484, L39
Frey S., Paragi Z., Gurvits L.I., Cseh D., Gabányi K.É. 2010, A&A, 524, A83
Frey S., Paragi Z., Gurvits L.I., Gabányi K.É., Cseh D. 2011, A&A, 531, L5
Geréb K., Frey S. 2011, Adv. Space Res., 48, 334
Gurvits L.I., Kardashev N.S., Popov M.V., et al. 1992, A&A, 260, 82
Gurvits L.I., Schilizzi R.T., Barthel P.D., et al. 1994, A&A, 291, 737
Gurvits L.I., Frey S., Schilizzi R.T., et al. 2000, Adv. Space Res., 26, 719
Hodge J.A., Becker R.H., White R.L., Richards G.T., Zeimann G.R. 2011, AJ, 142, 3
Ivezić Ž., Menou K., Knapp G.R., et al. 2002, AJ, 124, 2364
Jiang L., Fan X., Brandt W.N., et al. 2010, Nature, 464, 380
McGreer I.D., Becker R.H., Helfand D.J., White R.L. 2006, ApJ, 652, 157
Momjian E., Carilli C.L., McGreer I.D. 2008, AJ, 136, 344
Morganson E., De Rosa G., Decarli R., et al. 2012, AJ, 143, 142
Mortlock D.J., Warren S.J., Venemans B.P., et al. 2011, Nature, 474, 616
Owsianik I., Conway J.E. 1998, A&A, 337, 69
Paragi Z., Frey S., Gurvits L.I., et al. 1999, A&A, 344, 51
Planck Collaboration, Aatrokoski A., et al. 2011, A&A, 536, A15
Readhead A.C.S. 1994, ApJ, 426, 51
Snellen I.A.G., Schilizzi R.T., Miley G.K., et al. 2000, MNRAS, 319, 445
Stern D., Hall P.B., Barrientos L.F., et al. 2003, ApJ, 596, L39
Stern D., Kirkpatrick J.D., Allen L.E., et al. 2007, ApJ, 663, 677
White R.L., Becker R.H., Helfand D.J., Gregg M.D. 1997, ApJ, 475, 479
Wilkinson P.N., Polatidis A.G., Readhead A.C.S., Xu W., Pearson T.J. 1994, ApJ, 432, L87
Willott C.J., Delorme P., Reylé C., et al. 2010, AJ, 139, 906
Wyithe J.S.B., Loeb A. 2002, Nature, 417, 923
Zeimann G.R., White R.L., Becker R.H., et al. 2011, ApJ, 736, 57
[^1]: The EVN is a joint facility of European, Chinese, South African, and other radio astronomy institutes funded by their national research councils. This work was supported by the European Community’s Seventh Framework Programme, Advanced Radio Astronomy in Europe, grant agreement no. 227290, and the Hungarian Scientific Research Fund (OTKA, grant no. K72515). We thank László Mosoni and Dávid Cseh for their contribution to the VLBI studies reviewed here.
|
---
author:
- |
A. Bajravani\
\
\
\
A. Rastegar$^*$\
\
---
> **Abstract**
>
> In this paper we try to introduce a good notion of smoothness for a functor. We consider properties and conditions from geometry and algebraic geometry which we expect a smooth functor should have.\
> [[**[Keywords:]{}**]{} Abelian Category, First Order Deformations, Multicategory, Tangent Category, Topologizing Subcategory.\
> [**[Mathematics Subject Classification:]{}**]{} 14A20, 14A15, 14A22.]{}
Introduction
============
Nowadays noncommutative algebraic geometry is in the focus of many basic topics in mathematics and mathematical physics. In these fields, any space under consideration is an abelian category, and a morphism between noncommutative spaces is a functor between abelian categories. So one may ask to generalize some aspects of morphisms between commutative spaces to morphisms between noncommutative ones. One of the important aspects in the commutative case is the notion of smoothness of a morphism, which can be stated in several languages, for example: by the lifting property as a universal language, by projectivity of relative cotangent sheaves as an algebraic language, and by inducing a surjective morphism on tangent spaces as a geometric language.
In this paper, in order to generalize the notion of smooth morphism to a functor, we propose three different approaches. A brief description of the first one is as follows: linear approximations of a space are important and powerful tools. They have geometric meaning and algebraic structure, such as the vector space of the first order deformations of a space. So it is legitimate to consider functors which preserve linear approximations. On the other hand, first order deformations are good candidates for linear approximations in categorical settings. These observations make it reasonable to consider functors which preserve first order deformations.\
The second one is motivated from both Schlessinger’s approach and simultaneous deformations. Briefly speaking, a simultaneous deformation is a deformation which deforms several ingredients of an object simultaneously. Deformations of morphisms with nonconstant target, and deformations of a couple $(X,\mathcal{L})$, in which $X$ is a scheme and $\mathcal{L}$ is a line bundle on $X$, are examples of such deformations. We also see that by this approach one can obtain a morphism of moduli spaces for some moduli families; we get this by fixing a universal ring for objects which correspond to each other under a smooth functor. Theorem \[Th2\] connects this notion to the universal ring of an object. In $3.1$ and $3.2$ we describe the geometric setting and the usage of this approach, respectively.\
The third notion of smoothness comes from a basic reconstruction theorem of A. Rosenberg, influenced by ideas of A. Grothendieck. We think that this approach can be a source to translate other notions from the commutative case to the noncommutative one. In Remarks \[rem2\] and \[rem3\] we note that these three smoothness notions are independent of each other.\
Throughout this paper $\mathbf{Art}$ will denote the category of Artinian local $k$-algebras with residue field $k$. By $\mathbf{Sets}$ we denote the category of sets, whose morphisms are maps between sets. For two functors $F,G: \mathbf{Art} \rightarrow \mathbf{Sets}$, the following notion of smoothness for a morphism from $F$ to $G$ was introduced in [@M.; @Sch.]:\
\
A morphism $D:F\rightarrow G$ between covariant functors $F$ and $G$ is said to be a smooth morphism of functors if for any surjective morphism $\alpha:B\rightarrow A$, with $\alpha \in \operatorname{Mor}(\textbf{Art})$, the morphism $$F(B)\rightarrow F(A)\underset{G(A)}{\times}G(B)$$ is a surjective map in $\mathbf{Sets}$.\
Note that this notion of smoothness is a notion for morphisms between special functors, i.e. functors from the category $\mathbf{Art}$ to the category $\mathbf{Sets}$, while the concepts for smoothness which we introduce in this paper are notions for functors, but not for morphisms between them.\
\
A functor $F:\textbf{Art}\rightarrow \mathbf{Sets}$ is said to be a deformation functor if it satisfies Definition 2.1 of [@M.; @Man.]. For a fixed field $k$, the schemes in this paper are schemes over $\operatorname{Spec}(k)$ unless stated otherwise.
First Smoothness notion and some examples
=========================================
[**1.1 Definition:**]{} Let $M$ and $C$ be two categories. We say that the category $C$ is a multicategory over $M$ if there exists a functor $T:C\rightarrow M$ such that for any object $A$ of $M$, $T^{-1}(A)$ is a full subcategory of $C$.\
Let $C$ and $\overline{C}$ be two multicategories over $M$ and $\overline{M}$ respectively. A morphism of multicategories $C$ and $\overline{C}$ is a couple $(u,\nu)$ of functors, with $u:C \rightarrow \overline{C}$ and $\nu:M\rightarrow \overline{M}$ such that the following diagram is commutative:\
$$\begin{array}{ccccc}
C &\overset{T}\rightarrow&M \\
u \downarrow& & \downarrow \nu\\
\overline{C}& \rightarrow & \overline{M}\\
\end{array}$$\
The category of modules over the category of rings and the category of sheaves of modules over the category of schemes are examples of multicategories.\
[**1.2 Definition:**]{} For a $S$-scheme $X$ and $A\in \mathbf{Art}$, we say that $\mathcal{X}$ is a $S$-deformation of $X$ over $A$ if there is a commutative diagram: $$\begin{array}{ccccc}
X & \rightarrow & \mathcal{X}\\
\downarrow & & \downarrow \\
S & \rightarrow & S\underset{k}{\times}A \\
\end{array}$$ in which $X$ is a closed subscheme of $\mathcal{X}$, the scheme $\mathcal{X}$ is flat over $S\underset{k}{\times}A$ and one has $X \cong S\underset{S\underset{k}{\times}A}{\times}\mathcal{X}$.\
Note that in the case $S=\operatorname{Spec}(k)$, we would have the usual deformation notion and as in the usual case the set of isomorphism classes of first order $S$-deformations of $X$ is a $k$-vector space. The addition of two deformations $(\mathcal{X}_{1},\mathcal{O}_{\mathcal{X}_{1}})$ and $(\mathcal{X}_{2},\mathcal{O}_{\mathcal{X}_{2}})$ is denoted by $(\mathcal{X}_{1}\underset{X}{\bigcup}\mathcal{X}_{2},\mathcal{O}_{\mathcal{X}_{1}}\underset{\mathcal{O}_{X}}{\times}\mathcal{O}_{\mathcal{X}_{2} })$.\
[**1.3 Definition:**]{} [**i)**]{} Let $C$ be a category. We say $C$ is a category with enough deformations, if for any object $c$ of $C$, one can associate a deformation functor. We will denote the associated deformation functor of $c$, by $D_{c}$. Moreover for any $c\in \operatorname{Obj}(C)$ let $D_{c}(k[\epsilon])$ be the tangent space of $c$, where $k[\epsilon]$ is the ring of dual numbers.\
[**ii)**]{} Let $C_{1}$ and $C_{2}$ be two multicategories with enough deformations over $\operatorname{Sch}/k$, and $(F,id)$ be a morphism between them. We say $F$ is a smooth functor if it has the following properties:\
[**1 :**]{} For any object $M$ of $C_{1}$, if $M_{1}$ is a deformation of $M$ over $A$ in $C_{1}$, then $F(M_{1})$ is a deformation of $F(M)$ over $A$ in $C_{2}$.\
[**2 :**]{} The map $$\begin{array}{ccc}
D_{M}(k[\varepsilon])&\rightarrow&D_{F(M)}(k[\varepsilon])\\
\mathcal{X}&\mapsto&F(\mathcal{X})
\end{array}$$ is a morphism of tangent spaces.\
The following are examples of categories with enough deformations:\
1) Category of schemes over a field $k$.\
2) Category of coherent sheaves on a scheme $X$.\
3) Category of line bundles over a scheme.\
4) Category of algebras over a field $k$.\
We will need the following lemma to present an example of smooth functors:
\[lem1.1\] Let $X$, $X_{1}$, $X_{2}$ and $\mathcal{X}$ be schemes over a fixed scheme $S$. Assume that the following diagram of morphisms between schemes is a commutative diagram.\
$$\begin{array}{ccc}
X & \overset{i_1}{\longrightarrow} & X_1\\
\downarrow & & \downarrow g\\
X_2 & \underset{i_2}{\longrightarrow} & \mathcal{X}
\end{array}$$
\
If $i_{1}$ is a homeomorphism onto its image, then so is $i_2$.
[ See Lemma $(2.5)$ of [@K.; @Sch.]. ]{}
\[exam0\] Let $Y$ be a flat scheme over $S$. Then the fibered product with $Y$ over $S$ defines a smooth functor. More precisely, the functor: $$\begin{array}{ccc}
F:\operatorname{Sch}/S&\rightarrow&\operatorname{Sch}/Y\\
X&\mapsto&X\underset{S}{\times}Y
\end{array}$$ is smooth.
Let $X$ be a closed subscheme of $\mathcal{X}$. Then $X\underset{S}{\times}Y$ is a closed subscheme of $\mathcal{X}\underset{S}{\times}Y$. To get the flatness of $\mathcal{X}\underset{S}{\times}Y$ over $S\underset{k}{\times}A$, it suffices to have flatness of $Y$ over $S$, which holds by assumption. It can also be verified easily that the isomorphism: $$(\mathcal{X}\underset{S}{\times}Y)\underset{S\underset{k}{\times}A}{\times}S\cong X\underset{S}{\times}Y$$ is valid. Therefore $\mathcal{X}\underset{S}{\times}Y$ is a $S$-deformation of $X\underset{S}{\times}Y$ if $\mathcal{X}$ is such a deformation of $X$. This verifies the first condition of item $(\mathbf{ii})$ of definition 1.3. To prove the second condition we need the following:
\[lem1.2\] Let $Y$, $X_{1}$ and $X_{2}$ be $S$-schemes. Assume that $X$ is a closed subscheme of $X_{1}$ and $X_{2}$. Then we have the following isomorphism:
$(X_{1}\underset{X}{\bigcup} X_{2})\underset{S}{\times}Y\cong
(X_{1}\underset{S}{\times}Y)\underset{X\underset{S}{\times}Y}{\bigcup}(X_{2}\underset{S}{\times}Y)$.
For simplicity we set: $$X_{1}\underset{X}{\cup}X_{2}=\mathcal{X} \qquad , \qquad
(X_{1}\underset{S}{\times}Y)\underset{X\underset{S}{\times}Y}{\bigcup}(X_{2}\underset{S}{\times}Y)=\mathcal{Z}$$ By universal property of $\mathcal{Z}$ we have a morphism $\theta:\mathcal{Z}\rightarrow\mathcal{X}\underset{S}{\times}Y$. We prove that $\theta$ is an isomorphism. Let $i_{1}:X_{1}\rightarrow \mathcal{X}$, $i_{2}: X_{2}\rightarrow \mathcal{X}$, $j_{1}:X_{1}\underset{S}{\times}Y\rightarrow \mathcal{Z}$ and $j_{2}:X_{2}\underset{S}{\times}Y\rightarrow \mathcal{Z}$ be the inclusion morphisms. Set theoretically we have: $$\begin{array}{cccc}
j_{1}(X_{1}\underset{S}{\times}Y)\bigcup j_{2}(X_{2}\underset{S}{\times}Y)&=&\mathcal{Z}& \qquad(\operatorname{I})\\
i_{1}(X_{1})\bigcup i_{2}(X_{2})&=&\mathcal{X} & \qquad(\operatorname{II})
\end{array}$$ Now consider the following commutative diagrams:
0.50mm
(70,100)(0,40) (10,100)[(0,0)\[cc\][$X$]{}]{} (40,130)[(0,0)\[cc\][$X_1$]{}]{} (70,100)[(0,0)\[cc\][$\mathcal{X}$]{}]{} (40,70)[(0,0)\[cc\][$X_2$]{}]{} (34.75,127.25)[(1,1)[.14]{}]{} (13.75,103.5)(.067307692,.076121795)[312]{}[(0,1)[.076121795]{}]{} (67.25,105.75)[(1,-1)[.14]{}]{} (45.5,127.5)(.067337461,-.067337461)[323]{}[(0,-1)[.067337461]{}]{} (35.75,74.75)[(1,-1)[.14]{}]{} (13.5,97)(.067424242,-.067424242)[330]{}[(0,-1)[.067424242]{}]{} (66.25,97.5)[(1,1)[.14]{}]{} (44.5,74)(.067337461,.072755418)[323]{}[(0,1)[.072755418]{}]{} (20,120)[(0,0)\[cc\][$f$]{}]{} (58,120)[(0,0)\[cc\][$i_1$]{}]{} (58,80)[(0,0)\[cc\][$i_2$]{}]{} (20,80)[(0,0)\[cc\][$g$]{}]{}
\
.500mm
(138.5,100)(0,20) (23,77)[(0,0)\[cc\][$X\underset{S}{\times}Y$]{}]{} (57,113)[(0,0)\[cc\][$X_1\underset{S}{\times} Y$]{}]{} (57,42)[(0,0)\[cc\][$X_2\underset{S}{\times}Y$]{}]{} (130,115)[(0,0)\[cc\][$\mathcal{Z}$]{}]{} (130,42)[(0,0)\[cc\][$\mathcal{X}\underset{S}{\times}Y$]{}]{} (51.5,109)[(1,1)[.07]{}]{} (27.25,84.75)(.0337078652,.0337078652)[700]{}[(0,1)[.0337078652]{}]{} (51.5,49.3)[(1,-1)[.07]{}]{} (26.5,74.5)(.0337273992,-.0337273992)[700]{}[(0,-1)[.0337273992]{}]{} (120,115.5)[(1,0)[.07]{}]{} (75,115.5)[(1,0)[45]{}]{} (118,45.25)[(1,0)[.07]{}]{} (75,45.25)[(1,0)[40]{}]{} (129.5,51.25)[(0,-1)[.07]{}]{} (129.75,110.5)(-.03125,-7.40625)[8]{}[(0,-1)[7.40625]{}]{} (121.5,52.5)[(1,-1)[.07]{}]{} (62.5,111)(.0340253749,-.0337370242)[1734]{}[(1,0)[.0340253749]{}]{} (122.25,110)[(1,1)[.07]{}]{} (62.5,50)(.0337380011,.0338791643)[1771]{}[(0,1)[.0338791643]{}]{} (35,100)[(0,0)\[cc\][$g_1$]{}]{} (90,121)[(0,0)\[cc\][$j_1$]{}]{} (35,57.75)[(0,0)\[cc\][$g_2$]{}]{} (90,36)[(0,0)\[cc\][$h$]{}]{} (138,80)[(0,0)\[cc\][$\theta$]{}]{} (108,70)[(0,0)\[cc\][$e$]{}]{} (75,70)[(0,0)\[cc\][$j_2$]{}]{}
Let $z\in \mathcal{X}\underset{S}{\times}Y$, $\alpha=P_{\mathcal{X}}(z)\in \mathcal{X}$ and $\beta=P_{Y}(z)\in Y$, where $P_{\mathcal{X}}$ and $P_{Y}$ are the first and second projections from $\mathcal{X} \underset{S}{\times}Y$ to $\mathcal{X}$ and $Y$, respectively. By relation $(\operatorname{II})$ one has $ \alpha\in i_{1}(X_{1})$ or $ \alpha\in i_{2}(X_{2})$. If $\alpha=i_{1}(\alpha_{1})\in i_{1}(X_{1})$, then $\alpha_{1}$ and $\beta$ map to the same element of $S$ under $\eta_{X_{1}}$ and $\eta_{Y}$, where $\eta_{X_{1}}:X_{1}\rightarrow S$ and $\eta_{Y}:Y\rightarrow S$ are the structure maps making $X_{1}$ and $Y$ schemes over $S$. Therefore there exists an element $\gamma$ in $X_{1}\underset{S}{\times}Y$ such that $\overline{P}_{X_{1}}(\gamma)=\alpha_{1}$ and $\overline{P}_{Y}(\gamma)=\beta$, where $\overline{P}_{X_{1}}$ and $\overline{P}_{Y}$ are the first and second projections from $X_{1}\underset{S}{\times}Y$ to $X_{1}$ and $Y$, respectively. By the universal property of fibered products, $j_{1}(\gamma)$ belongs to $\mathcal{Z}$ and $\theta(j_{1}(\gamma))=z$. The proof in the case $\alpha \in i_{2}(X_{2})$ is similar. This shows that $\theta$ is surjective.\
For the injectivity of $\theta$, assume that $\theta(z_{1})=\theta(z_{2})$. The relation $(\operatorname{I})$ implies that $z_{1}$ and $z_{2}$ belong to $\operatorname{im}(j_{1})\bigcup \operatorname{im}(j_{2})$. Set $z_{1}=j_{1}(c_{1})$ and $z_{2}=j_{2}(c_{2})$. There are two cases: if $z_{1}, z_{2} \in \operatorname{im}(j_{1})\cap \operatorname{im}(j_{2})$, then Lemma \[lem1.1\] implies that $e(c_{1})\neq e(c_{2})$ whenever $c_{1}\neq c_{2}$. Now by the commutativity of the subdiagram:
.7500mm
(30,30)(30,90) (20,115)[(0,0)\[cc\][$X_1\underset{S}{\times}Y$]{}]{} (70,114)[(0,0)\[cc\][$\mathcal{X} \underset{S}{\times}Y$]{}]{} (70,82)[(0,0)\[cc\][$\mathcal{Z}$]{}]{} (59,115.5)[(1,0)[.07]{}]{} (32.75,115.5)[(1,0)[26.25]{}]{} (70.25,108)[(0,1)[.07]{}]{} (70.25,86.5)[(0,1)[21.5]{}]{} (65.75,86.25)[(3,-2)[.07]{}]{} (28,110.25)(.0530196629,-.0337078652)[712]{}[(1,0)[.0530196629]{}]{} (39.25,95.25)[(0,0)\[cc\][$j_1$]{}]{} (75.25,98.25)[(0,0)\[cc\][$\theta$]{}]{}
we have $\theta(z_{1})\neq \theta(z_{2})$ when $z_{1}\neq z_{2}$.\
Otherwise assume that $z_{1}\in \operatorname{im}(j_{1})$ and $z_{2}\in \operatorname{im}(j_{2})- \operatorname{im}(j_{1})$. In this case it is easy to see that $i_{1}\overline{P}_{X_{1}}(c_{1})=i_{2}q_{2}(c_{2})$, where $q_{2}$ is the first projection from $X_{2}\underset{S}{\times}Y$ to $X_{2}$. Since $\mathcal{X}$ is the fibered sum of $X_{1}$ and $X_{2}$, there exists an element $x\in X$ such that $i_{1}f(x)=i_{2}g(x)$, $f(x)=\overline{P}_{X_{1}}(c_{1})$ and $g(x)=q_{2}(c_{2})$.\
Set $y=p_{2}e(c_{1})$, where $p_{2}$ is the second projection from $\mathcal{X}\underset{S}{\times}Y$ to $Y$. By a diagram chase we see that $x$ and $y$ map to the same element of $S$. This implies that there exists an element $\epsilon$ in $X\underset{S}{\times}Y$ which is mapped to $x$ and $y$ by the first and second projections, respectively. It is also easy to see that the equalities $g_{1}(\epsilon)=c_{1}$ and $g_{2}(\epsilon)=c_{2}$ hold. Since $\mathcal{Z}$ is the fibered sum of $X_{1}\underset{S}{\times}Y$ and $X_{2}\underset{S}{\times}Y$ over $X\underset{S}{\times}Y$, we have $z_{1}=z_{2}$, which means that $\theta$ is injective. Together with the surjectivity of $\theta$, this implies that $\theta$ is bijective. The continuity of $\theta$ and of its inverse follows by a diagram chase.\
Finally we should prove that $\mathcal{O}_{\mathcal{X}\underset{S}{\times}Y}\cong \mathcal{O}_{\mathcal{Z}}$. Since the claim is local, it suffices to prove it for affine schemes. Let $\mathcal{X}$ be an affine scheme; then $X_{1}$, $X_{2}$ and $X$ are also affine, being closed subschemes of $\mathcal{X}$, each defined by a nilpotent sheaf of ideals. Set $\mathcal{X}=\operatorname{Spec}(A)$, $X_{1}=\operatorname{Spec}(A_{1})$, $X_{2}=\operatorname{Spec}(A_{2})$, $X=\operatorname{Spec}(A_{0})$, $Y=\operatorname{Spec}(B)$ and $S=\operatorname{Spec}(C)$. The isomorphism $\mathcal{O}_{\mathcal{X}\underset{S}{\times}Y}\cong \mathcal{O}_{\mathcal{Z}}$ reduces to the following isomorphism: $$(A_{1}\underset{A_{0}}{\times}A_{2})\underset{C}{\otimes}B\cong
(A_{1}\underset{C}{\otimes}B)\underset{A_{0}\underset{C}{\otimes}B}{\times}(A_{2}\underset{C}{\otimes}B).$$ Define a morphism as follows: $$\begin{array}{ccc}
d:(A_{1}\underset{A_{0}}{\times}A_{2})\underset{C}{\otimes}B&\rightarrow&
(A_{1}\underset{C}{\otimes}B)\underset{A_{0}\underset{C}{\otimes}B}{\times}(A_{2}\underset{C}{\otimes}B)\\
d((a_{1},a_{2})\otimes b)&=&(a_{1}\otimes b,a_{2}\otimes b).
\end{array}$$ By a simple commutative algebra argument it can be shown that this is in fact an isomorphism. This completes the proof of the lemma.
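One way to make this commutative algebra argument explicit is the following sketch, which uses that $B$ is flat over $C$ (this holds in the application to example \[exam0\], since there $Y$ is flat over $S$). The fibered product of rings is a kernel, $$A_{1}\underset{A_{0}}{\times}A_{2}=\ker\big(A_{1}\oplus A_{2}\rightarrow A_{0}\big),\qquad (a_{1},a_{2})\mapsto \overline{a}_{1}-\overline{a}_{2},$$ and the sequence $0\rightarrow A_{1}\underset{A_{0}}{\times}A_{2}\rightarrow A_{1}\oplus A_{2}\rightarrow A_{0}\rightarrow 0$ is exact because $A_{1}\rightarrow A_{0}$ is surjective. Tensoring it with the flat $C$-module $B$ preserves exactness, so $$(A_{1}\underset{A_{0}}{\times}A_{2})\underset{C}{\otimes}B\cong\ker\big((A_{1}\underset{C}{\otimes}B)\oplus(A_{2}\underset{C}{\otimes}B)\rightarrow A_{0}\underset{C}{\otimes}B\big)=(A_{1}\underset{C}{\otimes}B)\underset{A_{0}\underset{C}{\otimes}B}{\times}(A_{2}\underset{C}{\otimes}B),$$ and this isomorphism is exactly the map $d$. Without flatness one still obtains surjectivity of $d$ from right exactness of the tensor product.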
This lemma shows that the fibered product functor induces an additive homomorphism on tangent spaces. To check linearity with respect to scalar multiplication, take an element $a$ of the field $k$. Multiplication by $a$ is a ring homomorphism on $D$. This homomorphism induces a morphism from $S\underset{k}{\times}D$ to $S\underset{k}{\times}D$, and scalar multiplication on $t_{D_{X}}$ comes from the composition of this map with $\pi$. In other words, this gives a map from $\mathcal{X}\underset{S}{\times}Y$ into $\mathcal{X}\underset{S}{\times}Y$. Together, these give the linearity of the homomorphism induced by $F$ with respect to scalar multiplication.\
This observation, together with Lemma \[lem1.2\], gives the smoothness of the fibered product functor.
\[lem1.3\] Let $X$ and $Y$ be arbitrary schemes, let $\eta$, $\eta_{1}$, $\eta_{2}$ be sheaves of $\mathcal{O}_{X}$-modules on $X$ equipped with morphisms $h:\eta_{1}\rightarrow\eta$ and $g:\eta_{2}\rightarrow\eta$, and let $\rho$, $\rho_{1}$, $\rho_{2}$ be sheaves of $\mathcal{O}_{Y}$-modules on $Y$ equipped with morphisms $\rho_{1}\rightarrow\rho$ and $\rho_{2}\rightarrow\rho$. Then for any morphism $f:X\rightarrow Y$ we have the following isomorphisms: $$\begin{array}{ccc}
f_{*}(\eta_{1}\underset{\eta}{\times}\eta_{2})&\cong &f_{*}(\eta_{1}) \underset{f_{*}(\eta)}{\times}
f_{*}(\eta_{2}) \\
f^{*}(\rho_{1}\underset{\rho}{\times}\rho_{2})&\cong&
f^{*}(\rho_{1})\underset{f^*(\rho)}{\times}
f^{*}(\rho_{2}).
\end{array}$$
[ For the first isomorphism, it is enough to consider the definition of direct image of sheaves.\
To prove the second one, assume that $(M_{i})_{i\in I}, (N_{i})_{i\in I}$ and $(P_{i})_{i \in I}$ are direct systems of modules over a directed set $I$. We have to prove that $$\lim_{i\in I}(M_{i}\underset{P_{i}}{\times}N_{i})\cong (\lim_{i\in
I}(M_{i}))\underset{(\lim_{i\in I}(P_{i}))}{\times}(\lim_{i\in I}(N_{i})).$$ The above isomorphism can be proved by an elementary calculation using basic properties of direct limits.]{}
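The key point of this elementary calculation can be sketched as follows. Writing $u_{i}:M_{i}\rightarrow P_{i}$ and $v_{i}:N_{i}\rightarrow P_{i}$ for the structure maps, the fibered product of modules is the kernel of a difference map, $$M_{i}\underset{P_{i}}{\times}N_{i}=\ker\big(M_{i}\oplus N_{i}\rightarrow P_{i}\big),\qquad (m,n)\mapsto u_{i}(m)-v_{i}(n),$$ and direct limits over a directed set are exact, hence commute with finite direct sums and kernels: $$\lim_{i\in I}\ker\big(M_{i}\oplus N_{i}\rightarrow P_{i}\big)\cong \ker\big(\lim_{i\in I}M_{i}\oplus \lim_{i\in I}N_{i}\rightarrow \lim_{i\in I}P_{i}\big).$$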
\[exam1\] Let $f:X\rightarrow Y$ be a flat morphism of schemes. Then $f_{*}$ and $f^{*}$ are smooth functors.
In fact, let $\eta$ be a coherent sheaf on $X$ and let $\eta_{1}\in \operatorname{Coh}(X\underset{k}{\times}D)$ be a deformation of $\eta$. By these assumptions we have: $$(f_{*}(\eta_{1}))\underset{D}{\otimes}k=f_{*}(\eta_{1}\underset{D}{\otimes}k)=f_{*}(\eta).$$ Moreover $f_{*}(\eta_{1})$ is flat over $D$, because $\eta_{1}$ is flat over $D$. This implies that $f_{*}$ satisfies the first condition of smoothness. The second one is the first isomorphism of Lemma \[lem1.3\]. Therefore $f_{*}$ is smooth. The smoothness of $f^{*}$ is proved similarly.
With this notion of smoothness at hand, we can generalize another aspect of geometry to categories.
[**1.9 Definition:**]{} Let $C$ be a category with enough deformations. We define the tangent category of $C$, denoted by $TC$, as follows: $$\begin{array}{ccc}
\operatorname{Obj}(TC)&:=&\underset{c\in\operatorname{Obj}(C)}{\bigcup} T_{c}C\\
\operatorname{Mor}_{TC}(\upsilon,\omega)&:=&\operatorname{Mor}(V,W)
\end{array}$$ where by $T_{c}C$ we mean the tangent space of $D_{c}$. Here $\upsilon$ and $\omega$ are first order deformations of $V$ and $W$, respectively.
\[rem1\] (i) It is easy to see that a smooth functor induces a covariant functor on the tangent categories.\
(ii) Let $C$ be an abelian category. Then its tangent category is also abelian.
The following is a well known suggestion of A. Grothendieck: instead of working with a space, it is enough to work with the category of quasi-coherent sheaves on this space. This suggestion was formalized and proved by P. Gabriel for noetherian schemes and, in its general form, by A. Rosenberg. To this end, Rosenberg associates a locally ringed space to an abelian category $A$. In a special case he obtains the following:
\[Th1.1\] Let $(X,\mathcal{O}_{X})$ be a locally ringed space and let $A=\operatorname{QCoh}(X)$. Then $$(\operatorname{Spec}(A),\mathcal{O}_{ \operatorname{Spec}(A)})=(X,\mathcal{O}_{X})$$ where $\operatorname{Spec}(A)$ is the ringed space which is constructed from an abelian category by A. Rosenberg.
[ See Theorem $(A.2)$ of [@A.; @L.; @R]. ]{}
The definition of the tangent category and Theorem \[Th1.1\] motivate the following questions, to which the authors have so far been unable to find a positive or negative answer.\
[**Question 1:**]{} For a fixed scheme $X$ consider $T\operatorname{QCoh}(X)$ and $TX$, the tangent category of the category of quasi-coherent sheaves on $X$ and the tangent bundle of $X$, respectively. Can $TX$ be recovered from $T\operatorname{QCoh}(X)$ by the Rosenberg construction?\
**Question 2:** Let $\mathcal{M}$ be a moduli family with moduli space $M$. Consider $\mathcal{M}$ as a category and consider its tangent category $T\mathcal{M}$. Is there a reconstruction from $T\mathcal{M}$ to $TM$?
Second Smoothness Notion
========================
**Definition 3.1:** Let $F:\operatorname{Sch}/k\rightarrow \operatorname{Sch}/k$ be a functor with the following property:\
For any scheme $X$ and any algebra $A\in \operatorname{Obj}(\textbf{Art})$, $F(\mathcal{X})$ is a deformation of $F(X)$ over $A$ whenever $\mathcal{X}$ is a deformation of $X$ over $A$.\
We say $F$ is smooth at $X$, if the morphism of functors $$\Theta_{X}:D_{X}\rightarrow D_{F(X)}$$ is a smooth morphism of functors in the sense of Schlessinger (See [@M.; @Sch.]). $F$ is said to be smooth if for any object $X$ of $\operatorname{Sch}/k$, the morphism of functors $\Theta_{X}$ is smooth.\
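For the reader's convenience, we recall the notion used here from [@M.; @Sch.]: a morphism of functors $\Theta:D\rightarrow E$ on $\mathbf{Art}$ is smooth if for every surjection $B\rightarrow A$ in $\mathbf{Art}$ the induced map $$D(B)\longrightarrow D(A)\underset{E(A)}{\times}E(B)$$ is surjective.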
The following lemma describes more properties of smooth functors.
\[lem2.1\] $(a)$ Assume that $C_{1}$, $C_{2}$ and $C_{3}$ are multicategories over the category $\operatorname{Sch}/k$. Let $F_{1}:C_{1}\rightarrow C_{2}$ and $F_{2}:C_{2}\rightarrow C_{3}$ be smooth functors in the sense of the first notion. Then so is their composition.\
$(b)$ Let $F_{1}:\operatorname{Sch}/k \rightarrow \operatorname{Sch}/k$ and $F_{2}:\operatorname{Sch}/k \rightarrow \operatorname{Sch}/k$ be smooth functors in the sense of the second notion. Then so is their composition.\
$(c)$ Let $F:\operatorname{Sch}/k \rightarrow \operatorname{Sch}/k$ and $G:\operatorname{Sch}/k \rightarrow \operatorname{Sch}/k$ be functors such that $F$ and $G\circ F$ are smooth in the sense of the second notion. Then $G$ is a smooth functor.\
$(d)$ Let $F,G,H: \operatorname{Sch}/k\rightarrow \operatorname{Sch}/k$ be smooth functors in the sense of the second notion with morphisms of functors $F\rightarrow G$ and $H\rightarrow G$ between them. Then the functor $F\underset{G}{\times}H$ is a smooth functor in the sense of the second notion.
Part $(a)$ of the lemma is trivial.\
$(b)$ Let $X\in \operatorname{Sch}/k$ and $B\rightarrow A$ be a surjective morphism in $\mathbf{Art}$. By smoothness of $F_{1}$, $F_{2}$ and by remark $2.4$ of [@M.; @Sch.], there exists a surjective map $$\Theta_{F_{2}(X),F_{2}\circ F_{1}(X)}: D_{F_{2}\circ F_{1}(X)}(B)\underset{D_{F_{2}\circ F_{1}(X)}(A)}{\times}D_{X}(A)\rightarrow D_{F_{1}(X)}(B)\underset{D_{F_{1}(X)}(A)}{\times}D_{X}(A)$$ such that $$\Theta_{X,F_{2}\circ F_{1}(X)}=\Theta_{F_{2}(X),F_{2}\circ F_{1}(X)}\circ\Theta_{X,F_{2}(X)}$$ where $\Theta_{X,F_{2}(X)}$ is the surjective map induced by the smoothness of $F_{2}$. From this equality it follows immediately that the map $\Theta_{X,F_{2}\circ F_{1}(X)}$ is surjective.\
$(c)$ For a scheme $X$ in the category $\operatorname{Sch}/k$ consider a surjective morphism $B\rightarrow A$ in $\mathbf{Art}$. By smoothness of $F$, the morphism $D_{X}\rightarrow D_{F(X)}$ is a surjective morphism of functors. Now apply Proposition $(2.5)$ of [@M.; @Sch.] to finish the proof.\
$(d)$ Let $X\in \operatorname{Sch}/k$ and $B\rightarrow A$ be a surjective morphism in $\mathbf{Art}$. Consider the following commutative diagram:
.7500mm
(30,30)(30,90) (20,115)[(0,0)\[cc\][$D_{X}$]{}]{} (70,115)[(0,0)\[cc\][$D_{F(X)}$]{}]{} (70,80)[(0,0)\[cc\][$D_{G(X)}$]{}]{} (59,115.5)[(1,0)[.07]{}]{} (32.75,115.5)[(1,0)[26.25]{}]{} (70.25,108)[(0,1)[.07]{}]{} (70.25,86.5)[(0,1)[21.5]{}]{} (61.75,86.25)[(3,-2)[.07]{}]{} (24,110.25)(.0530196629,-.0337078652)[712]{}[(1,0)[.0530196629]{}]{} (39.25,95.25)[(0,0)\[cc\][$$]{}]{} (75.25,98.25)[(0,0)\[cc\][$$]{}]{}
Since the morphisms of functors $D_{X}\rightarrow D_{F(X)}$ and $D_{X}\rightarrow D_{G(X)}$ are smooth morphisms of functors, Proposition $2.5(iii)$ of [@M.; @Sch.] implies that $D_{F(X)}\rightarrow D_{G(X)}$ is a smooth morphism of functors. Similarly $D_{H(X)}\rightarrow D_{G(X)}$ is a smooth morphism of functors. Again by $2.5(iv)$ of [@M.; @Sch.], the morphism of functors: $$D_{H(X)}\underset{D_{G(X)}}{\times}D_{F(X)}\rightarrow D_{H(X)}$$ is a smooth morphism of functors. Since in the diagram:
.7500mm
(30,30)(30,90) (25,115)[(0,0)\[cc\][$D_{X}$]{}]{} (81,113)[(0,0)\[cc\][$D_{H(X)}\underset{D_{G(X)}}{\times}D_{F(X)}$]{}]{} (81,80)[(0,0)\[cc\][$D_{H(X)}$]{}]{} (59,115.5)[(1,0)[.07]{}]{} (32.75,115.5)[(1,0)[26.25]{}]{} (80.25,108)[(0,1)[.07]{}]{} (80.25,86.5)[(0,1)[21.5]{}]{} (71.75,86.25)[(3,-2)[.07]{}]{} (34,110.25)(.0530196629,-.0337078652)[712]{}[(1,0)[.0530196629]{}]{} (39.25,95.25)[(0,0)\[cc\][$$]{}]{} (75.25,98.25)[(0,0)\[cc\][$$]{}]{}
the morphisms $D_{X}\rightarrow D_{H(X)}$ and $D_{H(X)}\underset{D_{G(X)}}{\times}D_{F(X)}\rightarrow D_{H(X)}$ are smooth morphisms of functors, part $(c)$ of this lemma implies that $D_{X}\rightarrow D_{H(X)}\underset{D_{G(X)}}{\times}D_{F(X)}$ is a smooth morphism of functors. This completes the proof.
\[rem22\] $\textbf{(i)}$ The same proof works to generalize part $(c)$ of lemma \[lem2.1\] as follows:\
$(\acute{c})$ Let $F:\operatorname{Sch}/k \rightarrow \operatorname{Sch}/k$ and $G:\operatorname{Sch}/k \rightarrow \operatorname{Sch}/k$ be functors with $G\circ F$ smooth and $F$ surjective on the level of deformations, in the sense that for any $X\in \operatorname{Sch}/k$ and any $A\in \operatorname{Obj}(\mathbf{Art})$ the morphism $D_{X}(A)\rightarrow D_{F(X)}(A)$ is surjective. Then $G$ is smooth.\
$\textbf{(ii)}$ One may ask for a criterion to determine the smoothness of a functor. We have not obtained a complete answer to this question, but the following fact answers it at least partially:\
A functor $F:\operatorname{Sch}/k \rightarrow \operatorname{Sch}/k$ is not smooth at $X$ if there exists an algebra $A\in \mathbf{Art}$ such that the map $D_{X}(A)\rightarrow D_{F(X)}(A)$ is not surjective (see [@M.; @Sch.]).
Theorem \[Th2\] relates the second smoothness notion to the hull of deformation functors. Recall that the hull of a functor is defined in [@M.; @Sch.]. We need the following:
\[lem2.2\] Let $F: \mathbf{Art} \rightarrow \operatorname{Sets}$ be a functor. Then its hulls, if they exist, are non-canonically isomorphic.
[ See Proposition $2.9$ of [@M.; @Sch.]. ]{}
\[Th2\] Let $F:\operatorname{Sch}/k\rightarrow \operatorname{Sch}/k$ be a functor such that for a scheme $X$ the functor $F$ has the following properties:\
$(a)$ $F(\mathcal{X})$ is a deformation of $F(X)$ if $\mathcal{X}$ is a deformation of $X$.\
$(b)$ The functor $F$ induces isomorphism on tangent spaces.\
Then $F$ is smooth at $X$ if and only if $(R,F(\xi))$ is a hull of $D_{F(X)}$ whenever $(R,\xi)$ is a hull of $D_{X}$.
[ To prove the theorem it is enough to apply parts $(b)$ and $(c)$ of Lemma \[lem2.1\] and Lemma \[lem2.2\] to the functors $$\Theta_{X}:D_{X}\rightarrow D_{F(X)} \quad,\quad h_{R,X}:h_{R}\rightarrow D_{X} \quad,\quad h_{R,F(X)}:h_{R}\rightarrow D_{F(X)}.$$ ]{}
For a scheme $X$, let $$\{\,\text{pairs }(\mathcal{X},\Omega_{\mathcal{X}/k})\ \text{ such that }\ \mathcal{X}\text{ is an infinitesimal deformation of }X\text{ over }A\,\}$$ be the set of isomorphism classes of fibered deformations of $X$.\
In the following example we use this notion of deformations of schemes.
\[exam2\] The functor defined by: $$\begin{array}{ccc}
F:\operatorname{Sch}/k &\rightarrow& \operatorname{QCoh}\\
F(X)&=&\Omega_{X/k}
\end{array}$$ is a smooth functor.
Note that if one considers deformations of $\Omega_{X/k}$ in the usual sense, the above functor is not smooth. The fibered deformation of $\Omega_{X/k}$ can be described as a simultaneous deformation of an object and of the differential forms on that object. This observation is also valid for $TX$ and $\omega_{X}$ in place of $\Omega_{X}$.
\[rem2\] The first and second smoothness notions are in general different. Note that a functor which is smooth in the sense of the second notion induces surjective maps on tangent spaces. Since the morphism induced on tangent spaces by the first notion of smoothness is not necessarily surjective, a functor which is smooth in the sense of the first notion is not necessarily smooth in the sense of the second. Conversely, a functor which is smooth in the sense of the second notion need not be smooth in the sense of the first: the map induced on tangent spaces by the second notion is not necessarily linear. It is easy to see that example \[exam2\] is smooth in both senses, whereas examples \[exam0\] and \[exam1\] are smooth only in the sense of the first notion.
A Geometric interpretation
--------------------------
Let $F$ be a smooth functor at $X$. By Theorem \[Th2\], $X$ and $F(X)$ have the same universal rings, and this can be interpreted as saying that we are deforming $X$ and $F(X)$ simultaneously. Therefore we have an algebraic language for simultaneous deformations. Example \[exam2\] can be interpreted as follows: we are deforming a geometric space together with an ingredient of that space, e.g. the structure sheaf of the space or its sheaf of relative differential forms, and these operations are smooth.
Relation with smoothness of a morphism
--------------------------------------
Let $\mathcal{M}$ be a moduli family of algebro-geometric objects with a variety $M$ as its fine moduli space, and suppose $Y(m)$ is the fiber of the universal family over $m\in M$. With these assumptions we have the following bijections: $$\begin{array}{ccc}
T_{m,M}&\cong&\mbox{Hom}(\operatorname{Spec}(k[\epsilon]),M)\\
&\cong&\{\mbox{classes of first order deformations of } Y(m) \}
\end{array}$$ In fact these bijections explain why deformations are important in geometric applications. Now suppose we have two moduli families $\mathcal{M}_{1}$ and $\mathcal{M}_{2}$ with varieties $M_{1}$ and $M_{2}$ as their fine moduli spaces. Regard $\mathcal{M}_{1}$ and $\mathcal{M}_{2}$ as categories and suppose there exists a smooth functor $F$ between them. In this setting, a morphism $M_{1}\rightarrow M_{2}$ induced by $F$ is a smooth morphism.
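As a toy illustration of the first bijection (for $M=\mathbb{A}^{n}$, which is of course not a fine moduli space of the above type, but exhibits the mechanism), a morphism $\operatorname{Spec}(k[\epsilon])\rightarrow \mathbb{A}^{n}$ is the same as a $k$-algebra map $$k[x_{1},\dots,x_{n}]\longrightarrow k[\epsilon],\qquad x_{j}\mapsto m_{j}+\epsilon v_{j},$$ i.e., a point $m=(m_{1},\dots,m_{n})$ together with a tangent vector $v=(v_{1},\dots,v_{n})\in T_{m}\mathbb{A}^{n}$.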
Third Smoothness Notion
=======================
This notion of smoothness is completely motivated by Rosenberg’s reconstruction theorem, Theorem $(A.2)$ of [@A.; @L.; @R]. For this notion of smoothness we do not use deformation theory.
**3.1 Definition:** Let $F:C_{1} \rightarrow C_{2}$ be a functor between abelian categories such that there exists a morphism $$f:\operatorname{Spec}(C_{1})\rightarrow \operatorname{Spec}(C_{2})$$ induced by the functor $F$. We say $F$ is a smooth functor if $f$ is a smooth morphism of schemes.
\[rem3\] $(a)$ Since this smoothness notion uses a language completely different from the two previous ones, it implies neither of them, and vice versa. We have not verified this claim in detail, but it is not reasonable to expect that this smoothness implies the previous ones, because deformation theory is not compatible with the Rosenberg construction. This observation, together with remark \[rem2\], shows that these three notions are independent of each other, each having a nice geometric or algebraic meaning in its own right.\
$(b)$ It seems that a functor of abelian categories induces a morphism of schemes only in rare cases. But the cases in which this does happen are important enough to be considered. Here we mention some of them.\
**(i)** Let $f:X \rightarrow \operatorname{Spec}(k)$ be a morphism of finite type between schemes. Then it can be shown that $f$ is induced by $$f_{*}:\operatorname{QCoh}(X) \rightarrow
\operatorname{QCoh}(\operatorname{Spec}(k))$$ via Rosenberg’s construction. This example is important because it can serve as a source of motivation for translating notions from the commutative case to the noncommutative one.\
**(ii)** The following result of Rosenberg is also worth noting:\
Let $A$ be an abelian category.\
(a) For any topologizing subcategory $T$ of $A$, the inclusion functor $T\rightarrow A$ induces an embedding $\operatorname{Spec}(T) \rightarrow
\operatorname{Spec}(A)$.\
(b) For any exact localization $Q:A \rightarrow A/S $ and for any $P \in
\operatorname{Spec}(A)$, either $P \in \operatorname{Obj}(S)$ or $Q(P)\in \operatorname{Spec}(A/S)$; hence $Q$ induces an injective map from $\operatorname{Spec}(A)-\operatorname{Spec}(S)$ to $\operatorname{Spec}(A/S)$.
[ See Proposition $(A.0.3)$ of [@A.; @L.; @R]. ]{}
**Acknowledgements:** The authors are grateful to the referee(s) for carefully reading the paper and for the notable remarks and valuable suggestions.
[99]{} J. Harris, I. Morrison, Moduli of Curves, Graduate Texts in Mathematics, Springer-Verlag, 1994. R. Hartshorne, Deformation Theory, Springer-Verlag, 2010. R. Hartshorne, Algebraic Geometry, Graduate Texts in Mathematics, Springer-Verlag, 1977. W. Lowen, M. Van den Bergh, Deformation theory of abelian categories, Trans. AMS **358**, no. 12, 2006, 5441-5483. M. Manetti, Extended deformation functors, arXiv:math.AG/9910071 v2, 2001. H. Matsumura, Commutative Ring Theory, Cambridge University Press, 1986. A. L. Rosenberg, Noncommutative schemes, Compositio Mathematica **112**, 1998, 93-125. M. Schlessinger, Functors of Artin rings, Trans. AMS **130**, 1968, 208-222. K. Schwede, Gluing schemes and a scheme without closed points, unpublished notes. E. Sernesi, An Overview of Classical Deformation Theory, notes from the seminar Algebraic Geometry 2000/2001, Univ. La Sapienza. E. Sernesi, Deformations of Algebraic Schemes, Grundlehren der mathematischen Wissenschaften, Vol. 334, Springer-Verlag, 2006.
---
abstract: 'Let $X=\Gamma \backslash D$ be a Mumford-Tate variety, i.e., a quotient of a Mumford-Tate domain $D=G({{\mathbb R}})/V$ by a discrete subgroup $\Gamma$. Mumford-Tate varieties are generalizations of Shimura varieties. We define the notion of a special subvariety $Y \subset X$ (of Shimura type), and formulate necessary criteria for $Y$ to be special. Our method consists in looking at finitely many compactified special curves $C_i$ in $Y$, and testing whether the inclusion $\bigcup_i C_i \subset Y$ satisfies certain properties. One of them is the so-called relative proportionality condition. In this paper, we give a new formulation of this numerical criterion in the case of Mumford-Tate varieties $X$. In this way, we give necessary and sufficient criteria for a subvariety $Y$ of $X$ to be a special subvariety in the sense of the André-Oort conjecture. We discuss in detail the important case where $X=A_g$, the moduli space of principally polarized abelian varieties.'
address: 'Universität Mainz, Fachbereich 08, Institut für Mathematik, 55099 Mainz, Germany'
author:
- Abolfazl Mohajer
- 'Stefan M[ü]{}ller-Stach'
- Kang Zuo
title: 'Special subvarieties in Mumford-Tate varieties'
---
Introduction
============
Griffiths domains [@cmp] are flag domains, i.e., quotients of the form $D=G({{\mathbb R}})/V$, where $G$ is a certain algebraic group and $V$ a compact stabilizer subgroup. Griffiths domains parametrize pure Hodge structures of given weight and Hodge numbers. Any moduli space ${\mathcal M}$ of smooth, projective varieties induces, after a choice of cohomological degree and a base point, a period map $${\mathcal P}: {\mathcal M} \rightarrow \Gamma \backslash D,$$ where $\Gamma$ is the monodromy group, i.e., the image of the fundamental group of ${\mathcal M}$ in $G({{\mathbb R}})$, a finitely generated, discrete subgroup.
In general, the image of the period map ${\mathcal P}$ is not surjective, but has image contained in quotients of so-called Mumford-Tate domains by discrete subgroups, see [@cmp Chap. 15] or [@ggk]:
After possibly replacing ${\mathcal M}$ by a finite, étale cover, the period map ${\mathcal P}$ factors as $${\mathcal P}: {\mathcal M} \longrightarrow \Gamma^{nc} \backslash D(M^{nc}) \times \Gamma^{c} \backslash D(M^{c}) \times D(M^f),$$ into a product of quotients of domains of non-compact, compact or flat (i.e., constant) type. Here $M^\bullet$ denotes a Mumford-Tate group of the respective type. The composition with the third projection is constant. In addition, for each $x_1 \in \Gamma^{nc} \backslash D(M^{nc})$ and $x_3 \in D(M^f)$, one has that ${\rm Im}(\mathcal{P}) \cap (x_1 \times \Gamma^{c} \backslash D(M^{c}) \times x_3)$ is finite.
This theorem asserts that the “non-compact part” of the period map is the essential one. The (derived) Mumford-Tate group of the Hodge structure of a general element in ${\mathcal M}$ contains the algebraic monodromy group, i.e., the Zariski closure of the topological monodromy group, as a normal subgroup by a theorem of Y. André [@cmp Prop. 15.8.5].
[**In the rest of this paper, we will assume that $G$ is of non-compact type, ${{\mathbb Q}}$-simple and adjoint.**]{} It is not difficult to reduce to this case. Only in rare cases, $D$ itself is Hermitian symmetric [@cmp]. In these cases, $\Gamma \backslash D$ is a connected component of a Shimura variety under some arithmetic condition on $\Gamma$ [@deligne; @moonen]. An important example is the moduli space $A_g=\Gamma \backslash {{\mathbb H}}_g$ of principally polarized abelian varieties of dimension $g$ with some level structure induced by $\Gamma$. Shimura varieties contain distinguished subvarieties which are called special subvarieties. The zero-dimensional special subvarieties are the CM points, i.e., the points corresponding to Hodge structures with commutative Mumford-Tate group. Positive dimensional special subvarieties are more difficult to understand. However, the André-Oort conjecture claims that special subvarieties of Shimura varieties are precisely the loci which are the Zariski closures of sets of CM points. This conjecture has recently attracted a lot of interest, see the work of Edixhoven, Klingler, Pila, Ullmo, Tsimerman, Yafaev and others [@edixhoven-yafaev; @klingler-yafaev; @pila; @ullmo-yafaev; @tsimerman]. In 2015, Tsimerman [@tsimerman] gave a proof of the André-Oort conjecture for $A_g$ using an averaged version of a conjecture of Colmez proved by Yuan and Zhang [@zhang].
Our aim is to give sufficient and effective Hodge theoretic criteria for a subvariety of $X=\Gamma \backslash D$ to be a special subvariety in some precise sense.
In [@mvz09] and [@mz11], we have studied special subvarieties in Shimura varieties of unitary or orthogonal type. Our method consisted of characterizing special subvarieties by a relative proportionality principle. Hence, the main goal of the present work is to generalize this principle to quotients of Mumford-Tate domains.
Results in the case $X=A_g$ {#results-in-the-case-xa_g .unnumbered}
---------------------------
For the reader’s convenience, we first study the case where $X=A_g$. Let $A_g=\Gamma \backslash {{\mathbb H}}_g$ be a smooth model, i.e., we require that $\Gamma$ is torsion-free. We choose a smooth toroidal compactification $\overline{A}_g$ as constructed by Mumford et al. [@amrt chap. III], such that the boundary $S \subset \overline A_g$ is a divisor with normal crossings. Our results do not depend on such choices. We consider a smooth projective subvariety $Y \subset \overline A_g$ meeting $S$ transversely and define $Y^0:=Y \cap A_g$. Throughout this paper we denote subvarieties contained in the locally symmetric part $A_g$ of $\overline A_g$ with a superscript $0$.
Such a subvariety $Y$ is called special, if it is an irreducible component of a Hecke translate of the image of some morphism $Sh_K(G,X)\rightarrow A_g=Sh_{K(N)}(GSp(2g),{{\mathbb H}}_g^{\pm})$, defined by an inclusion of a Shimura subdatum $(G,X)\subset(GSp(2g),{{\mathbb H}}_g^{\pm})$ together with some compact open subgroup $K\subset G(\mathbb{A}_f)$ such that $K\subset K(N)$. See Section \[shimura\] for details about Shimura varieties and special subvarieties.
We look for necessary and sufficient effective criteria, such that $Y^0$ is a special subvariety of minimal dimension containing a union $\bigcup_{i \in I} C_i^0$ of finitely many special curves $C_i$. Already in our previous work [@mvz09] and [@mz11], we found a necessary condition for $Y^0$ to be special, provided a compactified special curve $C \subset \overline{A}_g$ is contained in $Y$:
[ ]{}\
Let $C \subset Y \subset \overline{A}_g$ be an irreducible special curve with logarithmic normal bundle $N_{C/Y}$, and $3$-step Harder-Narasimhan filtration $0 \subset N^0_{C/Y} \subset N^1_{C/Y} \subset N^2_{C/Y}=N_{C/Y}$ (both notions are explained in Section \[relprop\]). Then one has the relative proportionality inequality $$\deg N_{C/Y} \leq \frac{{{\rm rank}}(N^1_{C/Y})+{{\rm rank}}(N^0_{C/Y})}{2} \cdot \deg T_C(-\log S_C).$$ If $C$ and $Y$ are special subvarieties, then equality holds.
For curves $C$ on Hilbert modular surfaces or Picard modular surfaces, this condition reduces to a simple numerical criterion involving intersection numbers, see [@mvz09] and [@mz11].
Suppose we are given a finite number of compactified special curves $C_i$ in $\overline{A}_g$, contained in some irreducible subvariety $Y$ of dimension $\dim(Y) \ge 2$. We assume for simplicity that $Y$ and all $C_i$ intersect the boundary $S$ of $A_g$ transversely. Fix a base point $y \in Y^0 \subset A_g$ contained in the union of all $C_i$ and assume for simplicity that the union $\bigcup_{i \in I} C_i^0$ is connected. Recall that over each point $y \in A_g$, there is an associated polarized Hodge structure ${{\mathbb V}}_y$.
Our first result is:
\[Theorem1\] Let $Y^0$ be a smooth, algebraic subvariety of $A_g$ such that $Y^0$ has unipotent monodromies at infinity. Assume the following:\
(BIG) The ${{\mathbb Q}}$-Zariski closure in $G={\rm Sp}(2g)$ of the monodromy representation of $\pi_1(\bigcup_{i \in I} C_i^0,y)$ equals the ${{\mathbb Q}}$-Zariski closure of the representation of $\pi_1(Y^0,y)$.\
(LIE) If $H=H_y$ is the largest ${{\mathbb Q}}$-algebraic group fixing all infinitesimally parallel Hodge classes in tensor powers of ${{\mathbb V}}_y$ and its dual over the point $y$, then one has $\dim H/K \le \dim Y$ for the period domain $H/K \subset {{\mathbb H}}_g$ associated to $H$.\
(RPC) All compactified special curves $C_i$ satisfy relative proportionality.\
Then, $Y^0$ is a special subvariety of $A_g$.
In addition, the proof of the theorem implies that the group $H=H_y$ is of Hermitian type, $K$ is a maximal compact stabilizer group, and in the Hodge decomposition $\mathfrak{h}_{{\mathbb C}}=\mathfrak{h}^{-1,1} \oplus \mathfrak{h}^{0,0} \oplus \mathfrak{h}^{1,-1}$ of the real Lie algebra $\mathfrak{h}={\rm Lie} \, H({{\mathbb R}})$, one has $\mathfrak{h}^{-1,1}=T_{Y^0,y}$ for the holomorphic tangent space of $Y^0$ at $y$. In particular, the group $H_y$ does not depend on the base point $y$ in a crucial way.
More can be said about the group $H=H_y:$ In fact, in general the holomorphic tangent space $T_{Y^0,y}$ is a subspace of the holomorphic tangent space of $H/K$ at $y$ and $H/K \subset {{\mathbb H}}_g$ is the smallest Mumford-Tate subdomain which contains $y$ and such that the holomorphic tangent space of $H/K$ at $y$ contains $T_{Y^0,y}$. Therefore, one always has $\dim H/K\geq \dim Y^0$, and condition (LIE) implies that $\dim H/K=\dim Y^0$.
Theorem \[Theorem1\] generalizes previous work in [@mvz09] and [@mz11], which was restricted to special subvarieties in unitary or orthogonal Shimura varieties, hence the case of rank $\le 2$. There are explicit examples of connected cycles $\cup_i C_i^0$ of special curves $C_i^0$ in $A_g$ for $g \ge 2$, for which the minimal enveloping special subvariety of $\cup_i C_i^0$ is $A_g$ and not smaller. This shows that condition (LIE) is necessary. We saw already above that (RPC) is also necessary. Condition (BIG) is probably not a necessary condition. The three conditions are not independent, but their relations are not fully understood. In the course of the proof, we will see that condition (BIG) together with (RPC) implies that the group $H=H_y$ coincides, up to taking connected components and the derived group, with the Mumford-Tate group $MT({{\mathbb V}}_{Y^0})_y$ stabilizing all parallel Hodge tensors over $Y^0$ at the base point $y$. Since parallel Hodge tensors are infinitesimally parallel, one always has $H_y \subset MT({{\mathbb V}}_{Y^0})_y$. The condition $$\textrm{(M-T)} \quad \quad H_y \sim MT({{\mathbb V}}_{Y^0})_y$$ is necessary and sufficient. See the last section of this introduction for a strategy of the proof of Theorem \[Theorem1\].
Results in the case of a Mumford-Tate variety $X=\Gamma \backslash D$ {#results-in-the-case-of-a-mumford-tate-variety-xgamma-backslash-d .unnumbered}
---------------------------------------------------------------------
Now we turn to the general case. As far as we know, there is no good notion of Hecke operators on Mumford-Tate domains $D=G({{\mathbb R}})/V$. In addition, no good compactifications of a Mumford-Tate variety $X=\Gamma \backslash D$ are known in general [@grt].
Therefore, to avoid these two difficulties, by a special curve in $X$ we will denote an étale morphism $$\varphi^0: C^0 \longrightarrow X$$ from a Shimura curve $C^0$, which is induced from a morphism of algebraic groups $G' \to G$ defined over ${{\mathbb Q}}$, such that a certain Shimura datum for $G'$ defines $C^0$. Assume also that we are given a quasi-projective variety $Y^0 \subset X$ containing the image of $\varphi^0$ and with a good smooth compactification $Y$. Denote by $S_Y=Y \setminus Y^0$ the boundary divisor, and by $S_C=C \setminus C^0$, so that $S_C$ is the pullback of $S_Y$ to $C$, and $\varphi^0$ extends to a finite map $\varphi: C \to Y$.
In Section \[relprop2\], we show that there is a filtration $$N^{0}_{C/Y}\subset N^{1}_{C/Y}\subset \cdots \subset
N^{s}_{C/Y}=N_{C/Y}$$ on the logarithmic normal bundle $N_{C/Y}$, induced by the Harder-Narasimhan filtration on $N_{C/X}$. The logarithmic normal bundle $N_{C/Y}$ is defined by the exact sequence $$0 \to T_C(- \log S_C) \to \varphi^* T_Y(- \log S_Y) \to N_{C/Y} \to 0.$$ The relative proportionality condition can be stated as follows:
[ ]{}\
The curve $\varphi:C\to Y$ satisfies the *relative proportionality condition* (RPC) if the slope inequalities $$\mu(N^{i}_{C/Y}/ N^{i-1}_{C/Y})\leq \mu(N^{i}_{C/X}/N^{i-1}_{C/X}), \text{ for } i=0,\ldots,s$$ are equalities. The sheaves $N^{i}_{C/X}$ are properly defined in Section \[relprop2\]. The integer $s$ depends on $C$ and $X$. Summing up these inequalities yields the relative proportionality inequality $$\deg N_{C/Y} \leq r(C,Y,X) \cdot \deg T_C(-\log S_C),$$ where $r(C,Y,X) \in {{\mathbb Q}}$ is a rational number depending on $C$, $Y$ and $X$, and hence on $G$. If $C$ and $Y$ are special subvarieties, then equality holds.
Now we prove the analogue of Theorem \[Theorem1\] for Mumford-Tate varieties. We will assume that $Y$ is a *horizontal subvariety* of $X$, i.e., that $T_Y$ is contained in the horizontal tangent bundle of $X$. Recall that over each point $y \in X$, there is an associated polarized Hodge structure ${{\mathbb V}}_y$.
\[Theorem2\] Let $X=\Gamma \backslash D$ be a Mumford-Tate variety associated to the Mumford-Tate group $G$. Let $Y^0$ be a smooth, horizontal algebraic subvariety of $X$ such that $Y^0$ has unipotent monodromies at infinity. Assume the following:\
(BIG) The ${{\mathbb Q}}$-Zariski closure in the Mumford-Tate group $G$ of the monodromy representation of $\pi_1(\bigcup_{i \in I} C_i^0,y)$ equals the ${{\mathbb Q}}$-Zariski closure of the representation of $\pi_1(Y^0,y)$.\
(LIE) If $H=H_y$ is the largest ${{\mathbb Q}}$-algebraic group fixing all infinitesimally parallel Hodge classes in tensor powers of ${{\mathbb V}}_y$ and its dual over the point $y$, then one has $\dim H/K \le \dim Y$ for the period domain $H/K \subset D$ associated to $H$.\
(RPC) All compactified special curves $C_i$ satisfy relative proportionality.\
Then, $Y^0$ is a special subvariety of $X$ of Shimura type.
In addition, as in the case of $A_g$, the proof of the theorem implies that the group $H$ essentially does not depend on $y$, that $H$ is of Hermitian type, $K$ is a maximal compact stabilizer group, and in the Hodge decomposition $\mathfrak{h}_{{\mathbb C}}=\mathfrak{h}^{-1,1} \oplus \mathfrak{h}^{0,0} \oplus \mathfrak{h}^{1,-1}$ of the real Lie algebra $\mathfrak{h}={\rm Lie} \, H({{\mathbb R}})$, one has $\mathfrak{h}^{-1,1}=T_{Y^0,y}$ for the holomorphic tangent space of $Y^0$ at $y$.
Note that the assumption on unipotent monodromies is not necessary, as one can always take an étale cover. The (RPC) condition implies that the above filtration is in fact the Harder-Narasimhan filtration on $N_{C/Y}$.
Strategy of the proof {#strategy-of-the-proof .unnumbered}
---------------------
The proofs of both theorems are based on the following observations:
Let $X=\Gamma \backslash D$ be a Mumford-Tate variety associated to the Mumford-Tate group $G$. Let $Y^0$ be a smooth, horizontal algebraic subvariety of $X$ such that $Y^0$ has unipotent monodromies at infinity. Assume the conditions (M-T) and (LIE). Then $Y^0$ is special.
Let $\Gamma$ be the image of $\pi_1(Y^0,y)$ in $G$ under the monodromy representation $\rho$. Using condition (M-T), the period map may be viewed as a map $$Y^0 \hookrightarrow Z^0 =\Gamma \backslash H({{\mathbb R}})^+/K.$$ By condition (LIE), one has $Y^0=Z^0$ for dimension reasons. Since $Y^0$ is horizontal by assumption, it follows that $Y^0$ is special.
Using this Proposition, the proofs of Theorem \[Theorem1\] and Theorem \[Theorem2\] are reduced to the proof of the following Theorem:
\[Theorem3\] Let $X=\Gamma \backslash D$ be a Mumford-Tate variety associated to the Mumford-Tate group $G$. Let $Y^0$ be a smooth, horizontal algebraic subvariety of $X$ such that $Y^0$ has unipotent monodromies at infinity. Then, conditions (BIG) and (RPC) imply condition (M-T).
The condition (BIG) may be replaced by other conditions: for example, one may require that there is an integral linear combination $\sum_{i \in I} a_iC_i$ which deforms in $X$ and fills out $X$. In [@mz11], we showed that this assumption implies condition (BIG) as well. In this light, we pose the following
Suppose an irreducible Mumford-Tate variety $X$ associated to $G$ contains (infinitely many) special curves. Then, condition (BIG) holds for $X$, i.e., there are finitely many compactified special curves $C_i$ in $X$, such that the ${{\mathbb Q}}$-Zariski closure of the monodromy representation of $\pi_1(\bigcup_{i \in I} C_i^0,y)$ is equal to $G$.
In addition, one wants to find an effective bound of the number of special curves needed. This conjecture is known to be true in the case where $G=SO(2,n)$ and $G=SU(1,n)$ for $n \ge 1$, see [@mz11 Remark 3.7]. However, it appears to be open even in the case $G={\rm Sp}_{2g}$ for large $g$. It may be possible to solve this conjecture by looking at one special curve $C^0$ containing a CM-point $y$ and taking finitely many Hecke translates of $C^0$ which fix the point $y$.
Theorem \[Theorem3\] will be proved in the last section. In the sections before, we recall the notions of special subvarieties and explain the condition (RPC).
Acknowledgements {#acknowledgements .unnumbered}
----------------
This work and the first author were supported by a project of Müller-Stach and Zuo funded inside SFB/TRR 45 of Deutsche Forschungsgemeinschaft. We would like to thank C. Daw, B. Edixhoven and E. Ullmo for discussion and the referee for pointing out a wrong statement in condition (LIE) in a previous version.
Mumford-Tate groups and Hodge classes {#mumtate}
=====================================
For any ${{\mathbb Q}}$-algebraic group $M$, we denote by $M_{{\mathbb R}}$ the associated ${{\mathbb R}}$-algebraic group. Let $V$ be a ${{\mathbb Q}}$-Hodge structure with underlying ${{\mathbb Q}}$-vector space also denoted by $V$. This corresponds to a real representation $$h: {{\mathbb S}}\longrightarrow GL(V)_{{\mathbb R}}$$ of the Deligne torus ${{\mathbb S}}={\rm Res}_{{{\mathbb C}}/{{\mathbb R}}} {{\mathbb G}}_m$.
The (large) Mumford-Tate group $MT(V)$ of $V$ is the smallest ${{\mathbb Q}}$-algebraic subgroup of $GL(V)$ such that $MT(V)_{{\mathbb R}}$ contains the image of $h$. The (special) Mumford-Tate group, or Hodge group, $Hg(V)=SMT(V)$ is the smallest ${{\mathbb Q}}$-algebraic subgroup of $SL(V)$ such that $SMT(V)_{{\mathbb R}}$ contains the image of the subgroup ${\rm Res}_{{{\mathbb C}}/{{\mathbb R}}} U(1) \subset {{\mathbb S}}$.
Depending on the context, we will use both groups under the general name Mumford-Tate group. If one looks at all Hodge classes in $V^{\otimes i} \otimes V^{\vee \otimes j}$ for all $(i,j)$, then the special Mumford-Tate group $SMT(V)$ is precisely the largest ${{\mathbb Q}}$-algebraic subgroup $G \subset Sp(2g)$ fixing all Hodge classes in such tensor products.
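As a classical illustration in weight $1$ (not needed in the sequel), let $E$ be an elliptic curve and $V=H^1(E,{{\mathbb Q}})$; then $$MT(V)=\begin{cases} GL_{2,{{\mathbb Q}}} & \text{if } E \text{ has no CM},\\ {\rm Res}_{F/{{\mathbb Q}}}\,{{\mathbb G}}_m & \text{if } E \text{ has CM by the imaginary quadratic field } F.\end{cases}$$ In the CM case the Mumford-Tate group is a commutative torus, in accordance with the description of CM points in the introduction.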
Let us look at Hodge structures of weight $1$. We fix a level $N$ structure $A_g^{[N]}$ on $A_g$ with $N \ge 3$. Therefore, there is a universal family $f \colon U \to A_g$ over $A_g$. Let ${{\mathbb V}}=R^1f_*{{\mathbb C}}$ be the natural local system of weight one on $A_g$. We denote by $${{\mathbb V}}^\otimes= \bigoplus_{i,j} {{\mathbb V}}^{\otimes i} \otimes {{\mathbb V}}^{\vee \otimes j}$$ the full tensor algebra. This is an infinite direct sum of polarized local systems, where each summand ${{\mathbb V}}^{\otimes i} \otimes {{\mathbb V}}^{\vee \otimes j}$ carries a family of Hodge structures of weight $i-j$. A Hodge class in ${{\mathbb V}}^\otimes$ is a flat section in some finite dimensional subsystem of ${{\mathbb V}}^\otimes$ defined over ${{\mathbb Q}}$ and corresponding fiberwise to a $(p,p)$-class.
Special Subvarieties in $A_g$ {#shimura}
=============================
Let us recall some useful notation concerning Shimura varieties and their special subvarieties.
A Shimura datum is a pair $(G,X)$ consisting of a connected, reductive algebraic group $G$ defined over ${{\mathbb Q}}$ and a $G({{\mathbb R}})$-conjugacy class $X \subset {\rm Hom}({{\mathbb S}},G_{{\mathbb R}})$ such that for all (i.e., for some) $h \in X$,\
(i) The Hodge structure on ${\rm Lie}(G)$ defined by ${\rm Ad} \circ h$ is of type $(-1,1)+(0,0)+(1,-1)$.\
(ii) The involution ${\rm Inn}(h(i))$ is a Cartan involution of $G^{\rm ad}_{{\mathbb R}}$.\
(iii) The adjoint group $G^{\rm ad}$ does not have factors defined over ${{\mathbb Q}}$ onto which $h$ has a trivial projection.
The connected components of $X$ are denoted by $X^+$ and form $G({{\mathbb R}})^+$-conjugacy classes. The weight cocharacter $h \circ w: {{\mathbb G}}_{m,{{\mathbb C}}} \to G_{{\mathbb C}}$ does not depend on the choice of $h$.\
Denote by $(GSp(2g),{{\mathbb H}}_g^{\pm})$ the Shimura datum in the sense of Deligne [@deligne] defining $A_g=A_g^{[N]}$ with level structure given by the compact open subgroup $K(N)$ of $GSp(2g)(\mathbb{A}_f)$. We refer to [@moonen] for an accessible reference concerning Shimura varieties.
\[specialdef\] A special subvariety of $A_g$ is a geometrically irreducible component of a Hecke translate of the image of some morphism $Sh_K(G,X)\rightarrow A_g=Sh_{K(N)}(GSp(2g),{{\mathbb H}}_g^{\pm})$, which is defined by an inclusion of a Shimura subdatum $(G,X)\subset(GSp(2g),{{\mathbb H}}_g^{\pm})$ together with some compact open subgroup $K\subset G(\mathbb{A}_f)$ such that $K\subset K(N)$.
In other words, there is a sequence $$Sh(G,X)_{{\mathbb C}}\longrightarrow Sh(GSp(2g),{{\mathbb H}}_g^{\pm})_{{\mathbb C}}{\buildrel g \over \longrightarrow}
Sh(GSp(2g),{{\mathbb H}}_g^{\pm})_{{\mathbb C}}{\buildrel \text{quot} \over \longrightarrow} A_g=Sh_{K(N)}(GSp(2g),{{\mathbb H}}_g^{\pm})$$ where $g \in G({{\mathbb A}}_f)$.
Special subvarieties are totally geodesic subvarieties with respect to the natural Riemannian (Hodge) metric, i.e., geodesics which are tangent to a special subvariety stay inside. In fact, there is almost an equivalence by a result of Abdulali and Moonen:
An irreducible algebraic subvariety of $A_g$ is special if and only if it is totally geodesic and contains a CM point.
See Theorem 6.9.1 in Moonen [@moonen].
Relative Proportionality in $A_g$ {#relprop}
=================================
Consider a non-singular projective curve $C$ and an embedding $$\varphi: C \hookrightarrow Y \hookrightarrow \overline{A}_g,$$ where $Y \subset \overline{A}_g$ is a smooth projective subvariety as in the introduction. We denote by $C^0:=\varphi^{-1}(Y^0)\not=\emptyset$ the ``open'' part, where $Y^0=Y \cap A_g$. Assume that $C^0$ is a special curve in the following. Let $S_C$ and $S_Y$ be the intersections of $C$ and $Y$ with $S$. We assume overall that such intersections are transversal.\
The logarithmic normal bundles of $C$ in $Y$ and $\overline{A}_g$ are defined by the exact sequences $$0 \to T_C(-\log S_C) \to T_{\overline{A}_g}(-\log S) \to N_{C/\overline{A}_g} \to 0,$$ $$0 \to T_C(-\log S_C) \to T_{Y}(-\log S_Y) \to N_{C/Y} \to 0.$$
Let $N^\bullet_{C/Y}$ be the Harder-Narasimhan filtration on the logarithmic normal bundle $N_{C/\overline{A}_g}$ intersected with $N_{C/Y}$. The following definition was given in [@mz11 Def. 1.4].
[ ]{}\
The map $\varphi: C \hookrightarrow Y$ satisfies the relative proportionality condition (RPC), if the slope inequalities $$\mu(N_{C/Y}^{i}/N_{C/Y}^{i-1})\leq\mu (N_{C/\overline A_g}^{i}/N_{C/\overline A_g}^{i-1}),\quad i=0,1,2$$ are equalities. For the slopes, one gets by [@mz11]: $$\begin{aligned}
\mu (N_{C/\overline A_g}^{2}/N_{C/\overline A_g}^1) & =0, \cr
\mu (N_{C/\overline A_g}^{1}/N_{C/\overline A_g}^0) & =\frac{1}{2} \deg T_C(-\log S_C), \cr
\mu (N_{C/\overline A_g}^{0}) & =\deg T_C(-\log S_C).\end{aligned}$$ Hence, we obtain a set of inequalities $$\begin{aligned}
\mu(N_{C/Y}^{2}/N_{C/Y}^1) & \le & 0, \\
\mu(N_{C/Y}^{1}/N_{C/Y}^0) & \le & \frac{1}{2} \deg T_C(-\log S_C), \\
\mu(N_{C/Y}^{0}) & \le & \deg T_C(-\log S_C). \end{aligned}$$ Adding all three inequalities we obtain a single inequality $$\label{rpc}
\deg N_{C/Y} \leq \frac{{{\rm rank}}(N^1_{C/Y})+{{\rm rank}}(N^0_{C/Y})}{2} \cdot \deg T_C(-\log S_C).$$ In case of equality, we say that (RPC) holds.
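In detail: degrees are additive in the filtration and $\deg={{\rm rank}}\cdot\mu$ on each graded quotient, so, setting $N^{-1}_{C/Y}=0$, $$\begin{aligned}
\deg N_{C/Y} & =\sum_{i=0}^{2} {{\rm rank}}(N^{i}_{C/Y}/N^{i-1}_{C/Y})\,\mu(N^{i}_{C/Y}/N^{i-1}_{C/Y}) \cr
& \leq {{\rm rank}}(N^0_{C/Y}) \deg T_C(-\log S_C)+\frac{{{\rm rank}}(N^1_{C/Y})-{{\rm rank}}(N^0_{C/Y})}{2} \deg T_C(-\log S_C) \cr
& =\frac{{{\rm rank}}(N^1_{C/Y})+{{\rm rank}}(N^0_{C/Y})}{2} \cdot \deg T_C(-\log S_C),\end{aligned}$$ the top quotient $N^2_{C/Y}/N^1_{C/Y}$ contributing slope zero.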
\[surfaces\] If $Y$ is a smooth projective surface and $C$ is a smooth special curve in $Y$ intersecting the boundary $S_Y$ transversally, then $$(K_Y+S_Y).C+2C^2=0,$$ if $Y$ is a Hilbert modular surface, and $$(K_Y+S_Y).C+3C^2=0,$$ if $Y$ is a ball quotient, see [@mvz09 Thm. 0.1], [@mz11 Ex. 1.6] and [@cmp Chap. 17].
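These relations are consistent with the equality case of the relative proportionality inequality; we sketch the check, using the log adjunction formula $\deg(K_C+S_C)=(K_Y+S_Y+C).C$, which holds by the transversality assumption. For a surface, $N_{C/Y}$ is a line bundle of degree $C^2$ and $\deg T_C(-\log S_C)=-(K_Y+S_Y).C-C^2$, so equality with proportionality constant $r$ reads $$C^2=r\left(-(K_Y+S_Y).C-C^2\right), \qquad \text{i.e.,} \qquad (K_Y+S_Y).C+\frac{1+r}{r}\,C^2=0,$$ which recovers the two displayed relations for $r=1$ (Hilbert modular surfaces) and $r=\frac{1}{2}$ (ball quotients).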
The main consequence of (RPC) is the following:
\[splitting\] [ ]{}\
(i) If $\varphi: C \hookrightarrow Y$ satisfies (RPC), then $\varphi^*T_Y(-\log S_Y)$ is a direct summand of an orthogonal decomposition of $\varphi^*T_{\overline A_g}(-\log S)$ with respect to the Hodge metric.\
(ii) If $Y^0 \hookrightarrow A_g$ is a special subvariety, then $\varphi^*T_Y(-\log S_Y)$ is a direct summand of an orthogonal decomposition of $\varphi^*T_{\overline A_g}(-\log S)$ with respect to the Hodge metric and $\varphi: C \hookrightarrow Y$ satisfies (RPC).
[@mz11 Prop. 1.5].
In [@mz11 Formula 1.3] we showed that, if $C^0$ is a special curve, one has a splitting $$\varphi^* T_Y(-\log S_Y) \cong T_C(-\log S_C) \oplus N_{C/Y}.$$ This splitting is induced from a corresponding splitting of $\varphi^*T_{\overline A_g}(-\log S)$. If, in addition, (RPC) holds, then this splitting is compatible with the decomposition $$N_{C/Y}=\bigoplus_{i=0}^2 N^{i}_{C/Y}/N^{i-1}_{C/Y}.$$
Special Subvarieties in $X=\Gamma \backslash D$ {#griffiths}
===============================================
As far as we know, there is no good notion of Hecke operators on Mumford-Tate domains $D=G({{\mathbb R}})/V$. In addition, there are no good compactifications of $X=\Gamma \backslash D$ known in these cases, since $X$ does not even carry any algebraic structure in general [@grt].
Therefore, to avoid these two difficulties, by a *special curve* in $X$ we will denote an étale morphism $$\varphi^0: C^0 \longrightarrow X=\Gamma \backslash G({{\mathbb R}})/V$$ from a Shimura curve $C^0$ to $X$, induced from a morphism of algebraic groups $G' \to G$ defined over ${{\mathbb Q}}$. In other words, $C^0$ is the quotient of the orbit of a certain Hodge structure $h \in D$ under the conjugation action of $G'$.
The orbit under conjugation of any Mumford-Tate group $M$ in $D$ is a Mumford-Tate domain in the sense of [@ggk; @cmp]. More generally, we define:
\[specialdef2\] A *special subvariety of Shimura type* $Z^0 \subset X$ is a horizontal, algebraic subvariety $Z^0 \subset X$, such that there is a Mumford-Tate group $M=M(Z^0)$, and $Z^0$ is the quotient of the orbit $D(M)$ of a certain Hodge structure $h \in D$ under the conjugation action of $M$. In other words, $D(M)$ is a connected component of the image of a Shimura datum $Sh(M,X')$ in the Mumford-Tate datum $X=MT(G,X)$, see [@cmp Chap. 17].
Hence, by our definition of a special subvariety $Z^0$, we have a commutative diagram $$\begin{xy}
\xymatrix{
D(M) \ar[d]^{} \ar[r]^{}& D \ar[d]^{} \\
Z^0 \ar[r]^{} & X }
\end{xy}$$ Note that we always require a special subvariety to be horizontal and algebraic, so that $Z^0$ is of Shimura type, i.e., $D(M)$ is a Hermitian symmetric domain. In most cases, $Z^0$ is a proper subvariety of $X$ by [@grt].
More general notions of special subvarieties in Mumford-Tate varieties are conceivable, for example horizontal subvarieties of maximal dimension in Mumford-Tate varieties. But it is not clear whether such definitions have good properties. For example such varieties may not carry any CM points.
Relative Proportionality in $X=\Gamma \backslash D$ {#relprop2}
===================================================
To define the relative proportionality condition (RPC), using the notation of the previous paragraph, we need first the following observations.
Let $C^0 {\buildrel \varphi^0 \over \longrightarrow} Y^0 {\buildrel i \over \hookrightarrow}X$ be a special curve and $Y^0$ an algebraic subvariety of $X=\Gamma \backslash D$. Let $Y$ be a smooth compactification of $Y^0$ and $C$ a compatible smooth compactification of $C^0$, which extends to a finite morphism $\varphi: C \to Y$. Note that for this we do not need to require that $X$ has an algebraic compactification.
Denote by $S_Y=Y \setminus Y^0$ the boundary divisor, and by $S_C=C \setminus C^0$, so that $S_C$ is the pullback of $S_Y$ to $C$.
Fix a base point corresponding to a Hodge representation $h: {{\mathbb S}}\to G_{{\mathbb R}}$ whose orbit under $G$ defines $X$. We have a weight zero Hodge structure on ${\mathfrak g}={\rm Lie}(G)$, $${\mathfrak g}=\bigoplus_p {\mathfrak g}^{-p,p}.$$ If $K$ is a maximal compact subgroup containing $V$, then its Lie algebra ${\mathfrak k}$ is given by the sum for even $p$, whereas its complement ${\mathfrak p}$ is the sum for all odd $p$ [@cmp Sec. 12.5]. For $p=1$, we obtain the horizontal, holomorphic tangent bundle. The vertical tangent bundle is given by the quotient of Lie algebras ${\mathfrak k}/{\mathfrak v}$. This terminology comes from the fibration [@cmp; @grt] $$\omega: D=G({{\mathbb R}})/V \longrightarrow G({{\mathbb R}})/K.$$
We denote by $T_X$ the holomorphic, horizontal tangent bundle to $X$ [@cmp Sec. 12.5]. That is, $T_X$ is the homogeneous bundle on $X$ associated to ${\mathfrak g}^{-1,1}$.
This bundle agrees with the usual tangent bundle, if $V=K$ and $D$ is Hermitian symmetric, for example in the case of $X=A_g$.
Although $X$ does not have a compactification in general, we show:
Assume that $Y^0$ (and hence $C^0$) have unipotent monodromies at infinity. Then the bundle $(\varphi^0)^*T_X$ on $C^0$ extends to a canonical vector bundle on $C$ which we denote by $\varphi^*T_X(- \log S)$.
Let $\mathcal{V}^{p,q}$ be the universal vector bundles on $D$ which parametrize the $(p,q)$-classes on $X$. The horizontal, holomorphic tangent bundle $T_X$ of $X$ is contained in a direct sum of the Hodge bundles: $$T_{X}\subset {\mathcal End}^{-1,1} \left( \bigoplus_{p,q} \mathcal{V}^{p,q} \right) =
\bigoplus_{p,q} {\mathcal Hom} \left( \mathcal{V}^{p,q},\mathcal{V}^{p-1,q+1}\right).$$ All these bundles are homogeneous on $D$, and the inclusion of the subbundle $T_{X}$ is defined by explicit conditions. Over the algebraic variety $Y^0$, the restricted bundles $\mathcal{V}^{p,q}|_{Y^0} $ on the right hand side, and also the subbundle $T_{X}|_{Y^0}$, have a Deligne extension $\overline{\mathcal V}^{p,q}|_{Y^0}$ to $Y$. Therefore, $T_X|_{Y^0}$ and $(\varphi^0)^*T_X$ have natural extensions to $Y$ and $C$ which we denote by $\varphi^*T_X(- \log S)$, although $S$ does not exist.
Using this, we can define the logarithmic normal bundle $N_{C/X}$ through the exact sequence $$0 \to T_C(- \log S_C) \to \varphi^* T_X(- \log S) \to N_{C/X} \to 0.$$ In a similar way, we have the exact sequence $$0 \to T_C(- \log S_C) \to \varphi^* T_Y(- \log S_Y) \to N_{C/Y} \to 0.$$ By a previous result [@mz11 Prop. 1.5.(ii)] of ours, see Prop. \[splitting\](ii) above, which is independent of $A_{g}$, we know that the logarithmic tangent bundle $T_{C}(-\log S_{C})$ is an orthogonal direct summand of the newly defined bundle $\varphi^*T_{X}(-\log S)$ with respect to the Hodge metric: $$T_{C}(-\log S_{C})\hookrightarrow \varphi^{*}T_{X}(-\log S).$$ We now show that certain local systems on $C^0$ split in a controlled way, giving a representation-theoretic proof of the following result of [@viehweg-zuo].
\[decomp\_lemma\] Assume that $\mathbb{V}$ is a $\mathbb{C}$-variation of Hodge structures of weight $k$ over $C^0$ which comes from a $G({{\mathbb R}})$-representation on $X$ by restriction. Then, $$\mathbb{V}=\mathbb{U} \oplus \bigoplus_i \left(S^{i}(\mathbb{L})\otimes
\mathbb{T}_{i} \right),$$ where $\mathbb{L}$ is a weight one local system of rank $2$ and $\mathbb{T}_{i}$ and $\mathbb{U}$ are unitary local systems of weights $k-i$ and $k$ respectively.
Since $C^0$ splits in at least one place, we may assume that the Mumford-Tate group of the Shimura curve $C^0$ has the form $SL(2) \times U_{1}\times \cdots \times U_{r}$ for some $r \ge 0$, where the $U_{i}$ are compact Lie groups (i.e., anisotropic). This gives rise to an embedding $SL(2) \times U_{1}\times \cdots \times U_{r} \hookrightarrow G$ of algebraic groups. Now, since the groups $U_{i}$ are compact as real groups, it follows that every representation of them is a unitary representation, and it is well-known that the representations of the group $SL(2)_{{\mathbb R}}$ are direct sums of symmetric products of the standard representation. Note also that the irreducible subrepresentations of a product representation are products of irreducible subrepresentations of the factors, and that the product of unitary representations is again unitary. This means that there is a standard $2$-dimensional representation $\mathbb{L}$ and unitary representations $\mathbb{T}_{i}$ and $\mathbb{U}$ such that $\mathbb{V}$ has the asserted decomposition.
Note that, since $\mathbb{L}$ is a weight $1$ variation of Hodge structures, and $C^0$ is a special curve, by results of [@viehweg-zuo], its Deligne extension to $C$ corresponds to a Higgs bundle of the form $(\mathcal{L}\oplus\mathcal{L}^{-1}, \sigma)$ such that the Higgs field $\sigma:\mathcal{L}\to \mathcal{L}^{-1}\otimes
\Omega^{1}_{C}(\log S_{C})$ is an isomorphism, and hence $\mathcal{L}^{2}\simeq \Omega^{1}_{C}(\log S_{C})$.
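Taking degrees in the isomorphism $\mathcal{L}^{2}\simeq \Omega^{1}_{C}(\log S_{C})$ gives $$\deg \mathcal{L}=\frac{1}{2}\deg \Omega^{1}_{C}(\log S_{C})=-\frac{1}{2}\deg T_{C}(-\log S_{C}),$$ which is positive since the Shimura curve $C^0$ is hyperbolic; this is the basic unit in which all slopes appearing below are measured.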
We can now apply Lemma \[decomp\_lemma\] to the universal local system ${{\mathbb V}}$ of weight $k \ge 1$ on $X$. It implies that $(\varphi^0)^*\mathbb{V}= \mathbb{U} \oplus \bigoplus_i \left( S^{i}(\mathbb{L})\otimes \mathbb{T}_{i} \right)$ for local systems $\mathbb{L}$, $\mathbb{T}_{i}$ and $\mathbb{U}$ over $C^0$. We denote the Higgs bundles on $C$ corresponding to the local systems $(\varphi^0)^*{{\mathbb V}}$, $\mathbb{T}_{i}$ and $\mathbb{U}$ by $\mathcal{V}$, $\mathcal{T}_i$ and $\mathcal{U}$. The bundles $\mathcal{T}_i$ and $\mathcal{U}$ have degree $0$ and their Higgs fields are zero. Note that the Higgs field of $S^{i}(\mathbb{L})$ comes from that of $\mathcal{L}$, i.e., is equal to $S^{i}(\sigma)$ for $\sigma$ the Higgs field of $\mathcal{L}$. The Higgs field of $\mathbb{V}$ respects the direct sums and vanishes on $\mathbb{U}$. Therefore, $$T_{C}(-\log S_{C}) \subseteq \bigoplus_\Box
{\mathcal Hom}\left(\mathcal{L}^{i-2\mu}\otimes \mathcal{T}_{i,a}, \mathcal{L}^{j-2\nu}\otimes \mathcal{T}_{j,b}\right)
\subset {\mathcal End}^{-1,1} \left( \bigoplus_{p+q=k} \mathcal{V}^{p,q} \right),$$ where the bundles $\mathcal{T}_{i,a}$, $\mathcal{T}_{j,b}$ have slope $0$, and $\Box=\{(\mu,i,\nu,j, a, b) \in {{\mathbb N}}_0^6 \mid \mu \le i \le k, \; \nu \le j \le k, \; a\le k-i, \; b\le k-j , \; j+b-\nu=i+a-\mu-1\}$. In the above sum, $T_{C}(-\log S_{C})$ is a direct summand and orthogonal with respect to the natural Riemannian (i.e., Hodge) metric. Let $T_{C}(-\log S_{C})^{\perp}$ denote the orthogonal complement of $T_{C}(-\log S_{C})$ in this sum. Thus, there is a decomposition $$\varphi^{*}T_{X}(-\log S)= T_{C}(-\log S_{C})\oplus N_{C/X},$$ such that, as in [@mz11 Section 1], $$N_{C/X} \subset T_{C}(-\log S_{C})^{\perp} \oplus \bigoplus_{p+q=k}{\mathcal Hom}
\left( \mathcal U^{p,q}, \mathcal V^{p-1,q+1} \right) \oplus \bigoplus_{p+q=k}{\mathcal Hom}
\left( \mathcal V^{p,q}, \mathcal{U}^{p-1,q+1} \right ).$$ In particular, $N_{C/X}$ is a sum of polystable bundles of different slopes. Hence, one has a Harder-Narasimhan decomposition $$N_{C/X}=\bigoplus_{i=0}^s R_{i}$$ with polystable bundles $R_i$ of strictly increasing slopes $\mu(R_{i}) < \mu(R_{i+1})$. The length $s$ is an integer depending on $C$ and $X$.
Accordingly, the Harder-Narasimhan filtration on $N_{C/X}$ is given by $$N^{i}_{C/X}=R_{0}\oplus
R_{1}\oplus \cdots \oplus R_{i}, \text{ } 0\leq i \leq s.$$ Taking the induced filtration $N^i_{C/Y}:=N^i_{C/X} \cap N_{C/Y}$ on $N_{C/Y}$ obtained by intersection, we get a filtration on $N_{C/Y}$: $$N^{0}_{C/Y}\subset N^{1}_{C/Y}\subset \cdots \subset
N^{s}_{C/Y}=N_{C/Y}.$$ In analogy with the $A_g$ case, we can now make the following definition:
[ ]{}\
We say that $\varphi:C\to Y$ satisfies the *relative proportionality condition* (RPC), if the slope inequalities $$\mu(N^{i}_{C/Y}/ N^{i-1}_{C/Y})\leq \mu(N^{i}_{C/X}/
N^{i-1}_{C/X}), \text{ } i=0,\ldots,s$$ are equalities.
Adding all these inequalities, we obtain a single inequality $$\deg N_{C/Y} \leq r(C,Y,X) \cdot \deg T_{C}(-\log S_{C}),$$ where $r(C,Y,X) \in {{\mathbb Q}}$ is a rational number depending on $C$, $Y$ and $X$, and hence on $G$. However, it is not possible to write $r(C,Y,X)$ in a closed form, as in the case of $X=A_g$, since it would depend on $G$ and not only on the weight $k$. In Example \[surfaces\], the constant $r(C,Y,X)$ is $1$ in the case of Hilbert modular surfaces and $\frac{1}{2}$ in the case of ball quotients. The assertions of Proposition \[splitting\] and [@mz11 Formula 1.3] also hold in this more general case by induction over $s$, i.e., we have a splitting $$\varphi^* T_Y(-\log S_Y) \cong T_C(-\log S_C) \oplus \bigoplus_{i=0}^s N^{i}_{C/Y}/N^{i-1}_{C/Y},$$ if $C^0$ is a special curve in $X$ satisfying (RPC).
Proof of Theorem \[Theorem3\]
=============================
In this section we prove Theorem \[Theorem3\]. From this, Theorem \[Theorem1\] and Theorem \[Theorem2\] follow, as we showed in the introduction.
We assume conditions (BIG) and (RPC) and look at a smooth and horizontal subvariety $Y^0 \hookrightarrow X$, where $X=\Gamma \backslash G({{\mathbb R}})/V$ is a (connected) Mumford-Tate variety.
We choose a base point $y \in \bigcup_i C_i^0$. Note that $X$ carries a universal family of Hodge structures ${{\mathbb V}}$ as a local system. It does not underlie a variation of Hodge structures in general, since Griffiths transversality may not hold. However, when restricted to $Y^0$, or the curves $C_i^0$, this will be the case, since $Y^0$ is horizontal. We now consider the restriction of ${{\mathbb V}}$ to $Y^0$ only. Choose any finitely generated, irreducible sub local system ${{\mathbb W}}\subset {{\mathbb V}}^\otimes$ of even weight $2p$ defined over ${{\mathbb Q}}$, where ${{\mathbb V}}^\otimes$ is the full tensor algebra generated by tensor powers of ${{\mathbb V}}$ and its dual. We denote the fiber of ${{\mathbb W}}_{{\mathbb Q}}$ over $y$ by $W_{y,{{\mathbb Q}}}$. Let $(E,\vartheta)$ be the Higgs bundle corresponding to ${{\mathbb W}}$ under the Simpson correspondence [@simpson].
Assume now that $\varphi: C \to Y$ compactifies the embedding $C^0 \hookrightarrow Y^0 \hookrightarrow X$ of a special curve $C^0$ in $X$. If $\varphi:C\to Y$ satisfies (RPC), then we have a decomposition: $$N_{C/Y}=N^{0}_{C/Y}\oplus N^{1}_{C/Y}/N^{0}_{C/Y}\oplus \cdots \oplus
N^{i}_{C/Y}/N^{i-1}_{C/Y}\oplus \cdots \oplus
N^{s}_{C/Y}/N^{s-1}_{C/Y}.$$
Under these assumptions, we define a complex vector space $$W_{y \in Y}:= \{ t \in E_y^{p,p} \mid \theta_{y \in Y}(t)=0 \}.$$ In a similar way, we define $W_{y \in Y,{{\mathbb Q}}}$ and $W_{y \in Y,{{\mathbb R}}}$. Here $$\theta_{y \in Y}\colon E_y^{p,p} \to E_y^{p-1,p+1} \otimes \Omega^1_Y(\log S_Y)|_y$$ is the *thickening* of the Higgs field along $C$, see [@mz11 Def. 2.1], with splitting $$E_y^{p-1,p+1} \otimes \Omega^1_Y(\log S_Y)|_y \cong E_y^{p-1,p+1} \otimes
\left(\Omega^1_C(\log S_C)|_y \oplus N_{C/Y}^\vee|_y \right).$$
Let $H$ be the ${{\mathbb Q}}$-algebraic group from condition (BIG). It fixes precisely the Hodge classes in these vector spaces $W_{y \in Y}$. The Lie algebra ${\mathfrak h}={\rm Lie}\, H({{\mathbb R}})$ carries a Hodge decomposition $${\mathfrak h}_{{\mathbb C}}={\mathfrak h}^{-1,1} \oplus {\mathfrak h}^{0,0} \oplus {\mathfrak h}^{1,-1}.$$ Since the local system ${{\mathbb V}}$ has the form $\mathbb{V}=\bigoplus
\left( S^{i}(\mathbb{L})\otimes \mathbb{T}_{i} \right) \bigoplus \mathbb{U}$, where $\mathbb{L}$ is related to a local system of weight $1$ corresponding to a Higgs bundle $\mathcal{L}\oplus\mathcal{L}^{-1}$, by [@viehweg-zuo], the Higgs field is given by $$S^{i-2\mu}(\sigma): \mathcal{L}^{i-2\mu}\to
\mathcal{L}^{i-2\mu-2}\otimes \Omega^{1}_{Y}(\log S_Y).$$ In particular, the sheaves $E^{p,q}$ can be decomposed into a direct sum of polystable sheaves $E^{p,q}_{\iota}$ of slopes $\mu(E^{p,q}_{\iota})=\iota \deg \mathcal{L}$ for $\iota \in \{-qk, \ldots, pk\}$. Using this, we prove:
The thickening $\theta_{y \in Y}$ on $E^{p,q}_{\iota}$ decomposes as a direct sum of morphisms: $$E^{p,q}_{\iota}\xrightarrow{\theta_{N^{i}_{C/Y}/N^{i-1}_{C/Y}}}E^{p-1,q+1}_{\iota+r_i}\otimes (N^{i}_{C/Y}/N^{i-1}_{C/Y})^{\vee}$$ between polystable bundles of the same slope. Here, $r_i$ is the rational number satisfying $\mu(N^{i}_{C/Y}/N^{i-1}_{C/Y})=r_i \deg \mathcal{L}$.
Note that the above decomposition of $N_{C/Y}$ gives a corresponding decomposition as $$\theta_{C}+\theta_{N_{C/Y}}=\theta_{C}+\theta_{N_{C/Y}^{0}}+\theta_{N_{C/Y}^{1}/N_{C/Y}^{0}}+ \cdots +\theta_{N_{C/Y}^{s}/N_{C/Y}^{s-1}}.$$ Since, by Lemma \[decomp\_lemma\] on the curve $C$, one has $\mathbb{V}_{C}=\bigoplus \left( S^{i}(\mathbb{L})\otimes \mathbb{T}_{i} \right) \bigoplus \mathbb{U}$, the description of the sheaves $E^{p,q}$ shows that we can reduce to the situation $i=1$. This case is treated in [@mz11 Lemma 2.7] for $\mathbb{V}_{C}^{\otimes k}$. In fact, if $i=1$, then for $k=1$ we have the decompositions $$\mathcal{L}\otimes \mathcal{T}\to \mathcal{L}^{-1}\otimes
\mathcal{T}\otimes \Omega^{1}_{C}(\log S_{C})$$ $$\mathcal{L}\otimes \mathcal{T}\to \mathcal{L}^{-1}\otimes
\mathcal{T}\otimes (N^{0}_{C/Y})^{\vee}$$ $$\mathcal{L}\otimes \mathcal{T}\to
\mathcal{U}^{\vee}\otimes(N^{1}_{C/Y}/N^{0}_{C/Y})^{\vee}$$ $$\mathcal{U}\to \mathcal{U}^{\vee}\otimes
(N^{2}_{C/Y}/N^{1}_{C/Y})^{\vee}$$ and for arbitrary weight $k$, the result can be obtained by reducing to the case $k=1$ by remembering that $\theta^{\otimes k}_{y \in Y}$ is defined by the Leibniz rule.
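For instance, in weight $k=2$ the Leibniz rule mentioned above reads (a standard identity, stated here only for illustration):

```latex
\theta^{\otimes 2}_{y \in Y}(v \otimes w)
   \;=\; \theta_{y \in Y}(v) \otimes w \;+\; v \otimes \theta_{y \in Y}(w),
```

so each summand lowers the Hodge type of exactly one tensor factor, and the slope computation for general $k$ reduces to the case $k=1$.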
Thus, we have shown that the kernels of $\vartheta$ decompose into vector bundles with vanishing slopes, and hence induce unitary Higgs bundles. This is the crucial ingredient for the remaining proof.
Under condition (RPC), the subspaces $W_{y \in Y,{{\mathbb Q}}}$ and $W_{y \in Y}$ are invariant under the monodromy action of $\pi_1(\bigcup_i C_i^0,y) \to G$ and define a unitary local system on each curve $C_i^0$. If conditions (RPC) and (BIG) both hold, then the subspaces $W_{y \in Y,{{\mathbb Q}}}$ and $W_{y \in Y}$ are invariant under the monodromy action of $\pi_1(Y^0,y) \to G$.
The proofs of Prop. 2.4, Prop. 3.1 and Prop. 3.3 of [@mz11] immediately carry over to this more general situation, although Prop. 3.3 in loc. cit. has a different assumption. However, the last part of the proof there uses only condition (BIG).
As in Cor. 3.5 of [@mz11], one gets the following corollary:
The subspaces $W_{y \in Y}$ define a unitary local subsystem ${{\mathbb U}}\subset {{\mathbb W}}$ on $Y^0$ with ${{\mathbb Q}}$-structure. The local system ${{\mathbb U}}$ extends to $Y$, and has finite monodromy.
Since we assumed that the monodromies at infinity are unipotent, which always holds after a finite étale cover of $Y^0$, this means that ${{\mathbb U}}$ is trivial, and all its global sections, i.e., all $(p,p)$-classes inside ${{\mathbb U}}$, which are by definition of $W_{y \in Y}$ invariant under $H$, are also monodromy-invariant. Recall that the Mumford-Tate group $MT({{\mathbb V}}_{Y^0})$ is the ${{\mathbb Q}}$-algebraic group fixing all parallel Hodge classes for all $p$ [@cmp Chap. 15]. We obtain therefore:
The infinitesimally fixed Hodge classes in ${{\mathbb W}}_{{\mathbb Q}}$ over points $y \in Y^0$ are globally monodromy-invariant. In particular, condition (M-T) holds.
Therefore, Theorem \[Theorem3\] is proven.
A. Ash, D. Mumford, M. Rapoport, Y.-S. Tai: Smooth compactifications of locally symmetric varieties, second edition, Cambridge Univ. Press (2010).
J. Carlson, S. Müller-Stach, C. Peters: Period maps and period domains, Cambridge Studies in Adv. Math. Vol. 85, second edition, Cambridge University Press (in preparation).
P. Deligne: Variétés de Shimura: interprétation modulaire, et techniques de construction de modèles canoniques, Proceedings of Symposia in Pure Mathematics Vol. 33, part 2, 247-290 (1979).
B. Edixhoven, A. Yafaev: Subvarieties of Shimura varieties, Ann. of Math., Vol. 157, Nr. 2, 621-645 (2003).
M. Green, Ph. Griffiths, M. Kerr: Mumford-Tate Groups and Domains: Their Geometry and Arithmetic, Annals of Math. Studies 183, Princeton Univ. Press (2012).
Ph. Griffiths, C. Robles, D. Toledo: Quotients of non-classical flag domains are not algebraic, Journal of Alg. Geom. 1(1), 1-13 (2014).
B. Klingler, A. Yafaev: The André-Oort conjecture, Ann. of Math., Vol. 180, Nr. 3, 867-925 (2014).
B. Moonen: Models of Shimura varieties in mixed characteristics, in: Galois representations in arithmetic algebraic geometry, LMS Lecture Notes Series Vol. 254, Cambridge University Press, 267-350 (1998).
S. Müller-Stach, E. Viehweg, K. Zuo: Relative Proportionality for subvarieties of moduli spaces of K3 and abelian surfaces, Pure and Applied Mathematics Quarterly, Vol. 5, Nr. 3, 1161-1199 (2009).
S. Müller-Stach, K. Zuo: A characterization of special subvarieties in orthogonal Shimura varieties, Pure and Applied Mathematics Quarterly, Vol. 7, Nr. 4, 1599-1630 (2011).
J. Pila: O-minimality and the André-Oort conjecture for ${{\mathbb C}}^n$, Ann. of Math., Vol. 173, Nr. 3, 1779-1840 (2011).
E. Ullmo, A. Yafaev: Galois orbits and equidistribution of special subvarieties: towards the André-Oort conjecture, Ann. of Math., Vol. 180, Nr. 3, 823-865 (2014).
C. Simpson: Higgs bundles and local systems, Publ. Math. IHES Vol. 75, 5-95 (1992).
J. Tsimerman: A proof of the André-Oort conjecture for $A_g$, arXiv:1506.01466v1.
E. Viehweg, K. Zuo: Families with a strictly maximal Higgs field, Asian J. of Math. 7, 575-598 (2003).
X. Yuan, S.-W. Zhang: On the averaged Colmez conjecture, arXiv:1507.06903 (2015).
---
abstract: 'High resolution measurements of superfluid density $\rho_s(T)$ and broadband quasiparticle conductivity $\sigma_1(\Omega)$ have been used to probe the low energy excitation spectrum of nodal quasiparticles in underdoped [YBa$_2$Cu$_3$O$_{6 + y}$]{}. Penetration depth $\lambda(T)$ is measured to temperatures as low as 0.05 K. $\sigma_1(\Omega)$ is measured from 0.1 to 20 GHz and is a direct probe of zero-energy quasiparticles. The data are compared with predictions for a number of theoretical scenarios that compete with or otherwise modify pure $d_{x^2 - y^2}$ superconductivity, in particular commensurate and incommensurate spin and charge density waves; $d_{x^2 - y^2} +{\mathrm{i}}s$ and $d_{x^2 - y^2} + {\mathrm{i}}d_{xy}$ superconductivity; circulating current phases; and the BCS–BEC crossover. We conclude that the data are consistent with a pure $d_{x^2 - y^2}$ state in the presence of a small amount of strong scattering disorder, and are able to rule out most candidate competing states either completely, or to a level set by the energy scale of the disorder, [$T_d$]{} $ \sim 4 $ K. Commensurate spin and charge density orders, however, are not expected to alter the nodal spectrum and therefore cannot be excluded.'
author:
- 'W. A. Huttema'
- 'J. S. Bobowski'
- 'P. J. Turner'
- Ruixing Liang
- 'W. N. Hardy'
- 'D. A. Bonn'
- 'D. M. Broun'
title: |
Stability of nodal quasiparticles in underdoped [YBa$_2$Cu$_3$O$_{6 + y}$]{} \
probed by penetration depth and microwave spectroscopy
---
Introduction
============
The physics of the cuprate high temperature superconductors is that of strong Coulomb repulsion in nearly half-filled CuO$_2$ planes.[@orenstein00; @bonn06] As charge carriers are doped into these materials, the two most prominent electronic states are the antiferromagnetic (AFM) Mott insulator and the $d$-wave superconductor. While the AFM and the optimal-to-overdoped superconductor appear to be well understood, the physics of the underdoped part of the phase diagram that lies between them remains firmly incompatible with standard theory. The most prominent feature of this region is a pseudogap that suppresses low energy spin and charge fluctuations and persists above the superconducting transition to a temperature $T^\ast$.[@warren89; @orenstein90; @homes93] The pseudogap temperature is highest close to the Mott insulator and decreases monotonically as doping, $p$, is increased towards optimal doping. Identifying the nature of the pseudogap state remains a difficult and open problem.
States of matter are characterized by their symmetries and their low energy excitation spectra. $d$-wave superconductivity, for instance, breaks four-fold rotational symmetry and is distinguished by the presence of nodal quasiparticles with a characteristic linear energy spectrum. The $d$-wave state in the cuprates was first identified from observations of a linear temperature dependence of penetration depth, $\lambda$, and superfluid density, $\rho_s \equiv 1/\lambda^2$.[@hardy93] The ability of superfluid density to couple directly to itinerant electronic degrees of freedom gives it the potential to be a sensitive thermodynamic probe of pseudogap physics, with many candidate states expected to leave characteristic signatures in the low energy quasiparticle spectrum. Here we search for these signatures using high resolution measurements of penetration depth and broadband quasiparticle conductivity, made on very clean crystals of underdoped [YBa$_2$Cu$_3$O$_{6 + y}$]{}.
There have been a wide range of proposals put forward to explain the cuprate pseudogap. In one important category, strong pair correlations are already built into the normal state. This scenario has its roots in Anderson’s resonating-valence-bond spin liquid,[@anderson87] and the idea that pair correlations emerge directly from the Mott insulator remains a compelling proposition. The ‘gossamer superconductor’ — a BCS wavefunction in which double occupancy has been heavily suppressed — typifies this approach and may provide a useful representation of the underdoped electronic state.[@laughlin06] The implication for the phase diagram is that $T^\ast$ marks the formation of tightly bound Cooper pairs, with low phase stiffness and strong quantum and thermal phase fluctuations heavily suppressing $T_c$.[@emery95; @franz01; @herbut02; @franz02; @herbut02a; @herbut05] At temperatures not too far above the superconducting transition, the idea of pre-existing pairs finds support from a number of experiments: terahertz spectroscopy reveals a finite phase-stiffness;[@corson99] Nernst-effect measurements appear to detect the phase-slip voltage from thermally diffusing vortices;[@xu00; @wang02] high-field magnetometry reveals excess diamagnetism;[@wang05] and STM[@gomes07] and $\mu$SR[@sonier08] detect what appear to be droplets of precursor superconductor. Related to this, the theory of the BCS to Bose–Einstein condensate (BEC) crossover makes a prediction that can be tested here: a $T^{3/2}$ power law in $\rho_s(T)$, due to the direct thermal excitation of bound Cooper pairs.[@chen98; @chen06]
Another class of proposals seeks to explain the pseudogap in terms of competing orders and quantum criticality. In such a scenario, $T^\ast(p)$ marks the boundary of a distinct thermodynamic phase; must be accompanied by a broken symmetry; and goes to zero at a quantum critical point within the superconducting phase. This idea was initially motivated by the observation near optimal doping of so-called marginal Fermi liquid behaviour,[@varma89] in which unusual power laws in resistivity $\rho(T)$, optical conductivity $\sigma_1(\Omega)$ and other physical quantities could be understood in terms of scattering from a scale-invariant fluctuation spectrum, as would be expected near a zero-temperature critical point.[@sachdev00] On crossing $T^\ast$, these fluctuations should generically condense to form the broken symmetry state of the pseudogap phase. While there is evidence of an AFM quantum critical point in electron-doped materials,[@dagan04] the situation on the hole-doped side is much less clear. Identification of a particular competing order that appears at $T^\ast(p)$ would have strong implications not just for the pseudogap, but for the origin of non-Fermi-liquid behaviour elsewhere in the cuprate phase diagram.
Competing orders are in fact prevalent in the cuprates, in part as a result of the extreme sensitivity of the doped Mott insulator to perturbations.[@sachdev03] Outside the AFM phase, long-range magnetic order is replaced by glassy spin correlations,[@kiefl89; @weidinger89; @panagopoulos02] although this short-range magnetism is likely a response to chemical disorder.[@sachdev03; @kivelson03] Neutron scattering experiments on [La$_{2-x}$Sr$_x$CuO$_4$]{} have revealed incommensurate spin correlations in superconducting samples[@thurston89; @cheong91; @mason92] that were later identified as stripe ordering of spins and holes.[@tranquada95; @tranquada97] Stripe correlations appear to be widespread in the underdoped cuprates, and are particularly strong near $p = \frac{1}{8}$ doping.[@kivelson03] In applied field, the suppression of superconductivity in vortex cores[@zhang97; @arovas97] leads to co-existing superconductivity and spin-density-wave order.[@lake01; @lake02; @demler01; @kivelson02; @sonier07] Scanning tunneling spectroscopy of the vortex cores reveals that this is accompanied by prominent checkerboard charge-density order.[@hoffman02] Similar four-lattice-constant modulations of the density of states are seen in zero field at various points in the phase diagram.[@howald03; @hanaguri04] For the most part, these competing orders occur in narrow ranges of doping; or in particular materials; or in response to external perturbations such as point disorder or applied magnetic field. While they attest to the complexity of the doped Mott insulator,[@sachdev03] they offer only hints at the physics underlying the formation of the pseudogap.
The lack of compatibility of the observed ordered states with $T^\ast(p)$ has led to interest in ‘hidden orders’, in which the broken symmetry is subtle and difficult to detect with standard scattering experiments. Proposals include circulating current phases that preserve translational symmetry[@varma97; @varma99; @varma06] and orbital antiferromagnetism, for example the $d$-density wave state (DDW).[@chakravarty01] Interestingly, a set of recent experiments now appears to have detected signatures of one or more of these phases. $\mu$SR[@sonier01] and polar Kerr effect[@xia08] have established the onset of time-reversal-symmetry breaking (TRSB) at $T^\ast(p)$ in [YBa$_2$Cu$_3$O$_{6 + y}$]{}, but the signals are extremely weak. It has been suggested that a variant of the DDW, the $d_{xy} + {\mathrm{i}}d_{x^2 - y^2}$ density wave,[@tewari08] would contain a subdominant but macroscopic TRSB component and be consistent with the small magnitude of the observed effects. Spin-polarized neutron scattering on [YBa$_2$Cu$_3$O$_{6 + y}$]{} has detected weak signatures of a novel magnetic order that preserves translational symmetry,[@fauque06] and has a form consistent with the $\Theta_{II}$ circulating current phase proposed by Varma,[@varma06] shown in schematic form in the inset of Fig \[thetaIIcurrents\]. The detailed picture is complicated by the presence of an in-plane component of magnetic moment, although it has been suggested that this could arise from orbital currents that circulate through apical oxygens while preserving the $\Theta_{II}$ symmetry.[@weber08] It also remains to be seen how ubiquitous the effects are: $\mu$SR experiments on [La$_{2-x}$Sr$_x$CuO$_4$]{} have so far failed to observe TRSB,[@macdougall08] but have not yet been carried out with the same sensitivity as Ref. . 
In contrast, new neutron scattering experiments[@li08] on [HgBa$_2$CuO$_{4+\delta}$]{} have detected the same type of $\Theta_{II}$ magnetic order seen in [YBa$_2$Cu$_3$O$_{6 + y}$]{}. As we will discuss in more detail below, this type of order has a strong effect on the low energy states of the superconductor, and should be highly visible in measurements of $\rho_s(T)$.
Finally, there have been suggestions that pure $d_{x^2 - y^2}$ superconductivity may compete with superconducting states of different symmetry,[@laughlin98; @balatsky00; @vojta00] motivated in part by reports of anomalously large inelastic scattering of nodal quasiparticles *below* $T_c$.[@valla99; @corson99] This critical-like scattering has been shown to be compatible with a quantum phase transition to a $d_{x^2 - y^2} + {\mathrm{i}}s$ or $d_{x^2 - y^2} + {\mathrm{i}}d_{xy}$ state.[@vojta00] To date, there is a limited amount of direct experimental evidence in support of such phases[@krishana97; @dagan01; @daghero03] — here we use measurements of superfluid density to place tight constraints on the existence of such states in [YBa$_2$Cu$_3$O$_{6 + y}$]{}.
This paper is organized as follows. In Sec. \[penetrationdepththeory\] we show how measurements of penetration depth and broadband microwave conductivity can together be used as a probe of the quasiparticle excitation spectrum and the structure of the superconducting energy gap. In Sec. \[competingorders\] we catalog how different competing orders affect the superfluid density, including the effect of disorder. In Sec. \[experiment\] we introduce the experimental methods used to measure superfluid density and broadband microwave conductivity. Results are presented and discussed in Sec. \[results\], followed by a summary of our conclusions in Sec. \[conclusions\]. Appendix \[appendix\] presents analytic results for the effect of disorder on $d_{x^2 - y^2}$, $d_{x^2 - y^2} + {\mathrm{i}}d_{xy}$ and $d_{x^2 - y^2} + {\mathrm{i}}s$-type superconductors with isotropic Fermi surfaces, and shows how this eventually blurs the distinction between $d_{x^2 - y^2}$ and $d_{x^2 - y^2} + {\mathrm{i}}d_{xy}$ states.
Penetration Depth and Microwave Conductivity {#penetrationdepththeory}
============================================
Microwave experiments can be used to probe the low-energy excitation spectrum of a superconductor in two ways: through the temperature dependence of the penetration depth $\lambda(T)$; and from broadband measurements of the oscillator strength in the finite-frequency quasiparticle conductivity spectrum $\sigma_1(\Omega, T)$. The theory of penetration depth and microwave conductivity of unconventional superconductors has been developed in great detail,[@nam67; @pethick86; @hirschfeld88; @prohammer91; @schachinger03; @hirschfeld93; @hirschfeld93a; @borkowski94; @hirschfeld94] but useful insights about low-lying excitations can be obtained from the weak-coupling BCS theory. For the case of an isotropic Fermi surface, which should be adequate for describing the low-lying excitations in the cuprates, $\lambda(T)$ is given by[@tinkham; @waldram] $$\begin{aligned}
\frac{\lambda^2_0}{\lambda^2(T)} & = 1 - \int_{-\infty}^\infty \!\!\!\!{\mathrm{d}}\omega\left(\!\!-\frac{\partial f}{\partial \omega}\!\right) N(\omega)\label{lambdaone}\\
& = \tfrac{1}{2} \int_{-\infty}^\infty \!\!\!\!{\mathrm{d}}\omega\, \tanh\left(\frac{\omega}{2 k_B T}\right)\frac{\partial N(\omega)}{\partial \omega}\;.\label{lambdatwo}\end{aligned}$$ Here $\lambda_0$ is the zero-temperature penetration depth in the *absence* of disorder and competing phases, and $f(\omega/T)$ is the Fermi function. A Sommerfeld expansion reveals the direct connection between $\lambda(T)$ and the normalized density of quasiparticle states $N(\omega)$: if $N(\omega) = N_0 + N_1 \omega + \frac{1}{2}N_2 \omega^2 + ...$ then The residual density of states (DOS) $N_0$ represents zero-energy excitations, which arise in a superconductor either from impurity pair-breaking, or from certain types of competing order, notably the $\Theta_{II}$-type circulating current phase[@varma06; @berg08]. Note that $N_0$ does not appear in the temperature dependence of $\lambda$, but instead results in a deviation of $\lambda(T\!\!\to \!\!0)$ from $\lambda_0$. This shift in penetration depth is difficult to resolve experimentally, because $\lambda_0$ is neither known *a priori*, nor can the absolute value of $\lambda(T \to 0)$ usually be measured with sufficient accuracy. However, a direct determination of $N_0$ can be obtained from the uncondensed weight spectral in the quasiparticle conductivity $\sigma_1(\Omega, T)$. From the oscillator strength sum rule, $$N_0 = \tfrac{2}{\pi} \mu_0 \lambda_0^2 \int_0^{\Omega_c} \!\!\!\!\sigma_1(\Omega, T\!\to \!0)\, {\mathrm{d}}\Omega\; ,$$ where $\Omega_c$ is a frequency cut-off chosen to capture the oscillator strength of the conduction electrons only.
In a clean-limit BCS superconductor, $N(\omega)$ is determined by the $\mathbf{k}$-space structure of the superconducting order parameter $\Delta_\mathbf{k}$: $N(\omega) = {\mathrm{Re}}\big\langle \omega /\sqrt{\omega^2 - \Delta_\mathbf{k}^2}\big\rangle_\mathrm{FS}$, where $\big\langle ... \big\rangle_\mathrm{FS}$ denotes a Fermi surface average. This makes $\rho_s(T)$ a sensitive probe of order parameter symmetry. In particular, for a $d$-wave superconductor in two dimensions, the linear dispersion of $\Delta_\mathbf{k}$ about the gap nodes leads to $N(\omega) \propto \omega$ and $\Delta \rho_s(T) \propto T$. An $s$-wave superconductor, by contrast, usually has a finite energy gap and shows activated behaviour, $\rho_s(T) \propto \exp(-\Delta_\mathrm{min}/k_B T)$, where $\Delta_\mathrm{min}$ is the minimum of the energy gap on the Fermi surface. The effect of impurity scattering on $N(\omega)$ and $\rho_s(T)$ is important and is reviewed in Appendix \[appendix\], where we give analytic results for $d_{x^2 - y^2}$, $d_{x^2 - y^2} + {\mathrm{i}}d_{xy}$ and $d_{x^2 - y^2} + {\mathrm{i}}s$ superconductors with isotropic Fermi surfaces in the presence of point defects. The main effect of disorder is for the quasiparticles to acquire a lifetime, the magnitude and energy dependence of which depend on the concentration and scattering strength of the defects. Near the unitarity limit, scattering leads to a zero-energy resonance that overlaps with the continuum of quasiparticle states in the $d_{x^2 - y^2}$-wave superconductor, resulting in a residual density of states in $N(\omega)$ and a crossover to $T^2$ behaviour in $\rho_s(T)$. This also happens for the $d_{x^2 - y^2} + {\mathrm{i}}d_{xy}$ superconductor, despite there initially being a finite gap in the excitation spectrum. As a result, above a certain level of disorder, $d_{x^2 - y^2}$ and $d_{x^2 - y^2} + {\mathrm{i}}d_{xy}$ states become impossible to tell apart using microwave spectroscopy. In Fig. \[disordercrossover\] we show how the distinction is lost when the energy scale of the disorder satisfies $k_B T_d \gtrsim \Delta_{d_{xy}}$. The $d_{x^2 - y^2} + {\mathrm{i}}s$ superconductor is different in this respect: nonmagnetic scatterers do not cause pair breaking at low energies, and the gap in the spectrum is robust.
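The clean-limit Fermi-surface average above can be evaluated directly; a minimal numerical sketch (assuming, as an illustration, the standard gap $\Delta_\mathbf{k}=\Delta_0\cos 2\phi$ on a circular two-dimensional Fermi surface) confirms the linear nodal DOS $N(\omega)\approx\omega/\Delta_0$ at low energies:

```python
import numpy as np

# Clean-limit DOS  N(w) = Re< w / sqrt(w^2 - Delta_k^2) >_FS
# for a d-wave gap Delta_k = Delta0*cos(2*phi).  Units: Delta0 = 1.
Delta0 = 1.0
M = 200000
phi = (np.arange(M) + 0.5) * (2.0 * np.pi / M)  # midpoint grid on [0, 2*pi)
gap = Delta0 * np.cos(2.0 * phi)

def dos(w):
    # Re(...) vanishes automatically where |Delta_k| > w
    return np.mean((w / np.sqrt(w**2 - gap**2 + 0j)).real)

print(dos(0.1))  # close to 0.1: the nodal slope N(w) ~ w/Delta0
```

The same routine with larger $\omega$ reproduces the familiar logarithmic enhancement of $N(\omega)$ as $\omega\to\Delta_0$.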
Superfluid density and competing orders {#competingorders}
=======================================
As a sensitive thermodynamic probe that couples directly to current-carrying excitations, measurements of superfluid density are well suited to detecting changes in the nodal quasiparticle spectrum arising from competing orders and other physics. A number of authors have investigated these effects theoretically. Sharapov and Carbotte have performed calculations for a $d_{x^2 - y^2} + {\mathrm{i}}d_{xy}$ order parameter and for *incommensurate* spin density waves that nest the nodal points (nested SDW), obtaining analytic results for $\rho_s(T \to 0)$ and its leading temperature corrections.[@sharapov06] In the absence of disorder they find that both the nested SDW and the $d_{x^2 - y^2} + {\mathrm{i}}d_{xy}$ superconductor have a finite gap everywhere on the Fermi surface, leading to activated exponential behaviour $\rho_s(T) \sim \exp(-\Delta'/k_B T)$, where $\Delta'$ is the magnitude of the SDW or $d_{xy}$ gap. However, nested SDW orders compete for Fermi surface, removing nodal states from the $T = 0$ condensate. In contrast, a transition to a clean $d_{x^2 - y^2} + {\mathrm{i}}d_{xy}$ state leaves $\rho_s(T \to 0)$ unchanged. Unfortunately, this distinction is difficult to detect experimentally, for reasons discussed in Sec. \[penetrationdepththeory\]. In the presence of disorder, both the nested SDW and $d_{x^2 - y^2} + {\mathrm{i}}d_{xy}$ states develop a leading quadratic temperature dependence, $\rho_s \sim T^2$, similar to that of a dirty $d_{x^2 - y^2}$ superconductor. However, an experimentally detectable difference now arises: pair-breaking in the $d_{x^2 - y^2} + {\mathrm{i}}d_{xy}$ state is accompanied by zero-energy quasiparticles, whereas the disordered SDW continues to remove low energy states *without* creating a residual DOS. 
Atkinson has studied the competition between nested, incommensurate SDW and $d_{x^2 - y^2}$ superconductivity numerically and finds broadly similar results,[@atkinson07] pointing out that on the basis of the temperature dependence of $\rho_s$ alone, the effect of disordered magnetism cannot be distinguished from dirty but pure $d$-wave superconductivity. He shows that the suppression of zero-temperature superfluid density in the nested SDW case arises because nodal Cooper pairs cease to carry a well-defined current. Modre *et al.* have studied the $d_{x^2 - y^2} + {\mathrm{i}}s$ pairing state, which also has a finite energy gap and activated behaviour in $\rho_s(T)$ at low temperature.[@modre98] In contrast to the $d_{x^2 - y^2} + {\mathrm{i}}d_{xy}$ case, the $d_{x^2 - y^2} + {\mathrm{i}}s$ gap is stable in the presence of disorder of *any* strength. In Appendix \[appendix\] we show how this arises from impurity renormalization of the $s$-wave gap component.
![(color online). The presence of a perturbation of the form Eq. \[thetaII\], from the $\Theta_{II}$-type circulating currents shown in Fig. \[thetaIIcurrents\], modifies the nodal spectrum of the $d_{x^2 - y^2}$ superconductor in a characteristic way: one node is shifted up in energy by $\approx 4 \Delta_\mathrm{cc}$, one is shifted down, and two are unperturbed.[]{data-label="thetaIIspectrum"}](fig1.eps){width="83mm"}
Berg *et al.* have studied the stability of the nodal quasiparticle spectrum in the presence of *commensurate* competing orders of all types.[@berg08] For commensurate perturbations that do not nest the nodal points, they prove that if the perturbation is invariant under time reversal or time reversal followed by a lattice translation, the nodal spectrum is stable. While it remains uncertain whether the converse holds in general, they examine several important cases in which the nodal spectrum breaks down, including certain stripe-like arrangements of spin and charge density, and the $\Theta_{II}$ circulating-current phase that has been detected by neutron scattering in [YBa$_2$Cu$_3$O$_{6 + y}$]{} and [HgBa$_2$CuO$_{4+\delta}$]{}. Confining themselves to a one-band model of the CuO$_2$ planes, Berg *et al.* have used the simpler arrangement of orbital currents shown in Fig. \[thetaIIcurrents\], which is equivalent to the $\Theta_{II}$ state in the more complicated three-band Cu–O lattice of Ref. . For a perturbation to the pure $d_{x^2 - y^2}$ superconductor of the form $$W = - {\mathrm{i}}\Delta_\mathrm{cc} \left\{\sum_{\mathbf{r}\mathbf{r}'\sigma} \eta_{\mathbf{r}\mathbf{r}'}c^\dagger_{\mathbf{r}\sigma} c_{\mathbf{r}'\sigma} + \mbox{h.c.} \right\}\;,\label{thetaII}$$ where $\eta_{\mathbf{r}\mathbf{r}'} = \pm 1$ is determined by the direction of the bond currents in Fig. \[thetaIIcurrents\], they find excitation energies $$E_\mathbf{k} = E_\mathbf{k}^0 + 2 \Delta_\mathrm{cc}\left[\sin(k_x a) + \sin(k_y a)\right]\;.$$ Here $E_\mathbf{k}^0$ is the unperturbed $d$-wave spectrum and $a$ is the lattice spacing. The perturbed nodal spectrum for the $\Theta_{II}$ state is plotted in Fig. \[thetaIIspectrum\]. The effect of the circulating currents is similar to the Doppler shift from a uniform current applied along a diagonal direction: one node shifts up in energy by $\approx 4 \Delta_\mathrm{cc}$, one node shifts down, and two are unperturbed. The individual and combined contributions to the low energy DOS are plotted in Fig. \[thetaIIcurrents\].
The net effect on $N(\omega)$ is a finite residual DOS $\approx 2\Delta_\mathrm{cc}/\Delta_0$, and a kink at $\omega \approx 4 \Delta_\mathrm{cc}$ above which the linear energy dependence doubles in slope. In the clean limit, the superfluid density can be obtained from Eqs. \[lambdaone\] and \[lambdatwo\] and is shown in Table \[competingordertable\]. The limiting low temperature behaviour of $\rho_s(T)$ is linear, arising from excitations near the two unperturbed nodes. At a temperature of order $4 \Delta_\mathrm{cc}/k_B$, $\rho_s(T)$ crosses over to a second linear regime in which all four nodes contribute and the temperature slope doubles. In a clean sample, the combination of a residual DOS and a kink in $\rho_s(T)$ separating two linear regimes should be easily observable in experiments. Calculations in the presence of disorder have not been carried out, but we expect strong scattering impurities to induce additional residual DOS and to cause a crossover to $T^2$ behaviour in $\rho_s(T)$, as is seen in $d_{x^2 - y^2}$ and $d_{x^2 - y^2} + {\mathrm{i}}d_{xy}$ superconductors. Although disorder will mask the effect of circulating currents when the crossover temperature $T_d \gtrsim 4 \Delta_\mathrm{cc}/k_B$, it is expected that tight limits on the size of $\Delta_\mathrm{cc}$ can nevertheless be placed, either using $\rho_s(T)$ or from the magnitude of the uncondensed spectral weight in $\sigma_1(\Omega, T \to 0)$.
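The piecewise-linear DOS described above can be sketched in a few lines. As a simplification (ours, not the full calculation of Berg *et al.*), each node is idealized as contributing $|\omega-\delta_j|/(4\Delta_0)$ with Doppler-like shifts $\delta_j\in\{+4\Delta_\mathrm{cc},\,-4\Delta_\mathrm{cc},\,0,\,0\}$; this already reproduces the residual DOS $2\Delta_\mathrm{cc}/\Delta_0$ and the slope doubling at $\omega\approx 4\Delta_\mathrm{cc}$:

```python
# Low-energy DOS from four Dirac nodes, two of them rigidly shifted by
# +/- 4*Delta_cc (idealized per-node contribution |w - delta_j|/(4*Delta0)).
Delta0 = 1.0
Delta_cc = 0.05
shifts = [4 * Delta_cc, -4 * Delta_cc, 0.0, 0.0]

def dos(w):
    return sum(abs(w - s) for s in shifts) / (4 * Delta0)

print(dos(0.0))                        # residual DOS: 2*Delta_cc/Delta0 = 0.1
print((dos(0.15) - dos(0.05)) / 0.10)  # slope below the kink: 0.5/Delta0
print((dos(0.60) - dos(0.40)) / 0.20)  # slope above the kink: 1.0/Delta0
```

The half slope below $4\Delta_\mathrm{cc}$ and full slope above it are exactly the two linear regimes expected in $\rho_s(T)$.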
![Individual nodal contributions to the density of states $N(\omega)$ from a circulating current perturbation of the form Eq. \[thetaII\]. Inset, upper left: the $\Theta_{II}$ circulating current pattern proposed in Ref. . Inset, lower right: an equivalent current pattern within a one-band model of the CuO$_2$ planes.[@berg08][]{data-label="thetaIIcurrents"}](fig2.eps){width="65mm"}
The effect of competing orders on $\rho_s(T)$ and the residual DOS is summarized in Table \[competingordertable\]. The SDW results are for the case of ordering wavevectors that nest the nodal points. The response to nested charge density waves is expected to be broadly similar, with the opening of a finite nodal gap that competes for Fermi surface.
Table: Effect of competing orders on the residual DOS and on the low-temperature form of $\rho_s(T)$, in the clean limit and in the presence of strong scattering disorder. Panels showing the gap structure, the competing-order pattern and $N(\omega)$ for each state accompany the table.[]{data-label="competingordertable"}

|              |       | $d_{x^2 - y^2}$ | $\Theta_{II}$ current loops | $d_{x^2 - y^2} + {\mathrm{i}}s$ | $d_{x^2 - y^2} + {\mathrm{i}}d_{xy}$ | nested SDW |
|--------------|-------|-----------------|-----------------------------|---------------------------------|--------------------------------------|------------|
| Residual DOS | clean | No              | Yes                         | No                              | No                                   | No         |
|              | dirty | Yes             | Yes                         | No                              | Yes                                  | No         |
| $\rho_s(T)$  | clean | $T$             | $T/2$                       | ${\mathrm{e}}^{-\Delta_s/k_B T}$ | ${\mathrm{e}}^{-\Delta_{d_{xy}}/k_B T}$ | $T{\mathrm{e}}^{-\Delta_\mathrm{SDW}/k_B T}$ |
|              | dirty | $T^2$           | $T^2$                       | ${\mathrm{e}}^{-\Delta_s/k_B T}$ | $T^{2}$                              | $T^{2}$    |
Experiment
==========
![(color online). $ab$-plane superfluid density $\rho_s(T)=1/\lambda^2(T)$ shown at 13 of 37 dopings measured in this study. The straight lines are linear fits to the data between 5 K and [$T_c$]{}. The curved lines are quadratic fits below 4 K. []{data-label="rhos"}](fig4.eps){width="70mm"}
Measurements of $\rho_s(T)$ and $\sigma_1(\Omega,T)$ have been made on a single-crystal ellipsoid of [YBa$_2$Cu$_3$O$_{6.333}$]{}, prepared as described in Ref. . Following high pressure annealing under a hydrostatic pressure of 35 kbar, controlled relaxation of oxygen order in the CuO chains has been used to continuously tune $T_c$ in the range 17 K to 3 K. Broadband microwave spectroscopy was carried out early in the sequence, for $T_c = 15.6$ K. Measurements of $\rho_s(T)$ in the millikelvin range were made in the fully relaxed state, where $T_c = 3$ K.
$\rho_s(T)$ is obtained from 2.64 GHz surface impedance measurements, as described in Refs. and . The sample is positioned at the $H$-field antinode of the TE$_{01\delta}$ mode of a rutile dielectric resonator, with the microwave $H$-field oriented along the $c$ axis of the ellipsoid to induce $ab$-plane screening currents. Surface impedance $Z_s=R_s+{\mathrm{i}}X_s$ is obtained using the cavity perturbation approximation: $$R_s+{\mathrm{i}}\Delta X_s = \Gamma\left\{\Delta f_B(T)-2 {\mathrm{i}}\Delta f_0(T)\right\},$$ where $\Delta f_B(T)$ is the change in bandwidth of the TE$_{01\delta}$ mode upon inserting the sample into the cavity; $\Delta f_0(T)$ is the shift in resonant frequency upon warming the sample from base temperature to $T$; and $\Gamma$ is an empirically determined scale factor. The absolute reactance is set by shifting $\Delta X_s(T)$ so that it matches $R_s(T)$ in the normal state. We expect local electrodynamics to be a good approximation, giving $\sigma=\sigma_1-{\mathrm{i}}\sigma_2={\mathrm{i}}\omega\mu_0/Z_s^2$ for the microwave conductivity. The superfluid density is defined to be $\rho_s\equiv 1/\lambda^2=\omega\mu_0\sigma_2$.
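As a concrete illustration of this chain from frequency shifts to superfluid density, the following sketch applies the cavity perturbation and local-electrodynamics relations above. The numerical values are illustrative placeholders, not measured data, and the empirical scale factor is treated as already known:

```python
import math

MU0 = 4e-7 * math.pi  # vacuum permeability (H/m)

def surface_impedance(delta_fB, delta_f0, scale):
    """Cavity perturbation: R_s + i*dX_s = scale * (dfB(T) - 2i*df0(T))."""
    return scale * (delta_fB - 2j * delta_f0)

def superfluid_density(Zs, omega):
    """sigma = sigma1 - i*sigma2 = i*omega*mu0 / Zs^2; rho_s = omega*mu0*sigma2."""
    sigma = 1j * omega * MU0 / Zs**2
    sigma2 = -sigma.imag          # sigma2 enters with a minus sign
    return omega * MU0 * sigma2   # equals 1/lambda^2

# Sanity check with an illustrative penetration depth: for a purely
# inductive surface impedance Zs = i*omega*mu0*lambda, the relations
# above must return rho_s = 1/lambda^2.
omega = 2 * math.pi * 2.64e9     # 2.64 GHz, as in the measurement
lam = 300e-9                     # illustrative penetration depth (m)
Zs = 1j * omega * MU0 * lam
rho_s = superfluid_density(Zs, omega)
```

The final check confirms that the definition $\rho_s \equiv 1/\lambda^2 = \omega\mu_0\sigma_2$ is recovered exactly in the purely superconducting limit.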
Broadband spectroscopy of the quasiparticle conductivity $\sigma_1(\Omega,T)$ has been carried out using bolometric measurements of $R_s(\Omega,T)$ between 0.1 and 20 GHz, as described in Refs. and . The [YBa$_2$Cu$_3$O$_{6.333}$]{} ellipsoid and a Ag:Au reference sample were positioned in symmetric locations at the end of a rectangular coaxial transmission line, with the microwave $H$-field again oriented along the $c$ axis of the ellipsoid. $R_s(\Omega,T)$ has been inferred from the synchronous rise in sample temperature in response to incident microwave fields modulated at 1 Hz. The Ag:Au sample acts as a power meter, providing an absolute calibration. At low frequencies, $\sigma_1$ can be obtained from $R_s$ with a knowledge of the penetration depth: in this limit $\sigma_1 \approx 2 R_s/\Omega^2 \mu_0^2 \lambda^3$. At higher frequencies, the quasiparticle conductivity starts to contribute to electromagnetic screening, effectively reducing $\lambda$. The quasiparticle shielding effect must be taken into account self-consistently, and the procedure for doing this is described in detail in Appendix C of Ref. . As part of this process, the quasiparticle contribution to $\sigma_2$ is inferred from a Kramers–Kronig transform of $\sigma_1(\Omega)$. This in turn requires a robust means of extrapolating $\sigma_1(\Omega)$ outside the measured frequency range. In previous work,[@turner03; @ozcan06] we have shown that the phenomenological form, $$\sigma_1(\Omega) = \sigma_0/[1 + (\Omega/\Gamma)^y]\;,\label{phenomenological}$$ works well for cuprate superconductors, with the exponent $y$ ranging from 1.4 to 1.7. A Drude model, on the other hand, corresponds to $y = 2$. Physically, the non-Drude exponents stem from the strong energy dependence of the scattering rate in an unconventional superconductor.
At low temperatures, thermally excited quasiparticles make a relatively small contribution to electromagnetic screening, so the extraction of $\sigma_1(\Omega)$ from $R_s(\Omega)$ is not particularly sensitive to variations in $y$. A similar procedure is used to estimate the quasiparticle conductivity spectral weight: in that case there is more sensitivity to the choice of exponent when integrating $\sigma_1(\Omega)$.
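The low-frequency conversion and the phenomenological fitting form above can be sketched as follows; all numerical values are illustrative, not fitted to the data:

```python
import math

MU0 = 4e-7 * math.pi  # vacuum permeability (H/m)

def sigma1_from_Rs(Rs, Omega, lam):
    """Low-frequency limit: sigma_1 ~ 2*R_s / (Omega^2 * mu0^2 * lambda^3)."""
    return 2 * Rs / (Omega**2 * MU0**2 * lam**3)

def sigma1_phenom(Omega, sigma0, Gamma, y):
    """Phenomenological spectrum sigma_1(Omega) = sigma_0 / (1 + (Omega/Gamma)^y)."""
    return sigma0 / (1 + (Omega / Gamma)**y)

# Illustrative numbers: 1 uOhm surface resistance at 1 GHz, lambda = 300 nm.
Omega = 2 * math.pi * 1e9
s1 = sigma1_from_Rs(1e-6, Omega, 300e-9)

# A sub-Drude exponent (y = 1.7) decays more slowly at high frequency
# than a Drude spectrum (y = 2) with the same width:
tail_y17 = sigma1_phenom(10e9, 1.0, 1e9, 1.7)
tail_drude = sigma1_phenom(10e9, 1.0, 1e9, 2.0)
```

The slower high-frequency fall-off for $y < 2$ is what distinguishes the fits in Fig. \[rsdatafit\] from the Drude curve.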
![(color online). $\rho_s(T)$ plotted versus $T^2$. The straight lines are quadratic fits to the data below 4 K except in the case of the lowest doping ($T_c = 3$ K), where the fit is to just below [$T_c$]{}. The data are linear in $T^2$ up to $T \approx 5$ K.[]{data-label="rhosT^2"}](fig5.eps){width="70mm"}
Results and Discussion {#results}
======================
$\rho_s(T)$ is plotted in Fig. \[rhos\] for a subset of the dopings. The most prominent feature of the data is the linear $T$ dependence of $\rho_s$ in the middle of the temperature range, which crosses over to a weaker temperature dependence at low $T$. The main questions about these data are: what is the limiting low temperature form of $\rho_s(T)$?; is the crossover the result of disorder?; and is the linear $T$ dependence at higher temperatures characteristic of the behaviour of the ideal, clean system? To address these issues, we first look at the low temperature range in more detail. Fig. \[rhosT\^2\] plots the data from Fig. \[rhos\] vs. $T^2$, showing that $\rho_s(T)$ indeed crosses over to accurately quadratic behaviour. For the lowest doping (the fully relaxed state with $T_c \approx 3$ K), the sample has been remounted in our dilution refrigerator system and measured down to $T = 0.05$ K. These data are plotted vs. $T^2$ in Fig. \[mKrhos\]. We see that the quadratic behaviour is robust to the lowest temperatures, neither flattening out to activated behaviour nor turning up to reveal a power law intermediate between $T^1$ and $T^2$.
![(color online). For the lowest doping in this study ($T_c = 3$ K), $\rho_s(T)$ has been measured down to $T = 0.05$ K. The data, plotted versus $T^2$, reveal that the asymptotic low $T$ behaviour is quadratic in temperature.[]{data-label="mKrhos"}](fig6.eps){width="55mm"}
To test whether the curvature is the result of disorder, we switch now to broadband microwave spectroscopy, which probes the spectral weight of the zero-energy quasiparticles. Fig. \[rsdatafit\] shows $R_s(\Omega)$ at $T = 1.7$ K for the $T_c = 15.6$ K doping. This has been converted to conductivity $\sigma_1(\Omega)$ in Fig. \[sigma1fit17\], using the self-consistent procedure described in the previous section. As mentioned above, we use a phenomenological form to fit to the conductivity: $\sigma_1(\Omega) = \sigma_0/[1 + (\Omega/\Gamma)^y]$. Spectra with $y = 1.4$ and $y=1.7$ provide equally good fits to the $R_s(\Omega)$ data in Fig. \[rsdatafit\] — a Drude fit ($y=2$), however, shows marked deviations at the high frequency end. At low frequency there is a narrow peak in $\sigma_1(\Omega)$, of uncertain origin, that may be a fluctuation effect. In any case we are content to omit it from the fitting procedure as it contains an insignificant fraction of the total oscillator strength. Using the phenomenological model of conductivity, we calculate the uncondensed spectral weight, for different choices of exponent. Expressed in superfluid density units, we obtain $\Delta \rho_s = 1.05~\mu$m$^{-2}$ for $y = 1.4$ and $\Delta \rho_s = 0.70~\mu$m$^{-2}$ for $y = 1.7$. Fig. \[rhos\] also shows linear and quadratic fits to $\rho_s(T)$ at low temperature. For comparison with the integrated $T = 1.7$ K spectral weight in $\sigma_1(\Omega)$, we should use the difference between the linear extrapolation of $\rho_s$ to $T = 0$, and $\rho_s(T = 1.7~\mathrm{K})$: this is $\Delta \rho_s = 1.03~\mu$m$^{-2}$. As this falls within the range estimated from integrating $\sigma_1(\Omega)$, we conclude that the crossover to $T^2$ behaviour in $\rho_s(T)$ is most likely a disorder effect in an otherwise pure $d_{x^2 - y^2}$ state, and that linear fits to $\rho_s(T)$ in the middle of the temperature range should provide a good measure of the low temperature slope in the absence of disorder.
![(color online). Broadband bolometric measurement of the surface resistance, $R_s(\Omega)$, at $T = 1.7$ K. Data are for a doping state with $T_c = 15.6$ K. The solid line is a fit using the phenomenological conductivity model, Eq. \[phenomenological\], with $y = 1.7$. A fit with $y = 1.4$ is practically indistinguishable and provides an equally good representation of the data. The dashed line, a best fit to the Drude model ($y = 2$), shows clear deviations at high frequencies.[]{data-label="rsdatafit"}](fig7.eps){width="60mm"}
![(color online). The real part of the conductivity spectrum determined from the $R_s(\Omega)$ data in Fig. \[rsdatafit\]. The solid line is a fit to the conductivity spectrum for $y=1.7$ using the phenomenological model Eq. \[phenomenological\]. The small, narrow peak at low frequencies is a robust result of the analysis and indicates long lived currents, possibly associated with superconducting fluctuations.[]{data-label="sigma1fit17"}](fig8.eps){width="60mm"}
![(color online). The doping dependence of the disorder crossover temperature $T_d$ and the uncondensed superfluid density $\Delta \rho_s$. $T_d$ is the temperature at which linear and quadratic fits to $\rho_s(T)$ match in slope, as defined in the text. $\Delta \rho_s$ is the uncondensed spectral weight predicted from the difference between linear and quadratic extrapolations of $\rho_s(T)$ to $T = 0$, and is consistent with the residual conductivity spectral weight directly measured at $T_c = 15.6$ K via broadband spectroscopy. []{data-label="Tddeltarhos"}](fig9.eps){width="70mm"}
A useful characterization of the strength of disorder is provided by the temperature [$T_d$]{} at which $\rho_s(T)$ crosses over from quadratic to linear behaviour. Using an interpolation formula, $\Delta \rho_s(T) = A T^2/(T + 2 T_d)$, similar to that of Ref. , [$T_d$]{} is defined to be the point at which the slope of the high temperature linear behaviour, $\Delta \rho_s = \alpha T$, matches the slope of the low temperature quadratic behaviour, $\Delta \rho_s = \beta T^2$. Using values of $\alpha$ and $\beta$ obtained from fits similar to those shown in Figs. \[rhos\] and \[rhosT\^2\], we plot $T_d \equiv \alpha/2 \beta$ in Fig. \[Tddeltarhos\]. The crossover temperature lies between 4 K and 5 K at these low dopings. This is larger than the crossover temperature in the best samples of Ortho-II [YBa$_2$Cu$_3$O$_{6.50}$]{} and Ortho-I [YBa$_2$Cu$_3$O$_{6.99}$]{}, where $T_d$ is less than 1 K. This is consistent with the lower degree of CuO chain order in [YBa$_2$Cu$_3$O$_{6.333}$]{}, which is known to be the dominant source of residual scattering in the best [YBa$_2$Cu$_3$O$_{6 + y}$]{} samples,[@bobowski06] and is likely enhanced by proximity to the Mott insulator. Also shown in Fig. \[Tddeltarhos\] is the residual DOS, expressed in superfluid density units as $\Delta \rho_s$, and inferred from the difference between linear and quadratic extrapolations of $\rho_s(T)$ to $T = 0$. $\Delta \rho_s$ falls on underdoping, but remains a roughly constant fraction of $\rho_s(T = 0)$, consistent with the weak doping dependence of $T_d$.
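The definition $T_d \equiv \alpha/2\beta$ is consistent with the interpolation formula: $\Delta\rho_s(T) = AT^2/(T + 2T_d)$ tends to $\beta T^2$ with $\beta = A/2T_d$ at low $T$ and to $\alpha T$ with $\alpha = A$ at high $T$, and the slopes of the two asymptotes match at $T = \alpha/2\beta = T_d$. A short numerical check, with illustrative values of $A$ and $T_d$:

```python
def delta_rho_s(T, A, Td):
    """Interpolation formula for the disorder crossover in rho_s(T)."""
    return A * T**2 / (T + 2 * Td)

A, Td = 1.0, 4.0          # illustrative values (Td in kelvin)
alpha = A                 # high-T linear slope
beta = A / (2 * Td)       # low-T quadratic coefficient

# Asymptotic limits of the interpolation formula:
low, high = 0.01, 1e4
assert abs(delta_rho_s(low, A, Td) / (beta * low**2) - 1) < 2e-3
assert abs(delta_rho_s(high, A, Td) / (alpha * high) - 1) < 1e-3

# The temperature at which the linear and quadratic slopes match:
Td_check = alpha / (2 * beta)
```

The recovered `Td_check` equals the input $T_d$, confirming the slope-matching definition.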
We are able to draw tight conclusions from these measurements about the types and magnitudes of electronic order that might be competing with pure $d_{x^2 - y^2}$ superconductivity in [YBa$_2$Cu$_3$O$_{6 + y}$]{}. We emphasize that to do this it is essential to have measurements of both the asymptotic low temperature form of $\rho_s(T)$, and the residual DOS from $\sigma_1(\Omega)$. On the basis of the limiting quadratic $T$ dependence, which we have followed down to $0.05$ K, we can rule out any of the clean-limit behaviours shown in Table \[competingordertable\], as well as the $d_{x^2 - y^2} + {\mathrm{i}}s$ state in the presence of disorder. We can also exclude the BCS–BEC crossover scenario, which predicts a $T^{3/2}$ term in $\rho_s(T)$ from incoherent Cooper pairs excited from the condensate. When disorder is included, four of the remaining states in Table \[competingordertable\] are compatible with quadratic behaviour in $\rho_s(T)$. Of these, nested spin and charge density waves can immediately be eliminated, as they are not expected to be accompanied by a residual DOS. Of the remaining three, the simplest possibility is pure $d_{x^2 - y^2}$ superconductivity in the presence of a small amount of strong scattering disorder. However, we cannot rule out a small ${\mathrm{i}}d_{xy}$ component, nor a weak $\Theta_{II}$-type circulating current phase. Nevertheless, we can place tight limits on the size of such effects. We show in Fig. \[disordercrossover\] that the ${\mathrm{i}}d_{xy}$ state only becomes visible once $\Delta_{d_{xy}} > k_B T_d$. Similarly, we would expect the clean-limit behaviour of the $\Theta_{II}$ state to be apparent once $4 \Delta_\mathrm{cc} > k_B T_d$, meaning that if a perturbation of the form Eq. \[thetaII\] is present, then $\Delta_\mathrm{cc}$ must be 1 K or less.[^1] The constraints become even tighter in Ortho-II [YBa$_2$Cu$_3$O$_{6.50}$]{} and Ortho-I [YBa$_2$Cu$_3$O$_{6.99}$]{}, where the disorder scale $T_d$ is less than 1 K.
Finally, while we can rule out nested spin and charge density waves, our data say very little about commensurate orders that connect parts of the Fermi surface *away* from the nodes, as these will generally not alter the low energy spectrum. One such scenario has been revealed by recent STM measurements on [Bi$_2$Sr$_2$CaCu$_2$O$_{8+ \delta}$]{}, [@kohsaka08] which show ordered, nondispersing modulations of the DOS at high energies and simultaneously, at low energies, arcs of Bogoliubov quasiparticles associated with the nodal $d_{x^2 - y^2}$ spectrum. The ‘Bogoliubov arcs’ appear to terminate on the Bragg plane joining $(0,\pi)$ and $(\pi,0)$ points, leaving the nodal spectrum intact. This would be compatible with the conclusions we draw here about [YBa$_2$Cu$_3$O$_{6 + y}$]{}.
Conclusions
===========
We have shown that measurements of superfluid density can be used as a sensitive probe of electronic orders that might compete with pure $d_{x^2 - y^2}$ superconductivity. Broadband conductivity measurements provide complementary information on zero-energy quasiparticles that would be difficult to infer from $\rho_s(T)$ alone. Measurements on underdoped [YBa$_2$Cu$_3$O$_{6.333}$]{} reveal a crossover from linear to quadratic behaviour in $\rho_s(T)$ below a temperature $T_d \approx $ 4 K to 5 K. The $T^2$ power law has been followed as low as 0.05 K and appears to be the asymptotic low temperature behaviour. It is also accompanied by a residual quasiparticle spectral weight of corresponding magnitude, leading us to conclude that the crossover is a disorder effect. The observations immediately allow us to rule out BCS–BEC crossover physics; competition from $d_{x^2 - y^2} + {\mathrm{i}}s$ superconductivity; and spin and charge density waves that nest the nodal points. Due to the presence of disorder, we cannot eliminate the possibility of either disordered $d_{x^2 - y^2} + {\mathrm{i}}d_{xy}$ superconductivity, provided $\Delta_{d_{xy}} \lesssim 4$ – 5 K; or a perturbation of the form Eq. \[thetaII\] from a $\Theta_{II}$-type circulating current phase, as long as $\Delta_\mathrm{cc} \lesssim 1$ K. The small magnitude of the term is compatible with related observations from $\mu$SR,[@sonier01] neutron scattering[@fauque06] and polar Kerr-effect measurements.[@xia08]
We would like to thank J. Carbotte, P. J. Hirschfeld, S. Kivelson and J. E. Sonier for useful discussions. This work was funded by the Natural Sciences and Engineering Research Council of Canada and the Canadian Institute for Advanced Research.\
[$d_{x^2 - y^2}$]{}, $d_{x^2 - y^2} + {\mathrm{i}}d_{xy}$ and $d_{x^2 - y^2} + {\mathrm{i}}s$ states {#appendix}
====================================================================================================
The $d_{x^2 - y^2} + {\mathrm{i}}d_{xy}$ state and the $d_{x^2 - y^2} + {\mathrm{i}}s$ state are two candidate order parameters that may compete with pure $d_{x^2 - y^2}$ superconductivity in the cuprates. In this appendix we review the theory of the penetration depth in the presence of disorder and gauge the extent to which these states can be distinguished by microwave experiments. The theory of unconventional superconductivity in the presence of elastic scattering disorder has been developed by many authors,[@nam67; @pethick86; @hirschfeld88; @prohammer91; @schachinger03; @hirschfeld93; @hirschfeld93a; @borkowski94; @hirschfeld94] and has been reviewed in several places.[@joynt97; @hussey02; @balatsky06] In these systems, disorder not only imparts a finite lifetime to the quasiparticles, it alters the excitation spectrum by pair-breaking, and the two effects must be dealt with together. The self-consistent $t$-matrix approximation (SCTMA) provides a powerful approach for capturing this physics, particularly in the resonant scattering limit, where the impurity is on the verge of binding a quasiparticle at the Fermi energy. In the SCTMA, impurities are usually approximated as point defects that scatter in the $s$-wave channel. The effect of the disorder is to renormalize the quasiparticle energy $\omega$ and the superconducting gap $\Delta_\mathbf{k}$, which can be expressed in the following way: $$\begin{aligned}
\omega \to \tilde\omega & = \omega + {\mathrm{i}}\pi \Gamma \frac{N(\omega)}{c^2 + N^2(\omega) + P^2(\omega)}\label{renorm1}\\
\Delta_\mathbf{k} \to \tilde\Delta_\mathbf{k} & = \Delta_\mathbf{k} + {\mathrm{i}}\pi \Gamma \frac{P(\omega)}{c^2 + N^2(\omega) + P^2(\omega)}\label{renorm2}\;.\end{aligned}$$ Here $\Gamma = n_i n/\pi^2 D(\epsilon_F)$, where $n_i$ is the impurity concentration, $n$ is the conduction electron density, and $D(\epsilon_F)$ is the density of states at the Fermi level.[@hirschfeld93a] The impurity scattering strength is characterized by $c$, the cotangent of the $s$-wave scattering phase shift. The quasiparticle density $N(\omega)$ and pair density $P(\omega)$ depend on details of the particular superconducting state and are defined below for the different types of order parameter. For purely unconventional order parameters, $\langle \Delta_\mathbf{k} \rangle_\mathrm{FS} = 0$ and $P(\omega)$ vanishes — these states are therefore unrenormalized by $s$-wave scatterers.
We are primarily interested in the behaviour of the low energy excitations so, without loss of generality, we take the two-dimensional Fermi surface to be isotropic, and the gap functions to be the simplest cylindrical harmonics of the required symmetry: $$\begin{aligned}
\Delta_{d_{x^2 - y^2}} & = \Delta_0 \cos 2 \phi\;,\\
\Delta_{d_{xy}} & = \eta \Delta_0 \sin 2 \phi\;,\\
\Delta_s & = \zeta \Delta_0\;.\end{aligned}$$ Here $\phi$ measures angle from the Cu–O bond direction and $\eta$ and $\zeta$ are constants. For the pure $d_{x^2 - y^2}$ state there is no gap renormalization. The quasiparticle density is $$N(\omega) = \left\langle \frac{\tilde\omega}{\sqrt{\tilde\omega^2 - \Delta_0^2 \cos^2 2 \phi}}\right\rangle_\phi = \frac{2}{\pi} K\!\left(\frac{\Delta_0^2}{\tilde\omega^2} \right)\;,\label{dos}$$ where $\langle ... \rangle_\phi$ is an angle average around the cylindrical Fermi surface, $K(x)$ is the complete elliptic integral of the first kind, and the branch of the square root in Eq. \[renorm1\] is chosen so that $\tilde \omega$ has positive imaginary part. In the strong-scattering (unitarity) limit, for instance, $c = 0$ and $\tilde\omega(\omega)$ is a root of $$\tilde\omega = \omega + \frac{{\mathrm{i}}\pi^2 \Gamma}{2 K(\Delta_0^2/\tilde\omega^2)}\;.$$ $\tilde{\omega}(\omega)$ encodes all the physics of scattering and pair-breaking. Inserted into the real part of Eq. \[dos\] it gives the quasiparticle density of states in the presence of disorder. To calculate penetration depth using $\tilde \omega$, a modification of Eq. \[lambdatwo\] is used:[@hirschfeld93a] $$\frac{\lambda^2_0}{\lambda^2(T)} = \tfrac{1}{2} \int_{-\infty}^\infty \!\!\!\!\!{\mathrm{d}}\omega\, \tanh\frac{\omega}{2 k_B T}{\mathrm{Re}}\left\langle\!\frac{\tilde\Delta_\mathbf{k}^2 }{(\tilde\omega^2 - \tilde\Delta_\mathbf{k}^2)^\frac{3}{2}}\!\right\rangle_{\!\mathrm{FS}}\;.\label{dirtylambda}$$ The density of states factor is $$\begin{split}
\left\langle \frac{\tilde\Delta_\mathbf{k}^2 }{(\tilde\omega^2 - \tilde\Delta_\mathbf{k}^2)^\frac{3}{2}}\right\rangle_\mathrm{FS} = \left\langle \frac{\Delta_0^2 \cos^2 2 \phi}{(\tilde\omega^2 - \Delta_0^2 \cos^2 2 \phi)^\frac{3}{2}}\right\rangle_\phi\\
= \frac{2}{\pi \tilde\omega}\left(K(\Delta_0^2/\tilde\omega^2) + \frac{\tilde\omega^2 }{\Delta_0^2 - \tilde\omega^2}E(\Delta_0^2/\tilde\omega^2)\right),
\end{split}$$ where $E(x)$ is the complete elliptic integral of the second kind.
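The unitarity-limit self-consistency condition above can be solved numerically. The following is a minimal sketch, using a damped fixed-point iteration and an arithmetic-geometric-mean evaluation of the complete elliptic integral; the parameter values are illustrative, and units are chosen so that $\Delta_0 = 1$:

```python
import cmath

def ellipk(m, tol=1e-14):
    """Complete elliptic integral of the first kind, parameter convention
    K(m) = K(k^2), computed via the arithmetic-geometric mean.
    Valid for complex m away from the branch cut [1, inf)."""
    a, b = 1 + 0j, cmath.sqrt(1 - m)
    while abs(a - b) > tol * abs(a):
        a, b = (a + b) / 2, cmath.sqrt(a * b)
    return cmath.pi / (2 * a)

def omega_tilde(omega, Gamma, Delta0=1.0, iters=200):
    """Solve w~ = w + i*pi^2*Gamma / (2*K(Delta0^2/w~^2)) for the
    renormalized frequency, taking the root with Im(w~) > 0."""
    w = complex(omega, max(Gamma, 1e-3))   # seed off the real axis
    for _ in range(iters):
        w_new = omega + 1j * cmath.pi**2 * Gamma / (2 * ellipk(Delta0**2 / w**2))
        w = 0.5 * (w + w_new)              # damped update for stability
    return w
```

Inserting the converged $\tilde\omega$ into the real part of Eq. \[dos\] then gives the quasiparticle density of states in the presence of disorder.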
![(color online). The onset of quadratic temperature dependence of $\rho_s(T)$ in a $d_{x^2 - y^2} + {\mathrm{i}}d_{xy}$ superconductor as a function of disorder strength, relative to that of a $d_{x^2 - y^2}$ state. $\beta_{d + {\mathrm{i}}d}$ is the $T^2$ coefficient of $\rho_s(T)$ for the $d_{x^2 - y^2} + {\mathrm{i}}d_{xy}$ superconductor. $\beta_d$ is the same quantity for the $d_{x^2 - y^2}$ state. Disorder strength is characterized by the disorder crossover temperature $T_d$ of the $d_{x^2 - y^2}$ superconductor, as defined in the text. Data are plotted for different values of the $d_{xy}$ gap, $\Delta_{d_{xy}}$, and scale well as a function of $T_d/\Delta_{d_{xy}}$. The two pairing states are difficult to distinguish on the basis of $\Delta \rho_s(T)$ once $k_B T_d \gtrsim \Delta_{d_{xy}}$. []{data-label="disordercrossover"}](fig10.eps){width="83mm"}
For the $d_{x^2 - y^2} + {\mathrm{i}}d_{xy}$ state, $\Delta(\phi) = \Delta_0(\cos 2 \phi + {\mathrm{i}}\eta \sin 2 \phi)$, and there is similarly no gap renormalization. The quasiparticle density is $$\begin{split}
N(\omega) & = \left\langle \frac{\tilde\omega}{\sqrt{\tilde\omega^2 - \Delta_0^2(\cos^2 2 \phi + \eta^2 \sin^2 2 \phi)}}\right\rangle_\phi\\
& = \frac{2}{\pi} \frac{\tilde\omega}{\sqrt{\tilde\omega^2 - \eta^2 \Delta_0^2}}K\!\left(\frac{(1 - \eta^2)\Delta_0^2}{\tilde\omega^2 - \eta^2 \Delta_0^2} \right)\;.
\end{split}$$ The density of states factor in Eq. \[dirtylambda\] becomes $$\begin{split}
& \left\langle \frac{\tilde\Delta_\mathbf{k}^2 }{(\tilde\omega^2 - \tilde\Delta_\mathbf{k}^2)^\frac{3}{2}}\right\rangle_\mathrm{FS} \\
&= \left\langle \frac{\Delta_0^2 (\cos^2 2 \phi + \eta^2 \sin^2 2 \phi)}{\left(\tilde\omega^2 - \Delta_0^2 (\cos^2 2 \phi + \eta^2 \sin^2 2 \phi)\right)^\frac{3}{2}}\right\rangle_\phi\\
& = \frac{2}{\pi} \frac{1}{\sqrt{\tilde\omega^2 - \eta^2 \Delta_0^2}} \; \times \\
& \left[K\!\left(\frac{(1 - \eta^2)\Delta_0^2}{\tilde\omega^2 - \eta^2 \Delta_0^2} \right) \!+ \!\frac{\tilde\omega^2 }{\Delta_0^2 - \tilde\omega^2}E\!\left(\frac{(1 - \eta^2)\Delta_0^2}{\tilde\omega^2 - \eta^2 \Delta_0^2} \right)\right]
\end{split}$$ In the $d_{x^2 - y^2} + {\mathrm{i}}s$ state, impurity renormalization of $\Delta_s$ must be taken into account. The renormalization equations \[renorm1\] and \[renorm2\] can be rewritten $$\begin{aligned}
1 & = \frac{\omega}{\tilde\omega} + {\mathrm{i}}\pi \Gamma \frac{N(\omega)/\tilde\omega}{c^2 + N^2(\omega) + P^2(\omega)}\label{renorms1}\\
1 & = \frac{\Delta_s}{\tilde\Delta_s} + {\mathrm{i}}\pi \Gamma \frac{P(\omega)/\tilde\Delta_s}{c^2 + N^2(\omega) + P^2(\omega)}\label{renorms2}\;,\end{aligned}$$ where $$\begin{aligned}
N(\omega) & = \left\langle \frac{\tilde\omega}{\sqrt{\tilde\omega^2 - \Delta_0^2 \cos^2 2 \phi - \tilde\Delta_s^2}}\right\rangle_\phi\\
P(\omega) & = \left\langle \frac{\tilde\Delta_s}{\sqrt{\tilde\omega^2 - \Delta_0^2 \cos^2 2 \phi - \tilde\Delta_s^2}}\right\rangle_\phi\;.\end{aligned}$$ Since $N(\omega)/\tilde\omega = P(\omega)/\tilde\Delta_s$, the quantities $\omega/\tilde\omega$ and $\Delta_s/\tilde\Delta_s$ obey identical equations and therefore $\tilde\Delta_s = \Delta_s\tilde\omega/\omega$. Eqs. \[renorms1\] and \[renorms2\] can then be combined into a single equation $$\tilde\omega = \omega + {\mathrm{i}}\pi \Gamma \frac{N(\omega)}{c^2 + N^2(\omega)(1 + \Delta_s^2/\omega^2)}\;,\label{dis1}$$ where $$\begin{split}
N(\omega) & = \left\langle \frac{\tilde\omega}{\sqrt{\tilde\omega^2(1 - \Delta_s^2/\omega^2) - \Delta_0^2 \cos^2 2 \phi}}\right\rangle_\phi\\
& = \frac{2}{\pi} \frac{1}{\sqrt{1 - \Delta_s^2/\omega^2}}K\!\left(\frac{\Delta_0^2}{(1 - \Delta_s^2/\omega^2)\tilde\omega^2} \right)\;.\label{dis2}
\end{split}$$ The corresponding term in Eq. \[dirtylambda\] is $$\begin{split}
& \left\langle \frac{\tilde\Delta_\mathbf{k}^2 }{(\tilde\omega^2 - \tilde\Delta_\mathbf{k}^2)^\frac{3}{2}}\right\rangle_\mathrm{FS} \\
&= \left\langle \frac{\Delta_0^2 \cos^2 2 \phi + \tilde\Delta_s^2}{\left(\tilde\omega^2 - \Delta_0^2 \cos^2 2 \phi - \tilde\Delta_s^2\right)^\frac{3}{2}}\right\rangle_\phi\\
& = \frac{2}{\pi} \frac{1}{\tilde \omega\sqrt{1 - \frac{\Delta_s^2}{\omega^2}}} \; \times \\ & \left[K\!\left(\!\frac{\Delta_0^2}{\big( 1\!-\! \frac{\Delta_s^2}{\omega^2}\big)\tilde\omega^2}\! \right) \!+ \!\frac{\tilde\omega^2}{\Delta_0^2 \!\!-\!\! \big( 1\!\!-\!\! \frac{\Delta_s^2}{\omega^2} \big)\tilde\omega^2}E\!\left(\!\frac{\Delta_0^2}{\big( 1\!- \!\frac{\Delta_s^2}{\omega^2} \big)\tilde\omega^2}\! \right)\right]\;.
\end{split}$$
We are now in a position to compare results for the three order parameters. The forms for the density of states $N(\omega)$ and the superfluid density $\rho_s(T)$ are shown in Table \[competingordertable\], both in the clean limit and in the presence of strong scattering disorder ($c$ = 0). The key feature of the clean $d_{x^2 - y^2} + {\mathrm{i}}d_{xy}$ and $d_{x^2 - y^2} + {\mathrm{i}}s$ states is a finite energy gap, giving rise to activated behaviour in $\rho_s(T)$. The $d_{x^2 - y^2} + {\mathrm{i}}d_{xy}$ and $d_{x^2 - y^2} + {\mathrm{i}}s$ states behave very differently in response to disorder. In the $d_{x^2 - y^2} + {\mathrm{i}}s$ case, the energy gap is robust. This can be traced back to the expressions for the renormalized frequency, Eqs. \[dis1\] and \[dis2\]. Impurity renormalization of $\Delta_s$ leads to solutions for $\tilde \omega$ that are purely real for $\omega < \Delta_s$, preventing the formation of any low-lying quasiparticle states in $N(\omega)$.[@borkowski94] The $d_{x^2 - y^2} + {\mathrm{i}}d_{xy}$ case is quite different: pair breaking occurs for even small amounts of disorder, leading immediately to a $T^2$ term in $\rho_s(T)$. The $T^2$ term starts out weak, but grows in magnitude until it is comparable to that of a pure $d_{x^2 - y^2}$ superconductor with a similar amount of disorder. This crossover is charted in Fig. \[disordercrossover\], which shows that the $d_{x^2 - y^2}$ and $d_{x^2 - y^2} + {\mathrm{i}}d_{xy}$ states become indistinguishable when the energy scale for the disorder, $k_B T_d$, becomes comparable to $\Delta_{d_{xy}}$.
[88]{} natexlab\#1[\#1]{}bibnamefont \#1[\#1]{}bibfnamefont \#1[\#1]{}citenamefont \#1[\#1]{}url \#1[`#1`]{}urlprefix\[2\][\#2]{} \[2\]\[\][[\#2](#2)]{}
, ****, ().
, ****, ().
, , , , , , , ****, ().
, , , , , , , , ****, ().
, , , , , ****, ().
, , , , , ****, ().
, ****, ().
, ****, ().
, ****, ().
, ****, ().
, ****, ().
, , , ****, ().
, ****, ().
, ****, ().
, , , , , ****, ().
, , , , , ****, ().
, , , , , , , , ****, ().
, , , , , , , ****, ().
, , , , , , ****, ().
, , , , , , , , , , , ().
, , , , ****, ().
, , , ****, ().
, , , , , ****, ().
, ****, ().
, , , , , ****, ().
, ****, ().
, , , , , , , , , , , ****, ().
, , , , , , , , ****, ().
, , , , , , ****, ().
, , , , , , , ****, ().
, , , , , , , , , , , ****, ().
, , , , , , , , , ****, ().
, , , ****, ().
, , , , , ****, ().
, , , , , , ****, ().
, ****, ().
, , , , ****, ().
, , , , , , , , , , , ****, ().
, , , , , , , , , , , ****, ().
, , , ****, ().
, , , , ****, ().
, , , , , , , , , , , ****, ().
, , , , , , , ****, ().
, , , , , ****, ().
, , , , , , , , ****, ().
, ****, ().
, ****, ().
, ****, ().
, , , , ****, ().
, , , , , , , , , , , ****, ().
, , , , , , , , , , , ****, ().
, , , , ****, ().
, , , , , , , ****, ().
, , , (), .
, , , , , , , , , ****, ().
, , , , , , , , , (), .
, ****, ().
, ****, ().
, , , ****, ().
, , , , , , , , ****, ().
, , , , , ****, ().
, ****, ().
, , , , ****, ().
, ****, ().
, ****, ().
, , , ****, ().
, ****, ().
, ****, ().
, , , ****, ().
, ****, ().
, ****, ().
, , , ****, ().
, ** (, ).
, ** (, ).
, ****, ().
, ****, ().
, ****, ().
, , , ****, ().
, , , , , , , , ****, ().
, , , , , , , , ****, ().
, , , , , , , , , , , ****, ().
, , , , , , , , , , ****, ().
, , , , , , **** ().
, , , , , (), .
, , , , , , , , , , (), .
, ****, ().
, ****, ().
, , , ****, ().
[^1]: Although the effect of disorder on $\rho_s$ in the $\Theta_{II}$ state has not been calculated, a rigorous upper bound is set by the residual density of states, which is observed to be about 15% from broadband quasiparticle spectroscopy of $T_c = 15.6$ K material. Conservatively assigning all of this to circulating current effects, we would have , implying $\Delta_\mathrm{cc} \lesssim 2.4$ K.
---
author:
- |
Chris Culnane, Benjamin I. P. Rubinstein, Vanessa Teague\
University of Melbourne\
[{vjteague, benjamin.rubinstein, christopher.culnane}@unimelb.edu.au ]{}
bibliography:
- 'references.bib'
title: Options for encoding names for data linking at the Australian Bureau of Statistics
---
Background and scope {#background-and-scope .unnumbered}
====================
Publicly, ABS has said it would use a cryptographic hash function to convert names collected in the 2016 Census of Population and Housing into an unrecognisable value in a way that is not reversible. In 2016, the ABS engaged the University of Melbourne to provide expert advice on cryptographic hash functions to meet this objective.[^1] After receiving a draft of this report, ABS conducted a further assessment of Options 2 and 3, which will be published on their website.
Summary {#summary .unnumbered}
=======
For complex unit-record level data, including Census data, auxiliary data can often be used to link individual records, even without names. This is the basis of ABS’s existing bronze linking. This means that records can probably be re-identified without the encoded name anyway. Protection against re-identification depends on good processes within ABS.
The undertaking on the encoding of names should therefore be considered in the full context of auxiliary data and ABS processes. There are several reasonable interpretations:
1. That the encoding cannot be reversed except with a secret key held by ABS. This is the property achieved by encryption (Option 1), if properly implemented;
2. That the encoding, taken alone without auxiliary data, cannot be reversed to a single value. This is the property achieved by lossy encoding (Option 2), if properly implemented;
3. That the encoding doesn’t make re-identification easier, or increase the number of records that can be re-identified, except with a secret key held by ABS. This is the property achieved by HMAC-based linkage key derivation using subsets of attributes (Option 3), if properly implemented.
Each option has advantages and disadvantages. In this report, we explain and compare the privacy and accuracy guarantees of five different possible approaches. Options 4 and 5 investigate more sophisticated options for future data linking, though they are probably not feasible this year. We also explain how some commonly-advocated techniques can be reversed, and hence should not be used.
We examine the mathematical properties of each technique in order to explain what the assumptions on procedural protections are, for example whether there are keys that must be kept secret and whether the data remains re-identifiable. The security guarantees therefore depend on ABS processes for protecting whatever data remains sensitive, such as re-identifiable linked data. Our aim is to explain clearly what must be protected, for each proposed encoding method. We understand that ABS will be implementing additional IT security measures and processes such as encryption at rest and access control, although these are not within the scope of this report.
Introduction
============
Cryptographers take great care in defining
- the abilities of an *attacker* and
- the *security guarantees* of a protocol.
A cryptographic primitive such as hashing, encryption, or digital signatures might provide certain guarantees against a particular kind of attacker, but might not be secure against a stronger attacker or in a different context. For example, digital signatures guarantee the integrity of data (assuming the key is secure) but do not provide privacy; encryption schemes that were secure 30 years ago can be broken using modern cloud computing.
Our first step is to model carefully the attacker ABS needs to defend against. A technique that defends against trusted parties doesn’t necessarily defend against a motivated external attacker. For example, writing “confidential” on the outside of an envelope is an effective way of telling well-behaved people not to read the contents—it is not an effective way of keeping the contents secure from an adversary who wants to snoop. Much of the Privacy Preserving Record Linkage literature is oriented to defending against well-behaved researchers who don’t actively try to reverse protections, like the people who don’t open “confidential” envelopes. It is very important not to confuse this level of protection with something that cannot be reversed even by a motivated attacker.
The world is changing. We see more active and sophisticated attacks against government infrastructure. Espionage is conducted by well-funded nation-state attackers against government and corporate databases. The (allegedly) Russian attacks on the US Democratic National Committee emails were widely publicised. Less well known but even more devastating was an intrusion into the US Office of Personnel Management [@finklea2015cyber], blamed on China. Exfiltrated data contained details about military and intelligence personnel, including information given for security clearances. In a separate incident, an employee of the US National Security Agency was reported to have accidentally exposed their collection of powerful hacking tools [@NSAHack].
It is also important to consider re-identification of individuals based on the data itself—birthdates and suburbs could uniquely identify many households. Although this report focuses on ways to protect names (and addresses), any solution should be carefully aligned with secure methods for protecting the rest of the data.
The risk for ABS is that data could be deliberately stolen or accidentally exposed—and would then be subject to deliberate attack. The key is to assess the security of proposals given a clearly defined and accurate attacker model.
Overview of the technical challenges
------------------------------------
There are two quite separate technical problems:
The linking problem
: Maximising the accuracy of linking, both for reducing false matches and failures to find a match. The same person might have two different-looking names, due to typos, reading errors, changes of address, [*etc.*]{} Any solution needs to be robust against these small changes—this is called “fuzzy matching” or “probabilistic matching.”
The cryptographic problem
: Clarifying the assumptions behind techniques for keeping data secure. The key is to be explicit about what the security assumptions are so that ABS can make sure they are valid in practice.
Each of these problems is challenging on its own and presents tradeoffs among different requirements. For example, solutions to the cryptographic problem involve a tradeoff among the strength of the security guarantees, the computational burden, and the complexity of the cryptographic protocol. Linking policies must strike a balance between false positives and false negatives. The combination of the cryptographic problem with the linking problem is particularly difficult. Along with the tradeoffs within each issue, the security requirements are hard to reconcile with a rich set of linking policies.
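To make the fuzzy-matching side of the linking problem concrete, the sketch below computes plain Levenshtein edit distance between name strings. This is illustrative only, not ABS’s actual matching algorithm; the example names are invented.

```python
def levenshtein(a: str, b: str) -> int:
    """Minimum number of single-character edits (insertion, deletion,
    substitution) needed to turn string a into string b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

# A fuzzy matcher accepts small distances, catching typos; note that a
# transposed pair costs 2 edits under plain Levenshtein distance.
print(levenshtein("Smith", "Smyth"))   # → 1 (one substitution)
print(levenshtein("Smith", "Simth"))   # → 2 (transposition)
```

A matching policy would then set a distance threshold, which is exactly where the false-positive/false-negative tradeoff appears.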
Some key themes and questions:
- the separation of roles, and whether those separations could be implemented with cryptography, or by the access control mechanisms to be put in place by the Statistical Business Transformation Project (SBTP),
- the protection of other data, rather than only names and addresses, perhaps using encryption,
- the possibility to link using other data (not only names).
Many published schemes combine techniques for linking with some method of securing data, but the two are conceptually separate. For example, the technique by Schnell [*et al.*]{} [@schnell2009privacy] combines a preprocessing stage of extracting n-grams from names to get nearby or likely alternatives, with a Bloom filter that finds exact matches. (This scheme is also mentioned in various surveys [@vatsalan2013taxonomy].) The overall scheme deals with fuzzy matching, but the two techniques could be analysed and re-used separately. For example, the same preprocessing stage could be used before a more secure way of making exact matches. It is the n-gram treatment, not the Bloom filter, that allows for fuzzy matching.
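The n-gram preprocessing stage can be sketched independently of any Bloom filter. The following is a minimal version using padded character bigrams and the Dice coefficient; the padding convention is an illustrative assumption, not the exact construction from the paper.

```python
def bigrams(name: str) -> set:
    """Extract the set of character 2-grams from a name, padded so the
    first and last characters also appear in a bigram."""
    padded = f"_{name.lower()}_"
    return {padded[i:i + 2] for i in range(len(padded) - 1)}

def dice(a: set, b: set) -> float:
    """Dice coefficient: 1.0 for identical sets, 0.0 for disjoint sets."""
    return 2 * len(a & b) / (len(a) + len(b))

# Similar spellings share most of their bigrams, so a threshold on the
# Dice score gives a simple fuzzy match.
print(round(dice(bigrams("Smith"), bigrams("Smyth")), 2))  # → 0.67
```

The privacy question is then entirely about how the resulting bigram sets are encoded and compared, which is where the Bloom filter (and its weaknesses) enters.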
The choice of method for securing the data does impact on the possibilities for fuzzy matching. Encrypted data can be decrypted (with the right key) so that fuzzy matching can be performed on the decrypted data. Cryptographic hashing will check for exact matches only—a small change in the input causes a very different output of the hash. Some forms of encryption (homomorphic encryption) allow for certain kinds of edits to encrypted data. These schemes are promising for the future, but probably too computationally expensive for use this year.
Public attention before the census focused strongly on the retention of names. However, there are many other important aspects to a thorough protection of privacy, since a person’s date of birth, place of residence and other data could probably be used to identify them in many cases, even without the name. This is beneficial for linking, but it also represents a risk to privacy in the event that the dataset is leaked or attacked.
ABS has an entire governance structure, suite of legislation, policies and practices for managing risks associated with the confidentiality of data releases to external users. All ABS staff sign various legal undertakings upon joining the ABS and at regular intervals. The Acts under which ABS operates require them to protect the confidentiality of data when released, and the legal undertakings signed by ABS staff give an assurance that ABS staff will abide by the ABS Acts as well as other relevant Commonwealth Acts. However, this depends on both good intentions and sound engineering. Not everyone who wants to keep data secure understands the complex interaction of assumptions and protocols needed for security.
For example, the recent open publication of re-identifiable MBS-PBS records [@culnane2017health] could be attributed to a mismatch between the *assumptions* behind the mathematical protections and the *access protections*, which were non-existent. The mathematical techniques used for that dataset might have been sufficient for a secure research environment, but were not sufficient for open publication.
The purpose of this document is to clarify the assumptions of the cryptographic protocols for protecting data. Then ABS can ensure that the security guarantees of processes at ABS match those assumptions. For example, if a particular method relies on keeping a decryption key secret from the adversary, then ABS must have processes in place for protecting that key. If the data itself is re-identifiable (and detailed unit-record level data generally is) then the data itself must be protected. If security relies on the attacker never having access to the Librarian or Linker, then those computers must be very carefully isolated.
The following suggestions consider the security of the whole process, with an effort to remain consistent with ABS’s existing linking structure.
Some initial suggestions on general security:
Encrypt the analysis items
: with a key not known to anyone in the linking process.
Shuffle the output order of the lists.
: Otherwise names and data might be recovered simply using order.
We concentrate on privacy, not integrity—an attacker trying to modify the data will generally not be detected or prevented.
None of the techniques in this report are secure against a compromised Linker. We assume that the dataset to be linked arrives in plaintext, so the Linker has the information necessary to link by definition. In the future, it would be better to transmit incoming datasets in encrypted form. Then it might be possible to link without the Linker observing any plaintext records, so even a compromised Linker could not reverse names. This is the main advantage of Options 4 and 5 (Sections \[individual\_ids\] and \[hom\]), which are promising directions for the future though they are probably too complex to implement this year.
The options
-----------
We have five options, each with some variations. Since the data could be re-identifiable with some auxiliary information anyway, even without the name, we concentrate on clarifying what extra information or access is required to perform (authorised or unauthorised) linking or reversing. The choice of a good solution can then focus on which assumptions are valid in the ABS environment and which controls can be put in place.
The options
1. Encryption. Encrypt the names with the Linker’s key; keep the key carefully secured. Section \[sec:encrypt\].
2. Lossy encoding for names. Section \[sec:lossy\].
3. HMAC-based linkage key derivation using subsets of attributes, like UK ONS. Keep the key carefully secured. Section \[hmac\_linking\_keys\].
4. Assign each person a unique ID before linking. Section \[individual\_ids\].
5. Homomorphic encryption / secure computation. Section \[hom\].
In Section \[sec:BloomFilters\] we explain why Bloom Filters are not a privacy-preserving data structure, and conduct an empirical investigation of the linking quality of some of the constructions in the literature, including the combination of n-grams and Bloom filters. A broader literature review is included in Appendix \[sec:litreview\].
Before describing the options, we describe background cryptography in Section \[sec:scene\], then ABS’s security and functionality requirements in Section \[sec:requirements\]. We first explain why names that can be individually linked can also be reversed.
Why names that can be individually linked from plaintext can also be reversed {#sec:presinfo}
-----------------------------------------------------------------------------
### Linking by guessing all possible names {#DictionaryAttack}
It is not possible to give each name a unique encoding that allows one-to-one matching with plaintext names, but is not reversible.
To see why, suppose the ABS holds a database that includes a unique encoding for each name. There must be some process for matching those encodings with the names in a new, incoming database. This process must include, somehow, comparing a plaintext name with an encoded one to see whether they match. But that process clearly implies the capacity to link any individual name to its encoding—an attacker could run through the ABS database checking each encoded name against “Rubinstein” or “Teague” until there was a match. Alternatively, for a given encoded name, the attacker could run through a list of all possible names until there was a match. This allows the attacker to find the name that matches any chosen encoding, regardless of whether that name actually appears in an incoming database.
This is not a question of hashing vs encryption, but a fundamental limit of the information that is retained. Whenever there is a capacity to do individual linking by name, that capacity also permits the encoding to be reversed. This is true for any way of preserving the name information, including hashing, encryption, HMAC, or simply replacing the names with a random ID and using a lookup table.
Several options exist for putting procedural and mathematical controls in the way of unauthorised access, while still retaining the ability to link. For example, a cryptographic decryption key could be required for linking—this key could be carefully protected or shared among multiple trustworthy people within ABS, using secret sharing so that multiple people had to work together to decrypt the data. However, the fundamental limit remains: if the key allows linking, it can also be used to recover names. So “not reversible” is an impossibly high bar to set while still being able to perform exact matching to a unique encoded name.
One way around this is to use lossy encoding, meaning that several different names are mapped to the same encoding so information is truly lost. The ABS already has a technique for doing this, in which names are effectively assigned to bins, creating a level of indistinguishability between names within the same bin. In such systems there is a reduced amount of information preserved, though some still remains. The amount preserved is proportional to the size of the bins and the frequency of the names within those bins. Such approaches can reduce total information held, but at the cost of accuracy of record linking. In particular it would prevent exact matching of names. It also has some other important security limitations, because it reduces the attacker’s information only as much as it reduces the information available for authorised linking—see Section \[sec:lossy\].
Background on Cryptography and possible attacks {#sec:scene}
===============================================
This section presents very brief informal definitions of cryptographic primitives. More formal definitions can be found in [@daboShoupCryptoBook]. We then explain some known attacks applicable in this setting and describe which methods defend against them.
A *cryptographic hash function* $H$ takes a message $m$ and outputs a hash $h$. It should be infeasible to recover $m$ given $h$ *if $m$ was randomly chosen from the full input space of $H$.*
A *message authentication code (MAC)* is similar to a hash function, but requires a secret key $k$ to compute the hash. It should be infeasible to compute the correct hash without knowledge of $k$.
An *HMAC* [@krawczyk1997hmac] is a particular kind of MAC. Under certain assumptions, an HMAC’s output cannot be distinguished from random without knowing the key [@bellare2006new].
A *secret-key encryption function* takes a key $k$ and message $m$ (called the “plaintext”) and produces a ciphertext $c$. The message $m$ can be recovered from $c$ using $k$. (This is called “decrypting”.) Decrypting should be infeasible without $k$.
A *public-key encryption function* uses two different keys: a *public key* for encrypting the message $m$ and a *private key* for decrypting the ciphertext $c$. The public key is made public, but decryption should be infeasible without the private key.
Both secret-key and public-key encryption schemes generally include some randomness when encrypting, so that different encryptions of the same message are not the same.
*Secret sharing* [@shamir1979share] allows a secret such as a key to be shared among several participants so that it can be recovered only if some threshold meet and exchange their shares. Fewer than a threshold of participants can derive no information about the secret.
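As an illustration of the idea (an n-of-n XOR sharing, not Shamir’s threshold scheme itself), the sketch below splits a key into shares, all of which are needed to reconstruct it; any subset of fewer than n shares is statistically independent of the secret.

```python
import secrets
from functools import reduce

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def share(secret: bytes, n: int) -> list:
    """Split `secret` into n shares, all of which are needed to recover
    it. (Shamir's scheme generalises this to k-of-n thresholds.)"""
    shares = [secrets.token_bytes(len(secret)) for _ in range(n - 1)]
    shares.append(reduce(xor, shares, secret))  # last share fixes the XOR
    return shares

def reconstruct(shares: list) -> bytes:
    return reduce(xor, shares)

key = secrets.token_bytes(16)      # e.g. a decryption key
parts = share(key, 3)              # held by three separate officers
assert reconstruct(parts) == key   # only all three together recover it
```

This is the mechanism behind suggestions later in the report that a decryption key be “secret-shared among several people at ABS.”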
How cryptographic hashes of names can be reversed {#sec:hashing}
-------------------------------------------------
There is a persistent misunderstanding in the PPRL literature that cryptographic hash functions are impossible to reverse. This is incorrect. Irreversibility can be true only if the input is *randomly and uniformly* chosen from a sufficiently large set that it is infeasible to try them all to see which one matches the given output. Names are clearly not chosen in this way. The next sections explain some known ways of reversing hashes when the input set is predictable, as names are.
### Dictionary attacks
Name reversal can be applied to an entire database given a list of all (or many) probable names, derived for example from the Whitepages or the 2021 Census, or simply from the attacker’s memory of known names.
Simply trying all possible inputs, as described in Section \[DictionaryAttack\] is known as a *dictionary attack*. Modern security would require at least $2^{128}$ possible input values to be considered secure against a brute force (dictionary) attack. There are fewer than 400,000 last names currently in use in Australia, which is small enough to guess all possible values. Calculating cryptographic hash values for all of them would take mere seconds. We ran a demonstration of this in our seminar at the ABS, based on simply running through a directory of all Australian names[^2] to see which one matched a SHA-256 hash of a volunteer’s name. It took a few seconds to recover. Cryptographic hashing alone provides a near-zero level of security in this context.
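The demonstration amounts to a few lines of code. The sketch below reverses a SHA-256 hash of a surname by brute force over a candidate list; the toy directory stands in for a real directory of all Australian names.

```python
import hashlib

# Toy "directory" of candidate surnames; a real attack would iterate
# over all ~400,000 surnames in use in Australia.
directory = ["Nguyen", "Smith", "Jones", "Teague", "Rubinstein"]

# The leaked dataset stores only the hash of each name.
leaked = hashlib.sha256(b"Teague").hexdigest()

# Reversal is just a lookup: hash every candidate and compare.
recovered = next(name for name in directory
                 if hashlib.sha256(name.encode()).hexdigest() == leaked)
print(recovered)  # → Teague
```

At a few million hashes per second on commodity hardware, the full name space is exhausted almost instantly, which is why plain hashing offers near-zero protection here.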
### Why plain HMACs do not solve the problem
More recent proposals have replaced the plain cryptographic hash with an HMAC (Hash-based Message Authentication Code). A number of papers incorrectly refer to this as a hash, when it is not. It uses a hash but has different properties and security guarantees. The most significant difference is that it requires a key $k$. The key must be generated in accordance with cryptographic procedures with good entropy.
The key is critical to the security of the output and must be kept absolutely secret. Were someone to gain access to the key, the HMAC tags (output values) would become as easy to reverse as a plain cryptographic hash. A number of papers reject encryption as inappropriate because it permits decryption with knowledge of the key. Yet the same is true if using an HMAC over a small input set (such as names), for the reasons described above.
The misunderstanding of HMACs gives the mistaken impression that they comply with the requirement to be irreversible, when actually they do not. In most cases, the input sets are small and the security of HMACs reduces to being equivalent to encryption. There is certainly no guarantee of irreversibility—just as with encryption the security depends on the key and access to the key.
### Determinism and frequency attacks {#sec:freq}
Another vulnerability of HMACs is that, for a given key, all HMACs of the same message are equal. (Note that this is not generally true of encryption, though it is true of keyless hashing too.) Such approaches allow efficient exact matching via comparison of outputs. However, whenever a deterministic approach is used, the frequency distribution of the input is replicated in the output. This presents a particular problem where the input distribution is not uniform, as is the case with names. For example, the name “Smith” is overwhelmingly the most popular last name in Australia. By looking at the output HMAC tags it would be trivially easy to identify which one represented “Smith” *even without knowing the secret key*. In some cases, as in [@ONSM9], where similarity information is provided, being able to reverse a single encoding can lead to reversing of many more [@culnane2017vulnerabilities].
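The frequency attack needs no key at all. The sketch below HMACs a simulated dataset in which “Smith” dominates, then identifies Smith’s tag purely by counting; the key and the name frequencies are invented for illustration.

```python
import hmac, hashlib, random
from collections import Counter

KEY = b"secret key unknown to the attacker"  # hypothetical key

def tag(name: str) -> str:
    """Deterministic: the same key and name always give the same tag."""
    return hmac.new(KEY, name.encode(), hashlib.sha256).hexdigest()

# Simulated dataset in which "Smith" is the most common surname.
names = ["Smith"] * 500 + ["Nguyen"] * 300 + ["Jones"] * 200
random.shuffle(names)
tags = [tag(n) for n in names]

# The attacker sees only the tags, never KEY. Because tagging is
# deterministic, the input frequency distribution survives intact:
# the most frequent tag must encode the most frequent name.
most_common_tag, count = Counter(tags).most_common(1)[0]
print(count)  # → 500; the attacker infers this tag means "Smith"
```

Randomised encryption defeats this particular attack because two encryptions of “Smith” look unrelated, at the cost of making exact matching require the key.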
Even where schemes apply a further level of abstraction, as is the case with Bloom filters, it has been shown that frequency analysis can still be performed and used to recover plaintexts [@kuzu2011constraint].
So plain HMACs can be reversed if the attacker either
- knows the key and can guess the input value (for example by iterating over all possible input values), or
- doesn’t know the key but can identify an input by the matching frequency of its output.
ABS requirements {#sec:requirements}
================
Much of the Privacy-preserving record linkage literature is concerned with “... how two or more organisations can determine if their databases contain records that refer to the same real-world entities without revealing any information besides the matched records to each other or to any other organisation.” [@christen2012data] ABS’s setting is slightly different, because ABS is a single party aiming to link disparate datasets within the organisation, under the assumption that data arrives at the ABS in plaintext, but is then encoded to limit the recovery of the underlying name from the encoded name. Thus the ABS requirements are different from the usual requirements in the literature.
The following is a summary of the key requirements as captured on 7th December 2016 during discussions between ABS and the University of Melbourne. Functional requirements capture what the protocol ought to allow properly-authenticated ABS employees to do; security requirements capture what the protocol ought to prevent an attacker from doing.
Functional requirements
-----------------------
Link First and Last Name
: The approach should provide a way of linking the first and last name fields. The first and last name should be treated separately. Address will be handled via geo-coding.
Fuzzy Matching
: Ideally the approach would provide for inexact matching to handle typical data capture errors such as transposed characters and differences in spelling. Note: this is a desirable but not necessary requirement, since names could be canonicalised before matching.
Exact Matching
: The matching should aim for an exact match. This is on a data level, as opposed to a record level, i.e., Bob Smith matches Bob Smith, but there may be multiple records with the name Bob Smith. Note: this again is a desirable but not necessary requirement, since a one-to-many matching of names could still allow one-to-one matching of records given other information.
Integrate into Data Integration Protocol
: Ideally the approach will fit with the Data Integration Protocol. Whilst the protocol is not absolutely rigid, and could be modified, any modification would require an equivalent business case. Cryptography could be used to enhance security of the data integration protocol by enforcing existing rules that restrict the data visible to different participants.
Security requirements
---------------------
A key part of this project is translating into mathematical terms the requirements for security and privacy of the linking process.
Deletion of Names and Addresses
: The ABS has committed to the deletion of the names and addresses. The Senate submission includes the statement that “ABS confirms names and addresses will be destroyed when there is no longer any community benefit to their retention or four years after collection, whichever is soonest”
Cryptographic Hashing
: The ABS has made a public commitment to using a Cryptographic Hashing function. The statement reads “ABS will use a cryptographic hash function to anonymise name information prior to use in data linkage projects. This function converts a name into an unrecognizable value in a way that is not reversible...”.
Taken together and in an absolute sense, these requirements are impossible to deliver. The requirement to delete names and addresses, if taken in an absolute sense, would include deleting derivatives of the names, which would prevent linking. It would be clearer to distinguish between plaintext and encoded names, and only assert the deletion of the plaintext names. The assertion that a cryptographic hash cannot be reversed is mathematically incorrect in this setting, as explained in Section \[sec:hashing\].
For complex unit-record level data, including Census data, auxiliary data can often be used to link individual records, even without names. This is the basis of ABS’s existing bronze linking. This means that records can probably be re-identified without the encoded name anyway. Protection against re-identification depends on good processes within ABS.
The undertaking on the encoding of names should therefore be considered in the full context of auxiliary data and ABS processes. There are several reasonable interpretations:
1. That the encoding cannot be reversed except with a secret key held by ABS. This is the property achieved by encryption (Option 1), if properly implemented;
2. That the encoding, taken alone without auxiliary data, cannot be reversed to a unique name. This is the property achieved by lossy encoding (Option 2), if properly implemented;
3. That the encoding doesn’t make re-identification easier, or increase the number of records that can be re-identified, except with a secret key held by ABS. This is the property achieved by HMAC-based linkage key derivation using subsets of attributes (Option 3), if properly implemented.
Using encryption, for example, would mean that “not reversible” must be reinterpreted to mean “not reversible except given certain secret keys.” These keys would need to be stored securely or secret-shared among several entities within ABS—much like is already done in the ABS Data Integration Protocol. The security of such an approach is based on the assumption of trust and compliance with a process or protocol for key management. The distribution of trust could possibly be designed to align with existing protocols—this is the topic of the next sections of this report.
The following sections describe the tradeoffs among the various options for linking. For each option, we discuss how it addresses both the security requirements and the functionality and efficiency requirements. Some of the options could be combined. For example, if encoded names needed to be stored for longer periods, they could be generated using HMACs on subsets of attributes (Section \[hmac\_linking\_keys\]) but then encrypted with the public key of the Linker for storage (Option \[sec:encrypt\]).
Option 1: Encrypting names using public-key encryption {#sec:encrypt}
======================================================
The simplest secure approach is to encrypt names with the public key of the Linker.[^3] Other linkage items such as year of birth and location could also be encrypted, which would improve the security of the whole system. This would scarcely alter the ABS’s existing linkage process at all, except that the Linkage File produced by the Librarian would be encrypted. Indeed, the whole process could be considerably simplified: rather than a separate manager for names, there could be an initial step in which the names and any data used for linking were encrypted. The data could then be stored in encrypted form and simply passed to the Librarian and on to the Linker whenever linking was performed.
Variables that are both linking variables and analysis variables (such as year of birth) could either be sent separately to the Assembler (as they are now), or decrypted by the Linker and sent to the Assembler with the Linkage Output File.
Information required to make the anonymised name/linkage file:
: the public key of the Linker; the names and (possibly) other linking variables.
Information required for linking:
: the private key of the Linker.
Information required for reversing:
: the private key of the Linker.
Ways of inhibiting unauthorised reversing or linking:
: keep the Linker’s private key secret. For example, it could be secret-shared among several people at ABS or even some people outside, so that many had to participate to decrypt. Depending on how the SBTP access control mechanisms are implemented, it may be possible to simply re-use their key management infrastructure.
Fuzzy matching:
: Yes, by the Linker, after decrypting the data. Any fuzzy matching algorithm could be applied.
Linking accuracy:
: As good as linking on unencrypted values, because the Linker can see all unencrypted values.
Implementation difficulty:
: The protocol for encrypting, decrypting, and managing keys would need to be implemented with care by professionals, but would use standard techniques and might be able to re-use methods from the SIAM/SBTP technologies.
Computational Efficiency:
: This needs to be tested, but would probably be very efficient in practice because the cryptography is simple encryption/decryption, and the linking is performed on unencrypted values.
Other advantages:
: In future, other agencies sending their data to ABS could also encrypt their names and data with the Linker’s public key. This would mean that nobody within ABS would see the incoming names, except with access to the Linker’s private key. It would also protect the names and data from compromise during transmission between agencies. Although the private key must be kept secret, it would not have to be distributed very widely ever. In particular, it is not needed for the *production* of the encrypted files—that is the great advantage of public key cryptography.
Other disadvantages:
: Strong requirements for keeping the private (decryption) key secret.
Option 2: Lossy encoding for names {#sec:lossy}
==================================
Lossy encoding creates a many-to-one mapping between inputs (names) and outputs (encoded names). The idea is to encode first and last names, separately, using a bucket that includes a whole set of possible names. A simple example is retaining initials rather than whole names. This is impossible to reverse without auxiliary data because many names have the same initial[^4]. It nevertheless provides some useful information for linking, because a particular set of initials won’t match most names. All methods for lossy encoding have this same logical structure: information is truly lost, but some information is retained which can be used to eliminate some incorrect matches. Of course, the information that is retained also helps an attacker do unauthorised linking/re-identification if the dataset is leaked.
This option leverages the redundancy of identifiable information—most people are unique based on a subset of the attributes usually associated with Census data, so deleting some information from the name may not severely reduce the accuracy of linkage. The Linker would also need to consider age, gender and country of origin or last residence for accurate linking. Unfortunately, this feature is also a challenge for privacy—attributes that can be used by the Linker for more accurate linking could also be used by an attacker for unauthorised re-identification. This approach therefore relies on proper processes within ABS for keeping the dataset secret.
Different lossy encodings might retain very different amounts of information. For example, the first three letters of a name provide much more information than the initial alone. The amount of information lost affects (legitimate) linking quality as well as the likely success of an attacker.
One approach to lossy encoding is to define a function that maps input names to output buckets, for example using the ASCII character codes. Ideally, the function should have a near uniform output. One way to achieve this would be to use an HMAC and truncate it to an appropriate length for the number of buckets. For example, if you wanted 256 buckets you could truncate to the first 8 bits, and treat it as an integer.
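A minimal sketch of this functional approach, using Python’s standard `hmac` and `hashlib` modules (the key and the normalisation rule here are purely illustrative):

```python
import hmac
import hashlib

def bucket(name: str, key: bytes, n_buckets: int = 256) -> int:
    """Map a name to one of n_buckets by truncating a keyed HMAC digest."""
    # Normalise the input so trivially different spellings land in the same bucket.
    normalised = name.strip().lower().encode()
    digest = hmac.new(key, normalised, hashlib.sha256).digest()
    # Treat the leading bytes as an integer and reduce modulo the bucket count
    # (for exactly 256 buckets one could equally keep just the first byte).
    return int.from_bytes(digest[:4], "big") % n_buckets

key = b"example-secret-key"   # hypothetical key; real key management omitted
assert bucket("Smith", key) == bucket("  smith ", key)   # deterministic after normalisation
assert 0 <= bucket("Jones", key) < 256
```

Because the HMAC output is near-uniform, the buckets are roughly balanced in terms of distinct names, though not, as discussed below, in terms of record frequency.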
The downside of the simple approach above is that it will, to some degree, replicate the frequency distribution of the input in the output, particularly for high-frequency values. For example, the bucket containing “Smith” will be detectable by looking for the most popular bucket (though other names may map to that bucket too). We simulated this approach using 10, 50, 100 and 500 buckets; Smith was in the largest bucket every time.
One approach to mitigating this is to apply a more structured mapping that attempts to smooth the frequency distribution of the output. Such a mapping could combine less popular names together to create output buckets that are broadly of equal size. In such an approach the Name Manager produces, and the Librarian and Linker use, a table that lists the correspondence between name and bucket. Without that list, it should be hard to know what bucket contains what names.
One layer of privacy protection is keeping the name-bucket correspondence secret; the other is that many names share the same code, so extra information (such as age or location) is required to link an individual record.
However, this protection is quite easily reversed given some information about some people. Once one person has been re-identified based on other attributes such as their location and date of birth, the adversary can infer that everyone else with the same name is in the same bucket. It would only take a little auxiliary data, with a few successful re-identifications, to learn at least part of the name-bucket correspondence.
Information required to make the anonymised name/linkage file:
: the names.
Information required for linking:
: The table or function associating each name with its bucket, and also some other linkage variables such as age or location.
Information required for reversing:
: The name-bucket table, plus also some auxiliary information about the other linkage variables.
Ways of inhibiting unauthorised reversing or linking:
: Keeping the name-bucket table secret; encrypting other linkage variables. Unfortunately, unauthorised reversing or linking would be straightforward unless the other attributes were all encrypted.
Fuzzy matching:
: Possibly, depending on how the buckets were assigned. Fuzzy matching would come automatically if similar names could be assigned to the same bucket. However, it would be very difficult for names that were similar but assigned to different buckets.
Linking accuracy:
: Less than for full name matching. This could see an increase in false positives, particularly as a result of any overt frequency smoothing that has been applied. Accuracy would depend on how many other linkage variables were available.
Implementation difficulty:
: Depends on the method of generating the name-bucket table. This table would need to be stored in a secure way. The functional approach is simpler but remains susceptible to frequency attacks.
Computational Efficiency:
: Generating the table is probably quite efficient, depending on the method of generating the name-bucket table. However, the efficiency of the linking process could suffer because of the increased rate of false-positive matches.
Other advantages:
: Compliance with a literal interpretation of a name encoding that “cannot be reversed” to a unique name, if properly implemented.
Other disadvantages:
: Reduced accuracy of linking.
An analysis of name frequencies and the implications of incorporating other variables {#sec:freqsmooth}
-------------------------------------------------------------------------------------
As discussed above, most lossy encoding techniques remain susceptible to frequency attacks unless they are deliberately designed to produce buckets of equal size. In this section we look at how that could be achieved, and discuss some of the issues associated with it.
### Input distribution equals output distribution
If the mapping is deterministic, in that the same input is mapped to the same output each time, the input frequency distribution is largely replicated in the output. This is exactly what we want to avoid, since it leaks information and risks breaking down the many-to-one relationship, at least probabilistically. For example, “Smith” is overwhelmingly the most popular last name, and whatever bin it is assigned to will be proportionally more popular than any other. As such, any row assigned to that bin has a high probability of being “Smith”. Attempting to create a uniform output distribution will likely lead to a significant loss in accuracy. The frequency of “Smith” cannot be subdivided into multiple bins without increasing the false negative rate, so the only way to smooth the output distribution is to combine output bins to create the same large frequency. Looking at Figure \[fig:freq\_last\_name\], which shows the frequency distribution of last names in Australia, it is clear that the dominance of “Smith” is an issue. Achieving a uniform output, or something close to it, requires combining many low-frequency names into a single bin. Even combining high-frequency names together, for example “Jones” and “Williams” (the next two most popular names), will still result in a bin containing nearly 64,000 unique names. This gives high accuracy for popular names, but very poor accuracy for everything else. The skew in the input distribution is simply too significant to smooth it meaningfully without a significant loss of accuracy.
Summary
-------
The main advantage of lossy encoding is a literal adherence to the promise that the encoding of names “cannot be reversed.” Lossy encoding itself doesn’t significantly mitigate the risk of unauthorised linking—the same auxiliary information used for ABS linking could be used by an unauthorised attacker. It is therefore still very important to encrypt or otherwise protect the other variables.
![Frequency of last name[]{data-label="fig:freq_last_name"}](name_freq-eps-converted-to.pdf){width="\textwidth"}
Option 3: HMAC-based anonymised linkage identifiers using subsets of attributes {#hmac_linking_keys}
===============================================================================
In this section we describe a method for combining multiple attributes into a single anonymised linking identifier. This has many advantages for both computational efficiency and privacy. There is some degradation in linking quality compared to Option 1, but this may not be significant depending on how the linking identifiers are chosen. It could also be combined with a lossy encoding of names if required. The main advantage of this approach over plain lossy encoding is that the linking could be performed on the anonymised linking identifiers by a Linker that didn’t need to know the decryption key (though this would require some modifications to the current process).
Smoothing the input distribution by including multiple attributes
-----------------------------------------------------------------
We want a distribution in which all inputs are unique. We can then assign those values to different outputs to maintain both privacy and accuracy. The only way to achieve this is to include many attributes in the input value. Combining first and last name will have some effect, although the result will still be skewed. For example, there will be more “Steve Smith”s than “Shanika Karunasekera”s. Also, birthdate correlates with first name, because first names follow fashions that change over time. The best approach is to combine many more fields into the input and then create multiple anonymised linking identifiers, similar to the approach used by the UK Office for National Statistics (ONS). In [@ONSM9] they create 11 anonymised linkage identifiers from various parts of first, middle and last name, date of birth, postcode, and gender. They report a uniqueness value of at least 98% for all the linkage identifiers. The ONS do not perform a lossy encoding on those attributes, instead matching on them directly, but they could be lossy encoded.
The cryptographic construction
------------------------------
An HMAC is a function that takes a message and a secret key and returns a digest (often called a hash). We have explained elsewhere that, even without knowing the key, the HMAC of a list of names can be reversed because of frequency attacks.
Using unique inputs is a good way of mitigating frequency attacks on name-based HMAC. If you incorporate enough extra data, every record should be unique. Without the key, it could not be reversed (because an HMAC behaves like a random function in this case [@bellare2006new]). With the key, it could be reversed only by knowing (or guessing) all the attributes in one hash/encryption.
The idea is similar to a technique in use by the UK ONS [@ONSM9]. It requires the secret key to be securely generated and carefully protected. The idea is, for each record, to produce several encodings using different combinations of variables (combined using the secret key) and store them all. For example, one might use first name, DoB and address; another might use surname, DoB and country of last residence.
For example, if $k$ is the secret key then the Linkage File for a particular Link ID could be computed as $$\begin{array}{rl}
\textit{Digest}_1 & = {\textit{HMAC}}_k(\textit{first-name}, \textit{address}, \textit{birth-year}) \\
\textit{Digest}_2 & = {\textit{HMAC}}_k(\textit{first-name}, \textit{last-name}, \textit{address}) \\
\textit{Digest}_3 & = {\textit{HMAC}}_k(\textit{country-of-last-residence}, \textit{address}, \textit{birth-year}) \\
\textit{Digest}_4 & = {\textit{HMAC}}_k(\textit{first-name}, \textit{last-name}, \textit{birth-year}) \\
\end{array}$$ or whatever other combinations of variables seemed useful.
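A minimal sketch of computing these digests with Python’s standard `hmac` module. The field names, delimiter and normalisation rule here are hypothetical; a real implementation would need to follow an agreed specification so that all parties compute identical digests:

```python
import hmac
import hashlib

def digest(key: bytes, *fields) -> str:
    """HMAC over a delimited concatenation of normalised attribute values."""
    message = "|".join(str(f).strip().lower() for f in fields).encode()
    return hmac.new(key, message, hashlib.sha256).hexdigest()

def linkage_identifiers(key: bytes, record: dict) -> list:
    # The four example combinations from the text above.
    return [
        digest(key, record["first_name"], record["address"], record["birth_year"]),
        digest(key, record["first_name"], record["last_name"], record["address"]),
        digest(key, record["country_of_last_residence"], record["address"], record["birth_year"]),
        digest(key, record["first_name"], record["last_name"], record["birth_year"]),
    ]
```

Note the delimiter between fields: without it, the pairs ("an", "na") and ("a", "nna") would collide into the same message.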
When a new database is linked, the same computation is repeated on the incoming variables. If the person has changed address, for example, the digests that use address will not match, but other digests should. If their surname has been mistyped, then the digests that don’t use surname or only use the first two letters might match. We assume the Linker has access to the plaintext of the dataset to be linked. So information about how common each name is, or how likely it is that a certain name has been mistyped, [*etc.*]{} could be derived from the non-Census data. Then it makes a linking decision based on how many collections of attributes seem to match and which ones it expects to have changed.
This technique could be combined with preprocessing of names for fuzzy matching, for example the n-gram approach of [@schnell2009privacy], or with known common transcription errors such as the reversal of names. The Linker could try likely misspellings of names at linking time if the given one didn’t match. For example, given an input name “Smithe”, the linking could be attempted using “Smithe” and, if it failed, re-attempted with “Smith”, then “Smythe”, [*etc.*]{} The same technique could be applied to other variables that may not quite match. For example, when comparing 2021 ages with 2016 ages, you could subtract 4, 5 and 6 years from the ages before recomputing the hash/encryption. Dealing with typographical and transcription errors in names is harder, because you need to guess what they were. Name standardisation should help, but may never produce results as good as having both names in the clear.
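The retry logic described above can be sketched as follows. The respelling table and the index structure are hypothetical stand-ins (a real respelling table would come from name-standardisation data), and the digest construction is the same delimited-HMAC style used throughout:

```python
import hmac
import hashlib

def h(key: bytes, *fields) -> str:
    """Keyed digest over a delimited concatenation of normalised fields."""
    msg = "|".join(str(f).strip().lower() for f in fields).encode()
    return hmac.new(key, msg, hashlib.sha256).hexdigest()

# Hypothetical respelling table, keyed by lowercase surname.
RESPELLINGS = {"smithe": ["smith", "smythe"]}

def find_link(index: dict, key: bytes, surname: str, age: int):
    """Try the given surname/age, then likely variants, until a digest matches.

    `index` maps digest -> record id (i.e. the Linkage File). The age offsets
    model comparing 2021 ages against records captured in 2016.
    """
    surnames = [surname.lower()] + RESPELLINGS.get(surname.lower(), [])
    ages = [age - 4, age - 5, age - 6]          # candidate 2016 ages
    for s in surnames:
        for a in ages:
            hit = index.get(h(key, s, a))
            if hit is not None:
                return hit
    return None
```

For example, an incoming record (“Smithe”, age 35) would match a stored digest built from (“smith”, 30) after two respelling/offset retries.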
Ideally this would be handled by careful selection and construction of the encodings. The attributes should be selected to handle the typical distortions seen in the datasets. Where the above would be potentially useful is where there are compounded distortions, [*e.g.*]{} someone has moved, changed their name and mistyped their first name.
Obviously the anonymised linkage identifiers are not conditionally independent—if one attribute changes, then several linkage identifiers might change. This complicates the analysis for matching—it means that when a record matches some, but not all, linkage identifiers, a careful inference must be made about which attribute(s) might have changed. In deterministic linking, records with multiple potential links will simply not be linked. In probabilistic linking, quality measures usually depend on the strength of each linking variable; this needs to be adapted to give a quality measure to each collection of variables. Also, mismatches among independent collections should be regarded as much more important than mismatches among related collections. For example, if every identifier involving address fails to match, but all the other ones do match, then the person has probably moved; if a similar number of mismatches occur, but they do not all have a common variable, then at least two variables must have changed and it is less likely to be the same person.
Information required to make the encoded name (and other data) file:
: The HMAC secret key, the names and other linkage variables. (Note that this technique only works by combining names with at least some other variables.)
Information required for linking:
: The HMAC secret key plus some name and linkage variables from the incoming dataset.
Information required for reversing:
: Either frequency information on collections of variables (we would aim for this to be nearly uniform) or the HMAC secret key combined with a successful guess of at least one collection of variables.
Ways of inhibiting unauthorised reversing or linking:
: Keeping the key secret; ensuring that the encodings incorporate several variables.
Fuzzy matching:
: Yes, the linking identifiers already provide some fuzzy matching. The ONS report a very high rate of matching on the linking identifiers alone. Furthermore, if a perfect match was not possible, the incoming names could be slightly perturbed and retried. This would not be quite as accurate as plaintext name matching, but could be quite good in practice.
Linking accuracy:
: It depends on which attributes are included in the HMAC digests. This would need some empirical investigation, the greater the degree of uniqueness the better the accuracy. A key which does not provide sufficient uniqueness not only risks privacy through frequency attacks, but also impacts on accuracy by causing false positives.
Implementation difficulty:
: Similar to encryption. It would need careful generation and management of the HMAC key and professional implementation of the cryptography. Possibly it could use existing libraries from the SIAM/SBTP project—this would need to be checked.
Computational Efficiency:
: Currently the most efficient approach we have (more efficient than plaintext similarity matching). This is largely due to it being deterministic matching, allowing extremely efficient matching on very large sets without requiring cross comparisons. It is efficient enough that it may not require any blocking, allowing whole-population linking; we discuss this further in a later section. We require further analysis to establish the accuracy of the approach on very large sets.
Other advantages:
: The main advantage of this over simply applying HMAC to each name separately is the mitigation of frequency attacks, if properly implemented. It also means that, even if the HMAC secret key was compromised, an attacker would need to guess all of the attributes for one of the digests in order to recover the name.
Other disadvantages:
: This structure would not be directly useful for computing the statistical data necessary for assessing the accuracy of probabilistic linking. Some of those values could be computed independently, but the ones involving names could not.
Defending against frequency attacks
-----------------------------------
Even for an attacker without the key, HMACs are subject to frequency attacks. The technique described in this section is secure only if a large enough collection of attributes is chosen to make the linkage identifiers entirely unique. Section \[sec:freq\] described frequency attacks as applying to very frequent names, but the same problem occurs in sets of almost-unique identifiers with one or two repeated values. Suppose for example that two people with the same first and last name live at the same address, which happens occasionally in cultures where children are named precisely after their parents. Then a digest incorporating first name, last name and address would be almost entirely unique except for those households. This would allow those individuals to be isolated among a small set of possible records.
At the time of building the digests, it is critically important to check for duplicates. If there are any, then records should be removed until all the digests are unique—this obviously lowers linking accuracy, but is critical for preventing frequency attacks. If there are too many duplicates to remove, then that collection of variables should not be used for a linking identifier.
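The duplicate check described above can be sketched in a few lines. This is an illustrative sketch only: `make_digest` stands in for whichever HMAC construction is chosen, and in practice one would also record *which* collection of variables produced the collisions:

```python
from collections import Counter

def unique_digests(records, make_digest):
    """Drop every record whose digest collides with another record's digest.

    Colliding records are removed entirely (not deduplicated), so every
    retained digest is unique -- trading some linking coverage for
    resistance to frequency attacks.
    """
    counts = Counter(make_digest(r) for r in records)
    kept = [r for r in records if counts[make_digest(r)] == 1]
    dropped = len(records) - len(kept)
    return kept, dropped
```

If `dropped` is large for a given collection of variables, that collection should be abandoned as a linking identifier rather than trimmed.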
This is the reason that a fairly large number of attributes need to be included in each digest. Empirical testing, on the spot, could determine which collections produced unique (or close enough to unique) outputs.
Deterministic linking - performance advantages
----------------------------------------------
The linking identifiers could be stored in a database, with the corresponding original recordId as a field. Since most of the linking identifiers are unique, they constitute an excellent record identifier, which can be effectively indexed. This is a critical performance advantage in practice: a database index allows the record to be found in a single lookup, rather than by searching through the entire list of millions of values until a match is found. When performing the linking, we only need to iterate over the incoming records and perform a single query for each of their linking identifiers. Those queries are extremely quick because they are just looking up an index.
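In miniature, the indexed lookup amounts to a hash-table membership test per identifier, as in this sketch (a real deployment would use a database index rather than an in-memory dictionary):

```python
def build_index(linkage_file):
    """Index the Linkage File by identifier for constant-time lookups.

    linkage_file: iterable of (identifier, record_id) pairs,
    where the identifiers are (near-)unique.
    """
    return {ident: rec_id for ident, rec_id in linkage_file}

def link(index, incoming_identifiers):
    """One dictionary lookup per incoming identifier -- no cross comparison."""
    return [(ident, index[ident]) for ident in incoming_identifiers if ident in index]

index = build_index([("digest-1", 101), ("digest-2", 102)])
matches = link(index, ["digest-2", "digest-9"])   # -> [("digest-2", 102)]
```

The total work is linear in the number of incoming identifiers, independent of the size of the stored dataset (up to index overhead).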
The database could be used directly as both the Anonymised Name File and the Linkage File in ABS’s current process. It also obviates the need for a separate Linkage Concordance file, though this could be included if desired.
One of the advantages of this approach is that it allows deterministic linking, whilst still handling some degree of distortion. Deterministic linking is considerably more efficient because it can be achieved by indexing the database by each encoding, thus avoiding a full cross comparison of the two datasets.
By way of an example, if we wanted to link the 2.9 million records in our sample dataset it would require a maximum of approximately 32 million queries. On a multi-core desktop machine we are able to perform those queries and the necessary linking in under 15 minutes. If we compare that to any scheme that involves cross comparison, we would have to perform $(2.9 \text{ million})^2 / 2 = 4.205 \times 10^{12}$ record comparisons. Even if we could perform each comparison in a microsecond, on an 8-core machine it would still take over 6 days. In such circumstances, blocking is essential to allow the linking to be feasibly performed.
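The arithmetic can be checked directly. (The figure of roughly 11 identifiers per record is our inference from the 32-million-query maximum over 2.9 million records, consistent with the 11 ONS linkage identifiers; it is not stated explicitly above.)

```python
n = 2_900_000                       # records in the sample dataset
cross = n * n // 2                  # naive pairwise comparisons
assert cross == 4_205_000_000_000

# At 1 microsecond per comparison, spread over 8 cores:
seconds = cross * 1e-6 / 8
days = seconds / 86_400             # ~6.1 days
assert days > 6

# Versus the indexed approach: ~11 identifiers per record (inferred)
queries = 11 * n                    # 31.9 million queries
assert queries < 32_000_000
```

The gap of roughly five orders of magnitude is what makes whole-population linking feasible without blocking.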
However, blocking has its downsides, namely, that it will impact on accuracy. For example, if a geographically based blocking algorithm is used, and an individual changes address to somewhere outside of their block, they will definitely not be matched. This can be mitigated somewhat by performing multiple passes of blocking on different attributes, but it will still have an impact.
### Whole population linking
When looking at the figures above, it becomes apparent that the deterministic linking identifiers approach is efficient enough to perform whole population linking. This would both simplify the implementation and also avoid the negative impacts of blocking. However, it will be essential to ensure the linking identifiers remain unique across the entire population, and importantly, adequately handle the expected distortions.
Ideally we could get some data to evaluate the rates of uniqueness (which should be very high) for combinations of first name, last name, DoB, address, country of origin/last residence. Then also investigate which subsets of attributes are also (almost) all unique.
This creates the rather unintuitive situation of providing better privacy by adding more information about someone. An important caveat is that any dataset that is to be linked must provide the same granularity of data in order to create the necessary linking key. For example, if year of birth is included in the original dataset, but the incoming dataset doesn’t include that attribute, then none of the linking identifiers that incorporate year of birth can be used; other linking identifiers that do not contain year of birth still can. The decision of which attributes to use is vital and would need to be driven by the data that is available and the level of uniqueness it offers. If it is necessary to perform the same pre-processing as the ONS approach, it would make more sense to use their approach for linking directly, instead of applying a further lossy encoding that may not improve privacy much, but could impact accuracy.
Option 4: Individual IDs {#individual_ids}
========================
Suppose that each person could be assigned a unique ID number. Then we could separate out two processes:
1. the process of linking a particular name, address and date of birth to the ID number, and
2. the process of linking the records associated with that ID number across different databases.
So suppose that the ABS (and other agencies) had a large table like this:
-------------- ------------------------- ------------- ----------------
[**Name**]{} [**Address**]{} [**DoB**]{} [**ID num**]{}
John Citizen 1 Tree St Broadmeadows 10 Jan 1970 5795935
Jane Citizen 5 Apple Rd Surrey Hills 25 Dec 1912 12334225
... ... ... ...
-------------- ------------------------- ------------- ----------------
This table is not intended to be secret or sensitive: it is like the White Pages, with a link to a non-secret ID like a tax file number or the US social security number.
The suggestion in this section is that stored datasets remove the names, addresses and dates of birth entirely, and store instead the ID number, encrypted so that the private key is secret-shared among multiple people. When a new dataset is received, the ABS should first link the name, address, and date of birth to an ID number—this uses no cryptography, just whatever techniques for fuzzy matching ABS is familiar with. When each record has been assigned to an ID number, the names, address and dates of birth should be removed—the rest of the linking process should occur by looking for exact matches of the ID number. This can be performed on encrypted values.
The assignment and encryption of ID numbers could also be done by other agencies before they send data to the ABS.
Methods for linking records based on exact ID matches
-----------------------------------------------------
Camenisch and Lehmann [@camenisch2015linkable] describe a protocol for linking individual ID numbers across government databases, in a very strong security model in which three different parties cooperate to perform linking while leaking very little information about individual identities. This would be a good starting point for a design of a protocol for future use. Their setting is:
- each of several data authorities may have a public key and some data,
- each person has an ID number,
- a linking authority knows some master information that allows it to translate IDs encrypted for one data authority into the same ID encrypted for a different data authority.
Linking is performed via blinded decryption: a random shift is first applied to the ciphertext so that the decryption yields the real value plus an unknown blinding factor. This is effective for exact matching, since if the two plaintexts are equal and you apply the same blinding factor to both, they will still be equal when decrypted. If they are not equal, nobody learns what the original ID was.
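The exact-match property of blinding can be illustrated with a toy. To be clear, this is **not** the Camenisch–Lehmann protocol: additive masking over integers is used purely to show why equal plaintexts stay equal under a shared blinding factor, and is not secure encryption:

```python
import secrets

P = 2**61 - 1   # a prime modulus for the toy arithmetic

def encrypt(id_num: int, key: int) -> int:
    """Toy 'encryption': an additive mask mod P (NOT real cryptography)."""
    return (id_num + key) % P

def blind(cipher: int, r: int) -> int:
    """Shift the ciphertext by a random blinding factor."""
    return (cipher + r) % P

def decrypt(cipher: int, key: int) -> int:
    return (cipher - key) % P

key = secrets.randbelow(P)
r = secrets.randbelow(P)           # same blinding factor applied to both sides
a = blind(encrypt(1234567, key), r)
b = blind(encrypt(1234567, key), r)
c = blind(encrypt(7654321, key), r)
# Equal IDs remain equal after blinded decryption; unequal ones decrypt to
# values masked by the unknown r, revealing nothing about the original IDs.
assert decrypt(a, key) == decrypt(b, key)
assert decrypt(a, key) != decrypt(c, key)
```

In the real protocol the masking is done homomorphically on proper ciphertexts, so the party performing the comparison never sees the key or the unblinded IDs.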
This protocol has many good properties. In particular, it is “blind” in the sense that the linking authority does not learn which ID it is linking.
Information required to make the anonymised name/linkage file:
: The table linking IDs to the name, address, [*etc.*]{}, and the public key of the Linker.
Information required for linking:
: The Linker’s private key, which can be secret-shared.
Information required for reversing:
: The Linker’s private key and the ID table, which is public.
Ways of inhibiting unauthorised reversing or linking:
: Keeping the Linker’s private key secure.
Fuzzy matching:
: Yes, at the stage where a name/address/DoB is matched to an ID.
Linking accuracy:
: Could be very high, because fuzzy matching is performed on cleartext names. The fuzzy matching would, however, be in two steps, effectively canonicalising a name each time.
Implementation difficulty:
: Complex and requiring careful cryptography.
Computational Efficiency:
: Feasible but taking longer than plain encryption.
Other advantages:
:
Other disadvantages:
:
Option 5: Homomorphic encryption {#hom}
================================
*Homomorphic encryption* allows certain computations to be performed on encrypted data. For example, it has been used to add encrypted votes and then decrypt only the totals, not the individual ballots [@adida2009electing]. Recent advances in cryptography include more efficient algorithms for wider classes of computation.
In principle it is now possible to compute any function (for Linking, comparison, [*etc.*]{}) on encrypted data, decrypting only the final answer. In practice, however, the most general techniques require an impractical amount of computation. It is not practically possible to compute a rich linking process, tolerating fuzzy matching and other issues, in a reasonable time.
However, it would be possible to implement some simple comparisons on encrypted names, such as computing the Hamming distance (the total number of different characters). In this case names could remain encrypted throughout the linking process, while only distances were decrypted. The keys used to decrypt the distances *could* still be used to decrypt the names, but in a proper linking process they never would be.
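For reference, the comparison the homomorphic scheme would evaluate is simple in the clear. The sketch below computes the distance on plaintext names (padding the shorter one); under the scheme described above, the same computation would be carried out on encrypted names and only the resulting distance decrypted:

```python
def hamming(a: str, b: str, pad: str = " ") -> int:
    """Character-level Hamming distance, padding the shorter string.

    Counts the positions at which the two (padded) names differ.
    """
    n = max(len(a), len(b))
    a, b = a.ljust(n, pad), b.ljust(n, pad)
    return sum(x != y for x, y in zip(a, b))

assert hamming("smith", "smyth") == 1    # one substituted letter
assert hamming("smith", "smithe") == 1   # one trailing letter
```

Note that plain Hamming distance handles substitutions well but penalises insertions heavily (every position after the insertion shifts), which is why careful string encoding could improve the matching.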
This process would be very secure because the decryption key would never need to leave the Linker. It could even be shared among multiple people so that encrypted names truly could not be reversed unless a threshold of those shares were compromised.
However, even this restricted notion of homomorphic encryption would require vast computational resources. This, combined with the restricted set of linking policies, makes it an unattractive option at present. It may, however, be worth revisiting in the future if techniques improve. It could be combined with a secure multiparty computation (SMC) technique for computing only the links without revealing the inputs (see Literature Review Section \[sec:lit:smc\]). Indeed, many SMC protocols use homomorphic encryption.
Information required to make the anonymised name/linkage file:
: The public key of the Linker.
Information required for linking:
: The private key of the Linker, but this could be stored in a distributed way and never explicitly recombined.
Information required for reversing:
: The private key of the Linker.
Ways of inhibiting unauthorised reversing or linking:
: Keeping the Linker’s private key secure; never explicitly computing it.
Fuzzy matching:
: Yes: a modified Hamming distance provides fuzzy matching equivalent to using such a string comparison metric directly. Could be improved further by careful string encoding.
Linking accuracy:
: Multiple comparisons can be run, e.g. transposing first and last name if they don’t match. Accuracy is likely to be high and close to plaintext matching.
Implementation difficulty:
: Very complex.
Computational Efficiency:
: Requires intensive computation.
Other advantages:
: This is a very secure option, because the decryption key would never need to leave the Linker. It could even be shared among multiple people so that encrypted names truly could not be reversed unless a threshold of those shares were compromised.
Other disadvantages:
: Would need careful analysis of what information could be obtained by the Linker from multiple runs of the protocol and measuring the similarity between different names.
Schemes based on homomorphic encryption or secure computation are the future of secure data processing. These sorts of schemes are an active area of cryptography research with many applications. These sorts of schemes would allow ABS to say truly that it was not able to reverse data if the key could be shared among other organisations. In the long run, these sorts of approaches should become the norm. For now, however, the difficulty of implementation probably means that this is better suited to a longer research project than a practical proposal for this year’s census data.
Empirical linking results based on a Synthetic Data Generator
=============================================================
In order to evaluate the various methods we constructed a synthetic dataset. Our aim was to create something that mirrored, as closely as possible, the frequency distribution of real world data. Validating this is difficult, since access to real world data is not an option. However, we have based our sampling on real world samples and aggregates.
Datasets and frequency distributions
------------------------------------
MBS Demographics
: At the base of the generation is the demographic information from the MBS/PBS release. This contains approximately 29 million records with YOB and Gender.
Last Name
: We obtained a list of 384,370 last names that occur in Australia, and the corresponding frequencies. We draw from this at random with a probability distribution matching the frequencies to append a last name to each row of the MBS demographics.
First Name
: We use the NSW data release of frequencies of the top 100 first names for boys and girls from 1952 through to 2015 to select an appropriate Gender- and YOB-specific first name for each record in the MBS demographics. Ideally, we would have more than 100 names, since this only provides 297 distinct boys’ names and 377 distinct girls’ names. Where the YOB is not in the NSW release we take the closest year.
Middle Name
: We re-use the NSW data, except we draw the middle name from YOB-20. This is somewhat arbitrary, but is done in an effort to get a different distribution of middle and first names for a particular year.
Mesh Block
: We originally used postcode frequency data, but postcode is not fine-grained enough to provide uniqueness equivalent to the UK postcode, and therefore to achieve equivalence with the ONS results. We subsequently switched to using 2011 Census mesh block population distribution data. We select these at random according to the population distribution, providing the mesh block and a value synonymous with an SA3 area[^5] for use in the linking identifiers.
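The frequency-weighted draws above can be sketched with Python’s `random.choices`. The four-name table here is a hypothetical stand-in for the full 384,370-name frequency list:

```python
import random

# Hypothetical frequency table; the real list has 384,370 surnames.
SURNAMES = ["Smith", "Jones", "Williams", "Nguyen"]
FREQS = [100_000, 60_000, 55_000, 40_000]

def sample_surnames(n: int, rng=None) -> list:
    """Draw n surnames with probability proportional to observed frequency.

    Seeded by default so that synthetic datasets are reproducible.
    """
    rng = rng or random.Random(0)
    return rng.choices(SURNAMES, weights=FREQS, k=n)

names = sample_surnames(1000)
assert len(names) == 1000 and set(names) <= set(SURNAMES)
```

The same pattern applies to the first-name, middle-name and mesh-block draws, with the weights conditioned on YOB and gender where appropriate.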
Distortions
-----------
We create a number of duplicate datasets with distortions applied in order to evaluate the effectiveness of fuzzy matching. The distortion framework is extensible, so we can add further or different distortions. We currently apply the distortions to all records in the duplicate and then evaluate the overall impact; we also have a mechanism for applying them probabilistically. The distortions we currently apply are as follows:
Change Gender
: We switch the Gender from M to F and F to M, primarily to simulate typos.
Change Middle Initial
: We select a different middle initial at random.
Change YOB
: Replace the YOB with a randomly selected YOB drawn from between 1916 and 2016.
First Last Transpose
: We transpose the first and last names.
Mesh Block Change
: We randomly select a mesh block from the same distribution as used in the original generation.
Remove/Add Middle Initial
: Remove the middle initial, or randomly add one if there is not one.
Transpose Inner Letters of Last Name
: We transpose 2 adjacent letters in the last name, picked at random.
Transpose Inner Letters of First Name
: We transpose 2 adjacent letters in the first name, picked at random.
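As an illustration, the letter-transposition distortion described above can be sketched as follows (the function name and seeding are our own; the report does not specify an implementation):

```python
import random

def transpose_inner_letters(name: str, rng: random.Random) -> str:
    """Swap two adjacent letters of a name, picked at random."""
    if len(name) < 2:
        return name
    i = rng.randrange(len(name) - 1)
    chars = list(name)
    chars[i], chars[i + 1] = chars[i + 1], chars[i]
    return "".join(chars)

# Applying the distortion to every record of a duplicate dataset:
rng = random.Random(42)
records = [{"first": "Alice", "last": "Smith"}]
for rec in records:
    rec["last"] = transpose_inner_letters(rec["last"], rng)
```

The other distortions (gender flip, random YOB, mesh block re-draw) follow the same per-record pattern.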
Analysis of HMAC-based anonymised Linking identifiers
-----------------------------------------------------
In order to evaluate the effectiveness of the HMAC Linking identifier approach we constructed a dataset of the relevant keys. We subsequently imported that data into a MongoDB database, with one collection per type of identifier, i.e. ForenameSurnameYoBSexSA3 was a collection. Within that collection each generated HMAC Linking identifier was a document, indexed by the HMAC Linking identifier value. Within the document was an array containing the rowID of any record that generated that identifier. The advantage of this approach is that it permits easy indexing of the HMAC Linking identifiers, providing extremely fast look-up times. In most cases the array of matching documents is an array of 1, since the objective is to generate primarily unique linking identifiers. When linking a record, the same set of HMAC Linking identifiers is generated from the dataset to be linked; each one is then submitted as a query to the database to find all the records that match that linking identifier. Such a query takes approximately 5ms to perform. Additionally, there is no cross-comparison, so the number of queries is linear with regard to the number of records being linked. This allows full population linking to be undertaken on even large datasets.
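A minimal sketch of the identifier generation and indexed look-up described above, using a plain Python dict in place of MongoDB (the key value, attribute normalisation and separator are our assumptions; the report does not fix them):

```python
import hmac
import hashlib
from collections import defaultdict

SECRET_KEY = b"replace-with-a-strong-secret"  # assumption: held by the trusted party

def linking_identifier(*attributes: str) -> str:
    # Normalise and concatenate the chosen attribute subset, then HMAC it.
    message = "|".join(a.strip().upper() for a in attributes).encode("utf-8")
    return hmac.new(SECRET_KEY, message, hashlib.sha256).hexdigest()

# Index: one mapping per identifier type, HMAC value -> list of rowIDs.
index = defaultdict(list)
index[linking_identifier("JANE", "DOE", "1980", "F")].append(17)

# Linking: regenerate the identifier for the query record and look it up.
matches = index.get(linking_identifier("JANE", "DOE", "1980", "F"), [])
```

Because each look-up is a single exact-match query, linking cost grows linearly in the number of records, as noted above.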
### Determining the best match
The ONS [@ONSM9] approach to performing the linking was to use a hierarchy of identifiers, stopping as soon as a unique match was found. Additionally, they removed matches from both sides of the matching. This approach has a number of problems: it weights identifiers, so a false positive in one identifier negates all the identifiers below it in the hierarchy. As such, the ordering of the identifiers becomes very important, but difficult to judge. The removal of matches from both sides of the linking also risks compounding errors, in that, when matching two equal populations, a false positive will result in either a subsequent false negative (no match found) or a further false positive (lower quality match found), since the correct match has already been removed due to the first false positive. Additionally, removing records is inefficient in terms of indexing, negating the performance advantages of this approach. As such, we evaluated two different approaches to finding a match.
#### First unique match
In this simulation we maintain the hierarchical nature of the identifiers and stop as soon as we get a unique match, i.e. the array of matching rowIDs is of size 1. However, in a departure from the ONS [@ONSM9] approach, we do not subsequently remove the matching record. If no identifiers have a unique match we consider the record to not be matched, even if, for example, one identifier matched multiple records. \[Note that in a real run we would need to guarantee uniqueness.\]
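The first-unique-match rule amounts to a single pass over the identifier hierarchy; a sketch (helper name is ours):

```python
def first_unique_match(candidate_lists):
    """Walk the identifier hierarchy in order; return the rowID of the
    first identifier that matches exactly one record, else None."""
    for rows in candidate_lists:
        if len(rows) == 1:
            return rows[0]
    return None  # no identifier produced a unique match
```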
#### Voting
The second approach for matching we evaluated was to perform a vote across all identifiers to determine the most likely match. This was calculated by returning the arrays of rowIDs and then performing a frequency analysis of the contents. Whichever rowID received the most matches was considered to be the match. If two rowIDs had the same frequency, one was selected at random. A failure to match would only be returned if there were no matches to any of the HMAC Linking identifiers. This reduces the importance of the order of the HMAC Linking identifiers, as well as mitigating any identifiers that may have a higher false positive rate, which could be a problem if they appear too high in the hierarchy.
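The voting rule is a frequency analysis over all returned rowID arrays; a sketch (ties are broken arbitrarily here, whereas the evaluation above selects at random):

```python
from collections import Counter

def vote_match(candidate_lists):
    """Tally rowIDs across all linking identifiers; most frequent wins."""
    votes = Counter(r for rows in candidate_lists for r in rows)
    if not votes:
        return None  # failure to match: no identifier matched anything
    return votes.most_common(1)[0][0]
```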
### Uniqueness
At the heart of this approach is the concept that the HMAC Linking identifiers are unique. In order to evaluate that, we analysed the identifiers we generated for a synthetic dataset and determined their uniqueness across the dataset. Table \[tab:hmaclinkingkeysunique\] contains the uniqueness of the respective HMAC Linking identifiers. As we can see, most of the identifiers provide a very high level of uniqueness across the 2.9 million records, the lowest being ForenameSurnameYoBSex at 94.524%. Uniqueness is essential to privacy protection—any degree of non-uniqueness presents a degree of privacy risk. For example, ForenameSurnameSexMeshblock is 99.998% unique; however, that leaves a tiny proportion that are not unique. That lack of uniqueness could be caused, in a real-world dataset, by people who are related, for example, a father and son who share the same first name. Such occurrences are rare, and access to auxiliary information could allow an attacker to look for just such rare occurrences. This is analogous to a frequency attack, but on a very specific and small scale. One advantage of this approach is that it permits that risk to be quantified and mitigated. For example, it would be possible to remove entries that are not unique. Such an action would have some impact on recall, but may be preferable to the privacy risk. Such mitigation strategies become difficult when too large a percentage are not unique. For example, consider a linking identifier consisting of just Forename and Surname, which is only 59.778% unique in our test dataset. Depending on the identifier’s location within the hierarchy it could have an impact on precision. This set of attributes should not be used as a linking identifier.
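The uniqueness figures in the table can be computed as the percentage of records whose linking identifier occurs exactly once; a sketch:

```python
from collections import Counter

def uniqueness_percent(identifiers):
    """Percentage of records whose linking identifier occurs exactly once."""
    counts = Counter(identifiers)
    unique = sum(1 for ident in identifiers if counts[ident] == 1)
    return 100.0 * unique / len(identifiers)
```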
**Uniqueness of Linking Keys**
---------------------------------------------- --------
ForenameSurnameYoBSexSA3 99.971
ForenameInitialSurnameInitialYoBSexMeshblock 99.972
ForenameSurnameYoBMeshblock 99.999
SurnameForenameYoBSexMeshblockTrans 99.999
ForenameSurnameYoBSexMeshblock 99.999
ForenameSurnameYoBSex 94.524
ForenameBiSurnameBiYoBSexMeshblock 99.844
ForenameSurnameSexMeshblock 99.997
SurnameInitialYoBSexMeshblock 99.708
ForenameInitialYoBSexMeshblock 99.601
MiddleNameSurnameYoBSexMeshblock 99.999
: Uniqueness of HMAC Linking Keys[]{data-label="tab:hmaclinkingkeysunique"}
### Matching results
Table \[tab:hmaclinkingkeys\] shows the comparison of the matching results for the Voting and Non-Voting methods. Precision and Recall are calculated by comparing the returned match with the actual match. The linking dataset is a shuffled and, if appropriate, distorted copy of the original dataset. No records are inserted or deleted; as such, we would expect recall to be 1. Recall would only drop below 1 when no match to any record was found. The precision indicates that the HMAC Linking Key approach is an effective method for matching records when faced with the tested distortions. It should be noted that we have not evaluated results based on composite distortions, for example, transposing letters and changing mesh block. However, that would be fairly straightforward to test if required. The robustness of the approach to distortion can be determined by examining the HMAC Linking Keys that are constructed. Effectively, to be robust to a distortion there must remain at least one key that is not impacted by that distortion. For example, where a mesh block changes within an SA3 area we are reliant on the ForenameSurnameYoBSexSA3 and ForenameSurnameYoBSex linking keys to determine matches. Where a mesh block changes outside an SA3 area we are reliant on only the ForenameSurnameYoBSex key. This is reflected in the precision results, which show that mesh block changes cause the greatest reduction in precision. Combining the uniqueness information with expected distortions, it is possible to determine whether the generated linking keys will be robust to a distortion, without having to perform an evaluation on the actual dataset. For example, if both Year of Birth and Gender change we can be certain that no keys will be able to provide a match.
The set of keys used in our evaluation is not exhaustive; different keys could be created to handle specific distortions, or composite distortions. The only requirement is that they are largely unique. As such, the exact set of linking keys to be used should be derived from looking at the actual dataset. It is important to perform this step, since once identifying data is deleted, additional keys including that data cannot be created. As such, the approach should aim to handle all expected distortions at the point of creation.
**HMAC Linking Keys Matching Results**
--------------------------- ------- ------- ------- -------
**Distortion**
changeInitial 1.000 0.999 0.999 1.000
firstLastTranspose 0.990 0.999 0.994 1.000
exact 1.000 1.000 1.000 1.000
lastName2LetterTranspose 0.999 0.999 0.999 1.000
removeAddInitial 1.000 0.999 0.999 1.000
firstName2LetterTranspose 1.000 0.999 0.999 1.000
meshblockChange 0.982 0.904 0.937 1.000
changeGender 0.987 0.999 0.967 1.000
: HMAC Linking Keys Results[]{data-label="tab:hmaclinkingkeys"}
Direct Bi-gram matching
-----------------------
By way of a comparison we also analysed a simple bi-gram matching process. In this scheme each bi-gram was encoded into an HMAC, with the HMAC then being compared as bi-grams. This provided a degree of privacy protection, although it would remain susceptible to frequency attacks, particularly given the analysis in Section \[sec:randvalues\_bigramskew\] that demonstrated strong skews in the frequency distribution of bi-grams. A major challenge in performing the bi-gram matching was the inefficiency of performing a cross-comparison. It was infeasible to do this for the entire dataset, and as such we had to deploy a blocking procedure. Even with a reasonable level of blocking the processing time was substantial. In order to allow us to evaluate different matching techniques we took a sample of 3 blocks, each consisting of between 15,000 and 18,500 records. We then performed the cross comparison within those blocks.
We did not evaluate multi-round matching that would involve different blocking methods to allow handling of geographical changes. We only evaluated distortions that would impact on the result, as such, changes to middle initial, age, and gender were not evaluated. Likewise, given that we know that a wholesale geographical change would lead to a precision of 0, we did not evaluate that either.
In order to determine whether two sets of bi-grams matched we tried two approaches. The simple approach was to calculate a Dice coefficient between the bi-gram sets. This was simple and fast, and maintained the order of the bi-grams. However, it is not robust to insertions or deletions of bi-grams; that wasn’t an issue in our tests because we were not performing that distortion, but it would be an issue in a real-world setting. The second approach was to calculate the q-gram similarity of the two sets of bi-grams [@ukkonen1992approximate]. This is calculated by first calculating the q-gram distance, which requires counting the occurrences of each bi-gram in the two strings and taking the sum of the absolute differences of those counts. We then sum the cardinality of the two bi-gram sets and set this as the maximum distance. We then take the calculated distance from the maximum and divide by the maximum to get a similarity score between 0 and 1.
The disadvantage of this approach is that it is more computationally expensive to calculate, which over a full set of blocks would impact on the time required to perform the linking.
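The two scoring functions can be sketched as follows (the bi-gram extraction and helper names are ours; in the actual scheme each bi-gram would first be HMAC-encoded, which does not change the arithmetic):

```python
from collections import Counter

def bigrams(s):
    return [s[i:i + 2] for i in range(len(s) - 1)]

def dice(a, b):
    """Dice coefficient over the sets of bi-grams of two strings."""
    A, B = set(bigrams(a)), set(bigrams(b))
    if not A and not B:
        return 1.0
    return 2 * len(A & B) / (len(A) + len(B))

def qgram_similarity(a, b):
    """q-gram similarity: 1 minus normalised q-gram distance."""
    ca, cb = Counter(bigrams(a)), Counter(bigrams(b))
    dist = sum(abs(ca[g] - cb[g]) for g in set(ca) | set(cb))
    max_dist = sum(ca.values()) + sum(cb.values())
    if max_dist == 0:
        return 1.0
    return (max_dist - dist) / max_dist
```

For example, "smith" vs. "smiht" shares the bi-grams "sm" and "mi" but not "it"/"th", so both scores sit at 0.5.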
Table \[tab:ngrammatching\] shows the results for the first matching approach. We can see that it performs well in exact matching and the letter transposition distortions. It is not quite as good as the HMAC Linking Key approach, but it is not far off.
The transposition of the entire first and last name performs badly in terms of precision, as we would expect, since we evaluate first and last name as distinct values and then combine their similarities. This could be mitigated by performing an additional comparison with the query first and last name transposed, however, this will have the effect of doubling the computational effort required to perform the matching, which could well push a time consuming process into an infeasible process.
**Bi-gram Linking Dice-Coefficient**
-------------------------------------- ------- ---
**Distortion**
firstName2LetterTranspose 0.980 1
exact 0.981 1
firstLastTranspose 0.001 1
lastName2LetterTranspose 0.969 1
: Bi-Gram Linking Results[]{data-label="tab:ngrammatching"}
The results for the second approach are shown in Table \[tab:ngrammatchingqgram\]. They are marginally worse than for the simpler approach. This is somewhat to be expected, since this matching approach is more tolerant of changes, particularly insertions and deletions. As a result, the chance of a false positive increases slightly.
**Bi-gram Linking q-gram Scoring**
------------------------------------ ------- ---
**Distortion**
firstName2LetterTranspose 0.975 1
exact 0.980 1
firstLastTranspose 0.001 1
lastName2LetterTranspose 0.957 1
: Bi-Gram Linking Results (q-gram)[]{data-label="tab:ngrammatchingqgram"}
The bi-gram matching approach performs reasonably well, as would be expected. However, the computational cost, combined with its susceptibility to frequency attacks, weakens the argument for its use.
Conclusion
==========
All good cybersecurity solutions are a tradeoff among different objectives: security, usability, access, accuracy, computation time, cost, [*etc.*]{} A clear attacker model is critical for understanding what the security guarantees are, so that the best solution can be chosen. The aim of this report is to make clear the assumptions of the protocols, so that ABS’s careful processes for managing data security can be accurately matched to the assumptions on which the protocols’ security depends.
Detailed unit-record level data, including Census data, can often be re-identified even without the name, based on other information about the person or household, such as birthdates and location. The promise to encode names using a cryptographic hash function in a way that cannot be reversed is therefore, if taken absolutely, not achievable in the presence of auxiliary data—many records could be re-identified even if the names were completely removed.
We have presented several options that satisfy reasonable interpretations of the requirements, in the context of auxiliary data and ABS processes for securing Census data. Option 1 is simple encryption, which (if properly implemented) cannot be reversed except with the decryption key. Option 2, lossy encoding, sends many different names to the same encoded value. It can be reversed to a set of names, but not a unique one (if properly implemented), if the attacker has no auxiliary data about the person. Option 3 produces anonymised linking identifiers that do not make re-identification any easier for an attacker who doesn’t have the HMAC key (if properly implemented). It sacrifices some flexibility for very high computational efficiency. Options 4 and 5 provide suggestions for future directions using some more sophisticated cryptographic approaches based on homomorphic encryption and multiparty computation.
The most computationally efficient solution we could find is Option 3, to compute an HMAC on a collection of different subsets of attributes, then use exact matches at linking time (Section \[hmac\_linking\_keys\]). This provides some defence against a motivated attacker who does not know the secret key. The only information leaked is about the frequency of the different inputs — if the attributes are carefully chosen it is possible to ensure that every input is unique. This solution could be adopted, and has approximately the same security, with a lossy encoding of names.
Our literature review explains why some other proposals in the literature, including plain cryptographic hashing and Bloom filters, do not defend against a motivated attacker.
We would like to thank the ABS for their time and engagement in discussing these questions. We valued the conversations and the motivation to work on a challenging and important practical problem.
A key aspect of earning public trust is to be open about the details of the algorithms used for keeping data secure. Whichever solution ABS decides to adopt, we hope that this paper contributes to an open, factual discussion of linking options and census data security.
[^1]: University of Melbourne Research Contract 85449779
[^2]: We used the surname list from IP Australia and a list of baby names.
[^3]: In practice, public key encryption can be implemented by generating a random key for a secret-key algorithm such as AES, encrypting the data with that key, and then encrypting the key with the recipient’s public key.
[^4]: Though if only one person has a particular pair of initials, more information than this needs to be lost.
[^5]: We could convert from mesh block ID to actual SA3 codes, but it is not necessary for our analysis, because our distortions are performed only at meshblock level. The pseudo-SA3 value needs only to be representative of the number of codes; hence we derive it from the meshblock ID instead of going to the complexity of performing a full look-up.
---
abstract: 'Person re-identification is the task of matching pedestrian images across non-overlapping cameras. In this paper, we propose a non-linear cross-view similarity metric learning method for handling small size training data in practical re-ID systems. The method employs non-linear mappings combined with cross-view discriminative subspace learning and cross-view distance metric learning based on pairwise similarity constraints. It is a natural extension of XQDA from linear to non-linear mappings using kernels, and learns non-linear transformations for efficiently handling the complex non-linearity of person appearance across camera views. Importantly, the proposed method is very computationally efficient. Extensive experiments on four challenging datasets show that our method attains competitive performance against state-of-the-art methods.'
author:
- T M Feroz Ali
- Subhasis Chaudhuri
bibliography:
- 'egbibOrigICCV2019.bib'
title: 'Cross-View Kernel Similarity Metric Learning Using Pairwise Constraints for Person Re-identification'
---
Introduction
============
Person re-identification (re-ID) is the problem of matching person images from one camera view against the images captured from other non-overlapping camera views. Re-ID is a very challenging task as images of the same person have significant appearance changes across views, due to large variations in illumination, background and pose. Also, the low resolution of surveillance cameras and common pedestrian attributes cause high visual similarity among different persons.
Most existing methods for person re-identification concentrate on (i) the design of identity discriminative feature descriptors and (ii) distance metric learning. The hand-crafted feature descriptors [@LOMO; @GOG; @LisantiPAMI14] have improved the re-ID performance, but alone they are insufficient in handling the large appearance changes across cameras. Hence distance metric learning methods [@NK3ML; @LOMO; @rPcca; @Zheng:nfst; @IRS; @SSSVM; @MFML; @SemiNK3ML] are used to learn a better similarity measure such that, irrespective of the view, same-class samples are closer and distinct-class samples are well separated.
In recent years, though deep learning methods [@ImprDeep; @MuDeep; @PTGAN; @Beyond:triplet_loss; @DGD; @TCP; @SpindleNet; @SLSTM; @SCNN] have made good improvements in re-ID performance, they have a fundamental limitation in practical deployment as they need large, annotated training data. Even with pre-trained networks, based on auxiliary/external supervision, such methods struggle to perform on small size training data. Hence we refrain from using deep learning methods in this paper and instead concentrate on the following problem: “Given a *small size training data* with given feature descriptors, can we design a better re-ID system *without* using any auxiliary/external supervision?”
Metric learning methods have shown good performance in handling small size training data. However, most of them have two fundamental limitations: (**I**) *Small Sample Size (SSS) problem*: The SSS problem occurs when the number of training samples is less than the feature dimension. This creates singularity of the inter/intra class scatter matrices. Hence most methods use unsupervised dimensionality reduction, which tends to make them sub-optimal. (**II**) *Less Efficient Models*: Person appearance undergoes complex non-linear transformations across views. However, most existing methods use an inherent linear transformation of the input features, which limits their capability in learning non-linear features.
For addressing the above two limitations, we propose a new non-linear metric learning method, referred to as *Kernel Cross-view Quadratic Discriminant Analysis (k-XQDA)*. It is a kernelized (non-linear) counterpart of XQDA [@LOMO], which is one of the most popularly applied metric learning methods in the re-ID literature. k-XQDA uses a mapping of the data samples to a very high dimensional kernel space, where it learns a cross-view distance metric and a cross-view discriminative subspace simultaneously, using pairwise similarity constraints. It is capable of learning highly effective non-linear features in the input feature space. k-XQDA efficiently handles the non-linearity in cross-view appearance and performs competitively against state-of-the-art methods. Importantly, our kernelized approach is computationally more efficient compared to the baseline methods.
Related Methods
===============
Using given standard feature descriptors, the supervised metric learning methods generally learn a discriminative subspace or a Mahalanobis distance metric where the intra-class samples come closer and inter-class samples get well separated. The subspace learning methods like LFDA [@LFDA:CVPR], NFST [@Zheng:nfst], NK3ML [@NK3ML] and IRS [@IRS] use classification-based models to learn discriminative features that generalize well to unseen data. For example, LFDA [@LFDA:CVPR] learned a discriminative subspace that maximizes the ratio of the between-class variance and the within-class variance, while preserving the local neighborhood structure of the data. NFST [@Zheng:nfst] used a more optimal discriminative nullspace to maximally collapse the same-class samples to a single point. NK3ML [@NK3ML] and IRS [@IRS] were proposed to overcome the limitation of NFST in discriminating inter-class samples. The Mahalanobis distance metric based methods like LMNN [@LMNN1], LDML [@LDML], KISSME [@KISSME] and MLAPG [@MLAPG] learn a Mahalanobis distance function of the form $d(\mathbf{x}_i,\mathbf{x}_j)=(\mathbf{x}_i-\mathbf{x}_j)^T M(\mathbf{x}_i-\mathbf{x}_j)$, where $M\succcurlyeq 0$ is a positive semi-definite matrix. LDML [@LDML] used a probabilistic view for learning the Mahalanobis metric. LMNN [@LMNN1] learned the metric using constraints that ensure a margin between similar and dissimilar class samples. KISSME [@KISSME] considered the space of pairwise differences to define similar and dissimilar classes, and then used a log likelihood ratio test to obtain a Mahalanobis distance metric. In order to take advantage of both subspace learning and Mahalanobis distance metric learning methods, S. Liao [*et al*. ]{}proposed XQDA, which simultaneously learns a cross-view discriminative subspace along with a KISSME based cross-view distance metric.
However, due to the large non-linearity in person appearance across cameras, the linear transformations induced by the above methods are unlikely to discriminate the persons efficiently. Hence kernel based distance metric learning methods [@rPcca; @Zheng:nfst; @IRS; @kKISSME] were introduced to handle non-linearity in re-ID. F. Xiong [*et al*. ]{}kernelized LFDA [@LFDA:CVPR] to obtain kLFDA [@rPcca]. Similarly, L. Zhang [*et al*. ]{}used kernel-NFST [@Zheng:nfst] and H. Wang [*et al*. ]{}used kernel-IRS [@IRS]. Recently, a kernelized version of KISSME, namely k-KISSME [@kKISSME], was derived and used to successfully improve the re-ID performance.
XQDA [@LOMO] is one of the most popular metric learning methods in the re-ID literature and has been used in conjunction with many methods like GOG [@GOG], SSDAL [@SSDAL] and SSM [@song:scalableManifold], and also applied with recent deep learning based methods [@Reranking:kreciprocal]. However, it uses an inherent linear transformation for learning the features. Hence obtaining an efficient kernelized (non-linear) version of XQDA becomes highly relevant. However, deriving the kernelized version of a method is not always a trivial task and may need complex analysis. In this paper, we derive the kernelized version of XQDA, namely k-XQDA. We show that k-XQDA can learn highly efficient non-linear features to handle the complex variations in person appearance. k-XQDA naturally handles the SSS problem, since it is a kernel based method, where the inherent matrices used in its computations have dimensions that are independent of the feature dimension and depend only on the training sample size. Our k-XQDA can handle small size training data effectively. We also show that, though our derivations are involved, we finally attain simplified expressions that are computationally very efficient and fast, making the method suitable for practical implementation.
Kernel Cross-View Quadratic Discriminant Analysis
=================================================
We first revisit KISSME and XQDA. Then we present the proposed method k-XQDA.
KISSME revisit
--------------
KISSME learns a distance metric based on equivalence constraints given as similar or dissimilar pairs. Given data samples $\mathbf{x} \in \mathbb{R}^d$ in the input feature space, belonging to $c$ classes, KISSME considers the space of all pairwise sample differences $\Delta_{ij} = \mathbf{x}_i-\mathbf{x}_j$ and defines two classes, a similar class $\Omega_S$ and a dissimilar class $\Omega_D$, containing $n_S$ and $n_D$ samples, respectively. The pairwise difference would be comparatively small for similar class $\Omega_S$ samples and large for dissimilar class $\Omega_D$ samples. By distinguishing the variations of the two classes, any general multiclass classification problem is subsequently solved. As the pairwise differences are symmetric, both the classes $\Omega_S$ and $\Omega_D$ are assumed to be zero mean Gaussian distributions with covariances $\Sigma_S$ and $\Sigma_D$. Motivated by a statistical inference perspective, the optimal decision function $\delta(\Delta_{ij})$ that indicates whether a difference pair $\Delta_{ij}$ belongs to the similar or dissimilar class is obtained by a log likelihood ratio test of the two Gaussian distributions. $$\begin{aligned}
\delta(\Delta_{ij}) &=&log \Big(\frac{p(\Delta_{ij}|\Omega_D)}{p(\Delta_{ij}|\Omega_S)}\Big)\\
&=&log\Bigg(\frac{\frac{1}{(2\pi)^{d/2}|\Sigma_D|^{1/2}}exp(-\frac{1}{2} \Delta_{ij}^T \Sigma_D^{-1}\Delta_{ij})}{\frac{1}{(2\pi)^{d/2}|\Sigma_S|^{1/2}}exp(-\frac{1}{2} \Delta_{ij}^T \Sigma_S^{-1}\Delta_{ij})}\Bigg)\end{aligned}$$ A high value of $\delta(\Delta_{ij})$ implies that $\Delta_{ij} \in \Omega_D$, while a low value implies $\Delta_{ij} \in \Omega_S$. The decision function is simplified [@KISSME] to get $$\begin{aligned}
\delta(\Delta_{ij}) &\propto& \Delta_{ij}^T(\Sigma_S^{-1}-\Sigma_D^{-1})\Delta_{ij} \, ,\end{aligned}$$ and finally the KISSME distance metric is obtained that mirror the properties of the log likelihood ratio test, as given below. $$\begin{aligned}
d(\mathbf{x}_i,\mathbf{x}_j) &=& (\mathbf{x}_i-\mathbf{x}_j)^T (\Sigma_S^{-1}-\Sigma_D^{-1})_{+} (\mathbf{x}_i-\mathbf{x}_j)
\label{eqn:KISSME}\end{aligned}$$ where $(\cdot)_{+}$ represents the projection to the cone of positive semi-definite matrices using eigen analysis, to ensure (\[eqn:KISSME\]) to be a valid Mahalanobis distance metric. It can be seen that learning the KISSME distance metric corresponds to estimating the covariance matrices $\Sigma_S$ and $\Sigma_D$. $$\begin{aligned}
\Sigma_S =\sum_{\Delta_{ij} \in \Omega_S} (\mathbf{x}_{i}-\mathbf{x}_j)(\mathbf{x}_{i}-\mathbf{x}_j)^T \nonumber \\
\Sigma_D =\sum_{\Delta_{ij} \in \Omega_D} (\mathbf{x}_{i}-\mathbf{x}_j)(\mathbf{x}_{i}-\mathbf{x}_j)^T
\label{KISSME_CovMat_calc}\end{aligned}$$
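The KISSME metric of (\[eqn:KISSME\]) can be sketched in numpy as follows, estimating the covariances of (\[KISSME\_CovMat\_calc\]) from given difference pairs and projecting onto the PSD cone by eigen-analysis (a sketch only; the original also applies PCA beforehand, which is omitted here):

```python
import numpy as np

def kissme_metric(diffs_S, diffs_D):
    """diffs_S, diffs_D: (n_S, d) and (n_D, d) arrays of pairwise
    differences from the similar and dissimilar classes."""
    Sigma_S = diffs_S.T @ diffs_S / len(diffs_S)
    Sigma_D = diffs_D.T @ diffs_D / len(diffs_D)
    M = np.linalg.inv(Sigma_S) - np.linalg.inv(Sigma_D)
    w, V = np.linalg.eigh(M)                     # M is symmetric
    return V @ np.diag(np.maximum(w, 0)) @ V.T   # clip onto the PSD cone

def kissme_distance(M, xi, xj):
    d = xi - xj
    return float(d @ M @ d)
```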
XQDA revisit
------------
KISSME becomes intractable in very high dimensions and hence it uses PCA on the input features to get a low dimensional subspace, where $\Sigma_S$ and $\Sigma_D$ are estimated. However, the unsupervised dimensionality reduction does not consider distance metric learning and can lose discriminative information. Also, KISSME considers single view data, [*i*.*e*., ]{}it does not account for any distinction of camera views when considering the pairwise sample differences.
In order to address the above two limitations, S. Liao [*et al*. ]{}extended KISSME and proposed a *cross-view* metric learning approach called Cross-view Quadratic Discriminant Analysis (XQDA), where cross view data is used to learn a cross view discriminative subspace and a cross-view similarity measure simultaneously.
In particular, given samples from $c$ classes, with $n$ samples $\mathbf{X} = (\mathbf{x}_1, \mathbf{x}_2, \ldots,\mathbf{x}_n)$ from one view and $m$ samples $\mathbf{Z} = (\mathbf{z}_1, \mathbf{z}_2, \ldots,\mathbf{z}_m)$ from the other view, s.t. $\mathbf{x}_i, \mathbf{z}_i \in \mathbb{R}^d$, XQDA uses cross-view training set $\{\mathbf{X},\mathbf{Z}\}$ and considers the $nm$ pairwise sample differences *across views* to estimate the cross-view similar and dissimilar classes, making the distance metric more viewpoint invariant. XQDA learns a subspace $W = (\mathbf{w}_1,\mathbf{w}_2,\ldots, \mathbf{w}_b) \in \mathbb{R}^{d \times b}$ that maximize the discrimination between the two classes $\Omega_S$ and $\Omega_D$, and learn a distance measure, similar to Eq. (\[eqn:KISSME\]), as $$d(\mathbf{x}_{i},\mathbf{z}_{j}) = (\mathbf{x}_{i}-\mathbf{z}_{j})^T W(\Sigma^{\prime-1}_{S} - \Sigma^{\prime-1}_{D})_{+} W^T(\mathbf{x}_{i}-\mathbf{z}_{j})
\label{eqn:XQDAmetric}$$ where $\Sigma^{\prime}_{S} = W^T \Sigma_{S}W$, $\Sigma^{\prime}_{D} = W^T \Sigma_{D}W$. As the classes $\Omega_S$ and $\Omega_D$ have zero mean, Fisher criterion based LDA can not be directly used to learn the subspace $W$ that discriminates the classes. However, XQDA uses the class variances $\sigma_S$ and $\sigma_D$ to discriminate the classes. More specifically, XQDA obtains the discriminant vectors $\mathbf{w}_k$ in $W$ such that they maximize the ratio of the class variances $\sigma_D(\mathbf{w}_k)$ and $\sigma_S(\mathbf{w}_k)$, in the corresponding directions, which has a form of Generalized Rayleigh Quotient, $$\begin{aligned}
J(\mathbf{w}_k) = \frac{\sigma_D(\mathbf{w}_k)}{\sigma_S(\mathbf{w}_k)} = \frac{\mathbf{w}^T_k \Sigma_{D}\mathbf{w}_k}{\mathbf{w}^T_k \Sigma_{S}\mathbf{w}_k}\,.
\label{eqn:XQDAcost}\end{aligned}$$ Thus XQDA finds the subspace $W$ such that the variance of $\Omega_D$ is maximized, while the variance of $\Omega_S$ is minimized, thereby discriminating the two classes based on their variances. The optimal discriminants are composed of the eigenvectors corresponding to the $b$ largest eigenvalues of $\Sigma_{S}^{-1}\Sigma_{D}$.\
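Numerically, the leading eigenvectors of $\Sigma_{S}^{-1}\Sigma_{D}$ can be obtained as in the following numpy sketch (the small regulariser on $\Sigma_S$ is our addition, commonly needed when $\Sigma_S$ is near-singular):

```python
import numpy as np

def xqda_subspace(Sigma_S, Sigma_D, b, reg=1e-6):
    """Eigenvectors of Sigma_S^{-1} Sigma_D for the b largest eigenvalues."""
    d = Sigma_S.shape[0]
    A = np.linalg.solve(Sigma_S + reg * np.eye(d), Sigma_D)
    evals, evecs = np.linalg.eig(A)
    order = np.argsort(evals.real)[::-1]   # descending eigenvalues
    return evecs[:, order[:b]].real
```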
**Efficient Computation**: As there are $nm$ pairwise sample differences, the calculation of the cross-view covariance matrices $\Sigma_{D}$ and $\Sigma_{S}$ using (\[KISSME\_CovMat\_calc\]) requires $\mathcal{O}(mnd^2)$ and $\mathcal{O}(NKd^2)$ multiplications, respectively, where $N=max(m,n)$ and $K$ is the average number of samples per class. However, the covariance matrices can be efficiently calculated without actually computing the $nm$ pairwise differences, by simplifying them as follows: $$\begin{aligned}
n_S \Sigma_S &= \widetilde{\mathbf{X}}\widetilde{\mathbf{X}}^T + \widetilde{\mathbf{Z}}\widetilde{\mathbf{Z}}^T - \mathbf{S}\mathbf{R}^T -\mathbf{R}\mathbf{S}^T \label{eqn:SigmaS}\\
n_D \Sigma_D &= m \mathbf{X}\mathbf{X}^T + n \mathbf{Z}\mathbf{Z}^T -\mathbf{s}\mathbf{r}^T -\mathbf{r}\mathbf{s}^T -n_S\Sigma_S
\label{eqn:SigmaD}
\end{aligned}$$ where $\widetilde{\mathbf{X}} = (\sqrt{m_1}\mathbf{x}_1, \sqrt{m_1}\mathbf{x}_2, \ldots, \sqrt{m_1}\mathbf{x}_{n_1}, \ldots, \sqrt{m_c}\mathbf{x}_n)$, $\widetilde{\mathbf{Z}} = (\sqrt{n_1}\mathbf{z}_1, \sqrt{n_1}\mathbf{z}_2, \ldots,$ $\sqrt{n_1}\mathbf{z}_{m_1}, \ldots, \sqrt{n_c}\mathbf{z}_m),
\;\;\; \mathbf{S} = (\sum_{{y_i}=1}\mathbf{x}_i, \sum_{{y_i}=2}\mathbf{x}_i, \ldots, \sum_{{y_i}=c}\mathbf{x}_i), \;\;\;\; {\mathbf{s}=\sum_{i=1}^n \mathbf{x}_i,} \\
{\mathbf{R} = (\sum_{{y_j}=1}\mathbf{z}_j, \sum_{{y_j}=2}\mathbf{z}_j, \ldots, \sum_{{y_j}=c}\mathbf{z}_j)}, \;\;\;{\mathbf{r} = \sum_{j=1}^m \mathbf{z}_j}$, $y_i,y_j \in \{1,\ldots,c\}$ are the class labels of $\mathbf{x}_i$ and $\mathbf{z}_j$ respectively, $n_i$ is the number of samples of class $i$ in $\mathbf{X}$ and $m_i$ is the number of samples of class $i$ in $\mathbf{Z}$. The simplified expressions in (\[eqn:SigmaS\]) and (\[eqn:SigmaD\]) reduce the computation of both covariance matrices to $\mathcal{O}(Nd^2)$.
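The savings can be checked numerically: the sketch below compares the naive accumulation over all same-class cross-view differences against the simplified form of (\[eqn:SigmaS\]). The data, labels, and class sizes are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)
d, c = 4, 3
n_per, m_per = [3, 2, 4], [2, 3, 2]          # hypothetical class sizes
yx = np.repeat(np.arange(c), n_per)          # labels for view X
yz = np.repeat(np.arange(c), m_per)          # labels for view Z
X = rng.standard_normal((d, yx.size))        # samples as columns
Z = rng.standard_normal((d, yz.size))

# Naive O(nm d^2): outer products of all same-class cross-view differences.
naive = np.zeros((d, d))
for i in range(yx.size):
    for j in range(yz.size):
        if yx[i] == yz[j]:
            diff = X[:, i] - Z[:, j]
            naive += np.outer(diff, diff)

# Efficient O(N d^2) form: weighted Grams minus class-sum cross terms.
m_of = np.array(m_per)[yx]                   # m_{y_i} for each column of X
n_of = np.array(n_per)[yz]                   # n_{y_j} for each column of Z
Xt = X * np.sqrt(m_of)                       # \tilde X
Zt = Z * np.sqrt(n_of)                       # \tilde Z
S = np.stack([X[:, yx == k].sum(axis=1) for k in range(c)], axis=1)
R = np.stack([Z[:, yz == k].sum(axis=1) for k in range(c)], axis=1)
fast = Xt @ Xt.T + Zt @ Zt.T - S @ R.T - R @ S.T   # = n_S * Sigma_S
```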
Kernel-XQDA
-----------
Next, we show how XQDA can be kernelized to obtain its non-linear version. Kernel methods use a non-linear mapping of the input samples to a high-dimensional space, implicitly determined by a kernel function. The primary model and its inherent transformations are learned in the kernel space, which amounts to learning the corresponding non-linear models and transformations in the input feature space.
Let the kernel function be $k(\mathbf{x}_i,\mathbf{x}_j)= \langle\phi(\mathbf{x}_i),\phi(\mathbf{x}_j)\rangle$, where $\phi(\mathbf{x})$ is the non-linear mapping of the input sample $\mathbf{x}$ to the high-dimensional kernel space $\mathcal{F}$. For kernelization, the XQDA model has to be formulated in terms of inner products $ \langle\phi(\mathbf{x}_i),\phi(\mathbf{x}_j)\rangle$, which are then replaced using the kernel function $k(\mathbf{x}_i,\mathbf{x}_j)$. Hence the derivation of k-XQDA mainly involves (**I**) the kernelization of the cost function $J(\mathbf{w}_k)$ in (\[eqn:XQDAcost\]) and (**II**) the kernelization of the distance metric function $d(\mathbf{x}_i,\mathbf{z}_j)$ in (\[eqn:XQDAmetric\]).
Note that the kernelization of the cost function (\[eqn:XQDAcost\]) involves kernelizing the covariance matrices, for which a clean and straightforward way is to use the expressions in (\[KISSME\_CovMat\_calc\]), based on indexing. However, this would require computing the outer product of all $nm$ pairwise differences, making k-XQDA computationally inefficient. Hence we strictly adhere to the expressions in (\[eqn:SigmaS\]) and (\[eqn:SigmaD\]), in order to keep k-XQDA computationally efficient. However, kernelizing using the latter is a complex task, mainly for two reasons: (i) The matrices $\widetilde{\mathbf{X}}, \mathbf{S}, \mathbf{X}, \mathbf{s}$ depend on the data samples from one view, while the matrices $\widetilde{\mathbf{Z}}, \mathbf{R}, \mathbf{Z}, \mathbf{r}$ depend on the data samples from the other view. Hence we need to account separately for the kernel functions corresponding to each view. (ii) Computing the kernel functions corresponding to $\mathbf{S}, \mathbf{R}, \mathbf{s}, \mathbf{r}$ involves separately computing the kernel functions for the mean of each class and of all classes from each view. We show that, although the derivations are somewhat involved, we finally obtain clean and elegant kernelized expressions for the covariance matrices and the cost function (\[eqn:XQDAcost\]), which are also computationally very efficient for practical implementation.
Given the cross-view training data $(\mathbf{X},\mathbf{Z})\in \mathbb{R}^{d \times (n+m)}$, the kernel matrix $\mathbf{K} \in \mathbb{R}^{(n+m) \times (n+m)}$ can be calculated and expressed as block matrices of the form $$\begin{aligned}
\mathbf{K}=\left[\begin{array}{@{}c|c@{}}
K_{XX} & K_{XZ}\\
\hline
K_{ZX} & K_{ZZ}
\end{array}\right]
\label{eqn:mainK}\end{aligned}$$ where the block-matrices $K_{XX} \in \mathbb{R}^{n \times n}$, $K_{ZZ} \in \mathbb{R}^{m \times m}$, $K_{XZ} \in \mathbb{R}^{n \times m}$ and $K_{ZX} \in \mathbb{R}^{m \times n}$ are such that $$K_{XX}=\Phi_X^T\Phi_X,\;K_{ZZ}=\Phi_Z^T\Phi_Z,\;K_{XZ}=\Phi_X^T\Phi_Z, \;K_{ZX}=\Phi_Z^T\Phi_Z
\label{eqn:Kblockmatppty}$$ Note that each of the block matrices $K_{XX}$ and $K_{ZZ}$ are the kernel matrices corresponding to the samples of separate views, and the block matrices $K_{XZ}$ and $K_{ZX}$ are the kernel matrices corresponding to the samples across views. Also the block matrices have the following symmetry properties: $$K_{XX} = K_{XX}^T, \quad K_{ZZ} = K_{ZZ}^T, \quad K_{XZ} = K_{ZX}^T.
\label{eqn:Symmppty}$$ In the kernel space $\mathcal{F}$, every discriminant vector $\mathbf{w}_k$ lies in the span of the training data set $\{\phi(\mathbf{x}_1),\ldots, \phi(\mathbf{x}_n), \phi(\mathbf{z}_1), \ldots, \phi(\mathbf{z}_m)\}$. Hence $\mathbf{w}_k$ can be expressed in the form: $$\begin{aligned}
\mathbf{w}_k &=& \sum_{i=1}^n \alpha_i^{(k)} \phi(\mathbf{x}_i) + \sum_{j=1}^m \beta_j^{(k)} \phi(\mathbf{z}_j)
\label{eqn:rep_theorem}\end{aligned}$$ It should be noted that in conventional kernel methods, a vector $\mathbf{w}$ in the feature space $\mathcal{F}$ is expressed using expansion coefficients $\alpha$ as $\mathbf{w} = \sum_i \alpha_i^{(k)} \phi(\mathbf{x}_i)$. However, in (\[eqn:rep\_theorem\]) we use two expansion coefficients $\alpha$ and $\beta$, in order to separately account the samples belonging to each view. The vector $\mathbf{w}_k$ in (\[eqn:rep\_theorem\]) can be rewritten as $$\begin{aligned}
\mathbf{w}_k &=& \Phi_X \bm{\alpha}_k + \Phi_Z \bm{\beta}_k = \bm{\Phi}\bm{\theta}_k
\label{eqn:rep_theorem2}\end{aligned}$$ where $\Phi_{X} = [\phi(\mathbf{x}_1),\dots,\phi(\mathbf{x}_n)]$ and $\Phi_{Y} = [\phi(\mathbf{z}_1), \dots, \phi(\mathbf{z}_m)]$ are respectively the matrix functions that map all the samples of $\mathbf{X}$ and $\mathbf{Z}$ to the kernel space $\mathcal{F}$, and $\bm{\alpha}_k = [\alpha_1^{(k)}, \alpha_2^{(k)}, \ldots, \alpha_n^{(k)}]^T$ and $\bm{\beta}_k = [\beta_1^{(k)}, \beta_2^{(k)}, \ldots, \beta_m^{(k)}]^T$ are the expansion coefficient vectors corresponding to each view, $ \bm{\theta}_k = \left[\bm{\alpha}_k,\bm{\beta}_k\right]^T$ is the combined expansion coefficient vector and $\bm{\Phi} = [\Phi_{X}, \Phi_{Z}]$ . Hence $ \mathbf{w}_k$ in the kernel space is represented using $\bm{\alpha}_k$ and $\bm{\beta}_k$, or equivalently by $\bm{\theta}_k$.
In the following we show how XQDA’s cost function $J(\mathbf{w}_k)$ in (\[eqn:XQDAcost\]) and the distance metric $d(\mathbf{x}_i,\mathbf{z}_j)$ in (\[eqn:XQDAmetric\]) can be kernelized:\
**3.3.1 $\quad$ Kernelization of cost function $J(\mathbf{w}_k)$:**\
We show that both the numerator term $\mathbf{w}^T_k \Sigma_{D}\mathbf{w}_k$ and the denominator term $\mathbf{w}^T_k \Sigma_{S}\mathbf{w}_k$ of the cost function $J(\mathbf{w}_k)$ can be formulated in terms of inner products, and hence they can be kernelized separately.
**Kernelization of denominator** $\mathbf{w}^T_k \Sigma_{S}\mathbf{w}_k$: As seen in Eq. (\[eqn:SigmaS\]), $\Sigma_{S}$ is a function of $\widetilde{\mathbf{X}},\widetilde{\mathbf{Z}},\mathbf{S},\mathbf{R}$, which are in turn functions of the training set samples. So we first express these matrices in the kernel space $\mathcal{F}$ using the mapping $\phi(\cdot)$ as follows: $$\begin{aligned}
\Phi_{\widetilde{X}} &=& [\sqrt{m_1}\phi(\mathbf{x}_1),\dots,\sqrt{m_1}\phi(\mathbf{x}_{n_1}),\ldots,\sqrt{m_c}\phi(\mathbf{x}_n)] \label{eqn:PhiXtilde} \\
\Phi_{\widetilde{Z}} &=& [\sqrt{n_1}\phi(\mathbf{z}_1), \dots, \sqrt{n_1}\phi(\mathbf{z}_{m_1}), \dots, \sqrt{n_c}\phi(\mathbf{z}_m)] \label{eqn:PhiZtilde}\\
\Phi_{S} &=& (\sum_{{y_i}=1}\phi(\mathbf{x}_i), \sum_{{y_i}=2}\phi(\mathbf{x}_i), \ldots, \sum_{{y_i}=c}\phi(\mathbf{x}_i)) \label{eqn:PhiS}\\
\Phi_{R} &=& (\sum_{{y_j}=1}\phi(\mathbf{z}_j), \sum_{{y_j}=2}\phi(\mathbf{z}_j), \ldots, \sum_{{y_j}=c}\phi(\mathbf{z}_j))\label{eqn:PhiR}\end{aligned}$$ Then, using (\[eqn:SigmaS\]), the covariance matrix $\Sigma_S$ in $\mathcal{F}$ can be expressed as $$\begin{aligned}
n_S \Sigma_S &=& \underbrace{\Phi_{\widetilde{X}}\Phi_{\widetilde{X}}^T}_{A}
+ \underbrace{\Phi_{\widetilde{Z}}\Phi_{\widetilde{Z}}^T}_{B} -\underbrace{\Phi_{S}\Phi_{R}^T}_{C} -\underbrace{\Phi_{R}\Phi_{S}^T}_{D}\label{eqn:ABCD}\end{aligned}$$ Then, using Eq. (\[eqn:rep\_theorem2\]) and (\[eqn:ABCD\]), the denominator term $\mathbf{w}_k^T \Sigma_S \mathbf{w}_k$ can be written as $$\begin{aligned}
\mathbf{w}_k^Tn_S \Sigma_S \mathbf{w}_k =
f_A(\bm{\alpha}_k,\bm{\beta}_k) + f_B(\bm{\alpha}_k,\bm{\beta}_k)+f_C(\bm{\alpha}_k,\bm{\beta_k})+f_D(\bm{\alpha}_k,\bm{\beta}_k) \label{eqn:Num}\end{aligned}$$ where the functions $f_A$, $f_B$, $f_C$ and $f_D$ are of the form $$\begin{aligned}
f_Y(\bm{\alpha}_k,\bm{\beta}_k) &= \bm{\alpha}_k^T \Phi_{X}^T Y \Phi_{X} \bm{\alpha}_k +
\bm{\beta}_k^T \Phi_{Z}^T Y \Phi_{Z} \bm{\beta}_k \nonumber \\
& \qquad +\bm{\alpha}_k^T \Phi_{X}^T Y \Phi_{Z} \bm{\beta}_k +
\bm{\beta}_k^T \Phi_{Z}^T Y \Phi_{X} \bm{\alpha}_k \label{eqn:f_A}\end{aligned}$$ for $Y=A,B,C,D$, which are defined in (\[eqn:ABCD\]). Next we show that each of the functions in (\[eqn:Num\]) can be expressed in terms of inner products of $\Phi$ and hence can be individually kernelized. We have the following Lemmas.\
***Lemma 1:** $f_A(\bm{\alpha}_k,\bm{\beta}_k)$ can be kernelized as $f_A(\bm{\alpha}_k,\bm{\beta}_k) = \bm{\theta}_k^T\widetilde{A}\bm{\theta}_k$, where $$\begin{aligned}
\widetilde{A} &= \left[\begin{array}{@{}cc@{}}
K_{XX}\\
K_{ZX}
\end{array}\right]
\left[\begin{array}{@{}c@{}}
\widetilde{F}
\end{array}\right]
\left[\begin{array}{@{}cc@{}}
K_{XX} & K_{XZ}
\end{array}\right] \, ,
\label{eqn:Atilde}
\end{aligned}$$ $\widetilde{F} = \text{diag}(m_1 I_{n_1},m_2 I_{n_2},\ldots,m_c I_{n_c}) \in \mathbb{R}^{n \times n}$, such that $I_{n_i}$ is the identity matrix of size $(n_i \times n_i)$.*\
***Proof:*** We have $A = \Phi_{\widetilde{X}}\Phi_{\widetilde{X}}^T $. However, for the kernelization of $A$, we need to express it in terms of $\Phi_{X}$, which is not trivial due to the presence of the coefficients $\sqrt{m_1},\ldots, \sqrt{m_c}$, as seen in (\[eqn:PhiXtilde\]). In order to decouple the coefficients, we proceed as follows. Let $\widetilde{F}$ be the diagonal matrix defined as $\widetilde{F} = \text{diag}(m_1 I_{n_1},m_2 I_{n_2},\ldots,m_c I_{n_c}) \in \mathbb{R}^{n \times n}$, [*i*.*e*., ]{} $$\begin{aligned}
\small
\widetilde{F} =
\left(\begin{array}{@{}ccc|ccc|c|ccc@{}}
m_1 & & & & & &&&&\\
& \ddots & & & & &&&&\\
&& m_1 & & & &&&&\\
\cline{1-6}
&&& m_2 & & & & &&\\
&&& & \ddots & & & &&\\
&&& && m_2 & & &&\\
\cline{4-7}
&&& &&& \ddots & & & \\
\cline{7-10}
&&& &&& &m_c & & \\
&&& &&& && \ddots & \\
&&& &&& && &m_c \\
\end{array}\right)
\label{eqn:Ftilde}\end{aligned}$$ where $m_j$ is the number of samples of class $j$ from $\mathbf{Z}$. Then, using (\[eqn:PhiXtilde\]) and the definition of the matrix $A$, the latter can be factorized in terms of $\Phi_X$ using the decoupling matrix $\widetilde{F}$ as follows: $$\begin{aligned}
A = \Phi_{\widetilde{X}}\Phi_{\widetilde{X}}^T
=\Phi_X \widetilde{F} \Phi_X^T \label{eqn:A}
\end{aligned}$$ Then using Eq. (\[eqn:f\_A\]), (\[eqn:A\]) and (\[eqn:Kblockmatppty\]), we can express $f_A(\bm{\alpha}_k,\bm{\beta}_k)$ in terms of inner products of $\Phi$ and then kernelize it as shown below:
$$\begin{aligned}
f_A(\bm{\alpha}_k,\bm{\beta}_k)
&= \bm{\alpha}_k^T \Phi_X^T A \Phi_X \bm{\alpha}_k +
\bm{\beta}_k^T \Phi_Z^T A \Phi_Z \bm{\beta}_k \nonumber+ \bm{\alpha}_k^T \Phi_X^T A \Phi_Z \bm{\beta}_k +
\bm{\beta}_k^T \Phi_Z^T A \Phi_X \bm{\alpha}_k \\
&= \bm{\alpha}_k^T \Phi_X^T \Phi_X \widetilde{F} \Phi_X^T \Phi_X \bm{\alpha}_k +
\bm{\beta}_k^T \Phi_Z^T \Phi_X \widetilde{F} \Phi_X^T \Phi_Z \bm{\beta}_k \\ & \qquad + \bm{\alpha}_k^T \Phi_X^T \Phi_X \widetilde{F} \Phi_X^T \Phi_Z \bm{\beta}_k
+ \bm{\beta}_k^T \Phi_Z^T \Phi_X \widetilde{F} \Phi_X^T \Phi_X \bm{\alpha}_k \\
&= \bm{\alpha}_k^T K_{XX} \widetilde{F} K_{XX} \bm{\alpha}_k +
\bm{\beta}_k^T K_{ZX} \widetilde{F} K_{XZ} \bm{\beta}_k \\
& \qquad + \bm{\alpha}_k^T K_{XX} \widetilde{F} K_{XZ} \bm{\beta}_k
+ \bm{\beta}_k^T K_{ZX} \widetilde{F} K_{XX} \bm{\alpha}_k\\
&= [\bm{\alpha}_k^T \bm{\beta}_k^T ]
\left[\begin{array}{@{}cc@{}}
K_{XX}\widetilde{F}K_{XX} & K_{XX}\widetilde{F}K_{XZ}\\
K_{ZX}\widetilde{F}K_{XX} & K_{ZX}\widetilde{F}K_{XZ}
\end{array}\right]
\left[\begin{array}{@{}c@{}}
\bm{\alpha}_k \\
\bm{\beta}_k
\end{array}\right]\\
&= [\bm{\alpha}_k^T \bm{\beta}_k^T ]
\left[\begin{array}{@{}cc@{}}
K_{XX}\\
K_{ZX}
\end{array}\right]
\left[\begin{array}{@{}c@{}}
\widetilde{F}
\end{array}\right]
\left[\begin{array}{@{}cc@{}}
K_{XX} & K_{XZ}
\end{array}\right]
\left[\begin{array}{@{}c@{}}
\bm{\alpha}_k \\
\bm{\beta}_k
\end{array}\right]\\
&=\bm{\theta}_k^T\widetilde{A}\bm{\theta}_k\end{aligned}$$
[$\square$]{}
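Lemma 1 can be sanity-checked numerically under a linear kernel, where $\phi$ is the identity and both sides of the identity are directly computable. The data, labels, and class sizes in this sketch are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(3)
d, c = 5, 2
n_per, m_per = [2, 3], [3, 2]                # hypothetical class sizes
yx = np.repeat(np.arange(c), n_per)
yz = np.repeat(np.arange(c), m_per)
n, m = yx.size, yz.size
X = rng.standard_normal((d, n))
Z = rng.standard_normal((d, m))

# Linear kernel, so the kernel blocks are plain Gram matrices.
K_XX, K_XZ, K_ZX = X.T @ X, X.T @ Z, Z.T @ X
F = np.diag(np.array(m_per)[yx].astype(float))   # decoupling matrix F~

alpha = rng.standard_normal(n)
beta = rng.standard_normal(m)
theta = np.concatenate([alpha, beta])

# Left side: f_A = w^T A w, with w = X alpha + Z beta and A = X F X^T.
w = X @ alpha + Z @ beta
A = X @ F @ X.T
lhs = w @ A @ w

# Right side: the kernelized form theta^T A~ theta from Lemma 1.
A_t = np.vstack([K_XX, K_ZX]) @ F @ np.hstack([K_XX, K_XZ])
rhs = theta @ A_t @ theta
```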
***Lemma 2:** $f_B(\bm{\alpha}_k,\bm{\beta}_k)$ can be kernelized as $f_B(\bm{\alpha}_k,\bm{\beta}_k) = \bm{\theta}_k^T\widetilde{B}\bm{\theta}_k$, where $$\begin{aligned}
\widetilde{B}
&=\left[\begin{array}{@{}cc@{}}
K_{XZ}\\
K_{ZZ}
\end{array}\right]
\left[\begin{array}{@{}c@{}}
\widetilde{G}
\end{array}\right]
\left[\begin{array}{@{}cc@{}}
K_{ZX} & K_{ZZ}
\end{array}\right]
\label{eqn:Btilde}\end{aligned}$$ and $\widetilde{G} = \text{diag}(n_1 I_{m_1},n_2 I_{m_2},\ldots,n_c I_{m_c}) \in \mathbb{R}^{m \times m}$, such that $I_{m_i}$ is the identity matrix of size $(m_i \times m_i)$.\
*\
***Proof:*** The kernelization of $f_B(\bm{\alpha}_k,\bm{\beta}_k)$ is similar to that of $f_A(\bm{\alpha}_k,\bm{\beta}_k)$. As $B = \Phi_{\widetilde{Z}}\Phi_{\widetilde{Z}}^T$, we need to express it in terms of $\Phi_{Z}$ for kernelization, which is not directly possible as $\Phi_{\widetilde{Z}}$ is coupled with the coefficients $\sqrt{n_1},\ldots, \sqrt{n_c}$ (see (\[eqn:PhiZtilde\])). Hence we use a decoupling matrix $\widetilde{G}$, defined as the diagonal matrix $\widetilde{G} = \text{diag}(n_1 I_{m_1},n_2 I_{m_2},\ldots,n_c I_{m_c}) \in \mathbb{R}^{m \times m}$, [*i*.*e*., ]{} $$\begin{aligned}
\small
\widetilde{G} =
\left(\begin{array}{@{}ccc|ccc|c|ccc@{}}
n_1 & & & & & &&&&\\
& \ddots & & & & &&&&\\
&& n_1 & & & &&&&\\
\cline{1-6}
&&& n_2 & & & & &&\\
&&& & \ddots & & & &&\\
&&& && n_2 & & &&\\
\cline{4-7}
&&& &&& \ddots & & & \\
\cline{7-10}
&&& &&& &n_c & & \\
&&& &&& && \ddots & \\
&&& &&& && &n_c \\
\end{array}\right)
\label{eqn:Gtilde}\end{aligned}$$ where $n_i$ is the number of samples of class $i$ from $\mathbf{X}$. Then, using (\[eqn:PhiZtilde\]), the decoupling matrix $\widetilde{G}$ and the definition of $B$, the latter can be factorized in terms of $\Phi_Z$ as follows: $$\begin{aligned}
B = \Phi_{\widetilde{Z}}\Phi_{\widetilde{Z}}^T
=\Phi_Z \widetilde{G} \Phi_Z^T \label{eqn:B}
\end{aligned}$$ Then using (\[eqn:f\_A\]), (\[eqn:B\]) and (\[eqn:Kblockmatppty\]), we can kernelize $f_B(\bm{\alpha}_k,\bm{\beta}_k)$ as shown below: $$\begin{aligned}
f_B(\bm{\alpha}_k,\bm{\beta}_k) &= \bm{\alpha}_k^T \Phi_X^T B \Phi_X \bm{\alpha}_k +
\bm{\beta}_k^T \Phi_Z^T B \Phi_Z \bm{\beta}_k
+ \bm{\alpha}_k^T \Phi_X^T B \Phi_Z \bm{\beta}_k
+ \bm{\beta}_k^T \Phi_Z^T B \Phi_X \bm{\alpha}_k\\
&= \bm{\alpha}_k^T \Phi_X^T \Phi_Z \widetilde{G} \Phi_Z^T \Phi_X \bm{\alpha}_k +
\bm{\beta}_k^T \Phi_Z^T \Phi_Z \widetilde{G} \Phi_Z^T \Phi_Z \bm{\beta}_k \\
& \qquad + \bm{\alpha}_k^T \Phi_X^T \Phi_Z \widetilde{G} \Phi_Z^T \Phi_Z \bm{\beta}_k
+ \bm{\beta}_k^T \Phi_Z^T \Phi_Z \widetilde{G} \Phi_Z^T \Phi_X \bm{\alpha}_k \\
&= \bm{\alpha}_k^T K_{XZ} \widetilde{G} K_{ZX} \bm{\alpha}_k +
\bm{\beta}_k^T K_{ZZ} \widetilde{G} K_{ZZ} \bm{\beta}_k \\
& \qquad + \bm{\alpha}_k^T K_{XZ} \widetilde{G} K_{ZZ} \bm{\beta}_k
+ \bm{\beta}_k^T K_{ZZ} \widetilde{G} K_{ZX} \bm{\alpha}_k\\
&= [\bm{\alpha}_k^T \bm{\beta}_k^T ]
\left[\begin{array}{@{}cc@{}}
K_{XZ}\widetilde{G}K_{ZX} & K_{XZ}\widetilde{G}K_{ZZ}\\
K_{ZZ}\widetilde{G}K_{ZX} & K_{ZZ}\widetilde{G}K_{ZZ}
\end{array}\right]
\left[\begin{array}{@{}c@{}}
\bm{\alpha}_k \\
\bm{\beta}_k
\end{array}\right]\\
&= [\bm{\alpha}_k^T \bm{\beta}_k^T ]
\left[\begin{array}{@{}cc@{}}
K_{XZ}\\
K_{ZZ}
\end{array}\right]
\left[\begin{array}{@{}c@{}}
\widetilde{G}
\end{array}\right]
\left[\begin{array}{@{}cc@{}}
K_{ZX} & K_{ZZ}
\end{array}\right]
\left[\begin{array}{@{}c@{}}
\bm{\alpha}_k \\
\bm{\beta}_k
\end{array}\right]\\
&= \bm{\theta}_k^T\widetilde{B}\bm{\theta}_k.\end{aligned}$$[$\square$]{}
Next, in order to kernelize $f_C(\bm{\alpha}_k,\bm{\beta}_k) $ and $f_D(\bm{\alpha}_k,\bm{\beta}_k)$, we define the following matrices. $$H_{XX} = \Phi_{X}^T \Phi_{S},\quad H_{ZZ} = \Phi_{Z}^T \Phi_{R}, \quad H_{XZ} = \Phi_{X}^T \Phi_{R}, \quad H_{ZX} = \Phi_{Z}^T \Phi_{S}
\label{eqn:H1}$$ The matrices are of sizes $H_{XX},H_{XZ} \in \mathbb{R}^{n \times c}$ and $H_{ZX},H_{ZZ} \in \mathbb{R}^{m \times c}$. The $(p,q)$-th element of each of these matrices can be expressed in terms of the kernel function $k(\cdot,\cdot)$ as $$\begin{aligned}
(H_{XX})_{pq} = \sum_{y_i=q} k(\mathbf{x}_p,\mathbf{x}_i), \;(H_{ZZ})_{pq} = \sum_{y_j=q} k(\mathbf{z}_p,\mathbf{z}_j) \nonumber\\
(H_{XZ})_{pq} = \sum_{y_j=q} k(\mathbf{x}_p,\mathbf{z}_j), \;(H_{ZX})_{pq} = \sum_{y_i=q} k(\mathbf{z}_p,\mathbf{x}_i)
\label{eqn:H2}\end{aligned}$$ Then we have the following Lemma.\
***Lemma 3:** $f_C(\bm{\alpha}_k,\bm{\beta}_k)$ and $f_D(\bm{\alpha}_k,\bm{\beta}_k)$ can be kernelized such that $f_C(\bm{\alpha}_k,\bm{\beta}_k) = \bm{\theta}^T_k\widetilde{C}\bm{\theta}_k$ and $f_D(\bm{\alpha}_k,\bm{\beta}_k) = \bm{\theta}^T_k\widetilde{C}^T\bm{\theta}_k$, where* $$\begin{aligned}
\widetilde{C} = \left[\begin{array}{@{}cc@{}}
H_{XX}\\
H_{ZX}
\end{array}\right]
\left[\begin{array}{@{}cc@{}}
H_{XZ}^T & H_{ZZ}^T
\end{array}\right]
\label{eqn:C}\end{aligned}$$
***Proof:*** Using (\[eqn:f\_A\]), the relations in (\[eqn:H1\]) and the definition $C = \Phi_S \Phi_R^T$, we can kernelize $f_C(\bm{\alpha}_k,\bm{\beta}_k)$ as follows:
$$\begin{aligned}
f_C(\bm{\alpha}_k,\bm{\beta}_k)
&= \bm{\alpha}_k^T \Phi_X^T C \Phi_X \bm{\alpha}_k +
\bm{\beta}_k^T \Phi_Z^T C \Phi_Z \bm{\beta}_k \nonumber+ \bm{\alpha}_k^T \Phi_X^T C \Phi_Z \bm{\beta}_k +
\bm{\beta}_k^T \Phi_Z^T C \Phi_X \bm{\alpha}_k \\
&= \bm{\alpha}_k^T \Phi_X^T \Phi_S \Phi_R^T \Phi_X \bm{\alpha}_k +
\bm{\beta}_k^T \Phi_Z^T \Phi_S \Phi_R^T \Phi_Z \bm{\beta}_k \nonumber \\
&\qquad + \bm{\alpha}_k^T \Phi_X^T \Phi_S \Phi_R^T \Phi_Z \bm{\beta}_k +
\bm{\beta}_k^T \Phi_Z^T \Phi_S \Phi_R^T \Phi_X \bm{\alpha}_k \\
&= \bm{\alpha}_k^T H_{XX} H_{XZ}^T \bm{\alpha}_k +
\bm{\beta}_k^TH_{ZX}H_{ZZ}^T \bm{\beta}_k \\
&\qquad + \bm{\alpha}_k^T H_{XX}H_{ZZ}^T \bm{\beta}_k
+ \bm{\beta}_k^T H_{ZX}H_{XZ}^T \bm{\alpha}_k \\
&= [\bm{\alpha}_k^T \bm{\beta}_k^T ]
\left[\begin{array}{@{}cc@{}}
H_{XX}H_{XZ}^T & H_{XX}H_{ZZ}^T\\
H_{ZX}H_{XZ}^T & H_{ZX}H_{ZZ}^T
\end{array}\right]
\left[\begin{array}{@{}c@{}}
\bm{\alpha}_k \\
\bm{\beta}_k
\end{array}\right]\\
&= [\bm{\alpha}_k^T \bm{\beta}_k^T ]
\left[\begin{array}{@{}cc@{}}
H_{XX}\\
H_{ZX}
\end{array}\right]
\left[\begin{array}{@{}cc@{}}
H_{XZ}^T & H_{ZZ}^T
\end{array}\right]
\left[\begin{array}{@{}c@{}}
\bm{\alpha}_k \\
\bm{\beta}_k
\end{array}\right]\\
&=\bm{\theta}_k^T\widetilde{C}\bm{\theta}_k\end{aligned}$$
For kernelizing $f_D(\bm{\alpha}_k,\bm{\beta}_k)$, it can be observed, using Eq. (\[eqn:f\_A\]), the relations in (\[eqn:H1\]) and the definition $D = \Phi_R \Phi_S^T$, that $f_D(\bm{\alpha}_k,\bm{\beta}_k) = f_C^T(\bm{\alpha}_k,\bm{\beta}_k)$, as shown below: $$\begin{aligned}
f_D(\bm{\alpha}_k,\bm{\beta}_k)
&= \bm{\alpha}_k^T \Phi_X^T D \Phi_X \bm{\alpha}_k +
\bm{\beta}_k^T \Phi_Z^T D \Phi_Z \bm{\beta}_k \nonumber+ \bm{\alpha}_k^T \Phi_X^T D \Phi_Z \bm{\beta}_k +
\bm{\beta}_k^T \Phi_Z^T D \Phi_X \bm{\alpha}_k \\
&= \bm{\alpha}_k^T \Phi_X^T \Phi_R \Phi_S^T \Phi_X \bm{\alpha}_k +
\bm{\beta}_k^T \Phi_Z^T \Phi_R \Phi_S^T \Phi_Z \bm{\beta}_k \\
&\quad \quad + \bm{\alpha}_k^T \Phi_X^T \Phi_R \Phi_S^T \Phi_Z \bm{\beta}_k +
\bm{\beta}_k^T \Phi_Z^T \Phi_R \Phi_S^T \Phi_X \bm{\alpha}_k \\
&= (\bm{\alpha}_k^T \Phi_X^T \Phi_S \Phi_R^T \Phi_X \bm{\alpha}_k)^T +
(\bm{\beta}_k^T \Phi_Z^T \Phi_S \Phi_R^T \Phi_Z \bm{\beta}_k)^T \\
&\quad \quad + (\bm{\beta}_k^T \Phi_Z^T \Phi_S \Phi_R^T \Phi_X \bm{\alpha}_k)^T + (\bm{\alpha}_k^T \Phi_X^T \Phi_S \Phi_R^T \Phi_Z \bm{\beta}_k)^T
\\
&=\Big(\bm{\alpha}_k^T \Phi_X^T C \Phi_X \bm{\alpha}_k+
\bm{\beta}_k^T \Phi_Z^T C \Phi_Z \bm{\beta}_k \nonumber+ \bm{\alpha}_k^T \Phi_X^T C \Phi_Z \bm{\beta}_k +
\bm{\beta}_k^T \Phi_Z^T C \Phi_X \bm{\alpha}_k\Big)^T \\
&= f^T_C(\bm{\alpha}_k,\bm{\beta}_k) \end{aligned}$$ Therefore, it follows that $f_D(\bm{\alpha}_k,\bm{\beta}_k) = \bm{\theta}^T_k\widetilde{C}^T \bm{\theta}_k$. [$\square$]{}\
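In practice, the $H$ matrices of (\[eqn:H1\]) reduce to per-class column sums of the kernel blocks, as stated in (\[eqn:H2\]); this can be done with one matrix product per block using class-indicator matrices. The labels and kernel blocks below are random stand-ins:

```python
import numpy as np

rng = np.random.default_rng(4)
c = 3
yx = np.repeat(np.arange(c), [2, 3, 2])   # hypothetical class labels, view X
yz = np.repeat(np.arange(c), [3, 2, 2])   # hypothetical class labels, view Z
n, m = yx.size, yz.size
K_XX = rng.standard_normal((n, n))        # random stand-ins for kernel blocks
K_XZ = rng.standard_normal((n, m))
K_ZX = rng.standard_normal((m, n))
K_ZZ = rng.standard_normal((m, m))

# Class-indicator matrices turn the per-class sums in (H)_{pq} into products.
Ex = (yx[:, None] == np.arange(c)[None, :]).astype(float)   # n x c
Ez = (yz[:, None] == np.arange(c)[None, :]).astype(float)   # m x c

H_XX = K_XX @ Ex    # (H_XX)_{pq} = sum over {i : y_i = q} of k(x_p, x_i)
H_ZZ = K_ZZ @ Ez
H_XZ = K_XZ @ Ez
H_ZX = K_ZX @ Ex
```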
Based on (\[eqn:Num\]) and Lemmas 1, 2 and 3 above, we finally obtain the following theorem.\
***Theorem 1:** The denominator term $\mathbf{w}_k^T\Sigma_S\mathbf{w}_k$ in (\[eqn:XQDAcost\]) can be kernelized as $\mathbf{w}_k^T\Sigma_S\mathbf{w}_k = \bm{\theta}_k^T \Lambda_S \bm{\theta}_k$, where $$\begin{aligned}
\Lambda_S = (1/n_S)(\widetilde{A} + \widetilde{B} - \widetilde{C} - \widetilde{C}^T).
\label{eqn:LambdaS}\end{aligned}$$*\
This completes the kernelization of the denominator term of (\[eqn:XQDAcost\]). We next show how the numerator term of (\[eqn:XQDAcost\]) can be kernelized.\
**Kernelization of numerator** $\mathbf{w}^T_k \Sigma_{D}\mathbf{w}_k$: As seen in (\[eqn:SigmaD\]), the expression for $\Sigma_{D}$ contains $\mathbf{X}$, $\mathbf{Z}$, $\mathbf{s}$ and $\mathbf{r}$. Hence, for kernelization, we obtain their representations in the kernel space $\mathcal{F}$ using the mapping $\phi(\cdot)$ as follows: $$\begin{aligned}
\Phi_{X} &=& [\phi(\mathbf{x}_1),\dots,\phi(\mathbf{x}_{n_1}),\ldots,\phi(\mathbf{x}_n)] \label{eqn:PhiX} \\
\Phi_{Z} &=& [\phi(\mathbf{z}_1), \dots, \phi(\mathbf{z}_{m_1}), \dots, \phi(\mathbf{z}_m)] \label{eqn:PhiZ}\\
\Phi_{s} &=& \sum_{i=1}^n \phi(\mathbf{x}_i), \qquad \Phi_r = \sum_{i=1}^m \phi(\mathbf{z}_i) \label{eqn:Phisr}
\end{aligned}$$ Similar to (\[eqn:ABCD\]), the covariance matrix $\Sigma_D$ in $\mathcal{F}$ can be expressed using Eq. (\[eqn:SigmaD\]) as $$n_D \Sigma_D = \underbrace{m \Phi_X\Phi_X^T}_{U} + \underbrace{n \Phi_Z\Phi_Z^T}_{V} -\underbrace{\Phi_s\Phi_r^T}_{E} -\underbrace{\Phi_r\Phi_s^T}_{P} - n_S \Sigma_S
\label{eqn:MNJL}$$ Then using Eq. (\[eqn:rep\_theorem2\]) and (\[eqn:MNJL\]), we have $$\begin{aligned}
&\mathbf{w}_k^Tn_D \Sigma_D \mathbf{w}_k = f_U(\bm{\alpha}_k,\bm{\beta}_k) + f_V(\bm{\alpha}_k,\bm{\beta}_k)\nonumber\\
&\quad -f_E(\bm{\alpha}_k,\bm{\beta}_k)-f_P(\bm{\alpha}_k,\bm{\beta}_k) -\mathbf{w}_k^Tn_S \Sigma_S \mathbf{w}_k \label{eqn:Din}\end{aligned}$$ where the functions $f_U$, $f_V$, $f_E$ and $f_P$ are of the form $$\begin{aligned}
f_{\widetilde{Y}}(\bm{\alpha}_k,\bm{\beta}_k) &= \bm{\alpha}_k^T \Phi_{X}^T \widetilde{Y} \Phi_{X} \bm{\alpha}_k +
\bm{\beta}_k^T \Phi_Z^T \widetilde{Y} \Phi_Z \bm{\beta}_k \nonumber\\
& \qquad + \bm{\alpha}_k^T \Phi_X^T \widetilde{Y} \Phi_Z \bm{\beta}_k +
\bm{\beta}_k^T \Phi_Z^T \widetilde{Y} \Phi_X \bm{\alpha}_k \label{eqn:f_M}\end{aligned}$$ for $\widetilde{Y}=U,V,E,P$, which are already defined in (\[eqn:MNJL\]). We next show that each of the terms in (\[eqn:Din\]) can be expressed as inner products of $\phi(\cdot)$ and hence can be separately kernelized. We have the following two Lemmas.\
***Lemma 4:** $f_U(\bm{\alpha}_k,\bm{\beta}_k)$ and $f_V(\bm{\alpha}_k,\bm{\beta}_k)$ can be kernelized as $f_U(\bm{\alpha}_k,\bm{\beta}_k) = \bm{\theta}_k^T\widetilde{U}\bm{\theta}_k$ and $f_V(\bm{\alpha}_k,\bm{\beta}_k) = \bm{\theta}_k^T\widetilde{V}\bm{\theta}_k$, where* $$\begin{aligned}
\widetilde{U} &=
m \left[\begin{array}{@{}c@{}}
K_{XX} \\
K_{ZX}
\end{array}\right]
\left[\begin{array}{@{}cc@{}}
K_{XX} & K_{XZ}
\end{array}\right] \label{eqn:U}\\
\widetilde{V} &=
n \left[\begin{array}{@{}c@{}}
K_{XZ} \\
K_{ZZ}
\end{array}\right]
\left[\begin{array}{@{}cc@{}}
K_{ZX} & K_{ZZ}
\end{array}\right]
\label{eqn:V}\end{aligned}$$ ***Proof:*** Using Eq. (\[eqn:f\_M\]), the definition $U=m\Phi_X\Phi_X^T$ and the relations in (\[eqn:Kblockmatppty\]), we can kernelize $f_{U}(\bm{\alpha}_k,\bm{\beta}_k)$ as follows: $$\begin{aligned}
f_{U}(\bm{\alpha}_k,\bm{\beta}_k) &= \bm{\alpha}_k^T \Phi_{X}^T U \Phi_{X} \bm{\alpha}_k +
\bm{\beta}_k^T \Phi_Z^T U\Phi_Z \bm{\beta}_k + \bm{\alpha}_k^T \Phi_X^T U \Phi_Z \bm{\beta}_k +
\bm{\beta}_k^T \Phi_Z^T U \Phi_X \bm{\alpha}_k \\
&= \bm{\alpha}_k^T m\Phi_X^T \Phi_X \Phi_X^T \Phi_X \bm{\alpha}_k +
\bm{\beta}_k^T m\Phi_Z^T \Phi_X \Phi_X^T \Phi_Z \bm{\beta}_k \\
&\quad + \bm{\alpha}_k^T m\Phi_X^T \Phi_X \Phi_X^T \Phi_Z \bm{\beta}_k
+ \bm{\beta}_k^T m\Phi_Z^T \Phi_X \Phi_X^T \Phi_X \bm{\alpha}_k \\
&= \bm{\alpha}_k^T mK_{XX} K_{XX} \bm{\alpha}_k +
\bm{\beta}_k^T mK_{ZX} K_{XZ} \bm{\beta}_k \\
&\quad + \bm{\alpha}_k^T mK_{XX} K_{XZ} \bm{\beta}_k
+ \bm{\beta}_k^T mK_{ZX} K_{XX} \bm{\alpha}_k\\
&= m[\bm{\alpha}_k^T \bm{\beta}_k^T ]
\left[\begin{array}{@{}cc@{}}
K_{XX}K_{XX} & K_{XX}K_{XZ}\\
K_{ZX}K_{XX} & K_{ZX}K_{XZ}
\end{array}\right]
\left[\begin{array}{@{}c@{}}
\bm{\alpha}_k \\
\bm{\beta}_k
\end{array}\right]\\
&= m[\bm{\alpha}_k^T \bm{\beta}_k^T ]
\left[\begin{array}{@{}cc@{}}
K_{XX}\\
K_{ZX}
\end{array}\right]
\left[\begin{array}{@{}cc@{}}
K_{XX} & K_{XZ}
\end{array}\right]
\left[\begin{array}{@{}c@{}}
\bm{\alpha}_k \\
\bm{\beta}_k
\end{array}\right]\\
&=\bm{\theta}_k^T\widetilde{U}\bm{\theta}_k \end{aligned}$$ Similarly, $f_{V}(\bm{\alpha}_k,\bm{\beta}_k)$ can also be kernelized using Eq. (\[eqn:f\_M\]), the definition $V=n\Phi_Z\Phi_Z^T$, and the relations in (\[eqn:Kblockmatppty\]), as follows: $$\begin{aligned}
f_V(\bm{\alpha}_k,\bm{\beta}_k) &= \bm{\alpha}_k^T \Phi_{X}^T V \Phi_{X} \bm{\alpha}_k +
\bm{\beta}_k^T \Phi_Z^T V\Phi_Z \bm{\beta}_k + \bm{\alpha}_k^T \Phi_X^T V \Phi_Z \bm{\beta}_k +
\bm{\beta}_k^T \Phi_Z^T V \Phi_X \bm{\alpha}_k \\
&= \bm{\alpha}_k^T n\Phi_X^T \Phi_Z \Phi_Z^T \Phi_X \bm{\alpha}_k +
\bm{\beta}_k^T n\Phi_Z^T \Phi_Z \Phi_Z^T\Phi_Z \bm{\beta}_k \\
&\quad \quad + \bm{\alpha}_k^T n\Phi_X^T \Phi_Z \Phi_Z^T \Phi_Z \bm{\beta}_k
+ \bm{\beta}_k^T n\Phi_Z^T \Phi_Z \Phi_Z^T \Phi_X \bm{\alpha}_k \\
&= \bm{\alpha}_k^T nK_{XZ} K_{ZX} \bm{\alpha}_k +
\bm{\beta}_k^T nK_{ZZ} K_{ZZ} \bm{\beta}_k \\
&\quad \quad + \bm{\alpha}_k^T nK_{XZ} K_{ZZ} \bm{\beta}_k
+ \bm{\beta}_k^T nK_{ZZ} K_{ZX} \bm{\alpha}_k\\
&= n \left[\begin{array}{@{}cc@{}}
\bm{\alpha}_k^T & \bm{\beta}_k^T
\end{array}\right]
\left[\begin{array}{@{}cc@{}}
K_{XZ}K_{ZX} & K_{XZ}K_{ZZ}\\
K_{ZZ}K_{ZX} & K_{ZZ}K_{ZZ}
\end{array}\right]
\left[\begin{array}{@{}c@{}}
\bm{\alpha}_k \\
\bm{\beta}_k
\end{array}\right]\\
&= n \left[\begin{array}{@{}cc@{}}
\bm{\alpha}_k^T & \bm{\beta}_k^T
\end{array}\right]
\left[\begin{array}{@{}cc@{}}
K_{XZ}\\
K_{ZZ}
\end{array}\right]
\left[\begin{array}{@{}cc@{}}
K_{ZX} & K_{ZZ}
\end{array}\right]
\left[\begin{array}{@{}c@{}}
\bm{\alpha}_k \\
\bm{\beta}_k
\end{array}\right]\\
&=\bm{\theta}_k^T\widetilde{V}\bm{\theta}_k \end{aligned}$$[$\square$]{}\
***Lemma 5:** $f_E(\bm{\alpha}_k,\bm{\beta}_k)$ and $f_P(\bm{\alpha}_k,\bm{\beta}_k)$ can be kernelized as $f_E(\bm{\alpha}_k,\bm{\beta}_k) = \bm{\theta}_k^T\widetilde{E}\bm{\theta}_k$ and $f_P(\bm{\alpha}_k,\bm{\beta}_k) = \bm{\theta}_k^T\widetilde{E}^T\bm{\theta}_k$, where* $$\begin{aligned}
\widetilde{E} =
\left[\begin{array}{@{}c@{}}
K_{XX}\\
K_{ZX}
\end{array}\right]
\left[\begin{array}{@{}c@{}}
\bm{1}_{n \times m}
\end{array}\right]
\left[\begin{array}{@{}cc@{}}
K_{ZX} & K_{ZZ}
\end{array}\right]
\label{eqn:E}\end{aligned}$$ and $\bm{1}_{n \times m}$ is an $(n \times m)$ dimensional matrix of ones.
***Proof:*** For kernelizing $f_E(\bm{\alpha}_k,\bm{\beta}_k)$, we need to express $E=\Phi_s\Phi_r^T$ in terms of $\Phi_X$ and $\Phi_Z$. To that end, we rewrite $\Phi_s$ and $\Phi_r$ based on (\[eqn:Phisr\]) as $$\begin{aligned}
\Phi_s &= \sum_{i=1}^n \phi(\mathbf{x}_i) =
[\phi(\mathbf{x}_1), \phi(\mathbf{x}_2),\ldots, \phi(\mathbf{x}_n)]
\mathbf{1}_n=\Phi_X \mathbf{1}_n\\
\Phi_r &= \sum_{i=1}^m \phi(\mathbf{z}_i) =
[\phi(\mathbf{z}_1), \phi(\mathbf{z}_2),\ldots, \phi(\mathbf{z}_m)]
\mathbf{1}_m=\Phi_Z \mathbf{1}_m
\end{aligned}$$ where $\mathbf{1}_n$ and $\mathbf{1}_m$ are column vectors of ones of lengths $n$ and $m$, respectively. Now, based on the definition of $E$, it can be expressed as $$\begin{aligned}
E = \Phi_s\Phi_r^T
=\Phi_X \mathbf{1}_{n} \mathbf{1}_{m}^T \Phi_Z^T
=\Phi_X \mathbf{1}_{n \times m} \Phi_Z^T \label{eqn:Esolve}\end{aligned}$$ where $\mathbf{1}_{n \times m}$ is an $(n \times m)$-dimensional matrix of ones. Then using Eq. (\[eqn:f\_M\]), (\[eqn:Esolve\]) and the relations in (\[eqn:Kblockmatppty\]), we can kernelize $f_E(\bm{\alpha}_k,\bm{\beta}_k)$ as follows: $$\begin{aligned}
f_E(\bm{\alpha}_k,\bm{\beta}_k)
&= \bm{\alpha}_k^T \Phi_{X}^T E \Phi_{X} \bm{\alpha}_k +
\bm{\beta}_k^T \Phi_Z^T E \Phi_Z \bm{\beta}_k + \bm{\alpha}_k^T \Phi_X^T E \Phi_Z \bm{\beta}_k +
\bm{\beta}_k^T \Phi_Z^T E \Phi_X \bm{\alpha}_k \\
&= \bm{\alpha}_k^T \Phi_X^T (\Phi_X \mathbf{1}_{n \times m} \Phi_Z^T) \Phi_X \bm{\alpha}_k +
\bm{\beta}_k^T \Phi_Z^T (\Phi_X \mathbf{1}_{n \times m} \Phi_Z^T) \Phi_Z \bm{\beta}_k \\
&\qquad \qquad + \bm{\alpha}_k^T \Phi_X^T(\Phi_X \mathbf{1}_{n \times m} \Phi_Z^T) \Phi_Z \bm{\beta}_k
+ \bm{\beta}_k^T \Phi_Z^T (\Phi_X \mathbf{1}_{n \times m} \Phi_Z^T) \Phi_X \bm{\alpha}_k \\
&= \bm{\alpha}_k^T K_{XX} \mathbf{1}_{n \times m} K_{ZX} \bm{\alpha}_k +
\bm{\beta}_k^T K_{ZX} \mathbf{1}_{n \times m} K_{ZZ} \bm{\beta}_k \\
&\qquad \qquad + \bm{\alpha}_k^T K_{XX} \mathbf{1}_{n \times m} K_{ZZ} \bm{\beta}_k + \bm{\beta}_k^T K_{ZX} \mathbf{1}_{n \times m} K_{ZX} \bm{\alpha}_k\\
&= \left[\begin{array}{@{}cc@{}}
\bm{\alpha}_k^T & \bm{\beta}_k^T
\end{array}\right]
\left[\begin{array}{@{}cc@{}}
K_{XX}\mathbf{1}_{n \times m}K_{ZX} & K_{XX}\mathbf{1}_{n \times m}K_{ZZ}\\
K_{ZX}\mathbf{1}_{n \times m}K_{ZX} & K_{ZX}\mathbf{1}_{n \times m}K_{ZZ}
\end{array}\right]
\left[\begin{array}{@{}c@{}}
\bm{\alpha}_k \\
\bm{\beta}_k
\end{array}\right]\\
&= \left[\begin{array}{@{}cc@{}}
\bm{\alpha}_k^T & \bm{\beta}_k^T
\end{array}\right]
\left[\begin{array}{@{}cc@{}}
K_{XX}\\
K_{ZX}
\end{array}\right]
[\mathbf{1}_{n \times m}]
\left[\begin{array}{@{}cc@{}}
K_{ZX} & K_{ZZ}
\end{array}\right]
\left[\begin{array}{@{}c@{}}
\bm{\alpha}_k \\
\bm{\beta}_k
\end{array}\right]\\
&=\bm{\theta}_k^T\widetilde{E}\bm{\theta}_k \end{aligned}$$\
\
For kernelizing $f_P(\bm{\alpha}_k,\bm{\beta}_k)$, it can be seen that $$\begin{aligned}
P = \Phi_r\Phi_s^T
=\Phi_Z \mathbf{1}_{m} \mathbf{1}_{n}^T \Phi_X^T
=\Phi_Z \mathbf{1}_{m \times n} \Phi_X^T.\end{aligned}$$ Then, $f_P(\bm{\alpha}_k,\bm{\beta}_k)$ can be kernelized by observing that $f_P(\bm{\alpha}_k,\bm{\beta}_k) = f_E^T(\bm{\alpha}_k,\bm{\beta}_k)$, as shown below: $$\begin{aligned}
f_P(\bm{\alpha}_k,\bm{\beta}_k)
&= \bm{\alpha}_k^T \Phi_{X}^T P \Phi_{X} \bm{\alpha}_k +
\bm{\beta}_k^T \Phi_Z^T P \Phi_Z \bm{\beta}_k + \bm{\alpha}_k^T \Phi_X^T P \Phi_Z \bm{\beta}_k +
\bm{\beta}_k^T \Phi_Z^T P \Phi_X \bm{\alpha}_k \\
&= \bm{\alpha}_k^T \Phi_X^T (\Phi_Z \mathbf{1}_{m \times n} \Phi_X^T) \Phi_X \bm{\alpha}_k +
\bm{\beta}_k^T \Phi_Z^T (\Phi_Z \mathbf{1}_{m \times n} \Phi_X^T) \Phi_Z \bm{\beta}_k \\
&\qquad \qquad + \bm{\alpha}_k^T \Phi_X^T(\Phi_Z \mathbf{1}_{m \times n} \Phi_X^T) \Phi_Z \bm{\beta}_k
+ \bm{\beta}_k^T \Phi_Z^T (\Phi_Z \mathbf{1}_{m \times n} \Phi_X^T) \Phi_X \bm{\alpha}_k \\
&= (\bm{\alpha}_k^T \Phi_X^T (\Phi_X \mathbf{1}_{n \times m} \Phi_Z^T) \Phi_X \bm{\alpha}_k)^T +
(\bm{\beta}_k^T \Phi_Z^T (\Phi_X \mathbf{1}_{n \times m} \Phi_Z^T) \Phi_Z \bm{\beta}_k)^T \\
&\qquad \qquad + (\bm{\beta}_k^T \Phi_Z^T (\Phi_X \mathbf{1}_{n \times m} \Phi_Z^T) \Phi_X \bm{\alpha}_k)^T + (\bm{\alpha}_k^T \Phi_X^T(\Phi_X \mathbf{1}_{n \times m} \Phi_Z^T) \Phi_Z \bm{\beta}_k)^T \\
&= \Big[\bm{\alpha}_k^T \Phi_X^T (\Phi_X \mathbf{1}_{n \times m} \Phi_Z^T) \Phi_X \bm{\alpha}_k +
\bm{\beta}_k^T \Phi_Z^T (\Phi_X \mathbf{1}_{n \times m} \Phi_Z^T) \Phi_Z \bm{\beta}_k \\
&\qquad \qquad + \bm{\alpha}_k^T \Phi_X^T(\Phi_X \mathbf{1}_{n \times m} \Phi_Z^T) \Phi_Z \bm{\beta}_k + \bm{\beta}_k^T \Phi_Z^T (\Phi_X \mathbf{1}_{n \times m} \Phi_Z^T) \Phi_X \bm{\alpha}_k \Big]^T\\
&= f_E^T(\bm{\alpha}_k,\bm{\beta}_k)\end{aligned}$$\
\
Then it follows that $f_P(\bm{\alpha}_k,\bm{\beta}_k) = \bm{\theta}_k^T\widetilde{E}^T\bm{\theta}_k $. [$\square$]{}\
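As a quick numerical sanity check on Lemma 5 (our illustration, not part of the derivation), one can instantiate $\Phi_X$ and $\Phi_Z$ as explicit random matrices so that all kernel matrices are computable, and verify $f_P(\bm{\alpha}_k,\bm{\beta}_k)=\bm{\theta}_k^T\widetilde{E}^T\bm{\theta}_k$ directly:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, m = 6, 4, 3                        # toy feature dimension / sample counts
Phi_X = rng.standard_normal((d, n))      # explicit stand-in for Phi_X
Phi_Z = rng.standard_normal((d, m))      # explicit stand-in for Phi_Z

K_XX = Phi_X.T @ Phi_X
K_ZX = Phi_Z.T @ Phi_X
K_ZZ = Phi_Z.T @ Phi_Z
ones_nm = np.ones((n, m))

# E_tilde = [K_XX; K_ZX] 1_{n x m} [K_ZX  K_ZZ]
E_tilde = np.vstack([K_XX, K_ZX]) @ ones_nm @ np.hstack([K_ZX, K_ZZ])

theta = rng.standard_normal(n + m)
alpha, beta = theta[:n], theta[n:]
P = Phi_Z @ ones_nm.T @ Phi_X.T          # P = Phi_Z 1_{m x n} Phi_X^T

# f_P evaluated term by term, exactly as in the displayed derivation
f_P_direct = (alpha @ Phi_X.T @ P @ Phi_X @ alpha
              + beta @ Phi_Z.T @ P @ Phi_Z @ beta
              + alpha @ Phi_X.T @ P @ Phi_Z @ beta
              + beta @ Phi_Z.T @ P @ Phi_X @ alpha)
f_P_kernel = theta @ E_tilde.T @ theta   # theta^T E_tilde^T theta
```

The two scalars agree to machine precision, as the lemma asserts.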
Using Eq. (\[eqn:Din\]) and the above Lemmas 4 and 5, we obtain the following theorem.\
\
***Theorem 2:** The kernelized form of the denominator term in (\[eqn:XQDAcost\]) is obtained as $\mathbf{w}_k^T\Sigma_D \mathbf{w}_k = \bm{\theta}_k^T \Lambda_D \bm{\theta}_k$ where* $$\Lambda_D =(1/n_D) ( \widetilde{U} + \widetilde{V} - \widetilde{E} - \widetilde{E}^T - n_S \Lambda_S).
\label{eqn:LambdaD}$$\
Based on Theorems 1 and 2, the kernelized version of the cost function $J(\mathbf{w}_k)$ in (\[eqn:XQDAcost\]) can now finally be written as $$\begin{aligned}
J(\bm{\theta}_k) = \frac{\bm{\theta}_k^T \Lambda_{D}\bm{\theta}_k}{\bm{\theta}_k^T \Lambda_S\bm{\theta}_k}
\label{eqn:KXQDAcost}\end{aligned}$$ The kernelized cost function $J(\bm{\theta}_k)$ also has the form of a generalized Rayleigh quotient. Hence the optimal solutions $\bm{\theta}_k$ that maximize (\[eqn:KXQDAcost\]) are the eigenvectors corresponding to the $b$ largest eigenvalues of $\Lambda_S^{-1}\Lambda_D$. As in XQDA, the dimensionality $b$ of the k-XQDA subspace is determined by the number of eigenvectors whose eigenvalues are larger than 1, since this ensures that the variance of the dissimilar class $\Sigma_D$ is always higher than the variance of the similar class $\Sigma_S$, enabling effective discrimination between the classes based on the difference in variances.\
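This eigenvalue computation can be sketched in NumPy as follows (our own code with hypothetical names; the inputs $\Lambda_S$, $\Lambda_D$ are assumed precomputed, the default ridge matches the regularizer $\lambda=10^{-7}$ used later in the text, and the keep-at-least-one-direction guard is ours):

```python
import numpy as np

def kxqda_subspace(Lambda_S, Lambda_D, reg=1e-7):
    """Maximize J(theta) = (theta^T Lambda_D theta) / (theta^T Lambda_S theta).

    The maximizers are eigenvectors of Lambda_S^{-1} Lambda_D; the subspace
    keeps those whose eigenvalue exceeds 1 (dissimilar variance > similar)."""
    dim = Lambda_S.shape[0]
    Ls = Lambda_S + reg * np.eye(dim)              # small ridge for inversion
    evals, evecs = np.linalg.eig(np.linalg.solve(Ls, Lambda_D))
    evals, evecs = evals.real, evecs.real          # matrices are real-valued
    order = np.argsort(-evals)                     # decreasing eigenvalue
    evals, evecs = evals[order], evecs[:, order]
    b = max(int(np.sum(evals > 1.0)), 1)           # guard: keep >= 1 direction
    return evecs[:, :b], evals[:b]
```

The columns of the returned matrix play the role of $\bm{\theta}_1,\ldots,\bm{\theta}_b$.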
\
**3.3.2 $\quad$ Kernelization of the distance metric**
\
Next, we kernelize the distance metric $d(\mathbf{x}_i,\mathbf{z}_j)$ in (\[eqn:XQDAmetric\]). In the kernel space $\mathcal{F}$, the distance metric takes the form $$\begin{aligned}
d (\Phi(\mathbf{x}_{i}),\Phi(\mathbf{z}_{j})) &= (\Phi(\mathbf{x}_{i})-\Phi(\mathbf{z}_{j}))^T W_{\phi}(\Sigma^{\prime-1}_{S} - \Sigma^{\prime-1}_{D})_{+}
W_{\phi}^{T}(\Phi(\mathbf{x}_{i})-\Phi(\mathbf{z}_{j}))\, ,
\label{distmetric_Ker1}\end{aligned}$$ where $\Sigma^{\prime}_S = W_{\phi}^T \Sigma_S W_{\phi}$ and $\Sigma^{\prime}_D = W_{\phi}^T \Sigma_D W_{\phi}$.\
***Lemma 6:** The matrices $\Sigma^{\prime}_{S}$ and $\Sigma^{\prime}_{D}$ can be kernelized as $
\Sigma^{\prime}_{S} = \Theta^T \Lambda_S \Theta$, $\;
\Sigma^{\prime}_{D} = \Theta^T \Lambda_D \Theta,
$ where $\Theta=\left[ \bm{\theta}_1, \bm{\theta}_2, \ldots, \bm{\theta}_b \right]$.*\
***Proof:*** Based on Theorems 1 and 2, it can be seen that, for any general $p, q \in \mathbb{N}$, the kernelized version of $\mathbf{w}^T_p \Sigma_{D}\mathbf{w}_q$ and $\mathbf{w}^T_p \Sigma_{S}\mathbf{w}_q$ can be written as $$\begin{aligned}
\mathbf{w}^T_p \Sigma_{S}\mathbf{w}_q& = \bm{\theta}_p^T \Lambda_{S}\bm{\theta}_q \label{eqn:wpqS}\\
\mathbf{w}^T_p \Sigma_{D}\mathbf{w}_q &= \bm{\theta}_p^T \Lambda_{D}\bm{\theta}_q \label{eqn:wpqD}\end{aligned}$$ Using the definition of $\Sigma^{\prime}_{S}$ and Eq. (\[eqn:wpqS\]), we can kernelize $\Sigma^{\prime}_{S}$ as follows: $$\begin{aligned}
\Sigma^{\prime}_{S}
&= W_{\phi}^T \Sigma_{S}W_{\phi}\\
&=\left[\begin{array}{@{}cc@{}}
\mathbf{w}_1^T \\
\mathbf{w}_2^T\\
\vdots\\
\mathbf{w}_b^T
\end{array}\right] \Sigma_{S} \left[ \mathbf{w}_1, \mathbf{w}_2, \ldots, \mathbf{w}_b \right]
=\left[\begin{array}{@{}cc@{}}
\bm{\theta}_1^T \\
\bm{\theta}_2^T\\
\vdots\\
\bm{\theta}_b^T
\end{array}\right] \Lambda_S \left[ \bm{\theta}_1, \bm{\theta}_2, \ldots, \bm{\theta}_b \right]\\
&= \Theta^T \Lambda_S \Theta\end{aligned}$$ Similarly, we can kernelize $\Sigma^{\prime}_{D}$ using its definition and Eq. (\[eqn:wpqD\]) as follows: $$\begin{aligned}
\Sigma^{\prime}_{D}
&= W^T_{\phi} \Sigma_{D}W_{\phi}\\
&=\left[\begin{array}{@{}cc@{}}
\mathbf{w}_1^T \\
\mathbf{w}_2^T\\
\vdots\\
\mathbf{w}_b^T
\end{array}\right] \Sigma_{D} \left[ \mathbf{w}_1, \mathbf{w}_2, \ldots, \mathbf{w}_b \right]
=\left[\begin{array}{@{}cc@{}}
\bm{\theta}_1^T \\
\bm{\theta}_2^T\\
\vdots\\
\bm{\theta}_b^T
\end{array}\right] \Lambda_D \left[ \bm{\theta}_1, \bm{\theta}_2, \ldots, \bm{\theta}_b \right]\\
&= \Theta^T \Lambda_D \Theta\end{aligned}$$[$\square$]{}\
\
Using (\[eqn:rep\_theorem2\]), the matrix $W_{\phi}$ can be expressed as $$\begin{aligned}
W_{\phi} = \left[ \mathbf{w}_1, \mathbf{w}_2, ..., \mathbf{w}_b \right]=\bm{\Phi}\left[ \bm{\theta}_1, \bm{\theta}_2, \ldots, \bm{\theta}_b \right] = \bm{\Phi}\Theta
\label{eqn:Wkernel}\end{aligned}$$ Then, using (\[eqn:Wkernel\]), the initial part of the expression in (\[distmetric\_Ker1\]) can be kernelized as: $$(\Phi(\mathbf{x}_{i})-\Phi(\mathbf{z}_{j}))^T W_{\phi}
= (K_i - K_j)^T\Theta
\label{distmetric_Ker3}$$ where $K_i$ is the $i$th column of the kernel matrix $\mathbf{K}$ in (\[eqn:mainK\]).\
\
Using Lemma 6 and (\[distmetric\_Ker3\]), we finally obtain the following theorem:\
\
***Theorem 3:** The kernelized distance metric of k-XQDA can be expressed as $$d(\Phi(\mathbf{x}_{i}),\Phi(\mathbf{z}_{j})) =
(K_i-K_j)^T\Theta \Gamma_{+} \Theta^T (K_i-K_j)
\label{eqn:KXQDAfinalMahDistM}$$ where $\Gamma = \big[(\Theta^T \Lambda_S \Theta)^{-1}
- (\Theta^T \Lambda_D \Theta)^{-1}\big]$.*\
It can be seen that we obtain clean and simplified expressions for k-XQDA, as shown in (\[eqn:KXQDAcost\]) and (\[eqn:KXQDAfinalMahDistM\]). They have a structure similar to the expressions (\[eqn:XQDAcost\]) and (\[eqn:XQDAmetric\]) of XQDA. Though our derivations for kernelizing XQDA using (\[eqn:SigmaS\]) and (\[eqn:SigmaD\]) are somewhat involved, it should be noted that in our kernelized formulation there is no need to explicitly compute the $nm$ similar/dissimilar class pairs and their outer products for estimating the covariance matrices, which would otherwise have been required if (\[KISSME\_CovMat\_calc\]) had been used for kernelization. Thus our approach achieves a computational reduction of two orders of magnitude. The matrices $\widetilde{A}$, $\widetilde{B}$, $\widetilde{C}$, $\widetilde{U}$, $\widetilde{V}$, and $\widetilde{E}$ required for calculating the matrices $\Lambda_D$ and $\Lambda_{S}$ are simplified for fast and efficient computation. They can be easily computed once the matrices $K_{XX}, K_{XZ}, K_{ZZ}, H_{XX}, H_{ZZ}, H_{XZ}$ and $H_{ZX}$ are obtained. For the calculation of the eigensystem of $\Lambda_S^{-1}\Lambda_D$, we add a small regularizer of $\lambda=10^{-7}$ to the diagonal elements of $\Lambda_S$ to make its estimation smoother and more robust.
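A minimal sketch of the distance computation of Theorem 3 is given below (our own code, not the released implementation; `psd_part` implements the $(\cdot)_{+}$ operation of keeping the positive semidefinite part, and all names and inputs are assumptions):

```python
import numpy as np

def psd_part(M):
    """(M)_+ : projection of a symmetric matrix onto its positive
    semidefinite part (negative eigenvalues are clipped to zero)."""
    M = 0.5 * (M + M.T)                    # symmetrize against round-off
    w, V = np.linalg.eigh(M)
    return V @ np.diag(np.clip(w, 0.0, None)) @ V.T

def kxqda_distance(Ki, Kj, Theta, Lambda_S, Lambda_D):
    """d = (K_i - K_j)^T Theta Gamma_+ Theta^T (K_i - K_j), with
    Gamma = (Theta^T Lambda_S Theta)^{-1} - (Theta^T Lambda_D Theta)^{-1}."""
    Gamma = (np.linalg.inv(Theta.T @ Lambda_S @ Theta)
             - np.linalg.inv(Theta.T @ Lambda_D @ Theta))
    M = Theta @ psd_part(Gamma) @ Theta.T
    diff = Ki - Kj
    return float(diff @ M @ diff)
```

Because of the PSD projection, the returned distance is always non-negative.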
Note that in the small sample size case (where $n+m \ll d$), $\Lambda_S \in \mathbb{R}^{(n+m)\times(n+m)}$ has a much smaller dimension than $\Sigma_S \in \mathbb{R}^{d\times d}$ of XQDA. Hence $\Lambda_S$ has fewer zero eigenvalues than $\Sigma_S$, making the former easier to regularize for inversion. Thus k-XQDA can handle the small sample size (SSS) problem more efficiently than XQDA. Also, as all the inherent matrices of k-XQDA depend on the number of samples, while those of XQDA depend on the feature dimension, k-XQDA is much faster than XQDA. The complete algorithm for k-XQDA is summarized in Algorithm \[algo:kxqda\].
Experiments
===========
**Evaluation Protocol**: In re-ID experiments, test set identities are considered unseen during training. Hence, following the standard protocol [@NK3ML; @song:scalableManifold; @GOG; @SCSP; @LOMO; @metric_ensembles; @colornames], the dataset identities are divided equally, with one half forming the training set and the other half forming the test set. For training, each person is considered as one distinct class. For testing, the test images from one view form the query set and the rest form the gallery set. The queries are matched against the gallery and a ranked list is obtained based on the matching score. Rank-N accuracy is calculated as the probability of the true match occurring in the first N search results. The above procedure is repeated 10 times and the average performance is reported.\
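The rank-N figures reported throughout can be computed from a query-gallery distance matrix as in the following generic sketch (our illustration of the standard protocol, not the evaluation script used in the experiments):

```python
import numpy as np

def rank_n_accuracy(dist, query_ids, gallery_ids, n=1):
    """Probability that a query's true match appears among the n gallery
    entries with smallest distance. `dist` has shape (n_query, n_gallery)."""
    order = np.argsort(dist, axis=1)               # best match first
    ranked = np.asarray(gallery_ids)[order]        # identities in ranked order
    hits = ranked[:, :n] == np.asarray(query_ids)[:, None]
    return float(np.mean(hits.any(axis=1)))
```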
\
**Datasets**: We use four standard datasets, CUHK01[@CUHK01], PRID450S[@PRID450S], GRID[@GRID1] and PRID2011[@PRID2011], which have small training sets, for our experiments. They contain 971, 450, 250 and 200 persons, respectively, captured from two non-overlapping camera views. Each person has one image in each view, except in the CUHK01 dataset, which has two images in each view. For CUHK01, we use both single-shot and multi-shot settings. The galleries of the GRID and PRID2011 datasets have an additional 775 and 549 images, respectively, which are of identities different from the query set and act as distractors.\
\
**Features and Parameters**: For each person image, we use standard feature descriptors including WHOS[@LisantiPAMI14], LOMO[@LOMO] and GOG[@GOG]. LOMO and GOG are of dimensions 26,960 and 27,622, respectively. The WHOS feature is of two types, one with 2960 and the other with 5138 dimensions. We refer to the first as WHOS\* and to the second as $\text{WHOS}^{\dagger}$. We also use a new feature descriptor named $\text{LOMO}^{\dagger}$, which is the LOMO feature obtained without the Retinex [@LOMO] transformation, to make use of color diversity. Re-ID datasets have large variations in illumination and background. Hence, for k-XQDA, we use specific features and kernel functions for each dataset, to better model their inherent characteristics. We use an RBF or polynomial kernel for k-XQDA.\
**Method of Comparison**: We conduct our experiments using only the given training data. There are some re-ID methods that use external supervision (like pre-trained networks on other datasets, or auxiliary data such as human pose, attributes or body part segmentation obtained using externally trained systems) and post-processing (re-ranking) of the trained models using the test data. No such external supervision or post-processing is considered in our study, and hence a direct comparison of our results with such methods is not advisable. However, we list them in separate rows for completeness.
Comparison with Baselines
-------------------------
As k-XQDA is the kernelized version of XQDA, we first compare its performance against XQDA. We evaluate extensively using multiple feature descriptors including WHOS\*, $\text{WHOS}^{\dagger}$, LOMO and GOG, and the results are shown in Table \[table:BaselineXQDA\]. k-XQDA consistently outperforms XQDA by a large margin at all ranks. For the WHOS\* descriptor, k-XQDA attains an improvement of 10.59% at rank-1 and 14.29% at rank-5 over XQDA. Similarly, for the $\text{WHOS}^{\dagger}$ descriptor, k-XQDA outperforms XQDA by 14.84% at rank-1 and 18.37% at rank-5. For the LOMO and GOG feature descriptors, rank-1 performance boosts of 4.43% and 4.34%, respectively, are obtained by k-XQDA. Thus, independent of the feature descriptor used, k-XQDA performs better than XQDA. The results signify that, with the benefit of kernels, k-XQDA is able to learn more effective non-linear features than XQDA for handling the high non-linearity in person appearances across cameras. Next we compare the performance of k-XQDA against other state-of-the-art metric learning methods including MLAPG[@MLAPG], NFST[@Zheng:nfst], KNFST[@Zheng:nfst], KISSME[@KISSME], LFDA[@LFDA:CVPR] and kLFDA[@rPcca]. We conduct experiments using the same LOMO feature descriptor on the CUHK01 dataset, and the results are shown in Table \[table:Baselinek-XQDA\]. It can be seen that k-XQDA outperforms all the compared metric learning methods. Note that KNFST[@Zheng:nfst] and kLFDA[@rPcca] are kernel based methods, and our kernel based method k-XQDA attains the highest performance. The experiment also confirms the inferences drawn in [@Zheng:nfst] and [@rPcca] that kernel based methods are crucial for handling non-linearity in person re-identification.
Comparison with State-of-the-art
--------------------------------
**Experiments with PRID2011 dataset:** PRID2011 is a challenging dataset with very small training data. We use GOG features for this dataset. As seen in Table \[table:PRID2011\], our proposed method k-XQDA attains competitive performance against the state-of-the-art results at all ranks. We clearly outperform all the deep learning based methods, including MuDeep[@MuDeep]. The deep learning methods PTGAN[@PTGAN] and MC-PPMN[@MCPPMN] use auxiliary supervision, while our method performs better even without using any extra information beyond the given training images.\
**Experiments with CUHK01 dataset:** Concatenated LOMO, $\text{LOMO}^{\dagger}$ and GOG are used as the features. For *single-shot* settings, where every person has only one image in each view, the results are shown in Table \[table:CUHKM1\]. k-XQDA attains the best results at all ranks. Note that we even outperform the body-pose based, auxiliary supervised deep learning method PN-GAN[@PNGAN]. For *multi-shot* experiments also, we attain competitive performance against state-of-the-art methods, as shown in Table \[table:CUHKM2\]. This additionally signifies that our method can also handle multiple images per class efficiently.\
Methods Rank1 Rank10 Rank20
------------------------------------- ----------- ----------- -----------
MLFL[@midlevel] 34.30 65.00 75.00
XQDA[@LOMO] 50.00 83.40 89.51
KNFST[@Zheng:nfst] 52.80 84.97 91.07
TPC [@TCP] 53.70 91.00 96.30
CAMEL[@CAMEL] 57.30 - -
GOG[@GOG] 57.89 86.25 92.14
WARCA[@WARCA] 58.34 - -
MVLDML+[@MVLDML] 61.37 88.88 93.85
**k-XQDA** **67.77** **92.23** **95.94**
\*Semantic[@Symantic] 32.70 64.40 76.30
\*MetricEnsemble[@metric_ensembles] 53.40 84.40 90.50
\*Quadruplet[@Beyond:triplet_loss] 62.55 89.71 -
\*PN-GAN[@PNGAN] 67.65 91.82 -
: Comparison with state-of-the-art results on CUHK01 dataset using single-shot settings. Methods marked with a \* use post-processing / external supervision.[]{data-label="table:CUHKM1"}
Methods Rank1 Rank10 Rank20
----------------------------- ----------- ----------- -----------
*l*1-Graph[@UlGraph] 50.10 - -
GCT[@GCT] 61.90 87.60 92.80
XQDA[@LOMO] 61.98 89.30 93.62
CAMEL[@CAMEL] 62.70 - -
MLAPG[@MLAPG] 64.24 90.84 94.92
SSSVM[@SSSVM] 65.97 - -
KNFST[@Zheng:nfst] 66.07 91.56 95.64
GOG[@GOG] 67.28 91.77 95.93
IRS(LOMO)[@IRS] 68.39 92.60 96.20
**k-XQDA** **76.30** **95.39** **98.15**
\*DGD[@DGD] 66.60 - -
\*OLMANS[@OnlineNegSamples] 68.44 92.67 95.88
\*SHaPE[@SHaPE] 76.00 - -
: Comparison with state-of-the-art results on CUHK01 dataset using multi-shot settings.[]{data-label="table:CUHKM2"}
Methods Rank1 Rank10 Rank20
------------------------------- ----------- ----------- -----------
WARCA[@WARCA] 24.58 - -
SCNCD[@colornames] 41.60 79.40 87.80
CSL[@CSL] 44.40 82.20 89.80
TMA[@TMA] 52.89 85.78 93.33
k-KISSME[@kKISSME] 53.90 88.80 94.50
GCT[@GCT] 58.40 84.30 89.80
KNFST[@Zheng:nfst] 59.47 91.96 96.53
XQDA[@LOMO] 59.78 90.09 95.29
SSSVM[@SSSVM] 60.49 88.58 93.60
MC-PPMN[@MCPPMN] 62.22 93.56 -
MVLDML+[@MVLDML] 66.80 94.80 97.7
GOG+XQDA[@GOG] 68.00 94.36 97.64
**k-XQDA** **73.16** **95.91** **98.44**
\*Semantic[@Symantic] 44.90 77.50 86.70
\*SSM[@song:scalableManifold] 72.98 96.76 99.11
: Comparison with state-of-the-art results on PRID450S dataset. []{data-label="tab:PRID450Sall"}
**Experiments with PRID450S dataset:** We use concatenated GOG+LOMO+$\text{LOMO}^{\dagger}$ as the features in our method. As shown in Table \[tab:PRID450Sall\], we attain performance competitive with state-of-the-art results. We also outperform the post-processing based method SSM[@song:scalableManifold]. It is a re-ranking method that utilizes gallery data, while our method uses only the training data. Hence it can be expected that any general re-ranking method like SSM can be used on top of our method to further increase our performance.\
Methods Rank1 Rank10 Rank20
------------------------------- ----------- ----------- -----------
MtMCML[@MtMCML] 14.08 45.84 59.84
KNFST[@Zheng:nfst] 14.88 41.28 50.88
PolyMap[@ExPolyFeatMap] 16.30 46.00 57.60
XQDA[@LOMO] 16.56 41.84 52.40
MLAPG[@MLAPG] 16.64 41.20 52.96
KEPLER[@KEPLER] 18.40 50.24 61.44
DR-KISS[@DR-KISS] 20.60 51.40 62.60
SSSVM[@SSSVM] 22.40 51.28 61.20
SCSP[@SCSP] 24.24 54.08 65.20
GOG[@GOG] 24.80 58.40 68.88
**k-XQDA** **27.28** **58.96** **69.12**
\*SSDAL[@SSDAL] 22.40 48.00 58.40
\*SSM[@song:scalableManifold] 27.20 61.12 70.56
\*OL-MANS[@OnlineNegSamples] 30.16 49.20 59.36
: Comparison with state-of-the-art results on GRID dataset.
**Experiments with GRID dataset:** GRID is a very challenging dataset. We use concatenated GOG, LOMO and $\text{LOMO}^{\dagger}$ as the features. Our method has competitive performance against the state-of-the-art methods. Though OL-MANS[@OnlineNegSamples] has slightly higher performance at rank-1, we outperform it at ranks 10 and 20. Moreover, OL-MANS needs to compute a separate secondary metric for every query image, making it more computationally intensive, while our method is computationally efficient.\
Conclusion
==========
In this paper, we proposed a new kernel based non-linear cross-view similarity metric learning approach that can learn non-linear transformations and handle the complex non-linear appearance changes of persons across camera views. Using a kernel based mapping to a higher dimensional space, a discriminative subspace as well as a Mahalanobis metric are learned by discriminating the similar class and the dissimilar class based on the ratio of their variances. Through rigorous derivations, we obtain simplified expressions for the distance metric, making it computationally efficient and fast. The method handles the small training sets of practical person re-identification systems and better addresses the small sample size problem. Extensive experiments on four benchmark datasets show that the proposed method achieves competitive performance against many state-of-the-art methods.\
**Acknowledgment.** This research work is supported under Visvesvaraya PhD Scheme by Ministry of Electronics and Information Technology (MeitY), Government of India.
---
abstract: 'Let $k$ be a number field, with algebraic closure $\bar{k}$, and let $\mathcal{A}$ be an abelian variety over $k$ of dimension $n=2^h$, where $h\geq 0$. Let $p$ be a prime number and let ${{\mathcal{A}}}[p]$ denote the $p$-torsion subgroup of ${{\mathcal{A}}}$. We prove that for every $h$, there exists a prime $p_h$, depending only on $h$, such that if ${{\mathcal{A}}}[p]$ is either an irreducible or a decomposable ${{\rm Gal}}(\bar{k}/k)$-module, then for all primes $p>p_h$ the local-global divisibility by $p$ holds in ${{\mathcal{A}}}(k)$ and $\Sha^1 (k,{{\mathcal{A}}}[p])$ is trivial. In particular, when ${{\mathcal{A}}}$ has dimension 2 or 4, we show $p_h=3$. This result generalizes some previous ones proved for elliptic curves. In the case when ${{\mathcal{A}}}$ is principally polarized, the vanishing of $\Sha^1 (k,{{\mathcal{A}}}[p])$ implies that the elements of the Tate-Shafarevich group $\Sha(k,{{\mathcal{A}}})$ are divisible by $p$ in the Weil-Châtelet group $H^1(k,{{\mathcal{A}}})$ and the local-global principle for divisibility by $p$ holds in $H^r(k,{{\mathcal{A}}})$, for all $r\geq 0$.'
author:
- 'Laura Paladino[^1]'
date:
title: Divisibility questions in abelian varieties
---
Introduction
============
We consider two strongly related local-global problems that recently arose as generalizations of some classical questions. The setting is that of an abelian variety ${{\mathcal{A}}}$ of dimension $g$ defined over a number field $k$. Let $\bar{k}$ be the algebraic closure of $k$ and let $M_k$ be the set of places $v$ of $k$. For every positive integer $q$, we denote by ${{\mathcal{A}}}[q]$ the $q$-torsion subgroup of ${{\mathcal{A}}}$ and by $k({{\mathcal{A}}}[q])$ the number field obtained by adding to $k$ the coordinates of the $q$-torsion points of ${{\mathcal{A}}}$. It is well-known that ${{\mathcal{A}}}[q]\simeq ({{\mathbb Z}}/q{{\mathbb Z}})^{2g}$ and that the Galois group ${{\rm Gal}}(k({{\mathcal{A}}}[q])/k)$ is isomorphic to the image of the representation of the absolute Galois group ${{\rm Gal}}(\bar{k}/k)$ in the general linear group ${{\rm GL}}_{2g}({{\mathbb Z}}/q{{\mathbb Z}})$. The behaviour of ${{\rm Gal}}(k({{\mathcal{A}}}[q])/k)$ is related to the answer to the following question, known as the *Local-global divisibility problem*.
\[prob1\] Let $P\in {\mathcal{A}}(k)$ and let $q$ be a positive integer. Assume that for all but finitely many places $v\in M_k$, there exists $D_v\in {\mathcal{A}}(k_v)$ such that $P=qD_v$. Is it possible to conclude that there exists $D\in {\mathcal{A}}(k)$ such that $P=qD$?
This problem was stated in 2001 by Dvornicich and Zannier in the more general case when ${{\mathcal{A}}}$ is a commutative algebraic group and its formulation was motivated by a particular case of the famous Hasse Principle on quadratic forms and by the Grunwald-Wang Theorem (see [@DZ] and [@DZ2]). The vanishing of the first cohomology group $H^1({{\rm Gal}}(k({{\mathcal{A}}}[q])/k),{{\mathcal{A}}}[q])$ assures a positive answer (see for instance [@DZ], [@Won]). Clearly a solution to Problem \[prob1\] for all powers $p^l$ of prime numbers $p$ is sufficient to get an answer for all integers $q$, by the unique factorization in ${{\mathbb Z}}$ and Bézout’s identity.
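The reduction to prime powers is elementary: if $P=q_1D_1$ and $P=q_2D_2$ with $\gcd(q_1,q_2)=1$ and $aq_1+bq_2=1$, then $D=bD_1+aD_2$ satisfies $q_1q_2D=bq_2(q_1D_1)+aq_1(q_2D_2)=(aq_1+bq_2)P=P$. A toy check of this Bézout argument in the abelian group ${{\mathbb Z}}/N{{\mathbb Z}}$ (our illustration only, not part of the paper):

```python
from math import gcd

def ext_gcd(x, y):
    """Extended Euclid: returns (g, a, b) with a*x + b*y == g == gcd(x, y)."""
    if y == 0:
        return x, 1, 0
    g, u, v = ext_gcd(y, x % y)
    return g, v, u - (x // y) * v

def combine_divisors(P, q1, q2, D1, D2, N):
    """In Z/NZ: given P = q1*D1 = q2*D2 (mod N) with gcd(q1, q2) = 1,
    build D with (q1*q2)*D = P (mod N) via a*q1 + b*q2 = 1, since
    q1*q2*(b*D1 + a*D2) = b*q2*(q1*D1) + a*q1*(q2*D2) = (a*q1 + b*q2)*P."""
    assert gcd(q1, q2) == 1
    _, a, b = ext_gcd(q1, q2)
    return (b * D1 + a * D2) % N
```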
In the case of elliptic curves the problem has been widely studied since 2001 and recently a complete answer has been proved when $k={{\mathbb Q}}$. The answer is affirmative when $q$ is a prime $p$ (see [@DZ], [@Won]) and for powers $p^l$, where $p\geq 5$ and $l\geq 2$ (see [@PRV2]). On the contrary, the answer is negative for $q=p^l$, with $p\in \{2,3\}$ and $l\geq 2$ (see [@Cre], [@DZ2], [@Pal], [@Pal3]). For a general number field $k$, the answer is still positive when $q$ is a prime $p$ (see [@DZ], [@Won]). With a mild hypothesis on $k$, the proof of [@DZ3 Theorem 1] implies the following statement (see also [@PRV]).
\[DZ1\] Let $p$ be a prime. Let ${{\mathcal{E}}}$ be an elliptic curve defined over a number field $k$ which does not contain the field ${{\mathbb Q}}({{\zeta}}_p+{{\zeta}}_p^{-1})$, where ${{\zeta}}_p$ is a primitive $p$th root of unity. If ${{\mathcal{E}}}$ does not admit any $k$-rational isogeny of degree $ p $, then the local-global principle holds for divisibility by $p^l$ in ${{\mathcal{E}}}$ over $k$, for every positive integer $l$.
The hypothesis that $k$ does not contain ${{\mathbb Q}}({{\zeta}}_p+{{\zeta}}_p^{-1})$ is necessary (see [@PRV2 Sec. 6]). Stronger criteria for the local-global divisibility in elliptic curves have been given in [@PRV] and [@PRV2]. In particular, there exists a prime $p_k$, depending only on $k$, such that if $p>p_k$ then the answer is positive for divisibility by $p^l$, for all $l\geq 1$. Here we prove the following statements, which assure the local-global divisibility by $p$ in some abelian varieties of higher dimension satisfying certain conditions.
\[P17\_gal\] Let $p$ be a prime number. Let $k$ be a number field that does not contain ${{\mathbb Q}}({{\zeta}}_p+{{\zeta}}_p^{-1})$. Let ${{\mathcal{A}}}$ be an abelian variety defined over $k$, of dimension $n=2^h$, where $h\geq 0$. For every $h$, there exists a prime $p_h$, depending only on $h$, such that if ${{\mathcal{A}}}[p]$ is either an irreducible or a decomposable ${{\rm Gal}}(\bar{k}/k)$-module, then the local-global divisibility by $p$ holds in ${{\mathcal{A}}}(k)$, for all $p\geq p_h$. In particular, for abelian varieties of dimension 2 and 4, we have $p_1=p_2=3$.
Evidently, Theorem \[P17\_gal\] implies the following result, which is reminiscent of Theorem \[DZ1\] for divisibility by $p$ in higher dimension.
\[P17\] Let $p$ be a prime number. Let $k$ be a number field that does not contain ${{\mathbb Q}}({{\zeta}}_p+{{\zeta}}_p^{-1})$. Let ${{\mathcal{A}}}$ be an abelian variety defined over $k$, of dimension $n=2^h$, where $h\geq 0$. For every $h$, there exists a prime $p_h$, depending only on $h$, such that if ${{\mathcal{A}}}$ does not admit a $k$-rational isogeny of degree $p^{\alpha}$, with $1\leq \alpha\leq 2n-1$, then the local-global divisibility by $p$ holds in ${{\mathcal{A}}}(k)$, for all $p\geq p_h$. In particular, for abelian varieties of dimension 2 and 4, we have $p_1=p_2=3$.
Both Theorem \[P17\_gal\] and its direct consequence Corollary \[P17\] follow immediately from the proof of the next statement.
\[P1\_bis\] Let $p$ be a prime number and let $l,m$ be positive integers. Let $k$ be a number field that does not contain ${{\mathbb Q}}({{\zeta}}_p+{{\zeta}}_p^{-1})$. Let ${{\mathcal{A}}}$ be an abelian variety defined over $k$, of dimension $2^h$, where $h\geq 0$. Let $n=2^{h+1}$ and assume that ${{\rm Gal}}(k({{\mathcal{A}}}[p^l])/k)$ is isomorphic to a subgroup of ${{\rm GL}}_{n}(p^m)$, for some positive integer $m$. For every $h$, there exists a prime $p_h$, depending only on $h$, such that if $p>p_h$ and the local-global divisibility by $p$ fails in ${{\mathcal{A}}}(k)$, then ${{\rm Gal}}(k({{\mathcal{A}}}[p^l])/k)$ acts reducibly but not decomposably over ${{\mathcal{A}}}[p^l]$. In particular, for abelian varieties of dimension 2 and 4, we have $p_1=p_2=3$.
Our proof of Theorem \[P1\_bis\] shows that the Tate-Shafarevich group $\Sha^1(k,{{\mathcal{A}}}[p])$ is trivial when ${{\mathcal{A}}}[p]$ is either an irreducible or a decomposable ${{\rm Gal}}(\bar{k}/k)$-module and $p>p_h$. If ${{\mathcal{A}}}$ is principally polarized, then the triviality of $\Sha^1(k,{{\mathcal{A}}}[p])$ implies $\Sha(k,{{\mathcal{A}}})\subseteq p H^r(k,{{\mathcal{A}}})$, for all $r\geq 0$, by [@Cre2 Theorem 2.1]. In that case we have an affirmative answer to the following second and more general problem, for all $r$.
\[prob2\] Let $q$ be a positive integer and let $\sigma\in H^r(k,{{\mathcal{A}}})$. Assume that for all $v\in M_k$ there exists $\tau_v\in H^r(k_v,{{\mathcal{A}}})$ such that $q\tau_v=\sigma$. Can we conclude that there exists $\tau \in H^r(k,{{\mathcal{A}}})$, such that $q\tau=\sigma$?
Problem \[prob2\] was first considered by Cassels for $r=1$ in the case when ${{\mathcal{A}}}$ is an elliptic curve ${{\mathcal{E}}}$ (see [@Cas Problem 1.3]). In particular, Cassels asked whether the elements of the Tate-Shafarevich group $\Sha(k,{{\mathcal{E}}})$ were divisible by $p^l$ in the Weil-Châtelet group $H^1(k,{{\mathcal{E}}})$, for all $l$. Tate produced an affirmative answer for divisibility by $p$, but the question for powers $p^l$, with $l\geq 2$, remained open (see [@Cas2]). The mentioned results on Problem \[prob1\] imply an answer to Problem \[prob2\] too, since the proofs show the triviality or the non-triviality of the corresponding Tate-Shafarevich group. So Cassels' question has an affirmative answer for $p\geq 5$ and a negative one for $p\in \{2,3\}$ in elliptic curves. The problem was afterwards considered for abelian varieties by Bašmakov (see [@Bas]) and later by Çiperiani and Stix, who gave some sufficient conditions for a positive answer (see [@CS]). In [@Cre], Creutz proved that for every prime $p$, there exist infinitely many non-isomorphic abelian varieties $A$ defined over ${{\mathbb Q}}$ such that $\Sha(k,A) \not\subseteq pH^1(k,A)$. In abelian varieties of dimension strictly greater than 1, even the local-global divisibility by $p$ may fail for both Problem 1 and Problem 2 (see also [@DZ §3]). Here we prove the following statement.
\[P\_Sha\] Let $p$ be a prime number. Let $k$ be a number field that does not contain ${{\mathbb Q}}({{\zeta}}_p+{{\zeta}}_p^{-1})$. Let ${{\mathcal{A}}}$ be a principally polarized abelian variety defined over $k$, of dimension $2^h$, where $h\geq 0$. There exists a prime $p_h$, depending only on $h$, such that if the $p$-torsion subgroup ${{\mathcal{A}}}[p]$ of ${{\mathcal{A}}}$ is either an irreducible or a decomposable ${{\rm Gal}}(\bar{k}/k)$-module, then the elements of $\Sha(k,{{\mathcal{A}}})$ are divisible by $p$ in the Weil-Châtelet group $H^1(k,{{\mathcal{A}}})$, i. e. $\Sha(k,{{\mathcal{A}}})\subseteq p H^1(k,{{\mathcal{A}}})$, for all $p>p_h$. In particular for abelian varieties of dimension 2 and 4 we have $p_h=3$.
As mentioned above, the conclusion of Theorem \[P\_Sha\] assures an affirmative answer to Problem 2, for all $r\geq 0$, in the case when ${{\mathcal{A}}}$ is an abelian variety satisfying the hypotheses of the statement and $p>p_h$. Then, for such abelian varieties and $p>p_h$, we have a local-global principle for divisibility by $p$ in $H^r(k,{{\mathcal{A}}})$, for all $r$. The result is particularly interesting for abelian varieties of dimension 2 or 4, since we have an explicit $p_h=3$.
A few preliminary results in the theory of groups and in local-global divisibility are stated in the next section. In Section 2 we treat the special case in which ${{\rm Gal}}(k({{\mathcal{A}}}[p^l])/k)$ acts decomposably over ${{\mathcal{A}}}[p^l]$, in particular when ${{\mathcal{A}}}$ is a product of elliptic curves. In the last and main part of the paper, we proceed with the proof of Theorem \[P1\_bis\].
Preliminary results
===================
We recall some known results about local-global divisibility and about group theory, that will be useful for the proof of Theorem \[P1\_bis\].
As above, let $k$ be a number field and let ${{\mathcal{A}}}$ be an abelian variety of dimension $g$, defined over $k$. Let $q:=p^l$, where $p$ is a prime number and $l$ is a positive integer. As introduced before, the $p^l$-torsion subgroup of ${{\mathcal{A}}}$ will be denoted by ${{\mathcal{A}}}[p^l]$ and the number field obtained by adding to $k$ the coordinates of the points in ${{\mathcal{A}}}[p^l]$ will be denoted by $F:=k({{\mathcal{A}}}[p^l])$. The $p^l$-torsion subgroup ${{\mathcal{A}}}[p^l]$ of ${{\mathcal{A}}}$ is a $G_k$-module, where $G_k$ denotes the absolute Galois group ${{\rm Gal}}(\bar{k}/k)$. Since ${{\mathcal{A}}}[p^l]\simeq ({{\mathbb Z}}/p^l{{\mathbb Z}})^{n}$, with $n=2g$, the group $G_k$ acts on ${{\mathcal{A}}}[p^l]$ as a subgroup of ${{\rm GL}}_{n}({{\mathbb Z}}/p^l{{\mathbb Z}})$ isomorphic to $G:=\textrm{Gal}(k({{\mathcal{A}}}[p^l])/k)$. We still denote by $G$ the representation of $G_k$ in ${{\rm GL}}_{n}({{\mathbb Z}}/p^l{{\mathbb Z}})$. If $l=1$, in particular $G\leq {{\rm GL}}_{{n}}(p)$.
Let $\Sigma$ be a subset of $M_k$ containing all but finitely many places of $k$ and no place $v$ ramified in $F$. For every $v\in \Sigma$, we denote by $G_v$ the Galois group ${{\rm Gal}}(F_w/k_v)$, where $w$ is a place of $F$ extending $v$. In [@DZ], Dvornicich and Zannier proved that the answer to the local-global question for divisibility by $q$ of points in ${{\mathcal{A}}}(k)$ is linked to the behaviour of the following subgroup of $H^1(G,{{\mathcal{A}}}[q])$
$$\label{h1loc}
H^1_{\textrm{loc}}(G,{{\mathcal{A}}}[q]):=\bigcap_{v\in \Sigma} \ker H^1(G,{{\mathcal{A}}}[q])\xrightarrow{\makebox[1cm]{{\small $res_v$}}} H^1(G_v,{{\mathcal{A}}}[q]),$$
where $res_v$, as usual, denotes the restriction map. By substituting $M_k$ for $\Sigma$ in (\[h1loc\]), i.e., by letting $v$ vary over all the valuations of $k$, we get the classical definition of the Tate-Shafarevich group $\Sha^1(k,{{\mathcal{A}}}[q])$ (up to isomorphism)
$$\Sha^1(k,{{\mathcal{A}}}[q]):=\bigcap_{v\in M_k} \ker H^1(k,{{\mathcal{A}}}[q])\xrightarrow{\makebox[1cm]{{\small $res_v$}}} H^1(k_v,{{\mathcal{A}}}[q]).$$
In particular, the vanishing of $H^1_{\textrm{loc}}(G,{{\mathcal{A}}}[q])$ assures the triviality of $\Sha^1(k,{{\mathcal{A}}}[q])$, which is a sufficient condition for an affirmative answer to Problem \[prob2\], for all $r\geq 0$, in the case when ${{\mathcal{A}}}$ is principally polarized (see [@Cre2 Theorem 2.1]). Furthermore, the vanishing of $H^1_{\textrm{loc}}(G,{{\mathcal{A}}}[q])$ is a sufficient condition for an affirmative answer to Problem \[prob1\] (see [@DZ Proposition 2.1]).
By Čebotarev's Density Theorem, the group $G_v$ varies over all cyclic subgroups of $G$ as $v$ varies in $\Sigma$; therefore, in [@DZ] Dvornicich and Zannier gave the following equivalent definition of $H_{\textrm{loc}}^1(G,{{\mathcal{A}}}[q])$.
\[loc\_cond\] A cocycle $\{Z_{\sigma}\}_{\sigma\in G}\in H^1(G,{{\mathcal{A}}}[q])$ satisfies the local conditions if, for every $\sigma\in G$, there exists $A_{\sigma}\in {{\mathcal{A}}}[q]$ such that $Z_{\sigma}=(\sigma-1)A_{\sigma}$. The subgroup of $H^1(G,{{\mathcal{A}}}[q])$ formed by all the cocycles satisfying the local conditions is called the *first local cohomological group* of $G$ with values in ${{\mathcal{A}}}[q]$ and is denoted by $H^1_{\textrm{loc}}(G,{{\mathcal{A}}}[q])$.
The description of $H^1_{\textrm{loc}}(G,{{\mathcal{A}}}[q])$ given in Definition \[loc\_cond\] is useful in proving its triviality and even in producing counterexamples to the local-global divisibility. We keep the notation $H^1_{\textrm{loc}}(G,{{\mathcal{A}}}[q])$ used in almost all previous papers on the topic, but it is worth mentioning that in [@San] Sansuc already treated similar modified Tate-Shafarevich groups as in \eqref{h1loc} and introduced the notation $\Sha^1_{\Sigma}(k,{{\mathcal{A}}})$.
The vanishing of $H^1_{\textrm{loc}}(G,{{\mathcal{A}}}[p^l])$ is strongly related to the behaviour of $H^1_{\textrm{loc}}(G_p,{{\mathcal{A}}}[p^l])$, where $G_p$ is the $p$-Sylow subgroup of $G$ (see [@DZ]).
\[Sylow\] Let $G_p$ be a $p$-Sylow subgroup of $G$. An element of $H^1_{\textrm{loc}}(G, {{\mathcal{A}}}[p^l])$ is zero if and only if its restriction to $H^1_{\textrm{loc}}(G_p, {{\mathcal{A}}}[p^l])$ is zero.
In some cases, a quick way to show that both $H^1_{\textrm{loc}}(G, {{\mathcal{A}}}[p^l])$ and $H^1_{\textrm{loc}}(G_p, {{\mathcal{A}}}[p^l])$ are trivial is the use of Sah’s Theorem (see [@Lan Theorem 5.1]).
\[Sah\] Let $G$ be a group and let $M$ be a $G$-module. Let $\alpha$ be in the center of $G$. Then $H^1 (G, M )$ is annihilated by the map $x \rightarrow \alpha x - x$ on $M$. In particular, if this map is an automorphism of $M$, then $H^1 (G, M ) = 0$.
By Lemma \[Sah\], if $G$ is a subgroup of ${{\rm GL}}_n(p^l)$ that contains a non-trivial scalar matrix, then $H^1(G,({{\mathbb Z}}/q{{\mathbb Z}})^n)=0$. Thus, in particular, $H^1_{{{\rm loc}}}(G,{{\mathcal{A}}}[q])=0$.
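To make the application of Sah's Lemma transparent, here is a sketch of the computation (our outline; we write it under the assumption that $\lambda-1$ is invertible in ${{\mathbb Z}}/q{{\mathbb Z}}$):

```latex
% \alpha = \lambda I_n is central in G \leq GL_n(q) and acts on
% M := \mathcal{A}[q] \simeq (\mathbb{Z}/q\mathbb{Z})^n by x \mapsto \lambda x, so
\alpha x - x = (\lambda - 1)\,x , \qquad x \in M .
% If \lambda - 1 \in (\mathbb{Z}/q\mathbb{Z})^{*}, the map x \mapsto (\lambda-1)x
% is an automorphism of M; by Sah's Lemma it annihilates H^1(G, M), hence
% H^1(G, M) = 0 and a fortiori H^1_{loc}(G, M) = 0.
```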
\[scalar\] Let $G\leq {{\rm GL}}_n(q)$, for some positive integers $n$ and $q$. If $\lambda\cdot I_n\in G$ is a nontrivial scalar matrix, then $H^1_{{{\rm loc}}}(G,{{\mathcal{A}}}[q])=0$.
In our proof of Theorem \[P1\_bis\], a crucial tool is Aschbacher’s Theorem on the classification of maximal subgroups of ${{\rm GL}}_n(q)$ (see [@Asc]). Aschbacher proved that the maximal subgroups of ${{\rm GL}}_n(q)$ can be divided into 9 specific classes ${{\mathcal{C}}}_i$, $1\leq i\leq 9$. For large $n$, it is a very hard open problem to find the maximal subgroups of ${{\rm GL}}_n(q)$ of type ${{\mathcal{C}}}_9$; we have an explicit list of such groups only for $n\leq 12$ (see [@BHR]). On the contrary, the maximal subgroups of ${{\rm GL}}_n(q)$ of geometric type (i.e. of class ${{\mathcal{C}}}_i$, with $1\leq i\leq 8$) have been described for every $n$ (see [@KL]). We recall some notation from group theory and then summarize the description of the maximal subgroups of geometric type in Table 1 below (see [@KL Table 1.2.A and $\S$ 3.5]).
Let $n,q$ be positive integers and let ${{\mathbb F}}_q$ be the finite field with $q$ elements. Let $\omega_q$ be a primitive element of ${{\mathbb F}}_q^*$. We use the standard notations for the special linear group ${{\rm SL}}_n(q)$, the projective special linear group ${{\rm PSL}}_n(q)$, the special orthogonal group $\textrm{SO}_n(q)$, the unitary group ${{\rm U}}_n(q)$, the symplectic group ${{\rm Sp}}_n(q)$, the symmetric group $S_n$ and the alternating group $A_n$. By $C_n$ we denote a cyclic group of order $n$, by $E_n$ an elementary abelian group of order $n$ and by $p^{1+2n}$ an extraspecial group of order $p^{1+2n}$. Furthermore, if $n$ is even and $q$ is odd, we denote by (see [@BHR])
: ${{\rm GO}}_n^+(q)$ the stabilizer of the non-degenerate symmetric bilinear form with antidiagonal entries $(1,\dots,1)$;
: ${{\rm SO}}_n^+(q)$ the subgroup of ${{\rm GO}}_n^+(q)$ formed by the matrices with determinant 1;
: ${{\rm GO}}_n^-(q)$ the stabilizer of the non-degenerate symmetric bilinear form $I_n$, when $n\equiv 2 \pmod{4}$ and $q\equiv 3 \pmod{4}$, and the stabilizer of the non-degenerate symmetric bilinear diagonal form $(\omega_q,1,\dots,1)$, when $n\not\equiv 2 \pmod{4}$ and $q\not\equiv 3 \pmod{4}$;
: ${{\rm SO}}_n^-(q)$ the subgroup of ${{\rm GO}}_n^-(q)$ formed by the matrices with determinant 1.
Let $A, B$ be two groups. We denote by
: $A\rtimes B$, the semidirect product of $A$ with $B$ (where $A\trianglelefteq A\rtimes B$);
: $A\circ B$, the central product of $A$ and $B$;
: $A\wr B$, the wreath product of $A$ and $B$;
: $A.B$, a group $\Gamma$ that is an extension of its normal subgroup $A$ with the group $B$ (then $B\simeq \Gamma/A$), in the case when we do not know if it is a split extension or not;
: $A^{.}B$, a group $\Gamma$ that is a non-split extension of its normal subgroup $A$ with the group $B$ (then $B\simeq \Gamma/A$);
: $A:B$, a group $\Gamma$ that is a split extension of its normal subgroup $A$ with the group $B$ (then $B\simeq \Gamma/A$ and $\Gamma\simeq A\rtimes B$).
\begin{center}
\begin{tabular}{|c|c|c|}
\hline
type & description & structure\\
\hline
${{\mathcal{C}}}_1$ & stabilizers of a totally singular or non-singular subspace & maximal parabolic group\\
\hline
${{\mathcal{C}}}_2$ & stabilizers of a direct sum decomposition $V =\bigoplus_{i=1}^{r} V_i$, with each $V_i$ of dimension $t$ & ${{\rm GL}}_t(q)\wr S_r$, $n=rt$\\
\hline
${{\mathcal{C}}}_3$ & stabilizers of an extension field of ${{\mathbb F}}_q$ of prime index $r$ & ${{\rm GL}}_t(q^r).C_r$, $n=rt$, $r$ prime\\
\hline
${{\mathcal{C}}}_4$ & stabilizers of a tensor product decomposition $V=V_1\otimes V_2$ & ${{\rm GL}}_t(q)\circ {{\rm GL}}_r(q)$, $n=rt$\\
\hline
${{\mathcal{C}}}_5$ & stabilizers of subfields of ${{\mathbb F}}_q$ of prime index $r$ & ${{\rm GL}}_n(q_0)$, $q=q_0^r$, $r$ prime\\
\hline
${{\mathcal{C}}}_6$ & normalizers of symplectic-type $r$-groups ($r$ prime) in absolutely irreducible representations & $E_{r^{2t}}.{{\rm Sp}}_{2t}(r)$, $n=r^{t}$, $r$ prime; \; $2_{-}^{1+2t}.\textrm{O}^{-}_{2t}(r)$, $n=2^t$; \; $2_{+}^{1+2t}.\textrm{O}^{+}_{2t}(r)$, $n=2^{t}$\\
\hline
${{\mathcal{C}}}_7$ & stabilizers of decompositions $V=\bigotimes_{i=1}^t V_i$, $\dim(V_i)=r$ & $\underbrace{({{\rm GL}}_r(q)\circ \cdots \circ {{\rm GL}}_r(q))}_{t}.S_t$, $n=r^t$\\
\hline
${{\mathcal{C}}}_8$ & classical subgroups & ${{\rm Sp}}_n(q)$, $n$ even; \; $\textrm{O}_n^{\epsilon}(q)$, $q$ odd; \; ${{\rm U}}_n(q^{\frac{1}{2}})$, $q$ a square\\
\hline
\end{tabular}

Table 1: maximal subgroups of ${{\rm GL}}_n(q)$ of geometric type.
\end{center}
Although we generally do not know explicitly the maximal subgroups of type ${{\mathcal{C}}}_9$, by Aschbacher’s Theorem we have the following characterization of them:
“if $\Gamma$ is a maximal subgroup of ${{\rm GL}}_n(q)$ of class ${{\mathcal{C}}}_9$ and $Z$ denotes its center, then for some nonabelian simple group $T$, the group $\Gamma/(\Gamma \cap Z)$ is almost simple with socle $T$; in this case the normal subgroup $(\Gamma \cap Z).T$ acts absolutely irreducibly, preserves no nondegenerate classical form, is not a subfield group, and does not contain ${{\rm SL}}_n(q)$.”
For very small integers $n$ there are a few subsequent, more explicit versions of Aschbacher’s Theorem, which describe explicitly the subgroups of class ${{\mathcal{C}}}_9$. To prove Theorem \[P17\] we will use the classification of the maximal subgroups of ${{\rm SL}}_n(q)$ appearing in [@BHR], for $n\in \{4,8\}$.
Decomposable actions and products of elliptic curves
====================================================
First of all we investigate what happens when the group $G=\textrm{Gal}(k({{\mathcal{A}}}[q])/k)$ acts decomposably on ${{\mathcal{A}}}[q]$, i.e. the representation of $G_k$ in ${{\rm GL}}_n(q)$ is a group of matrices with diagonal blocks. For instance, this is the case when ${{\mathcal{A}}}$ is a direct product of elliptic curves.
\[reducible\] Let $q$ be a positive integer. Suppose that $G$ acts decomposably on ${{\mathcal{A}}}[q]$, i. e. the representation of $G$ in ${{\rm GL}}_n({{\mathbb Z}}/q{{\mathbb Z}})$ is of the form
$$\label{diag_blocks} \left(
\begin{array}{ccccc}
B_1 & 0 & ... & & 0\\
0 & B_2 & 0 & ... & 0 \\
\vdots & & \ddots & &\vdots\\
& & & \ddots & 0\\
0 & ... & & 0 & B_s
\end{array}
\right)$$
where $B_i \in {{\rm GL}}_{n_i}({{\mathbb Z}}/q{{\mathbb Z}})$, for $i\in \{1, 2, \dots ,s\}$ and $\sum_{i=1}^{s}n_i=n$. Let $G_i$ denote the subgroup of ${{\rm GL}}_{n_i}({{\mathbb Z}}/q{{\mathbb Z}})$ formed by the matrices $B_i$, for all $1\leq i\leq s$. Then $H^1_{{{\rm loc}}}(G,({{\mathbb Z}}/q{{\mathbb Z}})^n)=0$ if and only if $H^1_{{{\rm loc}}}(G_i,({{\mathbb Z}}/q{{\mathbb Z}})^{n_i})=0$, for all $1\leq i\leq s$.
The conclusion is a direct consequence of $H^1(G,-)$ being an additive functor and $H_{\textrm{loc}}^1(G,-)$ being a subfunctor of it (see for instance [@HS] and [@JL]); nevertheless we give a proof involving local cocycles. We prove the statement for $s=2$; for $s> 2$, the conclusion follows by induction. Assume that the representation of $G=\textrm{Gal}(k({{\mathcal{A}}}[q])/k)$ in ${{\rm GL}}_n({{\mathbb Z}}/q{{\mathbb Z}})$ is of the form
$$\left(
\begin{array}{cc}
B_1 & 0 \\
0 & B_2 \\
\end{array}
\right),$$
where $B_1 \in {{\rm GL}}_{n_1}({{\mathbb Z}}/q{{\mathbb Z}})$, $B_2\in {{\rm GL}}_{n-n_1}({{\mathbb Z}}/q{{\mathbb Z}})$. Of course $G_1$ and $G_2$ can be identified with subgroups of $G$. If a cocycle of $G$ satisfies the local conditions, then in particular its restriction to any subgroup of $G$ satisfies the local conditions too. Thus $H^1_{{{\rm loc}}}(G,({{\mathbb Z}}/q{{\mathbb Z}})^n)=0$ implies $H^1_{{{\rm loc}}}(G_i,({{\mathbb Z}}/q{{\mathbb Z}})^{n_i})=0$, for $i\in \{1,2\}$. Let $\{a_{i,j}\}_{1\leq i,j\leq n}$ denote a matrix in $G$; consequently $\{a_{i,j}\}_{1\leq i, j\leq n_1}$ and $\{a_{i,j}\}_{n_1+1\leq i, j\leq n}$ are matrices in $G_1$ and $G_2$, respectively. Suppose that there exists a cocycle $\{Z_{\sigma}\}_{\sigma\in G}$ of $G$ with values in $({{\mathbb Z}}/q{{\mathbb Z}})^{n}$ satisfying the local conditions, with $Z_{\sigma}=(z_{\sigma,1}, \dots, z_{\sigma,n})$. We define two new cocycles, one of $G_1$ with values in $({{\mathbb Z}}/q{{\mathbb Z}})^{n_1}$ and the other of $G_2$ with values in $({{\mathbb Z}}/q{{\mathbb Z}})^{n-n_1}$, respectively by $Z_{\sigma,B_1}:=(z_{\sigma,1}, \dots, z_{\sigma,n_1})\in ({{\mathbb Z}}/q{{\mathbb Z}})^{n_1}$ and $Z_{\sigma,B_2}:=(z_{\sigma,n_1+1}, \dots, z_{\sigma,n})\in ({{\mathbb Z}}/q{{\mathbb Z}})^{n-n_1}$. Since $\{Z_{\sigma}\}_{\sigma\in G}$ satisfies the local conditions, $\{Z_{\sigma,B_1}\}_{B_1\in G_1}$ and $\{Z_{\sigma,B_2}\}_{B_2\in G_2}$ satisfy the local conditions too. Because of our hypothesis that $H^1_{{{\rm loc}}}(G_1,({{\mathbb Z}}/q{{\mathbb Z}})^{n_1})=0$, there exists $W_1=(w_{1}, \dots, w_{n_1})\in ({{\mathbb Z}}/q{{\mathbb Z}})^{n_1}$ such that $(B_1-I_{n_1})W_1=Z_{\sigma,B_1}$, for all $B_1 \in G_1$. In the same way, since $H^1_{{{\rm loc}}}(G_2,({{\mathbb Z}}/q{{\mathbb Z}})^{n-n_1})=0$, there exists $W_2=(w_{n_1+1}, \dots, w_n)\in ({{\mathbb Z}}/q{{\mathbb Z}})^{n-n_1}$ such that $(B_2-I_{n-n_{1}})W_2=Z_{\sigma,B_2}$, for all $B_2 \in G_2$. 
Let $W=(w_{1}, \dots, w_{n_1}, w_{n_1+1}, \dots, w_n)\in ({{\mathbb Z}}/q{{\mathbb Z}})^{n}$. Therefore $(\sigma-I_{n})W=Z_{\sigma}$, for all $\sigma \in G$. We have proved that every cocycle of $G$ with values in $({{\mathbb Z}}/q{{\mathbb Z}})^{n}$ satisfying the local conditions is a coboundary; thus $H^1_{{{\rm loc}}}(G,({{\mathbb Z}}/q{{\mathbb Z}})^{n})=0$.
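In block form, the final verification in the proof above reads (with $\sigma\in G$ written with diagonal blocks $B_1$ and $B_2$, as before):

```latex
(\sigma - I_n)W
= \begin{pmatrix} B_1 - I_{n_1} & 0 \\ 0 & B_2 - I_{n-n_1} \end{pmatrix}
  \begin{pmatrix} W_1 \\ W_2 \end{pmatrix}
= \begin{pmatrix} (B_1 - I_{n_1})W_1 \\ (B_2 - I_{n-n_1})W_2 \end{pmatrix}
= \begin{pmatrix} Z_{\sigma,B_1} \\ Z_{\sigma,B_2} \end{pmatrix}
= Z_{\sigma} .
```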
\[rem\_red\] Observe that the conclusion of Lemma \[reducible\] holds even if we suppose that the image of the representation of $\textrm{Gal}(k({{\mathcal{A}}}[q])/k)$ in ${{\rm GL}}_n({{\mathbb Z}}/q{{\mathbb Z}})$ is isomorphic to a subgroup of ${{\rm GL}}_n(p^m)$ (for some prime $p$ and some positive integer $m$). This technical assumption appears in the statement of Theorem \[P1\_bis\].
With Lemma \[reducible\] and a known answer to the problem for elliptic curves, the case of products of elliptic curves is straightforward to settle. Nevertheless, it is worth mentioning here for completeness.
\[product\_ell\_curv\] Let $k$ be a number field and let ${{\mathcal{E}}}_1, {{\mathcal{E}}}_2$ be elliptic curves with Weierstrass form respectively $y^2=x^3+b_ix+c_i$, for $i\in \{1,2\}$, where $b_i, c_i\in k$. Let $p$ be a prime number and $l$ be a positive integer. The local-global divisibility by $p^l$ holds in the product ${{\mathcal{E}}}_1\times {{\mathcal{E}}}_2$ over $k$ if and only if it holds in both ${{\mathcal{E}}}_1$ over $k$ and ${{\mathcal{E}}}_2$ over $k$.
To ease notation let ${{\mathcal{A}}}={{\mathcal{E}}}_1\times {{\mathcal{E}}}_2$. The fundamental observation is that the representation of the Galois group $\textrm{Gal}(k({{\mathcal{A}}}[p^l])/k)$ in ${{\rm GL}}_4({{\mathbb Z}}/p^l{{\mathbb Z}})$ is a group of matrices with two diagonal blocks (each of them with 2 rows and 2 columns). It is not true in general that the whole automorphism group of a product of elliptic curves is formed by matrices with diagonal blocks; however, this is exactly the situation when we restrict to automorphisms corresponding to actions of $\textrm{Gal}(k({{\mathcal{A}}}[p^l])/k)$. In fact, every automorphism of ${{\mathcal{A}}}$ in $\textrm{Gal}(k({{\mathcal{A}}}[p^l])/k)$ corresponds to a Galois automorphism of the extension $k({{\mathcal{A}}}[p^l])/k$, whose action on the points of ${{\mathcal{A}}}$ can be viewed as two separate actions on the points of ${{\mathcal{E}}}_1$ and ${{\mathcal{E}}}_2$ (even when ${{\mathcal{E}}}_1={{\mathcal{E}}}_2$). We can apply Lemma \[reducible\] to get the conclusion.
The argument in the previous proof works also if we have an abelian variety of dimension $g$, that is the product of elliptic curves ${{\mathcal{E}}}_1, ..., {{\mathcal{E}}}_g$ satisfying the hypotheses of Theorem \[product\_ell\_curv\]. Then, more generally, we have the following statement.
\[product\_ell\_curv\_gen\] Let $k$ be a number field, let $g$ be a positive integer and let ${{\mathcal{E}}}_1, {{\mathcal{E}}}_2, \dots, {{\mathcal{E}}}_g$ be elliptic curves with Weierstrass form respectively $y^2=x^3+b_ix+c_i$, for $i\in \{1,2, \dots, g\}$, where $b_i, c_i\in k$. Let $p$ be a prime number and $l$ be a positive integer. The local-global divisibility by $p^l$ holds in the product ${{\mathcal{E}}}_1\times {{\mathcal{E}}}_2\times \cdots \times {{\mathcal{E}}}_g$ over $k$ if and only if it holds in every curve ${{\mathcal{E}}}_i$ over $k$, for all $1\leq i \leq g$.
By using Theorem \[product\_ell\_curv\_gen\] and [@PRV Corollary 2], we get the next result.
\[product\_cor1\] Let $k$ be a number field, let $g$ be a positive integer and let ${{\mathcal{E}}}_1, {{\mathcal{E}}}_2, \dots, {{\mathcal{E}}}_g$ be elliptic curves with Weierstrass form respectively $y^2=x^3+b_ix+c_i$, for $i\in \{1,2, \dots, g\}$, where $b_i, c_i\in k$. Let $p$ be a prime number. Then there exists a number $C([k:{{\mathbb Q}}])$ depending only on the degree $[k:{{\mathbb Q}}]$, such that, if $p>C([k:{{\mathbb Q}}])$, then the local-global divisibility by $p^l$ holds in the product ${{\mathcal{E}}}_1\times {{\mathcal{E}}}_2\times \cdots \times {{\mathcal{E}}}_g$ over $k$, for every positive integer $l$.
Furthermore, if $k={{\mathbb Q}}$, we can combine Theorem \[product\_ell\_curv\] with the results appearing in [@DZ], [@PRV2], [@Pal3] and [@Cre2], to get a complete answer to the local-global divisibility in products of elliptic curves defined over the rationals.
\[product\_cor2\] Let $g$ be a positive integer and let ${{\mathcal{E}}}_1, {{\mathcal{E}}}_2, \dots, {{\mathcal{E}}}_g$ be elliptic curves defined over ${{\mathbb Q}}$ with Weierstrass form respectively $y^2=x^3+b_ix+c_i$, for $i\in \{1,2, \dots, g\}$, where $b_i, c_i\in {{\mathbb Q}}$. Let $p$ be a prime number. If $p\geq 5$, then the local-global divisibility by $p^l$ holds in the product ${{\mathcal{E}}}_1\times {{\mathcal{E}}}_2\times \cdots \times {{\mathcal{E}}}_g$ over ${{\mathbb Q}}$, for every positive integer $l$. If $p\in \{2,3\}$, then the local-global divisibility by $p^l$ holds in the product ${{\mathcal{E}}}_1\times {{\mathcal{E}}}_2\times \cdots \times {{\mathcal{E}}}_g $ over ${{\mathbb Q}}$ only when $l=1$; on the contrary, when $l\geq 2$, there are counterexamples.
**Counterexamples.** For powers of 2 (resp. powers of 3), the explicit counterexamples to the local-global divisibility appearing in [@Pal2] and [@Pal3] also give explicit counterexamples to the local-global divisibility by $2^l$ (resp. $3^l$), for every $l\geq 2$, in products of elliptic curves defined over ${{\mathbb Q}}$ (resp. over the cyclotomic field ${{\mathbb Q}}({{\zeta}}_3)$), for all $g$. It suffices to take the product of elliptic curves ${{\mathcal{E}}}_1, \dots, {{\mathcal{E}}}_g$ with at least one of the curves ${{\mathcal{E}}}_i$ giving a counterexample.
Proof of Theorem \[P1\_bis\]
============================
First note that if $\dim({{\mathcal{A}}})=2^h$, then ${{\rm Gal}}(k({{\mathcal{A}}}[p])/k)$ is isomorphic to a subgroup of ${{\rm GL}}_{2^{h+1}}(p)$. As above, to ease notation we set $n=2^{h+1}$, so that we can simply refer to ${{\rm GL}}_n(p)$ and, more generally, to ${{\rm GL}}_n(p^m)$, $m\geq 1$.
Our assumption that $G$ is isomorphic to a subgroup of ${{\rm GL}}_{2^{h+1}}(p^m)$ (instead of simply ${{\rm GL}}_{2^{h+1}}(p)$) is just a technical one, since dealing with powers of $p$ in lieu of $p$ will be useful when $G$ is of type ${{\mathcal{C}}}_3$ (and it is isomorphic to a subgroup of ${{\rm GL}}_t(p^r).C_r$, with $n=tr$ and $r$ prime) or $G$ is of type ${{\mathcal{C}}}_5$ (and it is isomorphic to a subgroup of ${{\rm GL}}_n(p^r)$, with $r$ a prime dividing $m$).
For $h=0$ and $G< {{\rm GL}}_2(p^m)$, the following statement can be deduced from the proof of [@DZ3 Theorem 1] and from Remark \[rem\_red\].
\[caso\_n=2bis\] Let $k$ be a number field that does not contain ${{\mathbb Q}}({{\zeta}}_p+{{\zeta}}_p^{-1})$. Let ${{\mathcal{A}}}$ be an algebraic group defined over $k$, such that ${{\rm Gal}}(k({{\mathcal{A}}}[p^l])/k)\lesssim {{\rm GL}}_2(p^m)$, where $p>3$ is a prime number and $l,m$ are positive integers. If ${{\mathcal{A}}}[p^l]$ is either an irreducible or a decomposable $G_k$-module, then the local-global principle holds for divisibility by $p^l$ in ${{\mathcal{A}}}$ over $k$.
In particular, that result holds when $l=m=1$. From now on, we may assume that $h\geq 1$ (i.e. $\dim({{\mathcal{A}}})\geq 2$ and $n\geq 4$). Let $G$ be a subgroup of ${{\rm GL}}_{n}(p^m)$ and let $\widetilde{G}:=G\cap {{\rm SL}}_n(p^m)$. Since $|{{\rm GL}}_n(p^m)|=(p^m-1)|{{\rm SL}}_n(p^m)|$, the $p$-Sylow subgroup of ${{\rm GL}}_n(p^m)$ coincides with the $p$-Sylow subgroup of ${{\rm SL}}_n(p^m)$. By Lemma \[Sylow\], we have $H^1_{\textrm{loc}}(G,{{\mathcal{A}}}[p^l])=0$ if and only if $H^1_{\textrm{loc}}(\widetilde{G},{{\mathcal{A}}}[p^l])=0$. Therefore we may assume $G\leq {{\rm SL}}_n(p^m)$. If $G ={{\rm SL}}_n(p^m)$, then, since $n=2^{h+1}$ and $p\neq 2$, the nontrivial scalar matrix $-I_n$ belongs to $G$. By Corollary \[scalar\] we get the triviality of $H^1_{\textrm{loc}}(G,{{\mathcal{A}}}[p^l])$. From now on we will assume, without loss of generality, that $G$ is a proper subgroup of ${{\rm SL}}_n(p^m)$.
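The two facts just used can be checked directly (a small verification of ours, under the standing hypotheses that $n=2^{h+1}$ and $p$ is odd):

```latex
% -I_n lies in SL_n(p^m) because n is even:
\det(-I_n) = (-1)^n = (-1)^{2^{h+1}} = 1 .
% Moreover, for p \neq 2 the scalar \lambda = -1 satisfies
% \lambda - 1 = -2 \in (\mathbb{Z}/p^l\mathbb{Z})^{*},
% so x \mapsto -2x is an automorphism of \mathcal{A}[p^l]
% and Corollary [scalar] applies.
```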
For $h=1$ we give a proof of Theorem \[P1\_bis\] based on a case by case analysis of the possible maximal subgroups of ${{\rm SL}}_4(p^m)$. Then we prove the statement for a general $h$, using the classification of the possible maximal subgroups of ${{\rm SL}}_n(q)$ summarized in Table 1, combined with induction for some of the classes ${{\mathcal{C}}}_i$ of groups. From the proof it will be clear that for subgroups of geometric type ${{\mathcal{C}}}_i$, with $i\neq 6$, everything works for every $p>3$ too. The class ${{\mathcal{C}}}_6$ is the hardest to describe explicitly among those of geometric type; because of the groups $G$ in that class we possibly have to choose $p_h\neq 3$, for $h\geq 3$. Furthermore, for $h\geq 3$ a complete classification of the subgroups of type ${{\mathcal{C}}}_9$ is unknown; therefore for such integers we cannot exhibit an explicit $p_h$, even if we can prove its existence. In the very last part of the proof, looking at the maximal subgroups of ${{\rm SL}}_8(p^m)$, we establish an explicit $p_2$.
The case of abelian varieties of dimension 2 {#sec4}
--------------------------------------------
In this subsection we prove Theorem \[P1\_bis\] for abelian varieties of dimension 2. Assume that ${{\rm Gal}}(k({{\mathcal{A}}}[p^l])/k)$ is isomorphic to a proper subgroup of ${{\rm SL}}_4(p^m)$, for some positive integer $m$. First of all, we recall some notation and some group isomorphisms.
Let $s$ be a positive integer. The extraspecial 2-group of *minus type* $2_{-}^{1+2s}$ is a central product of a quaternion group of order 8 with one or more dihedral groups of order 8 (see for instance [@KL]). The extraspecial 2-group of *plus type* $2_{+}^{1+2s}$ is a central product of dihedral groups of order 8. The *symplectic type* is given by a central product of either type of extraspecial $2$-groups with a cyclic group of order $4$.
\[isom\] The following isomorphisms hold
1)
: ${{\rm SO}}_4^+(q)\cong {{\rm SL}}_2(q)\times {{\rm SL}}_2(q)$;
2)
: ${{\rm SO}}_4^-(q)\cong {{\rm SL}}_2(q^2)$.
We also recall the classification of the maximal subgroups of ${{\rm SL}}_4(p^m)$ and the classification of the maximal subgroups of the symplectic group $\textrm{Sp}_4(q)$ (when $q$ is odd) appearing in [@BHR].
\[M+B+L-lem\] Let $q=p^m$, where $p$ is an odd prime and $m$ is a positive integer. Let $d:=\gcd(q-1,4)$. The maximal subgroups of ${{\rm SL}}_4(q)$ are
(a)
: a group of type ${{\mathcal{C}}}_1$, the stabilizer of a projective point, i. e. the group $C_q^3:{{\rm GL}}_3(q)$, having order $q^6 (q^3-1)(q^2-1)(q-1)$;
(b)
: a group of type ${{\mathcal{C}}}_1$, the stabilizer of a projective line, having order $q^4|{{\rm SL}}_2(q)|^2(q-1)=q^6(q^2-1)^2(q-1)$;
(c)
: a group of type ${{\mathcal{C}}}_1$, the stabilizer of two distinct projective points and a projective line, having order $q^5|{{\rm GL}}_2(q)|(q-1)^2=q^6(q^2-1)(q-1)^3$;
(d)
: a group of type ${{\mathcal{C}}}_1$, a group isomorphic to ${{\rm GL}}_3(q)$, that stabilizes both a projective point and a projective plane, whose direct sum is ${{\mathbb F}}_q^4$, having order $q^3(q^3-1)(q^2-1)(q-1)$;
(e)
: a group of type ${{\mathcal{C}}}_2$, the stabilizer of a decomposition into four subspaces of dimension 1 whose direct sum is ${{\mathbb F}}_q^4$, i.e. a group of order $(q-1)^3\,4!$;
(f)
: a group of type ${{\mathcal{C}}}_2$, the stabilizer of a decomposition into two subspaces of dimension 2 whose direct sum is ${{\mathbb F}}_q^4$, i.e. a group of order $2|{{\rm SL}}_2(q)|^2(q-1)=2q^2(q^2-1)^2(q-1)$;
(g)
: a group of type ${{\mathcal{C}}}_3$, a group of order $2q^2(q^4-1)(q+1)$, which has ${{\rm SL}}_2(q^2)$ as a normal subgroup;
(h)
: a group of type ${{\mathcal{C}}}_6$, the group $C_4\circ 2^{1+4}\hspace{0.1cm} ^{\cdot} S_6$;
(i)
: a group of type ${{\mathcal{C}}}_6$, the group $C_4\circ 2^{1+4}\hspace{0.1cm} _{\cdot} A_6$;
(j)
: a group of type ${{\mathcal{C}}}_8$, a group of order $d|{{\rm SO}}_4^+(q)|$, which has ${{\rm SO}}_4^+(q)$ as a normal subgroup;
(k)
: a group of type ${{\mathcal{C}}}_8$, a group of order $d|{{\rm SO}}_4^-(q)|$, which has ${{\rm SO}}_4^-(q)$ as a normal subgroup;
(l)
: a group of type ${{\mathcal{C}}}_8$, the group ${{\rm Sp}}_4(q).C_2$ of order $2q^4(q^2-1)(q^4-1)$;
(m)
: a group of type ${{\mathcal{C}}}_9$, the group $A_7$ (only if $p=2$);
(n)
: a group of type ${{\mathcal{C}}}_9$, the group $C_d\circ C_2\hspace{0.1cm}^{\cdot}{{\rm SL}}_2(7)$;
(o)
: a group of type ${{\mathcal{C}}}_9$, the group $C_d\circ C_2\hspace{0.1cm}^{\cdot}A_7$;
(p)
: a group of type ${{\mathcal{C}}}_9$, the group $C_d\circ C_2\hspace{0.1cm}^{\cdot}\textrm{U}_4(2)$.
\[Sp4\_sub\] Let $q=p^m$, where $p$ is an odd prime and $m$ is a positive integer. The maximal subgroups of $\textrm{Sp}_4(q)$ are
(l.1)
: a group of type ${{\mathcal{C}}}_1$, the group $E_{q}\hspace{0.1cm}. E_q^2:((q-1)\times {{\rm Sp}}_2(q))$, of order $q^4(q-1)(q^2-1)$;
(l.2)
: a group of type ${{\mathcal{C}}}_1$, the group $E_q^3:{{\rm GL}}_2(q)$, of order $q^4(q-1)(q^2-1)$;
(l.3)
: a group of type ${{\mathcal{C}}}_2$, the stabilizer of a decomposition into two subspaces of dimension 2 whose direct sum is ${{\mathbb F}}_q^4$, i.e. $\textrm{Sp}_2(q)^2\rtimes C_2$;
(l.4)
: a group of type ${{\mathcal{C}}}_2$, the group ${{\rm GL}}_2(q).C_2$;
(l.5)
: a group of type ${{\mathcal{C}}}_3$, the group $\textrm{Sp}_2(q^2)\rtimes C_2$;
(l.6)
: a group of type ${{\mathcal{C}}}_6$, the group $2_{-}^{1+4} \hspace{0.1cm}_{\cdot}S_5$;
(l.7)
: a group of type ${{\mathcal{C}}}_6$, the group $2_{-}^{1+4} \hspace{0.1cm}_{\cdot}A_5$;
(l.8)
: a group of type ${{\mathcal{C}}}_9$, the group $C_2\hspace{0.1cm}^{\cdot}A_6$;
(l.9)
: a group of type ${{\mathcal{C}}}_9$, the group $C_2\hspace{0.1cm}^{\cdot}S_6$;
(l.10)
: a group of type ${{\mathcal{C}}}_9$, the group $C_2\hspace{0.1cm}^{\cdot}A_7$ (only for $p=7$);
(l.11)
: a group of type ${{\mathcal{C}}}_9$, the group ${{\rm SL}}_2(q)$.
**Proof of Theorem \[P1\_bis\] for ***h***=1.** Let $p>3$. Without loss of generality we assume that $G$ is contained in a proper subgroup of ${{\rm SL}}_4(q)$, where $q=p^m$, and we use Lemma \[M+B+L-lem\]. We furthermore assume that ${{\mathcal{A}}}[p^l]$ is either an irreducible or a decomposable $G$-module. In particular we are not in one of the cases **(a), (b), (c)**, unless the action is decomposable; in that case we can apply Remark \[rem\_red\] and Lemma \[caso\_n=2bis\] to get the vanishing of $H^1_{{{\rm loc}}}(G_p,{{\mathcal{A}}}[p^l])$.
If we are in case **(d)**, then $G$ is of type ${{\mathcal{C}}}_1$, but acts decomposably on ${{\mathcal{A}}}[p^l]$. Thus $H^1_{{{\rm loc}}}(G_p,{{\mathcal{A}}}[p^l])=0$, by Remark \[rem\_red\] and Lemma \[caso\_n=2bis\] again.
If we are in case **(e)**, then $p\nmid |G|$; hence the $p$-Sylow subgroup of $G$ is trivial and $H^1_{{{\rm loc}}}(G,{{\mathcal{A}}}[p^l])=0$.
In case **(f)**, the $p$-Sylow subgroup $G_p$ of $G$ has shape
$$\left(
\begin{array}{cccc}
1 & a & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & 1 & b \\
0 & 0 & 0 & 1 \\
\end{array}
\right)$$
where $a,b\in {{\mathbb Z}}/p{{\mathbb Z}}$. By Lemma \[caso\_n=2bis\] and Remark \[rem\_red\], the first local cohomology group $H^1_{{{\rm loc}}}(G_p,{{\mathcal{A}}}[p^l])$ is trivial. Thus $H^1_{{{\rm loc}}}(G,{{\mathcal{A}}}[p^l])=0$.
In case **(g)**, we have that $G$ is contained in a group that has a normal subgroup isomorphic to ${{\rm SL}}_2(p^2)$, with index not divisible by $p$. Observe that the $p$-Sylow subgroup $G_p$ of $G$ is contained in $G':={{\rm SL}}_2(p^2)\cap G$. We use Lemma \[caso\_n=2bis\] to get $H^1_{{{\rm loc}}}(G_p,{{\mathcal{A}}}[p^l])=0$, which is equivalent to $H^1_{{{\rm loc}}}(G,{{\mathcal{A}}}[p^l])=0$.
If we are in case **(h)** (resp. case **(i)**) and $p>5$, then the $p$-Sylow subgroup of $G$ is trivial too and $H^1_{{{\rm loc}}}(G,{{\mathcal{A}}}[p^l])=0$. If we are in case **(h)** (resp. case **(i)**) and $p=5$, then the $5$-Sylow subgroup of $G$ is cyclic and $H^1_{{{\rm loc}}}(G,{{\mathcal{A}}}[p^l])=0$.
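The argument for a cyclic $p$-Sylow subgroup, used here and in several cases below, can be sketched as follows (our outline of a standard computation):

```latex
% Let G_p = <\sigma> be cyclic and let {Z_\gamma} satisfy the local conditions;
% in particular Z_\sigma = (\sigma - 1)A_\sigma for some A_\sigma \in \mathcal{A}[p^l].
% The cocycle relation Z_{\gamma\delta} = Z_\gamma + \gamma Z_\delta gives
Z_{\sigma^i} = \sum_{j=0}^{i-1} \sigma^j Z_{\sigma}
             = \Big(\sum_{j=0}^{i-1} \sigma^j\Big)(\sigma-1)A_{\sigma}
             = (\sigma^i - 1)A_{\sigma} ,
% so the restriction to G_p is a coboundary, H^1_{loc}(G_p, \mathcal{A}[p^l]) = 0,
% and by Lemma [Sylow] also H^1_{loc}(G, \mathcal{A}[p^l]) = 0.
```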
In cases **(j)** and **(k)** we use the group isomorphisms listed in Lemma \[isom\]. Then both cases are covered by Lemma \[caso\_n=2bis\].
Consider case **(l)**, i. e. $G$ is isomorphic to a subgroup of ${{\rm Sp}}_4(p^m).C_2$. Since $p\neq 2$, then $G_p$ is contained in ${{\rm Sp}}_4(p^m)$. To ease notation, without loss of generality, we may assume $G\lesssim {{\rm Sp}}_4(p^m)$. If $G={{\rm Sp}}_4(p^m)$, then $-I_4\in G$. Since $p\neq 2$, then $G$ contains a nontrivial scalar matrix and, by Corollary \[scalar\], we have $H^1_{{{\rm loc}}}(G,{{\mathcal{A}}}[p^l])=0$. If $G=\langle I_4\rangle$, then $H^1_{{{\rm loc}}}(G,{{\mathcal{A}}}[p^l])$ is trivial too. Therefore, we assume that $G$ is a proper subgroup of ${{\rm Sp}}_4(p^m)$ and we use Lemma \[Sp4\_sub\].
In cases **(l.1)** and **(l.2)**, the group $G$ acts reducibly on ${{\mathcal{A}}}[p^l]$.
In cases **(l.3)** and **(l.4)**, the group $G_p$ acts decomposably on ${{\mathcal{A}}}[p^l]$ and $H^1_{{{\rm loc}}}(G,{{\mathcal{A}}}[p^l])=0$, by Remark \[rem\_red\] and Lemma \[Sylow\].
Consider case **(l.5)**. Then $G_p$ is isomorphic to a subgroup of the $p$-Sylow subgroup of $\textrm{Sp}_2(p^{2m})$. In particular $G_p$ is isomorphic to a subgroup of ${{\rm SL}}_2(p^{2m})$. By Lemma \[caso\_n=2bis\], we get $H^1_{{{\rm loc}}}(G_p,{{\mathcal{A}}}[p^l])=H^1_{{{\rm loc}}}(G,{{\mathcal{A}}}[p^l])=0$.
In cases **(l.6)** and **(l.7)**, if $p>5$, then $p\nmid |G|$, implying $H^1_{{{\rm loc}}}(G,{{\mathcal{A}}}[p^l])=0$. If $p=5$, we have that the $5$-Sylow subgroup of $G$ is cyclic and $H^1_{{{\rm loc}}}(G,{{\mathcal{A}}}[p^l])=0$ too.
In cases **(l.8)**, **(l.9)** and **(l.10)**, the $p$-Sylow subgroup of $G$ is either trivial or cyclic, for all $p>3$.
Case **(l.11)** is covered by Lemma \[caso\_n=2bis\] again.
We are left with the cases when $G$ is of type ${{\mathcal{C}}}_9$ and it is not contained in ${{\rm Sp}}_4(p)$. In all those cases **(m)**, **(n)**, **(o)** and **(p)** the $p$-Sylow subgroup $G_p$ of $G$ is either trivial or cyclic, for all $p>3$. Then $H^1_{{{\rm loc}}}(G,{{\mathcal{A}}}[p^l])=0$. [${\Box}$ ]{}
From the proof, it is clear that the conclusion holds not only for abelian varieties of dimension 2, but also for all algebraic groups ${{\mathcal{A}}}$ such that ${{\mathcal{A}}}[p^l]\simeq ({{\mathbb Z}}/p^l{{\mathbb Z}})^4$ and ${{\rm Gal}}(k({{\mathcal{A}}}[p^l])/k)\lesssim {{\rm GL}}_4(p^m)$, with $m\geq 1$.
General Case {#gen_case}
------------
We are going to prove Theorem \[P1\_bis\] for the general case of an abelian variety of dimension $2^h$, which corresponds to the proof of the following proposition. We will prove that one can take $p_2=3$ in the next subsection.
\[P1\_prop1\] Let $p$ be a prime number and let $l,h$ be positive integers. Let $k$ be a number field that does not contain ${{\mathbb Q}}({{\zeta}}_p+{{\zeta}}_p^{-1})$. Let ${{\mathcal{A}}}$ be an abelian variety defined over $k$, of dimension $2^h$, where $h\geq 0$. Let $n=2^{h+1}$ and assume that ${{\rm Gal}}(k({{\mathcal{A}}}[p^l])/k)$ is isomorphic to a subgroup of ${{\rm GL}}_{n}(p^m)$, for some positive integer $m$. For every $h$, there exists a prime $p_h$, depending only on $h$, such that if $p>p_h$ and the local-global divisibility by $p$ fails in ${{\mathcal{A}}}(k)$, then ${{\rm Gal}}(k({{\mathcal{A}}}[p^l])/k)$ acts reducibly but not decomposably over ${{\mathcal{A}}}[p^l]$.
Let $p>3$ and, as above, let $n=2^{h+1}$. Suppose that $G$ acts either irreducibly or decomposably on ${{\mathcal{A}}}[q]$, where $q=p^l$. We use the description of the subgroups of ${{\rm GL}}_n(q)$ of geometric type given in Table 1. For some classes of groups we proceed by induction, having already proved the statement for $h\in \{0,1\}$. Thus, assume that the proposition holds for all integers $h' <h$; we will prove it for $h$.
Because of our assumptions, the group $G$ is not of class ${{\mathcal{C}}}_1$, unless its action on ${{\mathcal{A}}}[q]$ is decomposable; in that case, for all $p>p_{h-1}$, the triviality of $H^1_{\textrm{loc}}(G,{{\mathcal{A}}}[q])$ follows from Remark \[rem\_red\] and induction.
Suppose that $G$ is of type ${{\mathcal{C}}}_2$. Then $G$ is the wreath product of a group $G'$ of matrices with $2^{\alpha}$ diagonal blocks by the symmetric group $S_{2^{\alpha}}$, where $\alpha\leq h$. Since $p> 2$, the $p$-Sylow subgroup $G_p$ of $G$ is contained in $G'$. Thus, by Remark \[rem\_red\] and by induction, we get $H^1_{\textrm{loc}}(G_p,{{\mathcal{A}}}[p^l])=0$, for all $p>p_{h-1}$. Consequently $H^1_{\textrm{loc}}(G,{{\mathcal{A}}}[p^l])=0$, for all $p>p_{h-1}$, because of Lemma \[Sylow\].
Suppose now that $G$ is of type ${{\mathcal{C}}}_3$. Then $G$ is isomorphic to a subgroup of ${{\rm GL}}_t(p^{mr}).C_r$, where $r$ is a prime and $n=tr$. Since $n=2^{h+1}$, we have $r=2$ and $t=2^{h}$. Furthermore, $p$ does not divide $r$ and we may assume without loss of generality that $G$ is isomorphic to a subgroup of ${{\rm GL}}_t(p^{mr})$. Since $t\mid n$ and $t\neq n$, we use induction to get $H^1_{\textrm{loc}}(G,{{\mathcal{A}}}[p^l])=0$, for every $p>p_{h-1}$.
Suppose that $G$ is of type ${{\mathcal{C}}}_4$. Then $G$ is isomorphic to a subgroup of a central product ${{\rm GL}}_t(p^m)\circ {{\rm GL}}_r(p^m)$ acting on a tensor product $V_1\otimes V_2={{\mathcal{A}}}[p^l]$, where $rt=n$ and $V_1$, $V_2$ are vector spaces over ${{\mathbb F}}_{p^m}$ of dimensions $t$ and $r$, respectively. A central product $\Gamma$ of two groups is a quotient of their direct product by a subgroup of the center of that direct product. Then every subgroup of $\Gamma$ is a central product too. So let $G=G_t\circ G_r$, with $G_t$ acting on $V_1$ and $G_r$ acting on $V_2$. Consider $Z_{\sigma\otimes \tau}$, with $\sigma\otimes \tau\in G_t\circ G_r$, representing a cocycle of $G$ with values in ${{\mathcal{A}}}[p^l] =V_1\otimes V_2$. If $Z_{\sigma\otimes \tau}$ satisfies the local conditions, then there exists $A_{\sigma\otimes\tau}\in V_1\otimes V_2$ such that $Z_{\sigma\otimes \tau}=(\sigma\otimes \tau - 1\otimes 1)A_{\sigma\otimes\tau}$, for all $\sigma\otimes \tau\in G_t\circ G_r$. Observe that $A_{\sigma\otimes\tau}=A_{\sigma\otimes\tau,1}\otimes A_{\sigma\otimes\tau,2}$, for some $A_{\sigma\otimes\tau,1}\in V_1$ and $A_{\sigma\otimes\tau,2}\in V_2$. We have two separate actions of $G_t$ on $V_1$ and $G_r$ on $V_2$. Then we can construct a cocycle $Z_{\sigma}:=(\sigma-1)A_{\sigma}$, with $\sigma\in G_t$, by choosing $A_{\sigma}$ among the possible $A_{\sigma\otimes\tau,1}\in V_1$. In the same way we can construct a cocycle $Z_{\tau}:=(\tau-1)A_{\tau}$, with $\tau\in G_r$, by choosing $A_{\tau}$ among the possible $A_{\sigma\otimes\tau,2}\in V_2$. Because of the tensor product construction, a priori there could be more than one choice of $A_{\sigma}$ (respectively $A_{\tau}$) for each $\sigma$ (resp. $\tau$); we fix just one. This choice causes no problems, because the actions of $G_t$ on $V_1$ and of $G_r$ on $V_2$ are separate.
Observe that even in the general case of Definition \[loc\_cond\], when a cocycle satisfies the local conditions, there could exist various $A_{\sigma}$ giving the equality $Z_{\sigma}=(\sigma-1)A_{\sigma}$. Anyway we make just one choice of $A_{\sigma}\in {{\mathcal{A}}}[q]$, for each $\sigma\in G$. Since $t=2^{\beta}$, with $\beta<h$, by induction we have $H^{1}_{\textrm{loc}}(G_t,V_1)=0$, for every $p>p_{\beta}$, unless $G_t$ acts reducibly but not decomposably over $V_1$. Observe that if $G_t$ acts reducibly over $V_1$, then $G_t\otimes G_r$ is a parabolic group and $G$ acts reducibly over ${{\mathcal{A}}}[p^l]$ too. That is a contradiction with our assumptions. Then $H^{1}_{\textrm{loc}}(G_t,V_1)=0$ and there exists $A\in V_1$ such that $Z_{\sigma}=(\sigma-1)A$, for all $\sigma\in G_t$. In the same way, since $r=2^{\alpha}$, with $\alpha<h$ (and $\beta+\alpha=h$), by induction, for all $p>p_{\alpha}$, we have $H^{1}_{\textrm{loc}}(G_r,V_2)=0$, unless $G_r$ acts reducibly but not decomposably over $V_2$. As above, if $G_r$ acts reducibly over $V_2$, then $G_t\otimes G_r$ is a parabolic group and $G$ acts reducibly over ${{\mathcal{A}}}[p^l]$ too. Since this contradicts our assumptions, there exists $B\in V_2$ such that $Z_{\tau}=(\tau-1)B$, for all $\tau\in G_r$. Therefore $Z_{\sigma\otimes \tau}=(\sigma\otimes \tau - 1\otimes 1)A\otimes B$, for all $\sigma\otimes \tau\in G_t\circ G_r$, and $H^{1}_{\textrm{loc}}(G,{{\mathcal{A}}}[p^l])=0$. Since $2^h$ is the greatest proper divisor of $n$, for every $p>p_{h-1}$ we have $H^{1}_{\textrm{loc}}(G,{{\mathcal{A}}}[p^l])=0$.
If $G$ is of class ${{\mathcal{C}}}_5$, then $G$ is isomorphic to a subgroup of ${{\rm GL}}_n(p^t)$, where $m=tr$, with $t$ a positive integer and $r$ a prime. If $G$ is the whole group ${{\rm GL}}_n(p^t)$, then $G$ contains $-I$ and $H^1_{\textrm{loc}}(G,{{\mathcal{A}}}[p^l])=0$. If $G$ is trivial, then $H^1_{\textrm{loc}}(G,{{\mathcal{A}}}[p^l])$ is trivial too. Suppose that $G$ is a proper subgroup of ${{\rm GL}}_n(p^t)$. If $G$ is still of class ${{\mathcal{C}}}_5$, then $G$ is isomorphic to a subgroup of ${{\rm GL}}_n(p^{t_2})$, for some integer $t_2$, such that $t=r_2t_2$, with $r_2$ prime. Again, if $G={{\rm GL}}_n(p^{t_2})$, then $-I\in G$ and $H^1_{\textrm{loc}}(G,{{\mathcal{A}}}[p^l])=0$ and we may assume that $G$ is a proper subgroup of ${{\rm GL}}_n(p^{t_2})$. And so on. Since $m$ is finite and we are assuming that $G$ is not trivial, then $G$ is isomorphic to a subgroup of ${{\rm GL}}_n(p^{t_j})$ (for some positive integer $t_j$ dividing $m$) of class ${{\mathcal{C}}}_i$, with $i\neq 5$. We may then repeat the arguments used for other classes ${{\mathcal{C}}}_i$, with $i\notin \{1,5\}$, to get $H^{1}_{\textrm{loc}}(G,{{\mathcal{A}}}[p^l])=0$.
Suppose that $G$ is of class ${{\mathcal{C}}}_6$, i.e., $G$ lies in the normalizer of an extraspecial group. This is possible only when $m=1$, which is the case of main interest for us. When $n=2^h$, we have the following possible types of maximal subgroups of class ${{\mathcal{C}}}_6$ (see [@KL §3.5]): $E_{2^{2h}}.{{\rm Sp}}_{2h}(2)$; $2_{-}^{1+2h}.{{\cal O}}^{-}_{2h}(2)$; $2_{+}^{1+2h}.{{\cal O}}^{+}_{2h}(2)$. If $p$ does not divide $\prod_{i=1}^{h}(2^{2i}-1)$, then it divides neither $|{{\rm Sp}}_{2h}(2)|$ nor $|{{\cal O}}^{\epsilon}_{2h}(2)|$, for every $\epsilon \in \{+,-\}$. Let $p_{\bar{h}}$ be the greatest prime dividing $\prod_{i=1}^{h}(2^{2i}-1)$. If $p> p_{\bar{h}}$, then $H^1_{\textrm{loc}}(G,{{\mathcal{A}}}[p^l])=0$.
Assume that $G$ is of class ${{\mathcal{C}}}_7$. Thus $G$ is the stabilizer of a tensor product decomposition $\bigotimes_{i=1}^{t}V_i$, with $n=r^t$ and $\textrm{dim}(V_i)=r$, for every $1\leq i\leq t$. By using induction on $t$, with the argument given in the case when $G$ is of class ${{\mathcal{C}}}_4$ as the base of the induction, we get $H^1_{\textrm{loc}}(G,{{\mathcal{A}}}[p^l])=0$, for every $p>p_{h-1}$.
Suppose that $G$ is of class ${{\mathcal{C}}}_8$. Since $p^m$ is odd and $n=2^h$ is even, then $G$ is contained either in the group ${{\rm Sp}}_n(p^m)$, or in a group $\textrm{O}_n^{\epsilon}(p^m)$, for any $\epsilon\in\{+,-\}$, or in the group $U_n(p^{\frac{m}{2}})$, with $m$ even too. If $G$ is one of the whole groups ${{\rm Sp}}_n(p^m)$ or $\textrm{O}_n^{\epsilon}(p^m)$ or $U_n(p^{\frac{m}{2}})$, then it contains a scalar multiple of the identity and $H^1_{\textrm{loc}}(G,{{\mathcal{A}}}[p^l])=0$. Suppose that $G$ is a proper subgroup of one of those three groups. From the classification of the maximal subgroups of ${{\rm Sp}}_n(p^m)$ and $\textrm{O}_n^{\epsilon}(p^m)$ and $U_n(p^{\frac{m}{2}})$ (see [@KL], in particular Table 3.5B, Table 3.5C, Table 3.5D and Table 3.5E), we have that $\textrm{O}_n^{\epsilon}(p^m)$ and $U_n(p^{\frac{m}{2}})$ do not contain groups of class ${{\mathcal{C}}}_8$ and that the subgroups of ${{\rm Sp}}_n(p^m)$ of class ${{\mathcal{C}}}_8$ are $\textrm{O}_n^{\epsilon}(p^m)$ themselves, where $\epsilon \in \{+,-\}$ (and $n\geq 4$). Since we are assuming that $G$ is strictly contained in one of those three groups, then it is a subgroup of class ${{\mathcal{C}}}_i$, for some $i\neq 8$. We get the conclusion by the same arguments used for those classes of groups.
For every $n$, there is a finite number of subgroups of ${{\rm GL}}_n(p^m)$ of type ${{\mathcal{C}}}_9$. Unfortunately, as said above, for $n>12$ an explicit classification of those groups is not known. Anyway, there exists a prime $p_{h'}$, namely the greatest prime dividing the order of any of those subgroups of type ${{\mathcal{C}}}_9$. If $G$ is of class ${{\mathcal{C}}}_9$, then $H^1_{\textrm{loc}}(G,{{\mathcal{A}}}[q])=0$, for all $p>p_{h'}$.
Let $p_{h}:=\textrm{max}\{p_{\bar{h}},p_{h-1},p_{h'}\}$. Then $H^1_{\textrm{loc}}(G,{{\mathcal{A}}}[p^l])=0$, for all $p>p_h$.
The case of abelian varieties of dimension 4
--------------------------------------------
To complete the proof of Theorem \[P1\_bis\] (and consequently of Theorem \[P17\_gal\], Corollary \[P17\] and Theorem \[P\_Sha\]), we have to show $p_2=3$. By the proof of Proposition \[P1\_prop1\], it is clear that $p_h$ depends only on $p_{h-1}$ and on the subgroups of ${{\rm SL}}_{2^{h+1}}(q)$ of class ${{\mathcal{C}}}_6$ and of class ${{\mathcal{C}}}_9$. In the next lemma we recall the classification of the maximal subgroups of ${{\rm SL}}_8(q)$ of those two classes (see [@BHR]).
\[M+B+L-lem\] Let $q=p^m$, where $p$ is an odd prime and $m$ is a positive integer. Let $d:=\gcd(q-1,8)$. The only maximal subgroup of ${{\rm SL}}_8(q)$ of type ${{\mathcal{C}}}_6$ is $(C_d\circ 2^{1+6})^{\cdot} {{\rm Sp}}_6(2)$. The maximal subgroups of ${{\rm SL}}_8(q)$ of type ${{\mathcal{C}}}_9$ are isomorphic to the following groups
(a)
: $C_4\hspace{0.1cm}^{\cdot}{{\rm PSL}}_3(4)$, for $q=p=5$;
(b)
: $C_d\circ C_4\hspace{0.1cm}^{\cdot}{{\rm PSL}}_3(4)$, for $q=p\equiv 9,21,29,41,61,69 {{\textrm{ (mod) }}}80$;
(c)
: $C_8\circ C_4\hspace{0.1cm}^{\cdot}{{\rm PSL}}_3(4).C_2$, for $q=p\equiv 1,49 {{\textrm{ (mod) }}}80$;
(d)
: $C_8\circ C_4\hspace{0.1cm}^{\cdot}{{\rm PSL}}_3(4)$, for $q=p^2$, $p\equiv \pm 3,\pm 13,\pm 27,\pm 37 {{\textrm{ (mod) }}}80$;
(e)
: $C_8\circ C_4\hspace{0.1cm}^{\cdot}{{\rm PSL}}_3(4).C_2$, for $q=p^2$, $p\equiv \pm 7,\pm 17,\pm 23,\pm 33 {{\textrm{ (mod) }}}80$.
**Proof of Theorem \[P17\] for $\mathbf{h=2}$.** As noted above, by Theorem \[P1\_bis\], if the group $G$ lies in one of the classes ${{\mathcal{C}}}_i$, for $i\notin \{1,6,9\}$, then $H^1_{\textrm{loc}}(G, {{\mathcal{A}}}[p^l])=0$, for all $p>3$. Assume that $G$ is of class ${{\mathcal{C}}}_6$. By Lemma \[M+B+L-lem\], we have that $G$ is a subgroup of $(C_d\circ 2^{1+6})^{\cdot} {{\rm Sp}}_6(2)$, where $d=\gcd(p^l-1,8)$. Therefore the cardinality of $G$ divides $2^{19}\cdot 3^4\cdot 5\cdot 7$. For every prime $p>3$ the $p$-Sylow subgroup of $G$ is either trivial or cyclic (this last case occurs only if $p=5$ or $p=7$). In both cases $H^1_{\textrm{loc}}(G, {{\mathcal{A}}}[q])=0$. If $G$ is a group of class ${{\mathcal{C}}}_9$, then, by Lemma \[M+B+L-lem\], the cardinality of $G$ divides $2^6|{{\rm PSL}}_3(4)|$. Since $|{{\rm PSL}}_3(4)|=2^6\cdot 3^2\cdot 5\cdot 7$, the cardinality of $G$ divides $2^{12}\cdot 3^2\cdot 5\cdot 7$. Again, for all $p>3$ the $p$-Sylow subgroup of $G$ is either trivial or cyclic (this last case occurs only if $p=5$ or $p=7$). We have $H^1_{\textrm{loc}}(G, {{\mathcal{A}}}[q])=0$. Thus $p_2=3$. $\Box$
*Acknowledgments*. I am grateful to John van Bon, Roberto Dvornicich and Gabriele Ranieri for useful discussions. I wrote the last part of this paper at the Max Planck Institute for Mathematics in Bonn. I would like to thank all people there for their kind hospitality.
[Pal]{}
<span style="font-variant:small-caps;">Aschbacher M.</span>, *On the maximal subgroups of the finite classical groups*, Invent. Math., [**76**]{} (1984), 469-514.
<span style="font-variant:small-caps;">Bašmakov M. I.</span>, *The cohomology of abelian varieties over a number field*, Russian Math. Surveys., **27** (1972) (English Translation), 25-70.
<span style="font-variant:small-caps;">Bray J. N., Holt D. F., Roney-Dougal C. M.</span>, *The maximal subgroups of the low-dimensional finite classical groups*, Cambridge University Press, Cambridge, 2013.
<span style="font-variant:small-caps;">Cassels J. W. S.</span>, *Arithmetic on curves of genus 1. III. The Tate-Šafarevič and Selmer groups.*, Proc. London Math. Soc., **12** (1962), 259-296.
<span style="font-variant:small-caps;">Cassels J. W. S.</span>, *Arithmetic on curves of genus 1. IV. Proof of the Hauptvermutung.*, J. reine angew. Math. **211** (1962), 95-112.
<span style="font-variant:small-caps;">Çiperiani M., Stix J.</span>, *Weil-Châtelet divisible elements in Tate-Shafarevich groups II: On a question of Cassels.*, J. Reine Angew. Math., **700** (2015), 175-207.
<span style="font-variant:small-caps;">Creutz B.</span>, *Locally trivial torsors that are not Weil-Châtelet divisible*, Bull. London Math. Soc., **45** (2013), 935-942.
<span style="font-variant:small-caps;">Creutz B.</span>, *On the local-global principle for divisibility in the cohomology of elliptic curve*, Math. Res. Lett., **23** no. 2 (2016), 377-387.
<span style="font-variant:small-caps;">Dvornicich R., Zannier U.</span>, *Local-global divisibility of rational points in some commutative algebraic groups*, Bull. Soc. Math. France, **129** (2001), 317-338.
<span style="font-variant:small-caps;">Dvornicich R., Zannier U.</span>, *An analogue for elliptic curves of the Grunwald-Wang example*, C. R. Acad. Sci. Paris, Ser. I **338** (2004), 47-50.
<span style="font-variant:small-caps;">Dvornicich R.</span>, <span style="font-variant:small-caps;">Zannier U.</span>, *On local-global principle for the divisibility of a rational point by a positive integer*, Bull. Lon. Math. Soc., no. **39** (2007), 27-34.
<span style="font-variant:small-caps;">Hilton P. J., Stammbach U.</span>, *A course in homological algebra*, GTM 4, Springer-Verlag, New York, 1971.
<span style="font-variant:small-caps;">Jensen C. U., Lenzing H.</span>, *Model theoretic algebra, with particular emphasis on fields, rings, modules*, Algebra, Logic and Applications Series, vol. 2, Gordon and Breach Science Publishers, London, 1989.
<span style="font-variant:small-caps;">Kleidman P. B., Liebeck M. W.</span>, *The subgroup structure of the finite classical groups*, London Math. Soc. Lecture Note Ser., 129, Cambridge University Press, Cambridge, 1990.

<span style="font-variant:small-caps;">Lang S.</span>, *Elliptic curves: diophantine analysis*, Grundlehren der Mathematischen Wissenschaften 231, Springer, 1978.
<span style="font-variant:small-caps;">Milne J. S.</span>, [*Abelian Varieties*]{}, Arithmetic Geometry (Storrs, Conn., 1984), Springer, New York, 1986, 103-150.
<span style="font-variant:small-caps;">Paladino L.</span>, *Local-global divisibility by $4$ in elliptic curves defined over ${{\mathbb Q}}$*, Annali di Matematica Pura e Applicata, no. [**189.1**]{}, (2010), 17-23.
<span style="font-variant:small-caps;">Paladino L.</span>, *Elliptic curves with ${{\mathbb Q}}({\mathcal{E}}[3])={{\mathbb Q}}(\zeta_3)$ and counterexamples to local-global divisibility by 9*, Le Journal de Théorie des Nombres de Bordeaux, Vol. **22**, n. 1 (2010), 138-160.
<span style="font-variant:small-caps;">Paladino L.</span>, *On counterexamples to local-global divisibility in commutative algebraic groups*, Acta Arithmetica, **148** no. 1, (2011), 21-29.
<span style="font-variant:small-caps;">Paladino L., Ranieri G., Viada E.</span>, *On Local-Global Divisibility by $p^n$ in elliptic curves*, Bulletin of the London Mathematical Society, **44** no. 5 (2012), 789-802.
<span style="font-variant:small-caps;">Paladino L., Ranieri G., Viada E.,</span> *On minimal set for counterexamples to the local-global principle*, Journal of Algebra, **415** (2014), 290-304.
<span style="font-variant:small-caps;">Sansuc J.-J.</span>, [Groupe de Brauer et arithmétique des groupes algébriques linéaires sur un corps de nombres. (French) \[The Brauer group and arithmetic of linear algebraic groups on a number field\]]{}, J. Reine Angew. Math. , **327** (1981), 12-80.
<span style="font-variant:small-caps;">Wong S.</span>, *Power residues on abelian varieties*, Manuscripta Math., no. [**102**]{} (2000), 129-137.
0.5cm Laura Paladino
Max Planck Institute for Mathematics
Vivatsgasse 7
53111 Bonn
Germany
e-mail address: lpaladino@mpim-bonn.mpg.de
[^1]: Partially supported by Istituto Nazionale di Alta Matematica Francesco Severi with grant “Assegno di ricerca Ing. Giorgio Schirillo”
---
abstract: 'The properties of low-mass galaxies hosting central black holes provide clues about the formation and evolution of the progenitors of supermassive black holes. In this letter, we present HSC-XD 52, a spectroscopically confirmed low-mass active galactic nucleus (AGN) at an intermediate redshift of $z\sim0.56$. We detect this object as a very luminous X-ray source coincident with a galaxy observed by the Hyper Suprime-Cam (HSC) as part of a broader search for low-mass AGN. We constrain its stellar mass through spectral energy distribution modeling to be LMC-like at $M_\star \approx 3 \times 10^9 M_\odot$, placing it in the dwarf regime. We estimate a central black hole mass of $M_\mathrm{BH} \sim 10^{6} M_\odot$. With an average X-ray luminosity of $L_X \approx 3.5 \times 10^{43}~\mathrm{erg}~\mathrm{s}^{-1}$, HSC-XD 52 is among the most luminous X-ray selected AGN in dwarf galaxies. The spectroscopic and photometric properties of HSC-XD 52 indicate that it is an intermediate redshift counterpart to local low-mass AGN.'
author:
- Goni Halevi
- Andy Goulding
- Jenny Greene
- Jean Coupon
- Anneya Golob
- Stephen Gwyn
- 'Sean D. Johnson'
- Thibaud Moutard
- Marcin Sawicki
- Hyewon Suh
- Yoshiki Toba
bibliography:
- 'dwarfAGN.bib'
title: 'HSC-XD 52: An X-ray detected AGN in a low-mass galaxy at $z\sim0.56$'
---
Introduction {#sec:intro}
============
Supermassive black holes (SMBHs) are thought to play a crucial role in galaxy evolution. SMBHs are ubiquitous in present-day massive galaxies and their properties correlate with those of their hosts. Observations of central black holes in dwarf galaxies over cosmic time may provide insight into the birth and growth of SMBHs. Simulations predict that these low-mass black holes experience relatively little growth via mergers or accretion [@2011ApJ...742...13B], allowing them to serve as indirect probes of the black holes that seed the SMBHs observed in massive galaxies today.
At present, we know little about the demographics of central black holes in low-mass galaxies, even in the local universe. The dynamical signatures of $\sim 10^5~M_\odot$ massive black holes (MBH) are found in some, but not all, galaxies within $3.5$ Mpc with $M_\star \approx 10^9-10^{10} M_\odot$ [@2019ApJ...872..104N]. The dearth of information about black holes in this regime not only limits our understanding of SMBH seeds but also impedes predictions of gravitational wave events detectable by LISA and their rates [e.g., @2019MNRAS.482.2913B].
Statistical constraints on the black hole occupation fraction require a large survey of black holes in low-mass ($M_\star \lesssim 10^{10} M_\odot$) hosts. Searches with optical emission lines [e.g., @1997ApJS..112..315H; @2007ApJ...670...92G; @2012ApJ...755..167D; @2013ApJ...775..116R; @2014AJ....148..136M], mid-infrared (IR) spectroscopy [e.g., @2009ApJ...704..439S], and radio continuum [e.g., @2014ApJ...787L..30R] have all yielded interesting samples. We focus on X-ray searches, which provide an unbiased measure of the accretion luminosity, and are relatively insensitive to obscuration.
X-ray observations are a powerful tool to study MBH demographics locally [e.g., @2009ApJ...690..267D; @2009ApJ...700.1759G; @2011ApJ...728...25M; @2012ApJ...753...38S; @2012ApJ...757..179A; @2015ApJ...805...12L; @2017ApJ...842..131S]; they place the only accretion-based constraint on the occupation fraction [$>20\%$; @2015ApJ...799...98M]. Pushing to higher redshift is enabled by ongoing deep X-ray observations [e.g., @2013ApJ...773..150S; @2014ApJ...782...22B], but it is very challenging to understand the completeness of spectroscopic samples [@2016ApJ...831..203P] and the purity of photometric samples [@2018MNRAS.478.2576M].
In this letter, we present one tantalizing object from our new search for such sources, which uses the relatively wide survey area and sensitivity of the Deep Layer of the Hyper Suprime-Cam Subaru Strategic Program [HSC-D; @2018PASJ...70S...8A] and the complementary CFHT Large Area U-band Deep Survey [CLAUDS; @Sawicki2019] to find faint low-mass galaxies (G. Halevi et al. 2019, in preparation). We conducted our search in the XMM-Newton Large-Scale Structure (XMM-LSS) field, where we used the [SExtractor]{}-based $u^*grizy$ catalog produced by the CLAUDS team [A. Golob et al., in preparation; see also § 3.1.2 in @Sawicki2019]. This catalog extends the HSC photometric sampling, enabling improved estimates of the redshift, stellar mass ($M_\star$), and star formation rate (SFR) for each source (T. Moutard et al., in preparation). To identify candidates for our sample of HSC X-ray Dwarfs (HSC-XDs), we cross-matched candidate dwarf galaxies in this catalog with the XMM-SERVS source catalog [@2018MNRAS.478.2132C].
The source presented in this letter, hereafter referred to as HSC-XD 52, is an intermediate redshift low-mass AGN observed at multiple epochs in both X-ray imaging and optical spectroscopy. The galaxy has a dwarf-like stellar mass and hosts a luminous X-ray detected AGN which is seen to be declining in activity with time. Its position, basic properties, and derived best-fit parameters are provided in Table \[tab:props\].
A luminous X-ray source in a dwarf galaxy {#sec:props}
=========================================
HSC-XD 52 is of particular interest because the available data not only confirms its (time-evolving) AGN nature but also strongly implies a low stellar mass ($<10^{10} M_\odot$; assuming a Chabrier initial mass function). The HSC imaging shows a marginally extended red galaxy (top left panel of Fig. \[fig:ims+spec\]; Table \[tab:props\]). The XMM-SERVS data reveals a luminous X-ray source (top right panel of Fig. \[fig:ims+spec\]) spatially coincident with the optically detected galaxy. Its X-ray properties are described in §\[sec:BH\]. The three epochs of XMM observations spanning 2006 to 2017 show temporal variability indicative of a fall in X-ray luminosity.
The Sloan Digital Sky Survey [SDSS; @2017AJ....154...28B] obtained a spectrum of HSC-XD 52 despite its relative optical faintness ($i= 21.5$ mag) because it was targeted as part of the XMM-XXL follow-up program. We additionally acquired a spectrum on July 27 2019 with the Magellan Echellette (MagE) Spectrograph, a moderate-resolution ($R\sim4100$ for a $1''$ slit) optical echellette mounted on the Clay Magellan II telescope. While the SDSS spectrum from 2013 suggests the presence of broad lines indicative of accretion onto an MBH, this evidence is not present in the 2019 spectrum despite its higher resolution and signal-to-noise ratio (SNR). The broadband spectral energy distribution (SED) presented in Fig. \[fig:sed\] and discussed in §\[sec:Mstar\] supports the classification of this source as an AGN in a dwarf galaxy, as do its early epoch X-ray properties and the ratios of its narrow emission lines. The combination of these properties points toward the characterization of HSC-XD 52 as a higher redshift analog of the $z\sim0$ low-mass AGN POX 52 [@2004ApJ...607...90B] and NGC 4395 [@2003ApJ...588L..13F], though one with an accretion rate that is decreasing with time.
![HSC-D $gri$ composite image (top left), XMM full-band image (top right), and smoothed MagE spectrum (bottom) of HSC-XD 52. The images span 25$''$ on each side. The overlaid grid has spacings of 5$''$. The XMM image uses an inverted logarithmic color scale. The white cross marks the XMM centroid, whereas the red circle with a radius of 1.3$''$ is centered on the HSC position. We note that the H$\beta$ emission line is blended with a strong sky line; its apparent broadness and asymmetry are not intrinsic.[]{data-label="fig:ims+spec"}](hsc_im_coords.png "fig:"){width="0.48\linewidth"} ![HSC-D $gri$ composite image (top left), XMM full-band image (top right), and smoothed MagE spectrum (bottom) of HSC-XD 52. The images span 25$''$ on each side. The overlaid grid has spacings of 5$''$. The XMM image uses an inverted logarithmic color scale. The white cross marks the XMM centroid, whereas the red circle with a radius of 1.3$''$ is centered on the HSC position. We note that the H$\beta$ emission line is blended with a strong sky line; its apparent broadness and asymmetry are not intrinsic.[]{data-label="fig:ims+spec"}](xmm_im_inv_scale.png "fig:"){width="0.48\linewidth"} ![HSC-D $gri$ composite image (top left), XMM full-band image (top right), and smoothed MagE spectrum (bottom) of HSC-XD 52. The images span 25$''$ on each side. The overlaid grid has spacings of 5$''$. The XMM image uses an inverted logarithmic color scale. The white cross marks the XMM centroid, whereas the red circle with a radius of 1.3$''$ is centered on the HSC position. We note that the H$\beta$ emission line is blended with a strong sky line; its apparent broadness and asymmetry are not intrinsic.[]{data-label="fig:ims+spec"}](magespectrum_morelines.png "fig:"){width="\linewidth"}
[cccc]{} 02:24:15.76 & -05:27:20.02 & XMM03471 & 0.561\
& & &\
& & &\
21.479 & 0.542 & $(4.5\pm0.6)$ & $(2.8\pm0.2)$\
$\pm0.008$ & $\pm0.011$ & $\times 10^{-15}$ & $\times 10^{-14}$\
& & &\
& & &\
$3.0\pm0.7$ & $5.76\pm0.56$ & 40$^{\circ}$ & $9.56\pm0.48$\
& & &\
& & &\
-0.75 & 1.04 & $1.1\times 10^{-15}$ & $5.4\times 10^{-16}$\
Constraining the Stellar Mass {#sec:Mstar}
=============================
Our initial estimate of the stellar mass, $M_\star \approx 4 \times 10^9 M_\odot$, was derived from photometric template-fitting of the HSC $grizy$, CLAUDS $u^\star$, and GALEX data (see T. Moutard et al., in preparation), following the procedure described by . We utilized additional photometry from [*Spitzer*]{} IRAC and MIPS, [*Herschel*]{} SPIRE, and the VISTA Deep Extragalactic Observations survey [see @2018PASJ...70S...4A and references therein] to further constrain $M_\star$ using a multi-wavelength SED fit with the Code Investigating GALaxy Emission. In calculating the SED model, we used the stellar population models of @2003MNRAS.344.1000B, the @2003PASP..115..763C initial mass function, an exponentially declining delayed star-formation history, a dusty star-forming template from @2014ApJ...784...83D, and the AGN torus models of @2006MNRAS.366..767F allowing for a range of optical depths and inclinations. We fixed the extinction using the Balmer decrement from the H$\alpha$/H$\beta$ ratio ($A_V=1.3$; see §\[sec:BH\]), though we recover a similar $A_V$ when leaving it as a free parameter. The observed SED and CIGALE models are shown in Fig. \[fig:sed\]. The best-fit model, for which we also show the residuals (middle panel), has $\chi^{2}_{n-1} = 2.87$, indicating a formally poor fit. However, this is expected because we exclude nebular/AGN emission lines in the templates. The photometric points that contribute most to raising the $\chi^2$ value are the HSC-$i$ and HSC-$Y$ bands, which fall precisely at the wavelengths including the high-equivalent width \[OIII\] and H$\alpha$ lines. These are under-estimated by our SED model.
The best-fit stellar model has $M_\star \approx (3.0\pm 0.7) \times 10^{9} M_\odot$, consistent with our initial estimate from CLAUDS, stellar population age $\sim0.5$ Gyr, and SFR $\approx (5.76\pm0.56)~ M_\odot~\mathrm{yr}^{-1}$. The AGN component dominates the rest-frame optical emission (see bottom panel of Fig. \[fig:sed\]), favoring an intermediate inclination of $\Psi \approx 40^{\circ}$, placing HSC-XD 52 closer to the Type 1 regime (i.e. mostly unobscured).
To obtain a conservative $M_\star$ upper limit, we ran CIGALE with the same input ranges but required a fixed single stellar population (SSP) of 8 Gyr (the age of the Universe at $z\approx 0.5$). The upper limit of $M_\star < 1.1 \times 10^{10} M_\odot$ is shown as a gray dashed line in Fig. \[fig:sed\]. Furthermore, we found that a fixed SSP of 30 Myr (i.e., a very young stellar population; dotted gray line in Fig. \[fig:sed\]) fails to reproduce the observed photometry.
{width="\linewidth"}
Our case for the low stellar mass of the host galaxy is strengthened by spectral indicators of its low metallicity. In particular, both the SDSS spectrum and the higher resolution, higher SNR MagE spectrum show a complete lack of evidence for \[NII\] $\lambda$6548, $\lambda$6583 lines (see top panels of Fig. \[fig:fits\]). We can put a limit on the line strength compared to H$\alpha$ from the MagE spectrum of log(\[NII\]/H$\alpha$) $\lesssim -1.8$. From the models of @2006MNRAS.371.1559G, we then place the metallicity at $\lesssim 0.25Z_\odot$, where $Z_\odot$ represents solar metallicity. Given the well-known stellar mass-metallicity relation, we can conclude by this entirely complementary line of evidence that HSC-XD 52 indeed qualifies as a low-mass galaxy.
Evidence for a central black hole {#sec:BH}
=================================
X-ray detection
---------------
Combining observations from three separate epochs, with the first dominating the signal, HSC-XD 52 is strongly detected in the XMM soft (SB; $0.5-2$ keV), hard (HB; $2-10$ keV) and full (FB; $0.5-10$ keV) bands, with $\approx$ $109$, $133$, and $245$ photon counts (in the PN+MOS1+MOS2 detectors), respectively [@2018MNRAS.478.2132C]. It has a FB luminosity of $L_X \approx 3.5 \times 10^{43}~\mathrm{erg}~\mathrm{s}^{-1}$ and an SNR of 185 (143) in the PN (M1) detector (see Table \[tab:props\]). The derived hardness ratio of $$\mathrm{HR} \equiv \frac{H-S}{H+S} = 0.17\pm 0.07,$$ where $H$ ($S$) is the total (all three detectors) net counts divided by the total exposure time in the HB (SB), is consistent with a Type I AGN at the redshift of HSC-XD 52. This XMM source is spatially coincident with the HSC optical source. Fig. \[fig:ims+spec\] provides the XMM image with the X-ray and HSC centroids marked as a white cross and a red circle ($1.3''$), respectively.
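The hardness ratio above can be computed directly from the band-summed net count rates; a minimal sketch (the rates below are illustrative placeholders chosen to reproduce $\mathrm{HR}\approx 0.17$, not the measured values for HSC-XD 52):

```python
def hardness_ratio(hard_rate, soft_rate):
    """HR = (H - S) / (H + S), where H and S are the net count rates
    (total counts over all three EPIC detectors divided by the total
    exposure time) in the hard (2-10 keV) and soft (0.5-2 keV) bands."""
    return (hard_rate - soft_rate) / (hard_rate + soft_rate)

# Illustrative rates only (counts per second):
print(round(hardness_ratio(0.0141, 0.0100), 2))  # 0.17
```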
X-ray emission produced by stellar processes, such as that from X-ray binaries (XRBs), could mimic the accretion signatures of AGN. From @2010ApJ...724..559L, the relation between XRB $L_X$, $M_\star$, and SFR is $$L_{X,\mathrm{XRB}} = (\alpha M_\star + \beta \mathrm{SFR})~ \mathrm{erg}~\mathrm{s}^{-1},$$ with $\alpha = (9.05 \pm 0.37) \times 10^{28}~M_\odot^{-1}$ and $\beta = (1.62 \pm 0.22) \times 10^{39}~(M_\odot~\mathrm{yr}^{-1})^{-1}$. We have focused on high-mass XRBs because these dominate at $L_X \gtrsim 10^{39}~\mathrm{erg}~\mathrm{s}^{-1}$ [@2010ApJ...724..559L]. Our best estimates of $M_\star$ and SFR (Table \[tab:props\]) generate an expected luminosity due to XRBs of $L_{X,\mathrm{XRB}} \approx 9.6 \times 10^{39}~\mathrm{erg}~\mathrm{s}^{-1}$, several orders of magnitude lower than that observed. Even adopting conservative upper limit values for SFR and $M_\star$ results in $L_{X,\mathrm{XRB}} \ll 10^{43}~\mathrm{erg}~\mathrm{s}^{-1}$.
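Plugging the best-fit values from Table \[tab:props\] into this relation reproduces the quoted estimate; a minimal sketch:

```python
# Expected X-ray luminosity from X-ray binaries (Lehmer et al. 2010 scaling):
# L_X,XRB = alpha * M_star + beta * SFR, in erg/s.
ALPHA = 9.05e28  # erg/s per M_sun (stellar-mass term)
BETA = 1.62e39   # erg/s per (M_sun/yr) (SFR term, dominated by high-mass XRBs)

def l_x_xrb(m_star, sfr):
    """Expected XRB luminosity [erg/s] for m_star in M_sun, sfr in M_sun/yr."""
    return ALPHA * m_star + BETA * sfr

lx = l_x_xrb(3.0e9, 5.76)  # best-fit M_star and SFR for HSC-XD 52
print(f"{lx:.1e}")         # ~9.6e+39 erg/s, several dex below L_X ~ 3.5e43
```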
HSC-XD 52 was observed with XMM three separate times spanning over a decade: first on July 9 2006, next on January 1 2001, and finally on January 13 2017. The first two observations yielded fluxes (in cgs) of $1.35\pm0.28 \times 10^{-14}$ and $5.92\pm1.45 \times 10^{-15}$ from the PN detector alone. During the final epoch of observation, the source was detected only by the MOS2 detector because it fell on a dead chip of the MOS1 detector, and in a chip gap on the PN detector. Thus, this final observation yields only a weak upper limit on the flux (in cgs) of $<1.10 \times 10^{-14}$. In Fig. \[fig:LIRLx\], we compare HSC-XD 52 at the three different epochs to other X-ray luminous AGN observed with high spatial resolution ground-based mid-IR imaging [@2015MNRAS.454..766A] and low-mass AGN from @2017ApJ...838...26H. HSC-XD 52 falls on the empirical correlation (within the uncertainties) measured by @2015MNRAS.454..766A at least for the first two epochs. We also find the properties of HSC-XD 52 at these epochs to be consistent with other observed correlations (e.g. $L_X$–\[OIII\]), further bolstering our confidence in its characterization as an AGN. Comparing HSC-XD 52 with itself over the three epochs, it is clear that its X-ray luminosity is fading over time, suggesting a corresponding fall in activity.
![Observed correlation between IR luminosity at $\lambda = 12~\mu \mathrm{m}$ and X-ray luminosity at $2-10$ keV with the best-fit from @2015MNRAS.454..766A (dashed blue line). $L(12~\mu \mathrm{m})$ is not measured directly but extrapolated using a power-law fit to the IRAC and MIPS data. We include data compiled from @2015MNRAS.454..766A (all types of AGN; grey open symbols) and @2017ApJ...838...26H (low-mass AGN; black crosses). Triangles indicate upper limits. Cyan symbols represent low-redshift analogs to HSC-XD 52, which is shown at early, intermediate, and late epochs as a red circle, square, and triangle (upper limit on X-ray luminosity), respectively.[]{data-label="fig:LIRLx"}](LIR_vs_Lx_evol.png){width="\linewidth"}
To derive a lower limit on $M_{\rm BH}$, we can assume HSC-XD 52 is radiating at the Eddington luminosity when its activity level is highest. We adopt the bolometric luminosity determined by the best-fit SED model, $L_\mathrm{AGN} = 1.20 \times 10^{44}~\mathrm{erg}~\mathrm{s}^{-1}$, which is also consistent with applying a typical bolometric correction to the observed $L_X$ at this earliest epoch. This yields a lower limit of $$\begin{aligned}
\nonumber M_\mathrm{BH} &\gtrsim \left(\frac{L_\mathrm{AGN}}{1.26 \times 10^{38}~\mathrm{erg}~\mathrm{s}^{-1}} \right) M_\odot \\
&\gtrsim 9.5 \times 10^{5} M_\odot. \label{eq:Medd}\end{aligned}$$
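The Eddington argument above is a one-line computation; a minimal sketch using only the numbers quoted in the text:

```python
# Lower bound on M_BH assuming Eddington-limited accretion:
#   L_Edd = 1.26e38 * (M_BH / M_sun) erg/s  =>  M_BH >= L_AGN / 1.26e38 M_sun
L_AGN = 1.20e44             # erg/s, best-fit bolometric luminosity
M_BH_min = L_AGN / 1.26e38  # in solar masses
print(f"M_BH >~ {M_BH_min:.2g} M_sun")   # ~9.5e5 M_sun
```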
Spectral diagnostics
--------------------
An SDSS spectrum from November 9 2013 is publicly available for this source. In addition, we acquired a spectrum with MagE on July 27 2019. The SDSS spectrum is dominated by strong \[OIII\] and Balmer emission lines. Our modeling of these lines, especially H$\alpha$ and H$\beta$, favors a broad-line component, suggesting the presence of an AGN contributing to the observed flux. However, the more recently acquired MagE spectrum, which has better resolution and higher SNR, does not exhibit broad lines. In Fig. \[fig:fits\], we show zoom-ins of the \[OIII 4959,5007\] doublet from the MagE spectrum (bottom panel) and the H$\alpha$ lines (top panels) from both the SDSS (left) and MagE (right) spectra. In red, we show our best-fit models for these lines, with dashed lines representing single-component Gaussians and solid lines representing composite Gaussians consisting of both broad and narrow components. While the SDSS spectrum favors some broad component in the H$\alpha$ line, the MagE H$\alpha$ line is consistent with the \[OIII\] doublet. We note that the H$\beta$ emission is blended with a strong sky line in the MagE spectrum, making it unreliable for an accurate line profile measurement.
Using the SDSS spectrum, the only one in which we find broad H$\alpha$ and H$\beta$, we measure a FWHM of $\mathrm{FWHM}_{\mathrm{H}\alpha} = \mathrm{FWHM}_{\mathrm{H}\beta}\approx 1076~\mathrm{km}~\mathrm{s}^{-1} $. From this spectrum, we also measure the continuum luminosity at $\lambda=5100$ Å$~$of $L_{5100} \approx 1.8 \times 10^{43}~\mathrm{erg}~\mathrm{s}^{-1}$. Our SED modeling suggests that at this wavelength in the continuum, the AGN emission dominates over the starlight by a factor of $\approx 5$ (bottom panel of Fig. \[fig:sed\]). We can then apply the virial formula presented as eqn. (5) of @2005ApJ...630..122G to estimate an effective upper limit on the black hole mass of $M_{\rm BH} \lesssim 1.7 \times 10^{6} M_\odot$. While this mass estimate is consistent with that based on the Eddington argument, we emphasize that it rests on a somewhat equivocal broad-line detection.
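For concreteness, the virial estimate can be sketched with the commonly used $L_{5100}$-based single-epoch relation of Greene & Ho (2005). The normalization and exponents below are the published best-fit values as we recall them, not reproduced from this paper, so treat them as an assumption:

```python
# Single-epoch virial BH mass from the 5100 A continuum luminosity and the
# broad-line FWHM (Greene & Ho 2005 style relation; coefficients assumed):
#   M_BH = 4.4e6 * (L5100 / 1e44)^0.64 * (FWHM / 1000 km/s)^2.06  M_sun
L5100 = 1.8e43   # erg/s, continuum luminosity measured from the SDSS spectrum
fwhm = 1076.0    # km/s, measured broad-line FWHM

M_BH = 4.4e6 * (L5100 / 1e44) ** 0.64 * (fwhm / 1e3) ** 2.06
print(f"M_BH <~ {M_BH:.2g} M_sun")
```

With these inputs the relation returns $\approx 1.7 \times 10^{6}~M_\odot$, matching the upper limit quoted in the text.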
As an additional verification of HSC-XD 52’s AGN nature, we also measured the \[OI 6300\] emission line. The ratios \[OI 6300\]/H$\alpha$ and \[OIII 5007\]/H$\beta$ allow us to place the source on an emission line diagnostic diagram, where it falls securely within the Seyfert region [e.g. @2006MNRAS.372..961K] during the epoch of the SDSS spectrum.
![Zoom-ins of the SDSS (top left) and MagE (top right and bottom) spectra (black lines) with fitted models of important emission lines: H$\alpha$ (top panels) and the \[OIII 4959,5007\] doublet (bottom). We model each line (or pair of lines, in the case of \[OIII\]) as a two-component Gaussian (solid red line) or as a single narrow-component Gaussian (dashed red line). In the H$\alpha$ zoom-ins (top), we also label where the \[NII\] lines would fall if they were present. For the SDSS spectrum (top left), we overplot the sky spectrum scaled down by a factor of 10 (cyan line). None of these lines is highly contaminated, and H$\alpha$ in particular falls conveniently between sky lines.[]{data-label="fig:fits"}](sdss_spec_Ha_withsky.png "fig:"){height="3.7289cm"} ![Zoom-ins of the SDSS (top left) and MagE (top right and bottom) spectra (black lines) with fitted models of important emission lines: H$\alpha$ (top panels) and the \[OIII 4959,5007\] doublet (bottom). We model each line (or pair of lines, in the case of \[OIII\]) as a two-component Gaussian (solid red line) or as a single narrow-component Gaussian (dashed red line). In the H$\alpha$ zoom-ins (top), we also label where the \[NII\] lines would fall if they were present. For the SDSS spectrum (top left), we overplot the sky spectrum scaled down by a factor of 10 (cyan line). None of these lines is highly contaminated, and H$\alpha$ in particular falls conveniently between sky lines.[]{data-label="fig:fits"}](mage_spec_Ha_final.png "fig:"){height="3.7289cm"} ![Zoom-ins of the SDSS (top left) and MagE (top right and bottom) spectra (black lines) with fitted models of important emission lines: H$\alpha$ (top panels) and the \[OIII 4959,5007\] doublet (bottom). We model each line (or pair of lines, in the case of \[OIII\]) as a two-component Gaussian (solid red line) or as a single narrow-component Gaussian (dashed red line). In the H$\alpha$ zoom-ins (top), we also label where the \[NII\] lines would fall if they were present. For the SDSS spectrum (top left), we overplot the sky spectrum scaled down by a factor of 10 (cyan line). None of these lines is highly contaminated, and H$\alpha$ in particular falls conveniently between sky lines.[]{data-label="fig:fits"}](mage_spec_OIII.png "fig:"){width="\linewidth"}
Discussion and conclusion {#sec:end}
=========================
We have presented photometric and spectroscopic observations of HSC-XD 52, an object identified as part of a new search for X-ray selected low-mass AGN in the HSC survey. Our analysis suggests that HSC-XD 52 has a stellar mass of $M_\star \approx (3.0\pm 0.7) \times 10^{9} M_\odot$ and hosts a luminous accreting black hole with $M_\mathrm{BH} \approx 10^{6} M_\odot$ and $L_X \approx 3.5 \times 10^{43}~\mathrm{erg}~\mathrm{s}^{-1}$, though its X-ray luminosity (and thus accretion rate) is variable and appears to be decreasing with time. Its properties resemble those of known low-mass AGN hosts in the local Universe, including POX 52 and NGC 4395. The detection of HSC-XD 52 provides convincing evidence for the existence of luminous active MBHs in low-mass galaxies at this higher redshift. Further observations, particularly temporal monitoring in rest-frame optical, could reveal continuum variability and thus shed more light on the evolution of AGN activity.
By requiring an unambiguous XMM detection, our methodology selects for the most luminous sources at intermediate redshifts. This is clear when we compare HSC-XD 52 to other, more local low-mass AGN including POX 52 and NGC 4395. As a direct consequence of our search method and the source’s higher redshift ($z\approx 0.56$), HSC-XD 52 lies on the brighter end of observed correlations (e.g. between mid-IR and X-ray luminosities; Fig. \[fig:LIRLx\]). Outside of the very local Universe, it is only for sources like this one that we can obtain a robust determination of the galaxy’s low stellar mass, a confirmation of its AGN nature, and a well-constrained black hole mass.
The best-fit model to the observed correlation between $M_\star$ and $M_\mathrm{BH}$ for local broad-line AGN of @2015ApJ...813...82R predicts $M_\mathrm{BH} \approx 7.1 \times 10^{5} M_\odot$ for HSC-XD 52, which is consistent with our estimate of the BH mass given the 0.55 dex scatter. With a larger sample of similar objects, we can begin to explore the relationship between $M_\mathrm{BH}$ and $M_\star$ for low-mass galaxies at intermediate redshifts in earnest.
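The prediction from the Reines & Volonteri (2015) relation can be reproduced directly. The coefficients below are the published best-fit values for local broad-line AGN as we recall them, quoted here as an assumption rather than from this paper:

```python
import math

# Predicted BH mass from the Reines & Volonteri (2015) M_BH-M_star relation
# for local broad-line AGN (coefficients assumed):
#   log10(M_BH / M_sun) = a + b * log10(M_star / 1e11 M_sun)
a, b = 7.45, 1.05
M_star = 3.0e9   # M_sun, stellar mass of HSC-XD 52

M_BH = 10 ** (a + b * math.log10(M_star / 1e11))
print(f"M_BH ~ {M_BH:.2g} M_sun")
```

This recovers $\approx 7.1 \times 10^{5}~M_\odot$, the value quoted above.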
To place meaningful constraints on statistical properties such as the occupation fraction as a function of redshift, we require a large sample with well-understood selection effects and biases, but significant challenges remain. While spectroscopic redshifts are cleaner, they impose a selection function on the samples used in past studies. Photometric redshifts are more inclusive, but at the cost of admitting undesirable contaminants, among other drawbacks (see details in G. Halevi et al. 2019, in preparation). Furthermore, probing the regime of lower luminosity X-ray sources requires unambiguously discriminating between stellar processes (i.e. XRBs) and accretion onto an MBH as the origin of the X-rays. This becomes increasingly difficult at $L_X \lesssim 10^{40}~\mathrm{erg}~\mathrm{s}^{-1}$, which presents a significant obstacle to developing a clean sample of low-mass AGN that is representative of the underlying population while still covering a sufficiently large volume.
In the 2030s, the launch of the next-generation X-ray observatory Lynx [@2018arXiv180909642T] will enable high-SNR X-ray spectra of AGN in dwarf galaxies and, with its high spatial resolution, localization of the X-ray emission, thus breaking the degeneracy between emission from AGN and stellar processes. Additionally, its sensitivity will facilitate X-ray detections of SMBH seeds at cosmological distances. Meanwhile, the space-based gravitational wave detector LISA [@2017arXiv170200786A] will open the door to multi-messenger studies of early SMBHs. Together, these missions promise to illuminate the mysteries of how MBHs form and grow, and in turn, how their host galaxies evolve.
We thank the anonymous referee for their helpful comments. The HSC collaboration includes the astronomical communities of Japan and Taiwan, and Princeton University. The HSC instrumentation and software were developed by the National Astronomical Observatory of Japan (NAOJ), the Kavli Institute for the Physics and Mathematics of the Universe (Kavli IPMU), the University of Tokyo, the High Energy Accelerator Research Organization (KEK), the Academia Sinica Institute for Astronomy and Astrophysics in Taiwan (ASIAA), and Princeton University. Funding was contributed by the FIRST program from the Japanese Cabinet Office, the Ministry of Education, Culture, Sports, Science and Technology (MEXT), the Japan Society for the Promotion of Science (JSPS), Japan Science and Technology Agency (JST), the Toray Science Foundation, NAOJ, Kavli IPMU, KEK, ASIAA, and Princeton University. This work is based on observations obtained with MegaPrime/MegaCam, a joint project of CFHT and CEA/DAPNIA, at the Canada-France-Hawaii Telescope (CFHT) which is operated by the National Research Council (NRC) of Canada, the Institut National des Sciences de l’Univers of the Centre National de la Recherche Scientifique (CNRS) of France, and the University of Hawaii. This research uses data obtained through the Telescope Access Program (TAP), which has been funded by the National Astronomical Observatories, Chinese Academy of Sciences, and the Special Fund for Astronomy from the Ministry of Finance. This work uses data products from TERAPIX and the Canadian Astronomy Data Centre. It was carried out using resources from Compute Canada and Canadian Advanced Network For Astrophysical Research (CANFAR) infrastructure. Based in part on data products from observations made with ESO Telescopes at the La Silla Paranal Observatory as part of the VISTA Deep Extragalactic Observations [VIDEO @2013MNRAS.428.1281J] survey, under program ID 179.A-2006 (PI: Jarvis).
Support for the design and construction of the Magellan Echellette Spectrograph was received from the Observatories of the Carnegie Institution of Washington, the School of Science of the Massachusetts Institute of Technology, and the National Science Foundation in the form of a collaborative Major Research Instrument grant to Carnegie and MIT (AST0215989). Based in part on observations made with the Spitzer Space Telescope, which is operated by the Jet Propulsion Laboratory, California Institute of Technology under a contract with NASA. Funding for SDSS IV has been provided by the Alfred P. Sloan Foundation, the U.S. Department of Energy Office of Science, and the Participating Institutions. We recognize the cultural role and reverence of the Maunakea summit within the indigenous Hawaiian community; we are grateful for the access that enables observations crucial to this work. *Software:* `Astropy` [^1]
[^1]: http://www.astropy.org
---
abstract: 'We introduce an alternative approach to the third order helicity of a volume preserving vector field $B$, which leads us to a lower bound for the $L^2$-energy of $B$. The proposed approach exploits the correspondence between the Milnor $\bar{\mu}_{123}$-invariant for 3-component links and the homotopy invariants of maps to configuration spaces, and we provide a simple geometric proof of this fact in the case of Borromean links. Based on these connections we develop a formulation for the third order helicity of $B$ on invariant *unlinked* domains of $B$, and provide an Arnold-style ergodic interpretation of this invariant as an average asymptotic $\bar{\mu}_{123}$-invariant of orbits of $B$.'
author:
- 'R. Komendarczyk[^1] [^2]'
title: |
The third order helicity of magnetic fields\
via link maps.
---
Introduction
============
The purpose of this paper is to develop a particular formula for the third order helicity on certain invariant sets of a volume preserving vector field $B$. The third order helicity, [@Khesin98], is an invariant of $B$ under the action of volumorphisms isotopic to the identity (denoted here by $\text{\rm SDiff}_0(M)$). The importance of such invariants stems from the basic fact that the evolution of the vorticity in ideal hydrodynamics, or of the magnetic field $B_0$ in ideal magnetohydrodynamics (MHD), occurs along a path $t\longrightarrow g(t)\in \text{\rm SDiff}_0(M)$, [@Khesin98 p. 176]. Namely, $B(t)=g_\ast(t)B_0$, which is a direct consequence of Euler’s equations: $$\label{eq:Eulers-eq}
\frac{d}{dt} B+[v, B]=0,\qquad \frac{d}{dt} g(t)=v .$$ One often says that the magnetic field $B$ is *frozen in* the velocity field $v$ of the plasma, and the action by $\text{\rm SDiff}_0(M)$ is frequently referred to as *frozen-in-field* deformations. A fundamental example of such an invariant defined for a general class in $\text{\rm SVect}(M)$ is the *helicity* $\mathsf{H}_{12}(B_1,B_2)$, defined for a pair of vector fields $B_1$ and $B_2$ on $M=S^3$ or a homology sphere. Helicity was first introduced by Woltjer, [@Woltjer58], in the context of magnetic fields, and is a measure of how orbits of $B_1$ and $B_2$ link with each other. This topological interpretation of helicity was made precise by Arnold, who introduced the concept of the average asymptotic linking number of a volume preserving vector field $B$ on $M$, [@Arnold86]. The subject has been further investigated in [@Akhmetiev05; @Berger90; @Laurence-Stredulinsky00b; @Hornig04] (see [@Khesin98] for additional references), where the authors approach *higher helicities* via the Massey products under various assumptions about the vector fields or their domains (the work in [@Gambaudo-Ghys97; @Spera06; @Verjovsky94] concerns yet other approaches to the problem). Extensions of the helicity concept to higher dimensional foliations can be found in [@Khesin92; @Riviere02; @Kotschick-Vogel03] and recently in [@Cantarella-Parsley09].
In this paper we present an alternative to these approaches, which is a natural extension of the notion of the linking number as a degree of a map, and exploits relations to the homotopy theory of certain maps associated to the link. The paper has two parts.
In the first part we show how the Milnor $\bar{\mu}_{123}$-invariant: $\bar{\mu}_{123}(L)$ of a parametrized Borromean link $L$ in $S^3$ can be obtained as a Hopf degree of an associated map to the configuration space of three points in $S^3$. The presented approach to link homotopy invariants of 3-component links has been proposed in [@Kohno02] for $n$-component links in ${\mbox{\bbb R}}^3$, and conveniently simplified for 3-component links in $S^3$ in the joint work [@Deturck-Gluck-Melvin-Shonkwiler08], where the full correspondence between $\bar{\mu}_{123}(L)$ and the Pontryagin-Hopf degree is proved. In Section \[sec:mu-and-hopf\] we present the original proof of this correspondence in the Borromean case, which is sufficient for our purposes.
The second part of the paper discusses a new definition of the third order helicity, denoted here by $\mathsf{H}_{123}(B;\mathcal{T})$, where $B$ is a volume preserving vector field having an invariant unlinked domain $\mathcal{T}\subset S^3$. The simplest such domains are three invariant handlebodies in $S^3$ which have pairwise unlinked cycles in the first homology. This includes the case of Borromean flux tubes already investigated in [@Berger90; @Laurence-Stredulinsky00b]. In Section \[sec:ergodic\] we develop an ergodic formulation of $\mathsf{H}_{123}(B;\mathcal{T})$ as an average asymptotic $\bar{\mu}_{123}$-invariant, in the spirit of Arnold’s average asymptotic linking number, which allows us to extend the definition of the invariant to topologically more complicated unlinked domains. We also derive, in Section \[sec:energy\], a lower bound for the $L^2$-energy of $B$ in terms of $\mathsf{H}_{123}(B;\mathcal{T})$.
*Acknowledgements:* The inspiration for the presented approach to the link homotopy invariants comes from the paper of Toshitake Kohno [@Kohno02], and I am grateful to him for the valuable e-mail correspondence. I have enjoyed conversations with many colleagues at the University of Pennsylvania, who have influenced this work. I wish to thank Herman Gluck for weekly meetings and his interest in this project, and Frederic Cohen, Dennis DeTurck, Charlie Epstein, Paul Melvin, Tristan Rivi$\grave{\rm e}$re, Clay Shonkwiler, Jim Stasheff, and David Shea Vick for their valuable input. I am also grateful to my advisor Robert Ghrist, who introduced me to the subject a long time ago. After posting recent joint results in [@Deturck-Gluck-Melvin-Shonkwiler08] we were informed by Paul Kirk about the related works of Ulrich Koschorke in [@Koschorke97; @Koschorke04] on homotopy invariants of link maps and Milnor $\bar{\mu}$-invariants. The author acknowledges financial support of DARPA, \#FA9550-08-1-0386.
The Milnor $\bar{\mu}_{123}$-invariant and the Hopf degree {#sec:mu-and-hopf}
================================================
The $\bar{\mu}$-invariants of $n$-component links in $S^3$ have been introduced by Milnor in [@Milnor54; @Milnor57] as invariants of links up to *link homotopy*. Recall that a link homotopy is a deformation of a link in $S^3$ which allows each component to pass through itself but not through a different component. Clearly, this is a weaker equivalence than the equivalence of links up to *isotopy*, where components are not allowed to pass through themselves at all. The fundamental example of a $\bar{\mu}$-invariant is the linking number (denoted by $\bar{\mu}_{12}$), which is a complete invariant of 2-component links up to link homotopy. In the realm of 3-component links the relevant invariants are the pairwise linking numbers $\bar{\mu}_{12}$, $\bar{\mu}_{13}$, $\bar{\mu}_{23}$, and the third invariant $\bar{\mu}_{123}$ in $\mathbb{Z}_{\text{gcd}(\bar{\mu}_{12}, \bar{\mu}_{13}, \bar{\mu}_{23})}$, which is a well defined integer if and only if $\bar{\mu}_{12}=\bar{\mu}_{13}=\bar{\mu}_{23}=0$. In the second part of the paper we will interpret this statement as a topological condition on the invariant set of a vector field. A precise definition of the $\bar{\mu}$-invariants is algebraic and involves the Magnus expansion of the lower central series of the fundamental group $\pi_1(S^3-L)$ of the link complement. We refer the interested reader to the works in [@Milnor54; @Milnor57]. In the remaining part of this section we will prove that $\bar{\mu}_{123}(L)$ is a Hopf degree for an appropriate map associated to the link $L$, provided that the link is *Borromean*, i.e. the pairwise linking numbers are zero (note that the Borromean links are more general than Brunnian links, [@Milnor54]).
Let us review basic facts about the Hopf degree $\mathscr{H}(f)$ of a map $f:S^3\longrightarrow S^2$, (see e.g. [@Bott82]). A well known property of the Hopf degree is that $\mathscr{H}: f\longrightarrow \mathscr{H}(f)$ provides an isomorphism between $\pi_3(S^2)$ and ${\mbox{\bbb Z}}$. Recall that up to a constant multiple we may express $\mathscr{H}(f)$ as ($M=S^3$) $$\begin{gathered}
\label{eq:hopf-integral}
\mathscr{H}(f)= \int_{M} \alpha\wedge f^\ast\nu=\int_{M} \alpha\wedge \omega=\int_{M} \alpha\wedge d\alpha,\end{gathered}$$ where $\nu$ is the area 2-form on $S^2$, and $\alpha$ satisfies $\omega=f^\ast\nu=d\alpha$. Notice that $f^\ast\nu$ is always exact since the cohomology of $S^3$ in dimension $2$ vanishes. We may also interpret $\mathscr{H}(f)$ as an intersection number, [@Bott82; @Milnor97]. Namely, consider two regular values $p_1$ and $p_2\in S^2$ of the map $f$; then $l_1=f^{-1}(p_1)$ and $l_2=f^{-1}(p_2)$ form a link in $S^3$, and the integral formula can be interpreted as the intersection number of $l_1$ with the Seifert surface spanning $l_2$: $$\begin{gathered}
\label{eq:hopf-intersection}
\mathscr{H}(f)=\text{lk}(l_1,l_2)\ .\end{gathered}$$ If we replace $S^3$ with an arbitrary closed compact orientable 3-dimensional manifold $M$, we may still obtain an invariant of $f:M\longrightarrow S^2$ this way, provided that the condition $f^\ast\nu=d\alpha$ holds.
\[th:hopf-degree-general\] Let $M$ be a closed Riemannian manifold, and $\nu\in \Omega^2(S^2)$ the area form on $S^2$. The formula provides a homotopy invariant for a map $f:M\longrightarrow S^2$, if the 2-form $f^\ast\nu$ is exact. Up to a constant multiple this invariant can be calculated as the intersection number defined in , where $l_1=f^{-1}(p_1)$ and $l_2=f^{-1}(p_2)$ form a link in $M$ in which both $l_1$ and $l_2$ are null-homologous.
Given a homotopy $F:I\times M\mapsto
S^2$, $f_1=F(1,\,\cdot\,)$, $f_0=F(0,\,\cdot\,)$, we define $\hat{\omega}=F^\ast\nu$. We have $$\hat{\omega} = F^\ast\nu=d\,\hat{\alpha},\qquad
\omega_1 = f_1^\ast\nu=i^\ast_1\,F^\ast\nu=d\,\alpha_1,\qquad
\omega_0 = f_0^\ast\nu=i^\ast_0\,F^\ast\nu=d\,\alpha_0,$$ where $i_0:M\hookrightarrow M\times I$, $i_0(x)=(x,0)$, $i_1:M\hookrightarrow M\times I$, $i_1(x)=(x,1)$ are the appropriate inclusions. The potentials $\hat{\alpha}\bigl|_{\{0\}\times M}$, $\alpha_0$, and $\hat{\alpha}\bigl|_{\{1\}\times M}$, $\alpha_1$ differ by a closed form $$\hat{\alpha}\bigl|_{\{0\}\times M}-\alpha_0=\beta_0,\quad \hat{\alpha}\bigl|_{\{1\}\times M}-\alpha_1=\beta_1,\quad d\beta_0=d\beta_1=0,$$ therefore the Stokes Theorem immediately implies that Formula is independent of the choice of the potential. For the proof of invariance under homotopies we invoke the standard argument in [@Bott82 p. 228] $$\begin{split}
0 & = \int_{F(M\times I)} \nu\wedge\nu
=\int_{M\times I} \hat{\omega}\wedge \hat{\omega} = \int_{M\times I} \hat{\omega}\wedge d\,\hat{\alpha}\\
& = \int_{M\times I} d(\hat{\omega}\wedge \hat{\alpha}) = \int_{\partial(M\times I)} \hat{\omega}\wedge
\hat{\alpha} = \int_{M} \omega_1\wedge \alpha_1- \int_{M}
\omega_0\wedge \alpha_0\\
& = \int_{M} d\alpha_1 \wedge \alpha_1-\int_{M}
d\alpha_0\wedge \alpha_0 = \mathscr{H}(f_1)-\mathscr{H}(f_0).
\end{split}$$ The interpretation of $\mathscr{H}(f)$ as the intersection number is the same as in [@Bott82 p. 230].
Given a 3-component parametrized link $L=\{L_1,L_2,L_3\}$ in $S^3$ we wish to associate a certain map $F_L:S^1\times S^1\times S^1\mapsto S^2$ to it, and interpret its Hopf degree as the Milnor $\bar{\mu}_{123}$-invariant. Recall the definition of the configuration space of $k$ points in $M$: $$\begin{gathered}
{\text{\rm Conf}}_k(M):=\{(x_1,x_2,\ldots, x_k)\in (M)^k\,|\,x_i\neq x_j,\text{for}\ i\neq j\}.\end{gathered}$$ As an introduction to the method we review the Gauss formula for the linking number of a 2-component link $L=\{L_1, L_2\}$ in ${\mbox{\bbb R}}^3$. Denote parameterizations of components by $L_1=\{x(s)\}$, $L_2=\{y(t)\}$ and consider the map $$\begin{gathered}
F_L:S^1\times S^1\stackrel{L}{\longrightarrow} {\text{\rm Conf}}_2({\mbox{\bbb R}}^3) \stackrel{r}{\longrightarrow} S^2,\qquad L(s,t)=(x(s), y(t))\ .\end{gathered}$$ where $r(x,y)=\frac{x-y}{\|x-y\|}$ is a retraction of ${\text{\rm Conf}}_2({\mbox{\bbb R}}^3)$ onto $S^2$. It yields the classical Gauss linking number formula: $$\begin{gathered}
\bar{\mu}_{12}(L)=\text{lk}(L_1,L_2)=\text{deg}(F_L),\qquad\quad \text{deg}(F_L)=\int_{S^1\times S^1} F^\ast_L(\nu),\end{gathered}$$ where $\nu\in \Omega^2(S^2)$ is the area form on $S^2$. Consequently, the linking number $\text{lk}(L_1,L_2)$, also known as the Milnor $\bar{\mu}_{12}$-invariant, can be obtained as the homotopy invariant of the map $F_L$ associated to $L$. Observe that the homotopy classes $[S^1\times S^1,S^2]$ are isomorphic to ${\mbox{\bbb Z}}$ and $\text{deg}:F\rightarrow \text{deg}(F)$ provides the isomorphism; we also point out that, as sets, $[S^k,{\text{\rm Conf}}_n({\mbox{\bbb R}}^3)]=\pi_k({\text{\rm Conf}}_n({\mbox{\bbb R}}^3))$ and $[S^k,{\text{\rm Conf}}_n(S^3)]=\pi_k({\text{\rm Conf}}_n(S^3))$ (c.f. [@Hatcher02 p. 421]). Thus, in the context of the link homotopy of Borromean links, considering based homotopies and base-point-free homotopies is equivalent in this setting.
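The Gauss degree formula above lends itself to a direct numerical check. A minimal sketch, assuming a hypothetical parametrization of a Hopf link (two singly linked unit circles, which should give $\text{lk}=\pm 1$); the curves themselves are our choice, not taken from the paper:

```python
import numpy as np

# Numerical Gauss linking integral
#   lk(L1, L2) = (1/4pi) * integral of det[x'(s), y'(t), x(s)-y(t)] / |x-y|^3
# for a hypothetical Hopf link: C1 the unit circle in the xy-plane,
# C2 a unit circle in the xz-plane centred at (1, 0, 0).
n = 400
s = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
t = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)

x = np.stack([np.cos(s), np.sin(s), np.zeros(n)], axis=-1)        # C1(s)
dx = np.stack([-np.sin(s), np.cos(s), np.zeros(n)], axis=-1)      # C1'(s)
y = np.stack([1.0 + np.cos(t), np.zeros(n), np.sin(t)], axis=-1)  # C2(t)
dy = np.stack([-np.sin(t), np.zeros(n), np.cos(t)], axis=-1)      # C2'(t)

diff = x[:, None, :] - y[None, :, :]              # x(s) - y(t) on the grid
cross = np.cross(dx[:, None, :], dy[None, :, :])  # x'(s) x y'(t)
integrand = np.einsum("stk,stk->st", cross, diff) / np.linalg.norm(diff, axis=-1) ** 3

lk = integrand.sum() * (2.0 * np.pi / n) ** 2 / (4.0 * np.pi)
print(f"lk = {lk:+.6f}  (expect +1 or -1 for a Hopf link)")
```

Because the integrand is smooth and periodic, the trapezoidal sum converges rapidly and the result agrees with $\pm 1$ to high precision.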
In [@Kohno02; @Koschorke97], the authors consider a natural extension of this approach to $n$-component parametrized links $L$ in ${\mbox{\bbb R}}^3$ by considering maps $F_L:S^1\times\ldots\times S^1\longrightarrow {\text{\rm Conf}}_n({\mbox{\bbb R}}^3)$ and their homotopy classes; we refer to maps of this type loosely as *link maps* (c.f. [@Koschorke97]). In particular, Kohno [@Kohno02] proposed specific representatives of cohomology classes of the based loop space of ${\text{\rm Conf}}_n({\mbox{\bbb R}}^3)$ as candidates for appropriate link homotopy invariants of $L$. It has been observed in [@Deturck-Gluck-Melvin-Shonkwiler08] that in the $3$-component case it is beneficial to consider ${\text{\rm Conf}}_3(S^3)$ and $L\subset S^3$, since the topology of ${\text{\rm Conf}}_3(S^3)$ simplifies dramatically (in comparison to ${\text{\rm Conf}}_3({\mbox{\bbb R}}^3)$). We review this simplification in the following paragraph as it is essential for the proof of the main theorem in this section.
(380, 150) (0,0)[![left: $\bar{\mu}_{123}=\pm 1$ and right: $\bar{\mu}_{123}=\pm n$.[]{data-label="fig:n-borromean"}](borr "fig:"){width="5.2in"}]{} (10,155)[$L_1$]{} (155,155)[$L_2$]{} (35,10)[$L_3$]{} (280,7)[$\mathbf{\Bigl\}}$]{} (285,22)[$n$]{}
Consider a 3-component link $L=\{L_1, L_2, L_3\}$ in $S^3$ parametrized by $\{x(s),y(t),z(u)\}$ and the following map $$\begin{gathered}
\label{eq:F}
F_L:S^1\times S^1\times S^1\stackrel{L}{\longrightarrow} {\text{\rm Conf}}_3(S^3)\stackrel{H}{\longrightarrow} S^2,\qquad L(s,t,u)=(x(s), y(t), z(u)),\end{gathered}$$ where we denote by $H:S^3\times {\mbox{\bbb R}}^3\times ({\mbox{\bbb R}}^3\setminus\{0\})\mapsto S^2$ the projection on the $S^2$ factor, first concluding that ${\text{\rm Conf}}_3(S^3)\subset S^3\times S^3\times S^3$ is diffeomorphic to $S^3\times {\text{\rm Conf}}_2({\mbox{\bbb R}}^3)=S^3\times
{\mbox{\bbb R}}^3\times ({\mbox{\bbb R}}^3\setminus\{0\})$, and consequently deformation retracts onto $S^3\times S^2$. Considering $S^3$ as unit quaternions, the map $H$ can be expressed explicitly by the formula, [@Deturck-Gluck-Melvin-Shonkwiler08]: $$\begin{gathered}
\label{eq:hermans-map}
{\text{\rm Conf}}_3(S^3) \ni (x,y,z) \stackrel{H}{\longrightarrow}
\frac{\text{pr}(x^{-1}\cdot y)-\text{pr}(x^{-1}\cdot z)}{\|\text{pr}(x^{-1}\cdot y)-\text{pr}(x^{-1}\cdot z)\|}\in S^2,\end{gathered}$$ where $\cdot$ stands for the quaternionic multiplication, $\ ^{-1}$ is the quaternionic inverse, and $\text{pr}:S^3\longrightarrow {\mbox{\bbb R}}^3$ the stereographic projection from $1$. As a result one has the following particular expression for $F_L$: $$\begin{gathered}
\label{eq:F_L}
F_L(s,t,u)=\frac{\text{pr}(x(s)^{-1}\cdot z(u))-\text{pr}(x(s)^{-1}\cdot y(t))}{\|\text{pr}(x(s)^{-1}\cdot z(u))-\text{pr}(x(s)^{-1}\cdot
y(t))\|}.\end{gathered}$$
At this point we note that one has freedom in choosing the deformation retraction $H$ in , but the above particular formula makes the proof of the main theorem of this section possible. Let $\mathbb{T}=S^1\times S^1\times S^1$ denote the domain of $F_L$. Notice that, thanks to , restricting $F_L$ to the subtorus $\mathbb{T}_{23}$ in the second and third coordinates $(t,u)$ of $\mathbb{T}$, we obtain the usual Gauss map of the 2-component link $\{x^{-1}\cdot L_2,x^{-1} \cdot L_3\}$. Since the diffeomorphism $x^{-1}\cdot$ of $S^3$ is orientation preserving, we conclude $\text{\rm deg}(F_L|_{\mathbb{T}_{23}})=\text{lk}(L_2,L_3)$. We claim that for any 2-component sublink $\{L_i,L_j\}$ of $L$: $$\label{eq:linking-numbers}
\text{\rm deg}(F_L|_{\mathbb{T}_{ij}})=\pm\text{lk}(L_i,L_j),\qquad 1\leq i<j\leq 3,$$ where $i,j$ index the coordinates of $\mathbb{T}$. Indeed, since the claim is already true for $i=2$ and $j=3$, the general case follows by applying a permutation $\sigma\in \Sigma_3$ of the coordinate factors in ${\text{\rm Conf}}_3(S^3)\subset (S^3)^3$. Notice that $\sigma$ is a diffeomorphism of ${\text{\rm Conf}}_3(S^3)\subset (S^3)^3$ either preserving or reversing the orientation (which explains the sign in ). We infer because $\sigma$ induces an isomorphism on homotopy groups of ${\text{\rm Conf}}_3(S^3)$.
(414,140) (30,0)[![The model of $L_\text{Borr}$ parametrized by $\{L_1=\text{pr}(x(s)),L_2=\text{pr}(y(t)),L_3=\text{pr}(z(u))\}$. Arc $A_1$ is a part of the unit circle on $xy$-plane, $A_2$ is a part of circle of radius $1+\epsilon$.[]{data-label="fig:conf1"}](borromean3 "fig:")]{} (93,120)[$L_1$]{} (47,70)[$L_2$]{} (75,5)[$L_3$]{} (330,80)[$A_2$]{} (255,60)[$A_1$]{}
The main theorem of this section is
\[th:milnor-hopf\] Let $L=\{L_1,L_2,L_3\}$ be a 3-component Borromean link in $S^3$, consider the associated map $F_L$ defined in . The Hopf degree of this map satisfies $$\begin{gathered}
\mathscr{H}(F_L)=\pm 2\,\bar{\mu}_{123},
\end{gathered}$$ where the sign depends on the choice of orientations of components of $L$.
By the *Borromean rings* we understand any $3$-component link with $\bar{\mu}_{123}=\pm 1$. Every such link is link homotopic to the diagram presented in Figure \[fig:n-borromean\] (where the sign can be determined from the orientation of the components). The proof of the theorem can be reduced to the case of the Borromean rings $L^\text{Borr}$, as follows: if $L$ and $L'$ are link-homotopic then $F_{L}$ and $F_{L'}$ are homotopic maps. By the Milnor classification of 3-component links up to link homotopy (see [@Milnor54]), every 3-component link $L$ with zero pairwise linking numbers and $\bar{\mu}_{123}=\pm n$ is represented by the right diagram in Figure \[fig:n-borromean\]. Consequently, up to homotopy, the associated map $F_L$ can be obtained from $F_{L_\text{Borr}}$ by covering one of the $S^1$ factors in $\mathbb{T}$ $n$-times. Therefore, in order to prove the claim it suffices to show $$\label{eq:hopf=2}
\mathscr{H}(F_{L^\text{Borr}})=\pm 2\ .$$ According to Proposition \[th:hopf-degree-general\], $\mathscr{H}(F_L)$ is well defined for a link $L\subset S^3$ provided $F_L^\ast\nu\in \Omega^2(\mathbb{T})$ is trivial in $H^2(\mathbb{T})$, which is true thanks to and because the pairwise linking numbers of $L$ are zero. The method of proof relies on a direct calculation of $\mathscr{H}(F_{L^\text{Borr}})$ for a carefully chosen parametrization of $L^\text{Borr}$ in $S^3$. This calculation is achieved by visualization of the link $l_{S,N}=l_S\cup l_N$ in $\mathbb{T}$, and application of Formula , where $$l_S:=F^{-1}_{L^\text{Borr}}(S),\qquad l_N:=F^{-1}_{L^\text{Borr}}(N)$$ are the preimages of the North pole $N=(0,0,1)$ and the South pole $S=(0,0,-1)$ in $S^2\subset {\mbox{\bbb R}}^3$. Notice that $[F_L^\ast\nu]=0$ in $H^2(\mathbb{T})$ if and only if $[l_S]=0$ and $[l_N]=0$ in $H_1(\mathbb{T})$.
We begin by identifying $S^3$ with the set of unit quaternions in ${\mbox{\bbb R}}^4$ with standard coordinates: $$(w,x,y,z)=w+x\,\mathbf{i}+y\,\mathbf{j}+z\,\mathbf{k},$$ and by choosing a specific parametrization of the Borromean rings $L^\text{Borr}$ in $S^3$. That is, we define the $L_1$ component of $L^\text{Borr}$ to be the great circle in $S^3$ through $1$ and $\mathbf{k}$, parametrized as $$x(s)=\cos(s)+\sin(s)\,\mathbf{k},\quad
\bigl(\,x(s)^{-1}=\cos(s)-\sin(s)\,\mathbf{k}\,\bigr).$$
![Four positions corresponding to the angles $s=0,\frac{\pi}{2}, \pi, \frac{3\pi}{2}$; the small arrows next to $N$ and $S$ indicate the motion as $s \nearrow$.[]{data-label="fig:4-pos"}](4-pos-reduced "fig:"){height="1.8in" width="\textwidth"}
Observe that $\text{pr}(x(s))$ parametrizes the $z$-axis in ${\mbox{\bbb R}}^3$. Figure \[fig:conf1\] shows how to define the second and the third components $\{L_2, L_3\}$ of the Borromean rings $L^\text{Borr}$ in ${\mbox{\bbb R}}^3$, considered as the image of $S^3-\{1\}$ under the stereographic projection $\text{pr}:S^3\subset {\mbox{\bbb R}}^4\longrightarrow {\mbox{\bbb R}}^3$ from $1\in S^3$. The component $L_2$ bounds an annulus with a rounded wedge removed, i.e. with an arc $A_1$ of the circle of radius $1$ removed. The arc $A_2$ belongs to the circle of radius $r_\epsilon=(1+\epsilon)$ in the $(x,y)$-plane. The component $L_3$ is chosen to be a vertical ellipse linking with $L_2$.
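The claim that $\text{pr}(x(s))$ traces the $z$-axis can be checked numerically. The sketch below assumes the standard formula $\text{pr}(w,x,y,z)=(x,y,z)/(1-w)$ for the stereographic projection from $1=(1,0,0,0)$; the function name is ours.

```python
import math

def pr(w, x, y, z):
    # Stereographic projection S^3 -> R^3 from 1 = (1,0,0,0);
    # the standard formula (an assumption here) is (x,y,z)/(1-w).
    return (x / (1 - w), y / (1 - w), z / (1 - w))

# Points x(s) = cos(s) + sin(s) k of the great circle L_1 (avoiding s=0,
# which maps to the point at infinity) project onto the z-axis:
for s in [0.3, 1.0, 2.5, 4.0, 5.7]:
    px, py, pz = pr(math.cos(s), 0.0, 0.0, math.sin(s))
    assert abs(px) < 1e-12 and abs(py) < 1e-12     # image lies on the z-axis
    assert abs(pz - 1.0 / math.tan(s / 2)) < 1e-9  # at height cot(s/2)
```

In particular the image point is $(0,0,\cot(s/2))$, so the whole circle sweeps the $z$-axis together with the point at infinity.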
![Projection of $l_{S,N}$ on the $su$-face and the $st$-face of $\mathbb{T}$, shown in panels (A) and (B), with the resolved diagrams in panels (C) and (D). The strands $l_S$ (solid line) and $l_N$ (dashed line) are oppositely oriented, since $l_S$ and $l_N$ are null homologous in $\mathbb{T}$.[]{data-label="fig:framed-link"}](proj-red-green-dashed-reduced "fig:"){width="\textwidth"}
Next we focus on Formula . Observe that multiplication by $x(s)^{-1}$ has the effect of a rotation by the angle $s$ in the $(w,z)$-plane and the $(x,y)$-plane of ${\mbox{\bbb R}}^4$, which can be calculated directly: $$\begin{split}
x(s)^{-1}\cdot(w,x,y,z) & = \bigl(\cos(s) w+\sin(s) z, \cos(s) x +\sin(s) y,\\
& \qquad \cos(s) y-\sin(s) x, \cos(s) z-\sin(s) w\bigr)\ .
\end{split}$$
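The component formula above can be double-checked by direct quaternion multiplication; the short script below (with a hand-rolled Hamilton product, not any particular library) compares $x(s)^{-1}\cdot q$ against the displayed right-hand side for random inputs.

```python
import math
import random

def qmul(a, b):
    # Hamilton product of quaternions a = (w,x,y,z), b = (w,x,y,z).
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

random.seed(0)
for _ in range(100):
    s = random.uniform(0.0, 2.0 * math.pi)
    q = tuple(random.uniform(-1.0, 1.0) for _ in range(4))
    x_inv = (math.cos(s), 0.0, 0.0, -math.sin(s))  # x(s)^{-1} = cos(s) - sin(s) k
    got = qmul(x_inv, q)
    w, x, y, z = q
    c, sn = math.cos(s), math.sin(s)
    # the rotation formula stated above, componentwise:
    want = (c*w + sn*z, c*x + sn*y, c*y - sn*x, c*z - sn*w)
    assert all(abs(g - e) < 1e-12 for g, e in zip(got, want))
```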
The flow defined by this $S^1$-action is tangent to the great circles of $S^3$; thus the projected flow on ${\mbox{\bbb R}}^3$, via the stereographic projection $\text{pr}$, presents the standard picture of the Hopf fibration. We call an invariant Hopf torus an $r$-torus, if and only if, it contains a circle of radius $r$ in the $(x,y)$-plane. Without loss of generality we assume that $L_2$ in Figure \[fig:conf1\] belongs to the $r_{\epsilon/2}$-Hopf torus. Every point on an $r$-torus traces a $(1,1)$-curve under the $S^1$-action. For sufficiently small $\epsilon$, this motion can be regarded as a composition of the rotations by the angle $s$ in both the direction of the meridian and the longitude of the $r$-torus. Therefore, for different values of $s$ the $S^1$-action “rotates” the components $L_2$ and $L_3$, by sliding them along the Hopf tori by the angle $s$ in the meridian and the longitudinal direction. We denote the resulting link components by $$L^s_2(t)=\text{pr}(x^{-1}(s)\cdot y(t)),\qquad \text{and}\qquad L^s_3(u)=\text{pr}(x^{-1}(s)\cdot z(u)).$$ The unit circle in the $(x,y)$-plane is left invariant under this action and therefore can be considered as the “axis of the rotation”. This justifies the choice of the particular shape of $L^\text{Borr}$ pictured in Figure \[fig:conf1\]. Next, we seek to visualize the projection of $l_{S,N}=l_S\cup l_N$ on the $su$-face and the $st$-face of the domain $\mathbb{T}$ of $F_{L^\text{Borr}}$, parametrized by $(s,t,u)\in \mathbb{T}$ (it is convenient to think of $\mathbb{T}$ as a cube in $(s,t,u)$-coordinates, see Figure \[fig:framed-link\]). For example, when $s=0$ (i.e. $x(0)=1$), a point $(0,t_0,u_0)$ belongs to $l_N$, if and only if, the vector $\mathsf{v}_0=\text{pr}(y(t_0))-\text{pr}(z(u_0))$ points in the direction of $N=(0,0,1)$; an analogous condition holds for the direction $S=(0,0,-1)$ and $l_S$.
In order to determine a diagram of $l_{S,N}$, we must keep track of the “head” and “tail” of the vector $\mathsf{v}_s=L^s_2(t)-L^s_3(u)$, for various values of $s$ and record values of $t$ and $u$ for which $\mathsf{v}_s$ points “North” and “South” (Figure \[fig:4-pos\]). This reads as the following condition $$\label{eq:arrow-eq}
(s,t,u)\in l_{S,N},\qquad\text{if and only if},\qquad \mathsf{v}_s \parallel S\ \text{or}\ N.$$ Without loss of generality we assume that $L^{s}_2$ is parametrized by the unit $t$-interval, and $L^{s}_3$ by the unit $u$-interval. The process of recording the values of $u$ and $t$ for which holds is self-explanatory and is shown in Figure \[fig:conf1\] for the values $s=0, \frac{\pi}{2}, \pi, \frac{3\pi}{2}$, which is sufficient to draw the projections of $l_S$ and $l_N$ on the $su$- and $st$-faces of $\mathbb{T}$. Collecting the information of Figure \[fig:4-pos\], we draw the projection of $l_{S,N}$ on the $su$-face of $\mathbb{T}$, represented by square (A) in Figure \[fig:framed-link\]. Analogously, the projection of $l_{S,N}$ on the $st$-face of $\mathbb{T}$ is obtained and pictured in square (B). In order to obtain the diagram of $l_{S,N}$ we resolve the double points of Diagram (A) into crossings. For example, let us resolve the “circled” double point on (A), which occurs at $s=\frac{\pi}{2}$ in the *left* two strands of $l_{S,N}$. It suffices to determine the value of the $t$-coordinate at this point. Diagram (B) tells us that $l_S$ is below $l_N$, because $\mathbb{T}$ is oriented so that the $t$-axis points above the $su$-face (see Figure \[fig:framed-link\]). Resolving the remaining crossings in a similar fashion leads to the diagrams of $l_{S,N}$ presented in squares (C) and (D). Clearly, the linking number of $l_S$ and $l_N$ is equal to $\pm 2$ in Diagram (C) (e.g. as the intersection number of $l_S$ with the obvious annulus in Diagram (C)). This justifies , and ends the proof.
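For readers who prefer a numerical sanity check, linking numbers such as $\text{lk}(l_S,l_N)$ can also be computed from the Gauss linking integral. The sketch below is purely illustrative: it approximates the integral for a pair of explicitly parametrized Hopf-linked circles (a stand-in for $l_S$ and $l_N$, which the proof handles combinatorially), and all names are ours.

```python
import math

def gauss_linking(c1, c2, n=120):
    # Approximate the Gauss linking integral
    #   lk = (1/4pi) \oint\oint det(g1'(t), g2'(u), g1(t)-g2(u)) / |g1-g2|^3 dt du
    # by the (spectrally accurate) periodic midpoint rule.
    h = 2.0 * math.pi / n
    total = 0.0
    for i in range(n):
        t = (i + 0.5) * h
        p1, d1 = c1(t)
        for j in range(n):
            u = (j + 0.5) * h
            p2, d2 = c2(u)
            r = [p1[k] - p2[k] for k in range(3)]
            det = (d1[0] * (d2[1] * r[2] - d2[2] * r[1])
                 - d1[1] * (d2[0] * r[2] - d2[2] * r[0])
                 + d1[2] * (d2[0] * r[1] - d2[1] * r[0]))
            total += det / (r[0] ** 2 + r[1] ** 2 + r[2] ** 2) ** 1.5
    return total * h * h / (4.0 * math.pi)

# Two Hopf-linked unit circles, each returned as (point, derivative):
circle1 = lambda t: ((math.cos(t), math.sin(t), 0.0),
                     (-math.sin(t), math.cos(t), 0.0))
circle2 = lambda u: ((1.0 + math.cos(u), 0.0, math.sin(u)),
                     (-math.sin(u), 0.0, math.cos(u)))

lk = gauss_linking(circle1, circle2)
assert abs(abs(lk) - 1.0) < 1e-2   # a Hopf link has linking number +/-1
```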
The results of Theorem \[th:milnor-hopf\] and Proposition \[th:hopf-degree-general\], combined with Formula , allow us to express $\bar{\mu}_{123}(L)$ of a Borromean link $L$ as $$\label{eq:mu-formula}
\begin{split}
\bar{\mu}_{123}(L) & = \int_{\mathbb{T}} F^\ast_L\nu\wedge \alpha=\int_{\mathbb{T}} L^\ast\omega\wedge \alpha,\\
& \text{for} \quad F^\ast_L\nu=d\alpha,\quad \omega=H^\ast\nu\in \Omega^2({\text{\rm Conf}}_3(S^3)),
\end{split}$$ where $\nu$ is the area form on $S^2$, and $H:{\text{\rm Conf}}_3(S^3)\longrightarrow S^2$ is the deformation retraction (as e.g. in ). Alternatively, we may view $\omega$ as a 2-form on $(S^3)^3$ which is singular along the diagonals $\mathbf{\Delta}\subset (S^3)^3$, the singularity being of order $O(r^{-2})$, where $r$ is the distance to $\mathbf{\Delta}$. Consequently, $\omega$ is integrable but not square integrable on $(S^3)^3$.
\[rem:anti-symmetry\] Notice that the integral formula exhibits the following property of $\bar{\mu}_{123}$: $$\bar{\mu}_{123}(L_1,L_2,L_3)=\text{\rm sign}(\sigma)\bar{\mu}_{123}(L_{\sigma(1)},L_{\sigma(2)},L_{\sigma(3)}),\qquad \sigma\in\Sigma_3\ .$$
Invariants of volume preserving flows. Helicities. {#sec:invariants-flows}
==================================================
Given finitely many volume preserving vector fields $B_1, B_2,\ldots, B_k\in \text{\rm SVect}(M)$ on $M=S^3$, or more generally on a homology $3$-sphere, one seeks quantities $\mathsf{I}(B_1,B_2,\ldots,B_k)$ invariant under the action of volumorphisms isotopic to the identity, $g\in \text{\rm SDiff}_0(M)$; such quantities are commonly known as *helicities* or *higher helicities*: $$\begin{gathered}
\label{eq:invariants}
\mathsf{I}(B_1,B_2,\ldots,B_k)=\mathsf{I}(g_\ast B_1,g_\ast B_2,\ldots,g_\ast B_k),\qquad\quad \text{for all }g\in \text{\rm SDiff}_0(M), \end{gathered}$$ where $g_\ast$ is the push-forward by the diffeomorphism $g$. To distinguish the case of a single vector field $B$ (i.e. $B=B_1=\ldots=B_k$) we often refer to $\mathsf{I}(B)=\mathsf{I}(B,B,\ldots,B)$ as the *self helicity*. As elucidated in the introduction, a fundamental example of such an invariant is the ordinary helicity $\mathsf{H}(B_1,B_2)$ of a pair of vector fields. In the remaining part of this section we review well-known formulations of the helicity, which will later help us to point out analogies to the proposed formulation of the third order helicity. Let $\mathcal{T}=\mathcal{T}_1\cup \mathcal{T}_2$ represent two invariant subdomains (not necessarily disjoint) under the flows of $B_1$ and $B_2$ in $S^3$, and let $$\mathcal{T}=\mathcal{T}_1\times\mathcal{T}_2\subset {\text{\rm Conf}}_2(M)\subset M\times M.$$ Recall that the formula for $\mathsf{H}(B_1,B_2)$ from [@Khesin05; @Vogel03], specialized to the invariant subdomains $\mathcal{T}=\mathcal{T}_1\cup \mathcal{T}_2$, may be expressed as $$\label{eq:helicity}
\mathsf{H}_{12}(B_1,B_2)=\int_{\mathcal{T}_1\times\mathcal{T}_2} \omega\wedge\iota_{B_1}\mu\wedge\iota_{B_2}\mu,$$ where $\omega$ is known as the linking form on $M\times M$. When $\mathcal{T}=M\times M$, this formula is equivalent to the more commonly known expression $\mathsf{H}(B_1,B_2)=\int_M \iota_{B_1}\mu\wedge d^{-1}(\iota_{B_2}\mu)$ (because $\omega$ also represents the integral kernel of $d^{-1}$). Philosophically, $\mathsf{H}(B_1,B_2)$ can be derived (cf. [@Arnold86]) from the linking number of a pair of closed curves, as expressed by Arnold’s Helicity Theorem. For orbits $\{\mathscr{O}^1(x),\mathscr{O}^2(y)\}$ of $B_1$ and $B_2$ through $x, y\in M$, we introduce the following notation for the long pieces of closed-up orbits $$\label{eq:orbits}
\begin{split}
\mathscr{O}^{B_i}_T(x) & = \{\Phi^i_t(x)\ |\ 0\leq t\leq T\}\subset \mathscr{O}^i(x),\qquad i=1,2\\
\bar{\mathscr{O}}^i_T(x) & := \mathscr{O}^{B_i}_T(x)\cup \sigma(x,\Phi^i_T(x)),
\end{split}$$ where $\sigma(x,y)$ denotes a short path [@Vogel03] connecting $x$ and $y$ in $M$ (see Section \[sec:ergodic\]). Paraphrasing [@Arnold86], we state (for the proof see also [@Vogel03]):
Given $B_1, B_2\in\text{\rm SVect}(M)$, the following limit exists almost everywhere on $M\times M$: $$\begin{gathered}
\label{eq:linking-function}
\bar{m}_{B_1 B_2}(x,y)=\lim_{T\to \infty} \frac{1}{T^2}\, \text{lk}\bigl(\bar{\mathscr{O}}^1_T(x),\bar{\mathscr{O}}^2_T(y)\bigr)\ .
\end{gathered}$$ Moreover, $\bar{m}_{B_1 B_2}$ is in $L^1(M\times M)$, and $$\label{eq:helicity-asymp}
\mathsf{H}_{12}(B_1,B_2)=\int_{M\times M} \bar{m}_{B_1 B_2}(x,y)\, \mu(x)\wedge\mu(y)\ .$$
The function $\bar{m}_{B_1 B_2}$ represents the asymptotic linking number of the orbits $\{\mathscr{O}^1(x),\mathscr{O}^2(y)\}$, and the identity tells us that the helicity $\mathsf{H}_{12}(B_1,B_2)$ is equal to the average asymptotic linking number. In the coming paragraphs we demonstrate how this philosophy is applied to obtain the asymptotic $\bar{\mu}_{123}$-invariant for 3-component links and the third order helicity.
Definition of “mu-123-helicity” on invariant unlinked handlebodies. {#sec:handlebody}
===================================================================
In this section we apply the formulation of the $\bar{\mu}_{123}$-invariant for 3-component links in $S^3$, obtained in Section \[sec:mu-and-hopf\], to define the third order helicity of a volume preserving vector field $B$ on certain invariant sets $\mathcal{T}$ of $B$ in $S^3$. In the following paragraphs, as a warm-up to the more general case treated in Section \[sec:ergodic\], we consider the case of three disjoint unlinked handlebodies $\mathcal{T}=\mathcal{T}_1\cup \mathcal{T}_2\cup \mathcal{T}_3$ in $S^3$, each of genus $g(\mathcal{T}_i)$. Henceforth, we use “unlinked” to mean “with pairwise unlinked connected components”. When $\mathcal{T}$ represents three unlinked tubes (also known as *flux tubes* [@Khesin98]) in ${\mbox{\bbb R}}^3$, the third order helicity has been developed by several authors [@Berger90; @Mayer03; @Laurence-Stredulinsky00b] via the Massey product formula for the $\bar{\mu}_{123}$-invariant; we compare our approach to these known works in Section \[sec:massey\].
![The simplest unlinked handlebodies: flux tubes $\mathcal{T}^\text{Borr}$ with components $\mathcal{T}_1,\mathcal{T}_2,\mathcal{T}_3$ modeled on the Borromean rings (left), and unlinked genus $2$ handlebodies (right).[]{data-label="fig:invariant-sets"}](T-borr-S "fig:"){width="2in"} ![image](T-borr-2-S "fig:"){width="4in"}
Assume that the $\mathcal{T}_i$ have smooth boundaries and that $B$ is tangent to each $\partial \mathcal{T}_i$; we set $$B_i:=B\bigl|_{\mathcal{T}_i},\qquad i=1,2,3,$$ and denote the flow of $B$ on $S^3$ by $\Phi$, and the flows of the restrictions $B_i$ by $\Phi^i$. Clearly, such a $\mathcal{T}$ is an invariant set of $B$. Given any domain $\mathcal{T}$ with three connected components $\{\mathcal{T}_i\}$ we may always associate to it a product domain in ${\text{\rm Conf}}_3(S^3)$ as follows $$\label{eq:admissible-domain}
\mathcal{T}:=\mathcal{T}_1\times \mathcal{T}_2\times \mathcal{T}_3\subset {\text{\rm Conf}}_3(S^3)\subset S^3\times S^3\times S^3\ .$$ Notice that $\mathcal{T}$ is a domain with corners in ${\text{\rm Conf}}_3(S^3)$, and we use the same notation for the product of the $\mathcal{T}_i$ as for their union in $S^3$. Wherever needed, we also assume that $(S^3)^3$ is equipped with the product Riemannian metric. A domain $\mathcal{T}$ defined in , where $\mathcal{T}_i\cap \mathcal{T}_j=\emptyset$ for $i\neq j$ and each $\mathcal{T}_i$ is a handlebody in $S^3$, is called an *unlinked handlebody*, if and only if, the 2-form $\omega\in\Omega^2({\text{\rm Conf}}_3(S^3))$ defined in Equation is exact on $\mathcal{T}$, i.e. $\omega$ has a local *potential* $\alpha_\omega\in \Omega^1(\mathcal{T})$: $$\label{eq:omega-exact}
\omega=d\alpha_\omega\ .$$ We denote the pair consisting of a volume preserving vector field $B$ and an unlinked handlebody $\mathcal{T}$ by $(B;\mathcal{T})$.
Since $\omega$ represents the cohomology class dual to the $S^2$ factor in ${\text{\rm Conf}}_3(S^3)\cong S^3\times S^2$, $\omega$ does not admit a global potential.
Because each handlebody $\mathcal{T}_i$ has the homotopy type of a bouquet of circles, there is a natural choice of basis for $H_1(\mathcal{T}_i)$, consisting of the cycles $\{L^k_i\}_{k=1,\ldots,g(\partial\mathcal{T}_i)}$ corresponding to the circles. We have the following practical characterization of unlinked handlebodies:
\[lem:unlinked-handlebody\] $\mathcal{T}$ is an unlinked handlebody, if and only if, $$\label{eq:admissible-condition}
\text{lk}(L^k_i,L^r_j) = 0,\qquad \text{for all\ } k,r\ \text{and all}\ i\neq j\ .$$
The standard integral pairing [@Bott82], $H^2(\mathcal{T})\times H_2(\mathcal{T})\longrightarrow {\mbox{\bbb R}}$, implies that a closed $k$-form is exact, if and only if, it evaluates to zero on all $k$-cycles of the domain. By the Künneth formula, $H_2(\mathcal{T})=H_2(\mathcal{T}_1\times \mathcal{T}_2\times \mathcal{T}_3)$ is generated by $L^k_i\otimes L^r_j$, and by : $$\text{lk}(L^k_i,L^r_j) = \int_{L^k_i\times L^r_j} \omega,\qquad i\neq j.$$ Therefore the condition is necessary and sufficient for $\omega$ to be exact on $\mathcal{T}$.
We define the $\bar{\mu}_{123}$-helicity of $(B;\mathcal{T})$ denoted by $\mathsf{H}_{123}(B;\mathcal{T})$ or $\mathsf{H}_{123}(B_1,B_2,B_3)$ as follows: $$\label{eq:mu_123-helicity}
\boxed{\mathsf{H}_{123}(B;\mathcal{T})=\mathsf{H}_{123}(B_1,B_2,B_3)\stackrel{\text{def.}}{=}\int_{\mathcal{T}} \bigl(\alpha_\omega\wedge d\alpha_\omega\bigr)\wedge \iota_{B_1}\mu_1\wedge \iota_{B_2}\mu_2\wedge \iota_{B_3}\mu_3, }$$ where $\mu_i$ denotes the pull-back of the volume form $\mu$ on $S^3$ under the projection $$\label{eq:projection-i}
\pi_i:S^3\times S^3\times S^3\longrightarrow S^3,\qquad \pi_i(x_1,x_2,x_3)=x_i,$$ and $\iota_{B_i}$ is the contraction by the vector field $B_i$. Our notational convention is to denote by $\iota_{B_i}\mu_i$ both the forms on the base of $\pi_i$ and their pullbacks $\pi^\ast_i\bigl(\iota_{B_i}\mu_i\bigr)$. Notice that $\mu=\mu_1\wedge\mu_2\wedge\mu_3$ is a volume form on the product $S^3\times S^3\times S^3$. There are obvious analogies between Formula above, Formula for $\mathsf{H}_{12}(B;\mathcal{T})$, and the integral formula for the $\bar{\mu}_{123}$-invariant. The $3$-form $$\gamma_\omega:=\alpha_\omega\wedge d\alpha_\omega=\alpha_\omega\wedge \omega,$$ plays the role of the linking form, as $\omega$ does in Formula . The main motivation behind definition is the ergodic interpretation of $\mathsf{H}_{123}(B;\mathcal{T})$ as an average asymptotic $\bar{\mu}_{123}$-invariant of orbits of $B$, which will become apparent in Section \[sec:ergodic\]. Formula can also be regarded as the third order helicity of three distinct vector fields $B_i$, supported on the handlebodies $\mathcal{T}_i$. In Section \[sec:energy\], we indicate how to construct the potential $\alpha_\omega$ using the basic elliptic theory of differential forms.
\[th:invariance-th\] On every unlinked invariant handlebody $\mathcal{T}$ in $S^3$, $\mathsf{H}_{123}(B;\mathcal{T})$ is
- independent of a choice of the potential $\alpha_\omega$,
- invariant under the action of $\text{\rm SDiff}_0(S^3)$, i.e. for every $g\in \text{\rm SDiff}_0(S^3)$: $$\begin{gathered}
\label{eq:invariance}
\mathsf{H}_{123}(B;\mathcal{T})=\mathsf{H}_{123}(g_\ast B;g(\mathcal{T}))\ .\end{gathered}$$
To prove $(i)$, observe that for any other potential $\alpha'_\omega$ of $\omega$, the difference $\beta=\alpha_\omega-\alpha'_\omega$ is a closed $1$-form on $\mathcal{T}$ (since $\omega=d\alpha_\omega=d\alpha'_\omega$). Therefore, $$\begin{aligned}
\mathsf{H}_{123}(B;\mathcal{T})-\mathsf{H}_{123}'(B;\mathcal{T}) & = & \int_{\mathcal{T}} (\beta\wedge\omega)\wedge\iota_{B_1}\mu_1\wedge \iota_{B_2}\mu_2\wedge \iota_{B_3}\mu_3\\
& \stackrel{(1)}{=} & \int_{\mathcal{T}} d\bigl(\beta\wedge \alpha_\omega\wedge\iota_{B_1}\mu_1\wedge \iota_{B_2}\mu_2\wedge \iota_{B_3}\mu_3\bigr)\\
& = & \int_{\partial\mathcal{T}} \beta\wedge \alpha_\omega\wedge\iota_{B_1}\mu_1\wedge \iota_{B_2}\mu_2\wedge \iota_{B_3}\mu_3 \stackrel{(2)}{=} 0,\end{aligned}$$ where in (1) we applied $d(\iota_{B_i}\mu_i)=0$ (since $B_i$’s are divergence free), and in (2): $$\iota_{B_i}\mu_i\bigl|_{\partial\mathcal{T}_i}=0,$$ (because each vector field $B_i$ is tangent to the boundary $\partial \mathcal{T}_i$), where $$\partial \mathcal{T}=\bigl(\partial \mathcal{T}_1\times \mathcal{T}_2\times \mathcal{T}_3\bigr)\cup \bigl(\mathcal{T}_1\times \partial\mathcal{T}_2\times \mathcal{T}_3\bigr)\cup \bigl(\mathcal{T}_1\times \mathcal{T}_2\times \partial\mathcal{T}_3\bigr).$$
The proof of $(ii)$ is in the style of [@Berger90; @Mayer03], but adapted to our setting. For any given $g\in \text{\rm SDiff}_0(S^3)$, by definition, there exists a path $t\longrightarrow g(t)\in \text{\rm SDiff}_0(S^3)$, such that $$g(0)=\text{id}_{S^3},\qquad g(1)=g\ .$$ Denote by $V$ the divergence free vector field on $S^3$ given by $V(x)=\frac{d}{dt} g(t,x)|_{t=0}$, i.e. $g(t)$ is the flow of $V$, and define the push-forward fields $$B^t_i:=g(t)_\ast B_i\ .$$ It is well known (see Appendix \[apx:A\], or [@Freedman91-2 p. 224]) that the 2-forms $\iota_{B^t_i}\mu$ are frozen in the flow of $V$, i.e. $$\label{eq:forms-frozen-in}
\frac{d}{dt}\bigl(g(t)^\ast\iota_{B^t_i}\mu\bigr) =(\partial_t+\mathcal{L}_V) \iota_{B^t_i}\mu = 0.$$ We also have a path $\hat{g}(t)=(g(t),g(t),g(t))$ in $\text{\rm SDiff}_0(S^3\times S^3\times S^3)$, which analogously leads to the vector field $\hat{V}=(V,V,V)$. (Recall that a tangent bundle $T(S^3)^3$ has a natural product structure). Equation implies $$\label{eq:forms-frozen-in2}
(\partial_t+\mathcal{L}_{\hat{V}}) \bigl(\pi^\ast_i\iota_{B^t_i}\mu\bigr)=(\partial_t+\mathcal{L}_{\hat{V}}) \iota_{B^t_i}\mu_i= 0\ .$$ (In the second equation we merely invoke our notational conventions: $\iota_{B^t_i}\mu_i\equiv\pi^\ast_i\bigl(\iota_{g(t)_\ast B_i}\mu\bigr)$).
Let $\mathcal{T}(t)=\hat{g}(t)(\mathcal{T}(0))\subset {\text{\rm Conf}}_3(S^3)$; we must show that $\frac{d}{dt}
\mathsf{H}_{123}(B^t_1,B^t_2,B^t_3)=0$. Notice that for small enough $\epsilon$ and $t\in (t_0-\epsilon,t_0+\epsilon)$ we can assume, by $(i)$, that $\alpha_\omega$ is a time independent potential obtained from a slightly bigger domain $\widetilde{\mathcal{T}}$ which deformation retracts onto $\mathcal{T}(t_0)$ and satisfies $$\mathcal{T}(t)\subset\widetilde{\mathcal{T}},\qquad \text{\rm for}\quad t\in (t_0-\epsilon,t_0+\epsilon)\ .$$ Without loss of generality set $t_0=0$ and $\hat{g}(0)=\text{id}_{(S^3)^3}$; at $t_0$ we calculate: $$\begin{aligned}
\frac{d}{dt}
\Bigl(\mathsf{H}_{123}(g(t)_\ast B_1,g(t)_\ast B_2,g(t)_\ast B_3)\Bigr) & = &
\frac{d}{dt} \int_{\mathcal{T}(t)} \alpha_\omega\wedge d\alpha_\omega\wedge \iota_{B^t_1}\mu_1\wedge \iota_{B^t_2}\mu_2\wedge \iota_{B^t_3}\mu_3\\
& = & \int_{\mathcal{T}(0)} \frac{d}{dt} \hat{g}(t)^\ast\bigl(\alpha_\omega\wedge d\alpha_\omega\wedge \iota_{B^t_1}\mu_1\wedge \iota_{B^t_2}\mu_2\wedge \iota_{B^t_3}\mu_3\bigr)\\
& = & \int_{\mathcal{T}(0)} \bigl(\mathcal{L}_{\partial_t+\hat{V}}(\alpha_\omega\wedge d\alpha_\omega)\bigr)\wedge \iota_{B^t_1}\mu_1\wedge \iota_{B^t_2}\mu_2\wedge \iota_{B^t_3}\mu_3,\end{aligned}$$ where in the last identity we applied and the product rule for the Lie derivative. Now, because $\alpha_\omega\wedge d\alpha_\omega$ is time independent (for $t\in (t_0-\epsilon,t_0+\epsilon)$), Cartan's magic formula yields $$\begin{aligned}
\mathcal{L}_{\partial_t+\hat{V}}(\alpha_\omega\wedge d\alpha_\omega) & = & \mathcal{L}_{\hat{V}}(\alpha_\omega\wedge d\alpha_\omega) = \iota_{\hat{V}} d(\alpha_\omega\wedge d\alpha_\omega)+ d(\iota_{\hat{V}} (\alpha_\omega\wedge d\alpha_\omega))\\
& = & d(\iota_{\hat{V}} (\alpha_\omega\wedge d\alpha_\omega)),\end{aligned}$$ where we used $d(\alpha_\omega\wedge d\alpha_\omega)=\omega\wedge\omega=0$. Since the $B^t_i$ are tangent to the boundary of $\mathcal{T}_i(t)$, the same argument as in the proof of $(i)$ shows that the integral of the right hand side of the previous equation vanishes.
Notice that the above argument indicates that if we replace $\alpha_{\omega}\wedge d\alpha_\omega$ by virtually any closed 3-form $\eta$ on ${\text{\rm Conf}}_3(S^3)$ (or $(S^3)^3$), we obtain an invariant under frozen-in-field deformations. If $\eta$ is exact we obtain trivial invariants; therefore the only sensible candidates here are cohomology classes of ${\text{\rm Conf}}_3(S^3)\cong S^3\times S^2$. In dimension $3$ this leaves us with the class dual to the $S^3$ factor in ${\text{\rm Conf}}_3(S^3)$. Based on the considerations in Section \[sec:mu-and-hopf\], one may argue that an invariant obtained this way is trivial. Indeed, the cohomology class $\eta$ evaluated on any 3-torus obtained from a 3-component link in $S^3$ via the map $L$ in is zero. Therefore, one could apply the ergodic approach of Section \[sec:ergodic\] to show that $\int \eta\wedge\iota_{B}\mu_1\wedge \iota_{B}\mu_2\wedge \iota_{B}\mu_3$ defines a trivial invariant. The crucial obstacle in extending the formula in to encompass the whole $(S^3)^3$ is the fact that the potential $\alpha_\omega$ cannot be globally defined on ${\text{\rm Conf}}_3(S^3)$.
The ergodic interpretation of H-123 {#sec:ergodic}
===================================
The following statement is often seen in the literature [@Cantarella-DeTurck-Gluck01; @Cantarella-DeTurck-Gluck-Teytel00]:
> *Helicity measures the extent to which vector fields twist and coil around each other.*
The beauty of Arnold’s ergodic approach to the helicity $\mathsf{H}_{12}(B)$ is that it makes this statement precise, by interpreting $\mathsf{H}_{12}(B)$ as an average asymptotic linking number of orbits of $B$. It also has a practical application, as it allows us to extend our approach to certain invariant sets of $B$. In this section we apply this philosophy to our newly defined invariant $\mathsf{H}_{123}(B;\mathcal{T})$, and interpret it as the average asymptotic $\bar{\mu}_{123}$-invariant of orbits of $B$ in $\mathcal{T}$. Moreover, this ergodic interpretation leads us to an alternative, more intuitive proof of the invariance Theorem \[th:invariance-th\].
We begin by observing that, given a volume preserving vector field $B$ on $M$ and its flow $\Phi_t$, we may regard $B$ as three vector fields on $(M)^3$, one for each factor. Thus, $(\Phi,\Phi,\Phi)$ induces a natural ${\mbox{\bbb R}}^3$-action defined as follows: $$\label{eq:R3-action}
\mathbf{\Phi}:{\mbox{\bbb R}}^3\times (M)^3\longrightarrow (M)^3,\qquad \bigl((s,t,u),x,y,z\bigr)\stackrel{\mathbf{\Phi}}{\longrightarrow} (\Phi(s,x),\Phi(t,y), \Phi(u,z)).$$ Observe that $\mathbf{\Phi}$ is a volume preserving action on $(M)^3$. Our analysis is rooted in techniques developed in [@Arnold86; @Laurence-Avellaneda93; @Laurence-Stredulinsky00b; @Vogel03]; the main tool is the following
\[th:multi-ergodic\] For any real valued $L^1$-function $F$ on $(M)^3$, the time averages under the action in : $$\begin{gathered}
\bar{F}(x,y,z)=\lim_{T\to \infty}\frac{1}{T^3}\int^T_0\int^T_0\int^T_0
F(\Phi(x,s),\Phi(y,t),\Phi(z,u))\,d s\,d t\,d u\end{gathered}$$ converge almost everywhere. In addition, the limit function $\bar{F}$ satisfies
- $\|\bar{F}\|_{L^1((M)^3)}\leq \|F\|_{L^1((M)^3)}$,
- $\bar{F}$ is invariant under the $\mathbf{\Phi}$-action,
- if $(M)^3$ is of finite volume then $$\begin{gathered}
\label{eq:ergodic-integrals-equal}
\int_{(M)^3}\bar{F} =\int_{(M)^3} F\ .\end{gathered}$$
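As a toy illustration of such time averages (a discrete-time stand-in for the $\mathbf{\Phi}$-action, not the Proposition itself), the sketch below averages $F(x,y,z)=\cos(x+y+z)$ along an orbit of three independent irrational circle rotations; the orbit average approaches the space average, which is $0$. The rotation numbers are illustrative choices of ours.

```python
import math

# Three independent irrational rotations of the circle R/2piZ, a
# discrete-time analogue of the R^3-action Phi.
alpha = (math.sqrt(2.0), math.sqrt(3.0), math.sqrt(5.0))
x = y = z = 0.0
n, acc = 200_000, 0.0
for _ in range(n):
    acc += math.cos(x + y + z)          # F evaluated along the orbit
    x = (x + alpha[0]) % (2.0 * math.pi)
    y = (y + alpha[1]) % (2.0 * math.pi)
    z = (z + alpha[2]) % (2.0 * math.pi)
time_average = acc / n
assert abs(time_average) < 1e-2         # the space average of F is 0
```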
We define an *invariant unlinked domain* $\mathcal{T}$ of $B$ to be an arbitrary $\mathbf{\Phi}$-invariant set whose topological closure $\overline{\mathcal{T}}$ is contained in a larger product of open sets $\widetilde{\mathcal{T}}=\widetilde{\mathcal{T}}_1\times \widetilde{\mathcal{T}}_2\times \widetilde{\mathcal{T}}_3$ in ${\text{\rm Conf}}_3(S^3)$, satisfying the following:
- $\widetilde{\mathcal{T}}$ admits a *short path system* $\mathcal{S}$,
- Equation holds on $\widetilde{\mathcal{T}}$,
where by a system of short paths on $\widetilde{\mathcal{T}}$, [@Vogel03; @Khesin98], we understand a collection of curves $\mathcal{S}=\{\sigma_i(x,y)\}$ on each open set $\widetilde{\mathcal{T}}_i$ such that
- for every pair of points $x,y \in \widetilde{\mathcal{T}}_i$ there is a connecting curve $\sigma_i(x,y):I\longrightarrow \widetilde{\mathcal{T}}_i$ in $\mathcal{S}$, with $\sigma_i(0)=x$ and $\sigma_i(1)=y$,
- the lengths of paths in $\mathcal{S}$ are uniformly bounded above by a common constant.
Topologically, every $\mathbf{\Phi}$-invariant set is a union of products of orbits of $B$ in $(S^3)^3$. It is often convenient to think of the orbits of the $\mathbf{\Phi}$-action as the leaves of a foliation of $(S^3)^3$; then $\mathbf{\Phi}$-invariant sets are just unions of leaves of this foliation. A fundamental example of an invariant unlinked domain is a $\mathbf{\Phi}$-invariant set $\mathcal{T}$ contained in the product $\widetilde{\mathcal{T}}=\widetilde{\mathcal{T}}_1\times \widetilde{\mathcal{T}}_2\times \widetilde{\mathcal{T}}_3$ of disjoint open unlinked handlebodies $\widetilde{\mathcal{T}}_i$. Note that in this case we do not require $B$ to be tangent to $\partial\widetilde{\mathcal{T}}_i$, and $\widetilde{\mathcal{T}}$ always admits a short path system, as we describe in the following.
In [@Vogel03], Vogel shows that on a closed manifold $M$ geodesics always provide a short path system. When $\mathcal{T}$ is contained in the product of unlinked handlebodies $\widetilde{\mathcal{T}}$, we may easily construct such a system on $\widetilde{\mathcal{T}}$ as follows. Because the $\widetilde{\mathcal{T}}_i$ are proper subsets of $S^3$, we generally do not want to use ambient geodesics of $S^3$, as they may not lie entirely in $\widetilde{\mathcal{T}}_i$. To obtain $\mathcal{S}$ one puts an artificial Riemannian metric on each $\widetilde{\mathcal{T}}_i$ which makes $\partial\widetilde{\mathcal{T}}_i$ totally geodesic, and chooses $\mathcal{S}$ to consist of geodesics of this metric. Observe that applying a diffeomorphism $g\in \text{Diff}(\widetilde{\mathcal{T}})$ to $\mathcal{S}$ results in the system $g\mathcal{S}$ on $g(\widetilde{\mathcal{T}})$.
The following result is an analog of Arnold’s Helicity Theorem in our setting:
\[th:ergodic-mu\] Given $(B;\mathcal{T})$, the following limit (asymptotic $\bar{\mu}_{123}$-invariant of orbits) exists for almost all $(x,y,z)\in \mathcal{T}$: $$\label{eq:bar-m_B}
\bar{m}_B(x,y,z)=\lim_{T\to\infty} \frac{1}{T^3}\bar{\mu}_{123}\bigl(\bar{\mathscr{O}}^{B_1}_T(x),\bar{\mathscr{O}}^{B_2}_T(y),\bar{\mathscr{O}}^{B_3}_T(z)\bigr).$$ Moreover, $$\label{eq:H_123-ergodic}
\mathsf{H}_{123}(B;\mathcal{T})=\int_{\mathcal{T}} \bar{m}_B(x,y,z)\,\mu(x)\wedge\mu(y)\wedge\mu(z)\ .$$
The proof is similar to the one in e.g. [@Vogel03]. Before we start, we point out the following identity, valid for any 3-form $\beta$ on $M\times M\times M$ and vector fields $B_1,B_2,B_3$ on $M$:
$$\label{eq:iota-volume}
\begin{split}
(\iota_{B_3}\iota_{B_2}\iota_{B_1}\beta)\wedge\mu_1\wedge\mu_2\wedge\mu_3 & =
\beta(B_1,B_2,B_3)\,\mu_1\wedge\mu_2\wedge\mu_3\\
& =\beta\wedge\iota_{B_1}\mu_1\wedge\iota_{B_2}\mu_2\wedge\iota_{B_3}\mu_3\ .
\end{split}$$
The first equation follows from the definition; the second one is a consequence of the fact that $\iota_B$ is an antiderivation, i.e. $$\label{eq:iota-antiderivation}
\iota_B(\alpha\wedge\beta)=(\iota_B\alpha)\wedge\beta+(-1)^{|\alpha|}\alpha\wedge(\iota_B\beta),$$ and $\iota_{B_i}\mu_j=0$, for $i\neq j$ (see Appendix \[apx:A\]). As a result, $$\begin{aligned}
\notag \mathsf{H}_{123}(B_1,B_2,B_3) & = & \int_{\mathcal{T}} \bigl(\alpha_\omega\wedge d\alpha_\omega\bigr)\wedge \iota_{B_1}\mu_1\wedge \iota_{B_2}\mu_2\wedge \iota_{B_3}\mu_3\\
& = & \int_\mathcal{T} \bigl(\iota_{B_3}\iota_{B_2}\iota_{B_1}\bigl(\alpha_\omega\wedge d\alpha_\omega\bigr)\bigr)\,\mu_1\wedge\mu_2\wedge\mu_3\\
\label{eq:m-function} & = & \int_\mathcal{T} m_{B_1,B_2,B_3}(x,y,z)\,\mu_1\wedge \mu_2\wedge \mu_3,\end{aligned}$$ where $m_{B_1,B_2,B_3}:=(\alpha_\omega\wedge d\alpha_\omega)(B_1,B_2,B_3)$. For convenience, let us set (see ): $$\bar{\mathscr{O}}(x,y,z;T):=\bar{\mathscr{O}}^{B_1}_T(x)\times \bar{\mathscr{O}}^{B_2}_T(y)\times\bar{\mathscr{O}}^{B_3}_T(z)\ .$$ Observe that if the orbit $\bar{\mathscr{O}}(x,y,z;T)$ is nondegenerate, it represents a Borromean link and we may apply Formula to get $$\begin{aligned}
\bar{\mu}_{123}\bigl(\bar{\mathscr{O}}^{B_1}_T(x),\bar{\mathscr{O}}^{B_2}_T(y),\bar{\mathscr{O}}^{B_3}_T(z)\bigr)
& = & \int_{\bar{\mathscr{O}}(x,y,z;T)}
\alpha_\omega\wedge d\alpha_\omega\\
& = & \int_{\mathscr{O}(x,y,z;T)}
\alpha_\omega\wedge d\alpha_\omega+(I),\end{aligned}$$ where the term $(I)$ involves integrals over short paths in $\mathcal{S}$ (see Appendix \[apx:B\]). For degenerate orbits (such as fixed points etc.), the above formula still makes sense, because $\bar{\mu}_{123}$ is a homotopy invariant of the associated map, as proven in Section \[sec:mu-and-hopf\]. For every $(x,y,z)$: $$D\mathbf{\Phi}_{(x,y,z)}[\partial_i]=B_i,\quad i=1,2,3,$$ where $\partial_1:=\partial_s$, $\partial_2:=\partial_t$, $\partial_3:=\partial_u$; thus we obtain $$\begin{aligned}
\mathbf{\Phi}^\ast_{(x,y,z)}(\alpha_\omega\wedge d\alpha_\omega) & = & (\alpha_\omega\wedge d\alpha_\omega)(B_1,B_2,B_3)\bigl(\Phi^1(x,s),\Phi^2(y,t),\Phi^3(z,u)\bigr)\,ds\wedge dt\wedge du\\
& = & m_{B_1,B_2,B_3}(\Phi^1(x,s),\Phi^2(y,t),\Phi^3(z,u))\,ds\wedge dt\wedge du\ .\end{aligned}$$ Therefore, $$\int_{\mathscr{O}(x,y,z;T)}
\alpha_\omega\wedge d\alpha_\omega=\int^T_0\int^T_0\int^T_0 m_{B_1,B_2,B_3}(\Phi^1(x,s),\Phi^2(y,t),\Phi^3(z,u))\,ds\wedge dt\wedge du\ .$$ The function $m_{B_1,B_2,B_3}$ is smooth and bounded on $\mathcal{T}$, and hence $L^1$. Because short paths do not contribute to the time average (see Appendix \[apx:B\]): $$\lim_{T\to\infty} \frac{1}{T^3}(I)=0.$$ Theorem \[th:multi-ergodic\] applied to the function $m_{B_1,B_2,B_3}$ yields the almost everywhere existence of the limit . Hence we obtain the invariant $L^1$-function $\bar{m}_B:=\bar{m}_{B_1,B_2,B_3}$ on $\mathcal{T}$. The identity follows from $(iii)$ of Theorem \[th:multi-ergodic\].
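The antiderivation property of the interior product $\iota_B$, used in the proof above, can be verified numerically at a point. The sketch below represents $k$-forms on ${\mbox{\bbb R}}^3$ as antisymmetric tensors with the determinant convention $\alpha\wedge\beta=\tfrac{(k+l)!}{k!\,l!}\,\mathrm{Alt}(\alpha\otimes\beta)$; the helper names are ours, and the check uses random forms rather than the specific $\alpha_\omega$.

```python
import itertools
import math

import numpy as np

def perm_sign(p):
    # Sign of a permutation given as a tuple.
    p, sign = list(p), 1
    for i in range(len(p)):
        while p[i] != i:
            j = p[i]
            p[i], p[j] = p[j], p[i]
            sign = -sign
    return sign

def alt(t):
    # Antisymmetrization of a rank-k tensor on R^3.
    out = np.zeros_like(t, dtype=float)
    for p in itertools.permutations(range(t.ndim)):
        out += perm_sign(p) * np.transpose(t, p)
    return out / math.factorial(t.ndim)

def wedge(a, b):
    # Wedge product with the determinant convention.
    k, l = a.ndim, b.ndim
    coeff = math.factorial(k + l) / (math.factorial(k) * math.factorial(l))
    return coeff * alt(np.multiply.outer(a, b))

def iota(v, a):
    # Interior product: plug the vector v into the first slot of the form a.
    return np.tensordot(v, a, axes=(0, 0))

rng = np.random.default_rng(0)
B = rng.standard_normal(3)
a = rng.standard_normal(3)            # a 1-form
b = alt(rng.standard_normal((3, 3)))  # a 2-form
# iota_B(a ^ b) = (iota_B a) ^ b + (-1)^{|a|} a ^ (iota_B b), with |a| = 1:
lhs = iota(B, wedge(a, b))
rhs = wedge(iota(B, a), b) - wedge(a, iota(B, b))
assert np.allclose(lhs, rhs)
```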
\[th:invariance-th-ergodic\] On every unlinked domain $\mathcal{T}$ in $S^3$, (i) and (ii) of Theorem \[th:invariance-th\] hold for $\mathsf{H}_{123}(B;\mathcal{T})$.
The proof of $(i)$ immediately follows from independence of the limit of the choice of the potential $\alpha_\omega$. For the proof of $(ii)$ we must show the following $$\int_\mathcal{T} m_{B_1,B_2,B_3}\,\mu_1\wedge \mu_2\wedge \mu_3=\int_{g(\mathcal{T})} m_{g_\ast B_1,g_\ast B_2, g_\ast B_3}\,\mu_1\wedge \mu_2\wedge \mu_3\ .$$ Theorem \[th:multi-ergodic\] tells us that $m_{B_1,B_2,B_3}$ and $m_{g_\ast B_1,g_\ast B_2, g_\ast B_3}$ admit $L^1$-averages $\bar{m}_{B_1,B_2,B_3}$ and $\bar{m}_{g_\ast B_1,g_\ast B_2, g_\ast B_3}$, under actions of $B_i$ and $g_\ast B_i$ respectively. It suffices to show the following identity $$\label{eq:m=gm}
\bar{m}_{B_1,B_2,B_3}(x,y,z)=\bar{m}_{g_\ast B_1,g_\ast B_2, g_\ast B_3}(g(x),g(y),g(z)),\qquad \text{a.e.}$$ then is an immediate consequence of Equation , the change of variables for integrals, and the fact that $g$ preserves volume (i.e. $\mu_i=g^\ast\mu_i$). Borrowing notation from the previous theorem, set $$\begin{aligned}
g\bar{\mathscr{O}}(x,y,z;T) & := & g(\bar{\mathscr{O}}^{B_1}_T(x))\times g(\bar{\mathscr{O}}^{B_2}_T(y))\times g(\bar{\mathscr{O}}^{B_3}_T(z))\\
& = & \bar{\mathscr{O}}^{g_\ast B_1}_T(g(x))\times \bar{\mathscr{O}}^{g_\ast B_2}_T(g(y))\times \bar{\mathscr{O}}^{g_\ast B_3}_T(g(z)),\end{aligned}$$ where the second identity is a consequence of the fact that the flow of $g_\ast B_i$ is obtained as a composition of $g$ and the flow of $B_i$. Since $g$ is isotopic to the identity, $\bar{\mathscr{O}}(x,y,z;T)$ and $g\bar{\mathscr{O}}(x,y,z;T)$ are homotopic as link maps (for nondegenerate orbits they are in fact isotopic Borromean links in $S^3$) and by theorems of Section \[sec:mu-and-hopf\], we have $$\bar{\mu}_{123}(\bar{\mathscr{O}}(x,y,z;T))=\bar{\mu}_{123}(g\bar{\mathscr{O}}(x,y,z;T)).$$ As a result of the above identity and we derive a.e. $$\begin{aligned}
\bar{m}_{B_1,B_2,B_3}(x,y,z) & = & \lim_{T\to\infty}\frac{1}{T^3}\bar{\mu}_{123}(\bar{\mathscr{O}}(x,y,z;T))\\
& = & \lim_{T\to\infty}\frac{1}{T^3}\bar{\mu}_{123}(g\bar{\mathscr{O}}(x,y,z;T))=\bar{m}_{g_\ast B_1,g_\ast B_2, g_\ast B_3}(g(x),g(y),g(z)).\end{aligned}$$ Notice that in the last equation we used the “pushed forward” short paths system: $g\mathcal{S}$. Since the lengths of paths in $g\mathcal{S}$ are bounded as well, they do not contribute to the limit. This proves the identity , and consequently .
Notice that the above argument does not require Stokes' Theorem, and as such may lead to further generalizations. Clearly, for $\mathsf{H}_{123}(B;\mathcal{T})$ to be nontrivial, $\mathcal{T}$ must be of nonzero measure.
Flux formula for $\mathsf{H}_{123}$.
=======================
The following formula is a well-known property of the ordinary helicity $\mathsf{H}_{12}(B;\mathcal{T})$ of the flux tubes $\mathcal{T}$ modeled on a 2-component link $L=\{L_1,L_2\}$ (see e.g. [@Laurence-Avellaneda93; @Cantarella00])
$$\label{eq:helicity12-flux-tubes}
\mathsf{H}_{12}(B;\mathcal{T})=\mathsf{H}_{12}(B_1,B_2)=\text{lk}(L_1,L_2)\,\text{Flux}(B_1)\,\text{Flux}(B_2).$$
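As a numerical illustration of the linking-number factor (our own sketch, not part of the original text), the Gauss linking integral can be discretized with the trapezoidal rule; the Hopf-link parametrization below is an arbitrary choice.

```python
# Gauss linking integral for a Hopf link: lk(L1, L2) = ±1, the factor
# appearing in the flux-tube helicity formula above.
import math

def gauss_lk(c1, c2, n=200):
    """Trapezoidal discretization of
    lk = (1/4pi) oint oint (r1 - r2) . (r1' x r2') / |r1 - r2|^3 ds dt,
    where c1, c2 map [0, 2pi) to (point, tangent) pairs."""
    h = 2.0 * math.pi / n
    pts1 = [c1(i * h) for i in range(n)]
    pts2 = [c2(j * h) for j in range(n)]
    s = 0.0
    for p1, t1 in pts1:
        for p2, t2 in pts2:
            d = (p1[0] - p2[0], p1[1] - p2[1], p1[2] - p2[2])
            cx = (t1[1]*t2[2] - t1[2]*t2[1],
                  t1[2]*t2[0] - t1[0]*t2[2],
                  t1[0]*t2[1] - t1[1]*t2[0])
            r3 = (d[0]**2 + d[1]**2 + d[2]**2) ** 1.5
            s += (d[0]*cx[0] + d[1]*cx[1] + d[2]*cx[2]) / r3
    return s * h * h / (4.0 * math.pi)

# unit circle in the xy-plane, and a unit circle in the xz-plane
# passing through its center: a Hopf link
C1 = lambda t: ((math.cos(t), math.sin(t), 0.0),
                (-math.sin(t), math.cos(t), 0.0))
C2 = lambda t: ((1.0 + math.cos(t), 0.0, math.sin(t)),
                (-math.sin(t), 0.0, math.cos(t)))

lk = gauss_lk(C1, C2)   # |lk| should be 1 up to discretization error
```

Since the integrand is smooth and periodic in both parameters, the trapezoidal rule converges rapidly and $n=200$ already resolves the integer value.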
Here we show an analogous property for $\mathsf{H}_{123}(B;\mathcal{T})$, when $\mathcal{T}$ is an invariant unlinked handlebody. Recall that $\{L^k_i\}_{k=1,\ldots,g(\partial\mathcal{T}_i)}$ denotes the basis of $H_1(\mathcal{T})$ defined in Lemma \[lem:unlinked-handlebody\].
$\mathsf{H}_{123}(B_1,B_2,B_3)$ on invariant unlinked handlebodies $\mathcal{T}$ satisfies the following formula $$\label{eq:helicity-fluxes}
\mathsf{H}_{123}(B_1,B_2,B_3)=\sum_{i, j, k} \bar{\mu}_{123}(L^i_1, L^j_2, L^k_3)\,\text{Flux}_{\Sigma_i}(B_1)\,\text{Flux}_{\Sigma_j}(B_2)\,\text{Flux}_{\Sigma_k}(B_3),$$ where $\{L^i_1\otimes L^j_2\otimes L^k_3\}$ is a basis of $H_3(\mathcal{T})$, and $\text{Flux}_{\Sigma_k}(B_i)$ stands for the flux of $B_i$ through a cross-sectional surface $\Sigma_k$ of $\mathcal{T}_i$, which represents the homology Poincaré dual of $L^k_i$ in $H_2(\mathcal{T}_i,\partial\mathcal{T}_i)$.
Recall that the flux $\text{Flux}_{\Sigma_k}(B_i)$ of a vector field $B_i$ through a cross-sectional surface $\Sigma_k$ in $\mathcal{T}_i$ is given by: $$\label{eq:flux}
\begin{split}
\text{Flux}_{\Sigma_k}(B_i) & = \int_{\Sigma_k} \iota_{B_i}\mu=\int_{\mathcal{T}_i} h_k\wedge\iota_{B_i}\mu\\
& = \int_{\mathcal{T}_i} \iota_{B_i}h_k\wedge\mu =\int_{\mathcal{T}_i} h_k(B_i),
\end{split}$$ where the 1-forms $h_k$ represent cohomology Poincaré duals of $\Sigma_k$, and we applied in the third equation. For every closed curve $\gamma\subset \mathcal{T}$, $h_k$ satisfies $$\int_{\gamma} h_k=\#(\gamma,\Sigma_k)=\text{deg}(\gamma, L_k),$$ where $\text{deg}(\gamma, L_k)$ measures how many times $\gamma$ “wraps around” the cycle $L_k$.
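For a solid-torus handle, the degree above reduces to the winding number of the projected curve about the core. The following sketch (our own illustration; the example curve is an arbitrary choice) computes it by accumulating signed angle increments.

```python
# Winding number of a closed planar curve about the origin, computed by
# accumulating signed angle increments; for a solid-torus handle this
# realizes the degree deg(gamma, L_k) of a curve about the core cycle.
import math

def winding_number(points):
    total = 0.0
    n = len(points)
    for i in range(n):
        x0, y0 = points[i]
        x1, y1 = points[(i + 1) % n]
        da = math.atan2(y1, x1) - math.atan2(y0, x0)
        # unwrap the increment into (-pi, pi]
        if da <= -math.pi:
            da += 2.0 * math.pi
        elif da > math.pi:
            da -= 2.0 * math.pi
        total += da
    return round(total / (2.0 * math.pi))

# an example curve wrapping k = 3 times around the origin
k = 3
ts = [2.0 * math.pi * i / 1000 for i in range(1000)]
curve = [(math.cos(k * t), math.sin(k * t)) for t in ts]
w = winding_number(curve)   # 3
```

Reversing the orientation of the curve flips the sign of the degree, matching the signed intersection count $\#(\gamma,\Sigma_k)$.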
For simplicity, we first assume that $\mathcal{T}$ is modeled on a Borromean link $L=\{L_1,L_2,L_3\}$ (such as $\mathcal{T}^\text{Borr}$ on Figure \[fig:invariant-sets\]). Then, $h=h_1\wedge h_2\wedge h_3\in \Omega^3(\mathcal{T})$ is a cohomology class dual to the cycle $L_1\otimes L_2\otimes L_3$ in $H_3(\mathcal{T})$. Define $$H:=\iota_{B_3}\iota_{B_2}\iota_{B_1} h=h(B_1,B_2,B_3),$$ which is a smooth function, and let $\bar{H}$ be the time average of $H$ as in Theorem \[th:multi-ergodic\]. It suffices to show $$\label{eq:H-equation}
\bar{m}_{B_1,B_2,B_3}=\bar{\mu}_{123}(L)\,\bar{H},\qquad \text{a.e.}$$ Analogously, as in the proof of Theorem \[th:invariance-th\], Equation immediately implies Formula . Assume the notation of Theorem \[th:invariance-th\], for given $T$ consider $\bar{\mathscr{O}}(x,y,z;T)$. Thanks to Theorem \[th:milnor-hopf\] (see the first paragraph of the proof) we have $$\begin{aligned}
\bar{\mu}_{123}(\bar{\mathscr{O}}(x,y,z;T)) & = & \bar{\mu}_{123}(L)\,\bigl(\text{deg}(\bar{\mathscr{O}}^{B_1}_T(x),L_1)\,\text{deg}(\bar{\mathscr{O}}^{B_2}_T(y),L_2)\,\text{deg}(\bar{\mathscr{O}}^{B_3}_T(z),L_3)\bigr)\\
& = & \bar{\mu}_{123}(L)\int_{\bar{\mathscr{O}}(x,y,z;T)} h\ .
\end{aligned}$$ Therefore, $$\begin{aligned}
\bar{m}_{B_1,B_2, B_3}(x,y,z) & = & \lim_{T\to \infty} \frac{1}{T^3}\bar{\mu}_{123}(\bar{\mathscr{O}}(x,y,z;T))\\
& = & \bar{\mu}_{123}(L)\lim_{T\to \infty} \frac{1}{T^3}\int_{\bar{\mathscr{O}}(x,y,z;T)} H\\
& = &\bar{\mu}_{123}(L) \bar{H}(x,y,z),
\end{aligned}$$ where the last equality is again the consequence of short paths not contributing to the limit. From the product structure of $\mathcal{T}$ and $(iii)$ of Theorem \[th:multi-ergodic\] we get $$\int_\mathcal{T} \bar{H} =\int_\mathcal{T} H=\bigl(\int_{\mathcal{T}_1} h_1(B_1) \bigr)\bigl(\int_{\mathcal{T}_2} h_2(B_2) \bigr)\bigl(\int_{\mathcal{T}_3} h_3(B_3) \bigr),$$ which combined with Equation for fluxes concludes the proof in the case of Borromean flux tubes. The proof in the case of a general handlebody is analogous, once we show the following
Let $\mathcal{O}=\{\mathcal{O}_1,\mathcal{O}_2, \mathcal{O}_3\}$ be a 3-component link in $S^3$ such that $\mathcal{O}_i\subset \mathcal{T}_i$ for each $i$. Then $\mathcal{O}$ is Borromean and $$\label{eq:mu-in-T}
\bar{\mu}_{123}(\mathcal{O})=\sum_{i, j, k} \bar{\mu}_{123}(L^i_1, L^j_2, L^k_3)\,\text{deg}(\mathcal{O}_1, L^i_1)\,\text{deg}(\mathcal{O}_2, L^j_2)\,\text{deg}(\mathcal{O}_3, L^k_3)\ .$$
Thanks to the interpretation of the $\bar{\mu}_{123}$-invariant in Section \[sec:mu-and-hopf\], it is not only a link homotopy invariant, but also a homotopy invariant of the associated map $F_{\mathcal{O}}$ defined in . Observe that each component $\mathcal{O}_i$ can be homotoped inside its handlebody $\mathcal{T}_i$ to become a bouquet of circles $\widehat{\mathcal{O}}_i\cong S^1\vee S^1\vee\ldots\vee S^1$ so that each factor in $\widehat{\mathcal{O}}_i$ is a multiple of the cycle represented by $\{L^j_i\}$ in $H_1(\mathcal{T}_i)$. As a result, we obtain the associated map $F_{\widehat{\mathcal{O}}}$, and $$\bar{\mu}_{123}(\mathcal{O})=\frac{1}{2}\mathscr{H}(F_{\widehat{\mathcal{O}}}).$$ Interpreting $\mathscr{H}(F_{\widehat{\mathcal{O}}})$ as the intersection number and summing up intersection numbers, we conclude .
In the case of Borromean flux tubes, Formula reduces to $$\label{eq:helicity12-borro-flux-tubes}
\mathsf{H}_{123}(B_1,B_2,B_3)=\bar{\mu}_{123}(L)\,\text{Flux}_{\Sigma_1}(B_1)
\text{Flux}_{\Sigma_2}(B_2)\text{Flux}_{\Sigma_3}(B_3),$$ where $\Sigma_i$ denotes the homology Poincaré dual of $L_i$ in $H_2(\mathcal{T}_i,\partial\mathcal{T}_i)$.
Since the fluxes are invariant under frozen-in-field deformations, Formula is yet another proof of Theorem \[th:invariance-th\] in the setting of invariant unlinked handlebodies. In [@Laurence-Stredulinsky00b] the authors develop the same formula for the Borromean flux tubes. This clearly must be the case, as we work with the same topological invariants of links via a different approach. An additional advantage of our formulation is that we do not have to deal separately with null points of vector fields as in [@Laurence-Stredulinsky00b].
Energy bound. {#sec:energy}
=============
In this section we indicate how the quantity $\mathsf{H}_{123}(B;\mathcal{T})$, invariant under frozen-in-field deformations, provides a lower bound for the $L^2$-energy $E_2(B)$ of a volume preserving field $B$ on $M=S^3$. We restrict our considerations to the case of an invariant unlinked handlebody $\mathcal{T}$, defined in Section \[sec:handlebody\]. For the notation used in this section see Appendix \[apx:C\].
Recall the definition $$\label{eq:energy}
E_2(B)=\int_M |B|^2 = \|B\|^2_{L^2(M)}\ .$$ The ordinary helicity $\mathsf{H}_{12}(B)$ provides a well-known lower bound (see [@Khesin98 p. 123]): $$\frac{1}{\lambda_1}|\mathsf{H}_{12}(B)|\leq E_2(B),$$ where $\lambda_1$ is the first eigenvalue of the elliptic self-adjoint operator $\ast\,d:\Omega^1(M)\mapsto \Omega^1(M)$ (known as the curl operator), and $\ast$ denotes the Hodge star. The importance of such lower energy bounds stems from ideal magnetohydrodynamics, [@Priest84], as they constrain the phenomenon of “magnetic relaxation”, [@Freedman99]. A need for higher helicities can be justified by the fact that one may easily produce examples of vector fields $B$ for which $\mathsf{H}_{12}(B)$ vanishes, but the energy of the field $B$ still cannot be relaxed. For example, consider a classical case of Borromean flux tubes $\mathcal{T}^\text{Borr}$ with $B$ smooth and vanishing outside the tubes. Furthermore, assume that orbits of $B$ are just “parallel circles” inside each tube. By taking $B=B_1+B_2+B_3$, bilinearity of $\mathsf{H}_{12}(\,\cdot\,,\,\cdot\,)$ on $\text{\rm SVect}(S^3)$ and the disjoint supports of $B_i$ yield $$\mathsf{H}_{12}(B)=\mathsf{H}_{12}(B,B)=\sum^3_{i=1}\mathsf{H}_{12}(B_i)+\sum_{i\neq j} \mathsf{H}_{12}(B_i,B_j)=0,$$ where the “cross-helicities” $\mathsf{H}_{12}(B_i,B_j)$, $i\neq j$, vanish by Formula , and the self-helicities $\mathsf{H}_{12}(B_i)$ vanish because the average linking number of orbits is zero (orbits are just parallel circles). Nevertheless, Formula tells us $$\mathsf{H}_{123}(B;\mathcal{T}^\text{Borr})\neq 0,$$ and as a result we may regard $\mathsf{H}_{123}(B;\mathcal{T})$ as a possible “higher obstruction” to the energy relaxation, or the third order cross-helicity of $B$ on $\mathcal{T}$.
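The vanishing of the cross-helicities reflects the fact that the components of a Borromean configuration are pairwise unlinked. As a standalone numerical check (our own illustration, using the standard three-ellipse model of the Borromean rings), the Gauss linking integral of two of the ellipses is numerically zero.

```python
# Pairwise Gauss linking number for two ellipses of the standard
# three-ellipse model of the Borromean rings: it vanishes, so the
# "cross-helicity" factor lk(L_i, L_j) is zero.
import math

def gauss_lk(c1, c2, n=200):
    h = 2.0 * math.pi / n
    pts1 = [c1(i * h) for i in range(n)]
    pts2 = [c2(j * h) for j in range(n)]
    s = 0.0
    for p1, t1 in pts1:
        for p2, t2 in pts2:
            d = (p1[0] - p2[0], p1[1] - p2[1], p1[2] - p2[2])
            cx = (t1[1]*t2[2] - t1[2]*t2[1],
                  t1[2]*t2[0] - t1[0]*t2[2],
                  t1[0]*t2[1] - t1[1]*t2[0])
            r3 = (d[0]**2 + d[1]**2 + d[2]**2) ** 1.5
            s += (d[0]*cx[0] + d[1]*cx[1] + d[2]*cx[2]) / r3
    return s * h * h / (4.0 * math.pi)

# ellipses x^2 + (y/2)^2 = 1 in the xy-plane and y^2 + (z/2)^2 = 1 in
# the yz-plane: two components of the Borromean rings, pairwise unlinked
E1 = lambda t: ((math.cos(t), 2.0 * math.sin(t), 0.0),
                (-math.sin(t), 2.0 * math.cos(t), 0.0))
E2 = lambda t: ((0.0, math.cos(t), 2.0 * math.sin(t)),
                (0.0, -math.sin(t), 2.0 * math.cos(t)))

lk12 = gauss_lk(E1, E2)   # numerically ~ 0
```

The three mutually linked ellipses nevertheless carry $\bar{\mu}_{123}\neq 0$, which is exactly the information $\mathsf{H}_{123}$ is designed to detect.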
To obtain a lower bound for $E_2(B)$ in such situations we notice that $\mathsf{H}_{123}(B;\mathcal{T})$ is the $L^2$-inner product of the $6$-forms: $\ast(\omega\wedge\alpha_\omega)$ and $\iota_{B_1}\mu_1\wedge \iota_{B_2}\mu_2\wedge \iota_{B_3}\mu_3$. Indeed, from we have $$\mathsf{H}_{123}(B;\mathcal{T})=\bigl(\ast(\alpha_\omega\wedge \omega),\iota_{B_1}\mu_1\wedge \iota_{B_2}\mu_2\wedge \iota_{B_3}\mu_3\bigr)_{L^2\Omega^6((S^3)^3)}\ .$$ Let $C_\mathcal{T}=\|\alpha_\omega\wedge \omega\|_{L^2}$, for a fixed Riemannian product metric on $(S^3)^3$ this constant depends on the domain $\mathcal{T}$ in $(S^3)^3$. We estimate using the Cauchy-Schwarz inequality $$\label{eq:energy-est}
\begin{split}
|\mathsf{H}_{123}(B;\mathcal{T})| & \leq \|\alpha_\omega\wedge \omega\|_{L^2}\|\iota_{B_1}\mu_1\wedge \iota_{B_2}\mu_2\wedge \iota_{B_3}\mu_3\|_{L^2}\\
& = C_\mathcal{T}\Bigl(\int_{\mathcal{T}}(\iota_{B_1}\mu_1\wedge \iota_{B_2}\mu_2\wedge \iota_{B_3}\mu_3)\wedge \ast(\iota_{B_1}\mu_1\wedge \iota_{B_2}\mu_2\wedge \iota_{B_3}\mu_3)\Bigr)^\frac{1}{2}\\
& \stackrel{(1)}{=} C_\mathcal{T}\Bigl(\int_{\mathcal{T}}|B_1|^2|B_2|^2|B_3|^2\,\mu\Bigr)^\frac{1}{2}\\
& \stackrel{(2)}{=} C_\mathcal{T}\Bigl(\int_{\mathcal{T}_1}|B_1|^2\Bigr)^\frac{1}{2}\Bigl(\int_{\mathcal{T}_2}|B_2|^2\Bigr)^\frac{1}{2}\Bigl(\int_{\mathcal{T}_3}|B_3|^2\Bigr)^\frac{1}{2}\\
& \leq C_\mathcal{T} E_2(B)^\frac{3}{2},
\end{split}$$ where $E_2(B)=\int_{S^3} |B|^2$. To observe (1) first note that for any pair of forms: $\alpha\in \Omega^k(M)$, and $\beta\in \Omega^j(N)$, on Riemannian manifolds $M$ and $N$, on the product $M\times N$ we have $$\label{eq:hodge-product}
\ast_{M\times N}(\pi^\ast_M(\alpha)\wedge \pi^\ast_N(\beta))=(\ast_M \pi^\ast_M \alpha)\wedge (\ast_N \pi^\ast_N \beta),$$ where $\pi_M:M\times N\longrightarrow M$ and $\pi_N:M\times N\longrightarrow N$ are the natural projections, (the proof is a simple calculation in an orthogonal frame of the product and is left to the reader). Now, step (1) in follows by applying to the integrand, and observing in the coframe $\{\eta^i_k\}$: $$\begin{aligned}
\iota_{B_i}\mu_i\wedge \ast \iota_{B_i}\mu_i & = & (a_1\eta^i_2\wedge\eta^i_3-a_2\eta^i_1\wedge\eta^i_3+a_3\eta^i_1\wedge\eta^i_2)\wedge (a_1\eta^i_1+a_2\eta^i_2+a_3\eta^i_3)\\
& = & (a^2_1+a^2_2+a^2_3)\, \eta^i_1\wedge\eta^i_2\wedge\eta^i_3 = |B_i |^2 \eta^i_1\wedge\eta^i_2\wedge\eta^i_3=|B_i |^2 \mu_i,\end{aligned}$$ where $B_i=(a_1,a_2,a_3)$. Step (2) in follows from Fubini's Theorem.
Next, we aim to provide an estimate for $C_\mathcal{T}$. For this purpose we review some basic $L^2$-theory of the operator $d^{-1}$ (i.e. inverse of the exterior derivative $d$). The main goal is to estimate an $L^2$-norm of the potential $\alpha_\omega$ of $\omega$ in . Following the standard elliptic theory of differential forms, [@Schwarz95], the potential $\alpha_\omega$ in can be obtained via a solution to the Neumann problem for $2$-forms on $\widetilde{\mathcal{T}}$ (see Appendix \[apx:C\]) $$\label{eq:neumann-problem}
\begin{cases}
\Delta \phi_N=\omega,\qquad \text{in } \widetilde{\mathcal{T}}\\
\mathbf{n}\,\phi_N=\mathbf{n}\,d\phi_N=0,\qquad \text{on } \partial \widetilde{\mathcal{T}},
\end{cases}$$ where $\mathbf{n}$ stands for the normal component of a differential form, and $\Delta=d\delta+\delta d$, $\delta=\pm\ast d\ast$ (cf. [@Schwarz95]). As $\mathcal{T}$ is a domain with corners, we replace it by a slightly larger domain $\widetilde{\mathcal{T}}$ in $(S^3)^3$ with the same topology (i.e. $\mathcal{T}$ is a deformation retract of $\widetilde{\mathcal{T}}$) but with smooth boundary $\partial\widetilde{\mathcal{T}}$. (One may argue that this is not really necessary, since $\mathcal{T}$ is Lipschitz and elliptic problems such as are well-posed on Lipschitz domains, [@Mitrea-Mitrea-Taylor01].) Because of $(i)$ in Theorem \[th:invariance-th\], we may use the restriction of $$\label{eq:potential-neumann}
\alpha_\omega=\delta\phi_N,$$ to $\mathcal{T}$ (see Appendix \[apx:C\] for justification of ). Associated to is the Neumann Laplacian $$\Delta_N:H^2\Omega^2_N(\widetilde{\mathcal{T}})\longrightarrow L^2\Omega^2(\widetilde{\mathcal{T}}),$$ which has a discrete positive spectrum $\{\lambda_{i,N}\}$ whose eigenvalues satisfy the variational principle known as the *Rayleigh–Ritz quotient*, [@Chavel84]. The first (principal) eigenvalue $\lambda_{1,N}$ may be expressed as $$\lambda_{1,N}=\inf\left( \frac{\|d\varphi\|^2_{L^2}+\|\delta\varphi\|^2_{L^2}}{\|\varphi\|^2_{L^2}}\ \Bigl|\ \varphi\in H^1\Omega^2_N(M)\cap \mathcal{H}^2_N(M)^\perp\right)\ .$$ We denote the inverse of $\Delta_N$ by $$G_N:L^2\Omega^2(\widetilde{\mathcal{T}})\longrightarrow H^2\Omega^2_N(\widetilde{\mathcal{T}}),$$ which restricts to a compact, self-adjoint operator on $L^2$. As a result, the spectrum of $G_N$ is discrete and given as $\{1/\lambda_{i,N}\}$. Note that based on these considerations we may define $d^{-1}:=\delta G_N$.
For every volume preserving vector field $B$ which has an invariant unlinked domain $\mathcal{T}$, the $L^2$-energy of $B$ on $S^3$ is bounded below by the third order helicity $\mathsf{H}_{123}(B;\mathcal{T})$, as follows $$\label{eq:H123-estimate}
|\mathsf{H}_{123}(B;\mathcal{T})|\leq C_\mathcal{T} (E_2(B))^\frac{3}{2}.$$ Also, we may estimate the constant $C_\mathcal{T}$: $$\label{eq:C_T-estimate}
C_\mathcal{T}\leq \frac{1}{\sqrt{\lambda_{1,N}}}\|\omega\|^2_{L^\infty\Omega^2(\widetilde{\mathcal{T}})}(\text{Vol}(S^3))^3,$$ where $\lambda_{1,N}$ is the first eigenvalue of the Neumann Laplacian on $\Omega^2(\widetilde{\mathcal{T}})$.
We estimate $$\begin{aligned}
\|\alpha_\omega\|^2_{L^2} = \|\delta \phi_N\|^2_{L^2} & \leq & \|d\phi_N\|^2_{L^2}+\|\delta\phi_N\|^2_{L^2}\\
& = & |(\Delta \phi_N,\phi_N)| \leq \|\omega\|_{L^2}\|\phi_N\|_{L^2},\end{aligned}$$ where we used Green's formula [@Schwarz95 p. 60] and the boundary conditions of in the second identity. Now, because $\phi_N=G_N\omega$, and it is a well-known fact that $\|G_N\|_{L^2}=\frac{1}{\lambda_{1,N}}$ ($G_N$ is compact and self-adjoint on $L^2$), we obtain: $$\|\alpha_\omega\|_{L^2}\leq \frac{1}{\sqrt{\lambda_{1, N}}} \|\omega\|_{L^2}\ .$$ As a result we estimate $C_\mathcal{T}$: $$\begin{aligned}
C_\mathcal{T}=\|\alpha_\omega\wedge\omega\|_{L^2} & \leq & \|\omega\|_{L^\infty}\|\alpha_\omega\|_{L^2}\\
& \leq & \frac{1}{\sqrt{\lambda_{1, N}}} \|\omega\|_{L^2}\|\omega\|_{L^\infty}.\end{aligned}$$
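The estimates above rely only on spectral facts that already hold in finite dimensions; the following sketch (our own, with an arbitrary diagonal matrix standing in for $\Delta_N$) checks the finite-dimensional analogues of $(\Delta\phi_N,\phi_N)\leq\|\omega\|\,\|\phi_N\|$ and $\|G_N\|_{L^2}=1/\lambda_{1,N}$.

```python
# Finite-dimensional analogue of the chain
#   ||alpha||^2 <= (Delta phi, phi) <= ||omega|| ||phi||,  ||G|| = 1/lambda_1:
# an SPD matrix A plays Delta_N, its inverse plays G_N.
A = [2.0, 3.0, 7.0]                  # eigenvalues of a diagonal SPD "Laplacian"
lam1 = min(A)                        # the "first eigenvalue"
omega = [1.0, -2.0, 0.5]             # arbitrary right-hand side
phi = [w / a for w, a in zip(omega, A)]          # phi = G omega = A^{-1} omega

dot = lambda u, v: sum(x * y for x, y in zip(u, v))
norm = lambda u: dot(u, u) ** 0.5

energy = dot([a * p for a, p in zip(A, phi)], phi)   # (A phi, phi)
```

Here `energy` plays the role of $\|d\phi_N\|^2+\|\delta\phi_N\|^2$, and the bound $\sqrt{\texttt{energy}}\leq\|\omega\|/\sqrt{\lambda_1}$ mirrors the estimate for $\|\alpha_\omega\|_{L^2}$.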
Notably, the best energy estimate so far has been obtained by Freedman and He, [@Freedman91-2], for the $L^{3/2}$-energy of $B$, in the case when $B$ admits an invariant domain $\mathcal{T}$ modeled on an $n$-component link $L=\{L_1,\ldots,L_n\}$ in ${\mbox{\bbb R}}^3$. Their estimate is based on the asymptotic crossing number and reads $$\label{eq:energy-free-he}
E_{3/2}(B)\geq \Bigl(\frac{\pi}{16}\Bigr)^{1/4}\left(\sum^n_{k=1} \text{ac}(L_k,L)|\text{Flux}(B_k)|\right)\cdot \min_{1\leq k\leq n}\{|\text{Flux}(B_k)|\},$$ where the asymptotic crossing numbers $\text{ac}(L_k,L)$ for Borromean links can be estimated from below by the smallest genus among surfaces in ${\mbox{\bbb R}}^3\setminus \{L_1\cup\ldots\cup\hat{L}_k\cup\ldots\cup L_n\}$ with a single boundary component $L_k$. Since the $L^{3/2}$-energy of $B$ bounds the $L^2$-energy, inequality leads to a lower estimate purely in terms of fluxes and topological data. It is not clear to the author if this approach can be extended to the case of invariant handlebodies considered in Section \[sec:handlebody\]. A different, more optimal estimate has been obtained by Laurence and Stredulinsky, via the Massey product formula, in [@Laurence-Stredulinsky00a], but the proof is provided only in a special case of the vector field $B$.
Contrary to these lower bounds, which are given in terms of topological data, the estimate in depends on the geometry of the domain $\mathcal{T}$, and also on $\|\omega\|_{L^\infty}$. Unfortunately, $\omega$ blows up on the diagonals $\mathbf{\Delta}\subset (S^3)^3$, and as a result the estimate is meaningless when the handlebodies $\mathcal{T}_i$ get close to each other during the evolution of the magnetic field $B$. At this point, we need an assumption that the $\mathcal{T}_i$ stay a fixed distance apart during the evolution. Another drawback is that $\lambda_{1,N}$ is a geometric constant which is altered during the evolution as well. If we consider the situation in which the boundaries of $\mathcal{T}_i$ are invariant during the evolution, the estimate may be useful. Under such an assumption, which occurs whenever the velocity field $v$ of plasma in is tangent to $\partial\mathcal{T}_i$, the bound in stays constant.
Comparison to the known approaches via Massey products. {#sec:massey}
=======================================================
In several prior works [@Berger90; @Berger-Evans92; @Laurence-Stredulinsky00b; @Mayer03] helicities were developed via the Massey product formula for $\bar{\mu}_{123}$. These approaches are equivalent to the one presented here in the sense that invariants obtained this way measure the same topological information. Most notably the work [@Laurence-Stredulinsky00b] provides an explicit expression for the third order helicity of the Borromean flux tubes, where the ergodic interpretation in the style of Arnold’s asymptotic linking number is also provided. In [@Mayer03] one finds the following formula for the third order helicity $$\label{eq:massey-3A}
\mathsf{M}_{123}(B_1,B_2,B_3)=\int_M A_1\wedge A_2\wedge A_3,$$ where $A_i=d^{-1}(\iota_{B_i}\mu)$. This formula is valid for three distinct vector fields $B_i$ on a closed manifold $M$. For invariant domains with boundary, defines an invariant provided $A_i\bigl|_{\partial M}=0$, but this only happens in certain situations (e.g. $M$ is simply connected, and the $A_i$'s are appropriately chosen).
The most commonly known formula directly related to the Massey products was developed by Berger [@Berger90], in the case of Borromean flux tubes $\mathcal{T}=\mathcal{T}_1\cup \mathcal{T}_2\cup \mathcal{T}_3$ $$\label{eq:massey123}
\mathsf{M}_{123}(B_1,B_2,B_3)=\int_{\partial \mathcal{T}_1} A_1\wedge F_{23}+F_{12}\wedge A_3.$$ In [@Berger90] it is expressed as a volume integral over $\mathcal{T}$ by applying *gauge fixing*. When the $\mathcal{T}_i$ are topologically solid tori, there exists a single Massey product $\langle a_1,a_2,a_3\rangle$ in the complement $S^3\setminus \mathcal{T}$, represented by the 2-form $A_1\wedge F_{23}+F_{12}\wedge A_3$. When the $\mathcal{T}_i$ are handlebodies there are multiple Massey products, but the formula should still be valid. So far, such extensions have not been considered in the literature, and the volume integrals over $\mathcal{T}$ may be harder to obtain in such a case. One may also point out that ergodic interpretations of Massey products are more involved [@Laurence-Stredulinsky00b] compared to the approach presented in Section \[sec:ergodic\].
[10]{}
P. Akhmetiev. On a new integral formula for an invariant of 3-component oriented links. , 53(2):180–196, 2005.
V. Arnold. The asymptotic [H]{}opf invariant and its applications. , 5(4):327–345, 1986. Selected translations.
V. Arnold and B. Khesin. , volume 125 of [ *Applied Mathematical Sciences*]{}. Springer-Verlag, New York, 1998.
M. E. Becker. Multiparameter groups of measure-preserving transformations: a simple proof of [W]{}iener’s ergodic theorem. , 9(3):504–509, 1981.
M. Berger. Third-order link integrals. , 23(13):2787–2793, 1990.
R. Bott and L. Tu. , volume 82 of [ *Graduate Texts in Mathematics*]{}. Springer-Verlag, New York, 1982.
J. Cantarella. A general mutual helicity formula. , 456(2003):2771–2779, 2000.
J. Cantarella, D. DeTurck, and H. Gluck. The [B]{}iot-[S]{}avart operator for application to knot theory, fluid dynamics, and plasma physics. , 42(2):876–905, 2001.
J. Cantarella, D. DeTurck, H. Gluck, and M. Teytel. Isoperimetric problems for the helicity of vector fields and the [B]{}iot-[S]{}avart and curl operators. , 41(8):5615–5641, 2000.
J. Cantarella and J. Parsley. A new cohomological formula for helicity in ${R}^{2k+1}$ reveals the effect of a diffeomorphism on helicity. , 2009.
I. Chavel. , volume 115 of [*Pure and Applied Mathematics*]{}. Academic Press Inc., Orlando, FL, 1984. Including a chapter by Burton Randol, With an appendix by Jozef Dodziuk.
D. DeTurck, H. Gluck, R. Komendarczyk, P. Melvin, C. Shonkwiler, and D. Vela-Vick. Triple linking numbers, [H]{}opf invariants and [I]{}ntegral formulas for three-component links. , 2009.
N. W. Evans and M. A. Berger. A hierarchy of linking integrals. In [*Topological aspects of the dynamics of fluids and plasmas (Santa Barbara, CA, 1991)*]{}, volume 218 of [*NATO Adv. Sci. Inst. Ser. E Appl. Sci.*]{}, pages 237–248. Kluwer Acad. Publ., Dordrecht, 1992.
M. Freedman. Zeldovich’s neutron star and the prediction of magnetic froth. In [*The Arnoldfest (Toronto, ON, 1997)*]{}, volume 24 of [*Fields Inst. Commun.*]{}, pages 165–172. Amer. Math. Soc., Providence, RI, 1999.
M. Freedman and Z. He. Divergence-free fields: energy and asymptotic crossing number. , 134(1):189–229, 1991.
J.-M. Gambaudo and [É]{}. Ghys. Enlacements asymptotiques. , 36(6):1355–1379, 1997.
A. Hatcher. . Cambridge University Press, Cambridge, 2002.
B. Khesin. Ergodic interpretation of integral hydrodynamic invariants. , 9(1):101–110, 1992.
B. Khesin. Topological fluid dynamics. , 52(1):9–19, 2005.
T. Kohno. Loop spaces of configuration spaces and finite type invariants. In [*Invariants of knots and 3-manifolds (Kyoto, 2001)*]{}, volume 4 of [*Geom. Topol. Monogr.*]{}, pages 143–160 (electronic). Geom. Topol. Publ., Coventry, 2002.
U. Koschorke. A generalization of [M]{}ilnor’s [$\mu$]{}-invariants to higher-dimensional link maps. , 36(2):301–324, 1997.
U. Koschorke. Link homotopy in [$S\sp n\times \mathbb{R}\sp {m-n}$]{} and higher order [$\mu$]{}-invariants. , 13(7):917–938, 2004.
D. Kotschick and T. Vogel. Linking numbers of measured foliations. , 23(2):541–558, 2003.
P. Laurence and M. Avellaneda. A [M]{}offatt-[A]{}rnold formula for the mutual helicity of linked flux tubes. , 69(1-4):243–256, 1993.
P. Laurence and E. Stredulinsky. Asymptotic [M]{}assey products, induced currents and [B]{}orromean torus links. , 41(5):3170–3191, 2000.
P. Laurence and E. Stredulinsky. A lower bound for the energy of magnetic fields supported in linked tori. , 331(3):201–206, 2000.
C. Mayer. Topological link invariants of magnetic fields. , 2003.
J. Milnor. Link groups. , 59:177–195, 1954.
J. Milnor. Isotopy of links. In R. Fox, editor, [*Algebraic Geometry and Topology*]{}, pages 280–306. Princeton University Press, 1957.
J. Milnor. , chapter 7. Princeton Landmarks in Mathematics and Physics. Princeton University Press, Princeton, NJ, 1997. Revised reprint of 1965 original.
D. Mitrea, M. Mitrea, and M. Taylor. Layer potentials, the [H]{}odge [L]{}aplacian, and global boundary problems in nonsmooth [R]{}iemannian manifolds. , 150(713):x+120, 2001.
E. Priest. . D. Reidel Publishing Company, 1984.
T. Rivi[è]{}re. High-dimensional helicities and rigidity of linked foliations. , 6(3):505–533, 2002.
G. Schwarz. , volume 1607 of [*Lecture Notes in Mathematics*]{}. Springer-Verlag, Berlin, 1995.
M. Spera. A survey on the differential and symplectic geometry of linking numbers. , 74:139–197, 2006.
H. v. Bodecker and G. Hornig. Link invariants of electromagnetic fields. , 92(3):030406, 4, 2004.
A. Verjovsky and R. F. Vila Freyer. The [J]{}ones-[W]{}itten invariant for flows on a [$3$]{}-dimensional manifold. , 163(1):73–88, 1994.
T. Vogel. On the asymptotic linking number. , 131(7):2289–2297 (electronic), 2003.
L. Woltjer. A theorem on force-free magnetic fields. , 44:489–491, 1958.
Appendices {#appendices .unnumbered}
==========
Equations for the *frozen-in-field* forms {#apx:A}
=========================================
Given a volume preserving vector field $B$ on $M$ and a path $t\longrightarrow g(t)\in \text{\rm SDiff}(M)$, let $$B^t:=g_\ast(t)B\ .$$ Then for $B^0=B$, by definition $$\label{eq:euler2}
\dot{B^t}=\frac{d}{dt} B^t\bigl|_{t=0}=-{{\cal L}}_V B=-[V,B],$$ and as a result $$\frac{d}{dt} \iota_{B^t}\mu|_{t=0} = -\iota_{[V,B]} \mu$$ Next, calculate ($\partial_t+{{\cal L}}_V\equiv {{\cal L}}_{\partial_t+V}$) $$\begin{aligned}
(\partial_t+{{\cal L}}_V)\iota_{B^t}\mu & = & \partial_t(\iota_{B^t}\mu)+{{\cal L}}_V\iota_{B^t}\mu\\
& = & \iota_{\dot{B^t}}\mu+\iota_{B^t}({{\cal L}}_V\mu)+\iota_{[V,B^t]}\mu\\
& = & \iota_{\dot{B^t}+[V,B^t]}\mu,\end{aligned}$$ where in the second identity we applied the general formula: $\iota_{[A,B]}={{\cal L}}_A\iota_B-\iota_B{{\cal L}}_A$, and in the third equation the fact that $V$ is volume preserving i.e. ${{\cal L}}_V\mu=0$. As a result of we obtain $$(\partial_t+{{\cal L}}_V)\iota_{B^t}\mu=0\ .$$
Next, we justify Formula . First use to calculate $$\label{eq:iota-vol1}
\begin{split}
\iota_{B_i} (\mu_j\wedge\mu_k) & = (\iota_{B_i} \mu_j)\wedge\mu_k-\mu_j\wedge(\iota_{B_i}\mu_k)\\
\iota_{B_i} (\mu_1\wedge\mu_2\wedge\mu_3) & = (\iota_{B_i} \mu_1)\wedge\mu_2\wedge\mu_3-\mu_1\wedge \iota_{B_i}(\mu_2\wedge\mu_3)\\
& =(\iota_{B_i} \mu_1)\wedge\mu_2\wedge\mu_3-\mu_1\wedge \iota_{B_i}\mu_2\wedge\mu_3+\mu_1\wedge \mu_2\wedge\iota_{B_i}\mu_3,
\end{split}$$ since $\iota_{B_i}\mu_j=0$ for $i\neq j$ only one term in the above expressions remains for each $i$. Set $\alpha:=\iota_{B_2}\iota_{B_1}\beta$, $\beta\in \Omega^3((S^3)^3)$, since $\alpha\wedge \mu_1\wedge\mu_2\wedge\mu_3=0$ and $\alpha$ is a 1-form we obtain $$\begin{split}
0 & =\iota_{B_3}(\alpha\wedge \mu_1\wedge\mu_2\wedge\mu_3)\\
& =(\iota_{B_3}\alpha)\wedge\mu_1\wedge\mu_2\wedge\mu_3-\alpha\wedge \mu_1\wedge\mu_2\wedge\iota_{B_3}\mu_3,
\end{split}$$ where in the last step we used . Therefore $$(\iota_{B_3}\iota_{B_2}\iota_{B_1}\beta)\wedge\mu_1\wedge\mu_2\wedge\mu_3=(\iota_{B_2}\iota_{B_1}\beta)\wedge \mu_1\wedge\mu_2\wedge\iota_{B_3}\mu_3\ .$$ Analogously, $(\iota_{B_2}\iota_{B_1}\beta)\wedge \mu_1\wedge\mu_2=(\iota_{B_1}\beta)\wedge \mu_1\wedge \iota_{B_2}\mu_2$ and $\iota_{B_1}\beta\wedge \mu_1=\beta\wedge \iota_{B_1}\mu_1$ which justifies Equation .
Zero contribution of short paths to the time average {#apx:B}
====================================================
It is clear that when $\beta\in \Omega^3(\widetilde{\mathcal{T}})$ is at least $C^1$ on $\widetilde{\mathcal{T}}\subset S^3\times S^3\times S^3$, then $f=\beta(B_1,B_2,B_3)$ is continuous on $\widetilde{\mathcal{T}}$ and $$\begin{gathered}
\int_{\bar{\mathscr{O}}^1_T(x)}\int_{\bar{\mathscr{O}}^2_T(y)}\int_{\bar{\mathscr{O}}^3_T(z)}
f=\\
\bigl(\int_{\mathscr{O}^1_T(x)}+\int_{\sigma(\Phi^1(x,T),x)}\bigr)
\bigl(\int_{\mathscr{O}^2_T(y)}+\int_{\sigma(\Phi^2(y,T),y)}\bigr)
\bigl(\int_{\mathscr{O}^3_T(z)}+\int_{\sigma(\Phi^3(z,T),z)}\bigr)f\ .\end{gathered}$$ After expanding, it is obvious that we must show the following (for all choices), when $T\to \infty$: $$\begin{aligned}
\label{eq:lim-sp1}
\frac{1}{T^3}\int_{\sigma(\Phi^1(x,T),x)}\int_{\mathscr{O}^2_{T}(y)}\int_{\mathscr{O}^3_{T}(z)} f & \longrightarrow & 0,\\
\label{eq:lim-sp2}\frac{1}{T^3}\int_{\sigma(\Phi^1(x,T),x)}\int_{\sigma(\Phi^2(y,T),y)}\int_{\mathscr{O}^3_{T}(z)} f & \longrightarrow & 0,\\
\label{eq:lim-sp3}\frac{1}{T^3}\int_{\sigma(\Phi^1(x,T),x)}\int_{\sigma(\Phi^2(y,T),y)}\int_{\sigma(\Phi^3(z,T),z)} f & \longrightarrow & 0.\end{aligned}$$ Since the lengths of the short paths in $\mathcal{S}$ are bounded by a common constant $d$, - follow immediately, e.g. for we have
$$\begin{gathered}
\bigl|\frac{1}{T^3}\int_{\sigma(\Phi^1(x,T),x)}\int_{\mathscr{O}^2_{T}(y)}\int_{\mathscr{O}^3_{T}(z)} f\bigr|\leq \frac{1}{T^3} d(T+d)(T+d)\|f\|_{\infty} \longrightarrow 0.\end{gathered}$$
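The displayed bound decays like $1/T$; as a trivial numerical check (our own, with arbitrary constants for $d$ and $\|f\|_\infty$):

```python
# Decay of the short-path error term:  d (T + d)^2 ||f||_inf / T^3  ->  0,
# confirming that bounded-length connecting paths drop out of the average.
d, f_sup = 5.0, 2.0                  # arbitrary path-length bound and sup-norm
bound = lambda T: d * (T + d) ** 2 * f_sup / T ** 3
values = [bound(10.0 ** k) for k in range(1, 7)]   # T = 10, 100, ..., 10^6
```

The other two mixed terms in the expansion decay even faster, like $1/T^2$ and $1/T^3$.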
Notation in Section \[sec:energy\] {#apx:C}
==================================
We adopt notation from the elegant exposition in [@Schwarz95]. Let $M$ be an orientable manifold with smooth boundary.
- $\Omega^k(M)=C^\infty(M,\Lambda^k)$, smooth differential forms on $M$.
- $\Omega^k_N(M)=\{\phi\in \Omega^k(M)\ |\ \mathbf{n}\phi=0,\, \mathbf{n}d\phi=0\}$ the subspace satisfying the Neumann boundary conditions, ($\mathbf{n}$ denotes a normal component of a form along $\partial M$).
The $L^2$-inner product on $\Omega^k(M)$ is defined as $$\bigl( \omega,\eta\bigr)_{L^2}=\int_M \omega\wedge\ast \eta,$$
- $L^2\Omega^k(M)$, $L^2$-differential forms on $M$.
- $\mathcal{H}^k_N(M)=\{\lambda\in H^1\Omega^k(M)\ |\ d\lambda=\delta\lambda=0, \mathbf{n}\lambda=0\}$ the subspace of the Neumann harmonic fields.
Next we justify ; first observe that for any $\gamma\in H^1\Omega^{k-1}(M)$: $$\label{eq:perp-neumann}
(d\gamma,\lambda)_{L^2}=\int_{\partial M} \mathbf{t}\gamma\wedge\ast\mathbf{n}\lambda=0,\qquad \forall \lambda\in \mathcal{H}^k_N(M),$$ where $\mathbf{t}$, and $\mathbf{n}$ stands for respectively tangent and normal to $\partial M$ components of the form. As a result, if $\omega \in \mathcal{H}^k_N(M)^\perp$ we obtain a solution $\phi$ to the Neumann problem: $$\label{eq:neumann-phi}
\delta d\phi+d\delta\phi=\omega,\quad \Rightarrow\quad \omega-d\delta\phi=\delta d\phi\ .$$ The orthogonality relation above implies $(\omega-d\delta\phi)\in \mathcal{H}^k_N(M)^\perp$; moreover $\mathbf{n}(\omega-d\delta\phi)=\mathbf{n}(\delta d\phi)=\delta\,\mathbf{n}(d\phi)=0$, by the boundary condition in the Neumann problem. If $\omega$ is a closed form, $\omega-d\delta\phi$ is also closed, and clearly coclosed, since it equals $\delta d\phi$. Thus $\omega-d\delta\phi$ is a harmonic field with zero normal component, hence it belongs to $\mathcal{H}^k_N(M)$; being also orthogonal to $\mathcal{H}^k_N(M)$, it must be the zero form. This yields $$\omega=d\delta\phi\ .$$ As a result we obtain necessary and sufficient conditions for $\omega$ to be exact:
- $d\omega=0$,
- $(\omega,\lambda)_{L^2}=0$, for all $\lambda\in \mathcal{H}^k_N(M)$.
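As a concrete illustration of this criterion (our own example, not taken from [@Schwarz95]), let $M=S^1\times[0,1]$ be the flat annulus with angular coordinate $\theta$. The 1-form $d\theta$ is closed and coclosed, and it is tangent to $\partial M$, so $\mathbf{n}\,d\theta=0$ and $$\mathcal{H}^1_N(M)=\mathrm{span}\{d\theta\}.$$ A closed 1-form $\omega=a\,d\theta+dh$ is then exact if and only if $$(\omega,d\theta)_{L^2}=\int_M \omega\wedge\ast\,d\theta=2\pi a=0,$$ i.e. if and only if its period $\oint_{S^1}\omega$ vanishes (the exact part $dh$ drops out by the orthogonality relation above).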
[Department of Mathematics, Univ. of Pennsylvania, Philadelphia, PA, 19104]{}\
e-mail: `rako@math.upenn.edu`\
URL: `www.math.upenn.edu/~rako`
[^1]: *2000 Mathematics Subject Classification*. Primary: 76W05,57M25, Secondary: 58F18,58A10
[^2]: This project is supported by DARPA, \#FA9550-08-1-0386.
---
abstract: 'A general method is presented to unfold band structures of first-principles super-cell calculations with proper spectral weight, allowing easier visualization of the electronic structure and the degree of broken translational symmetry. The resulting unfolded band structures contain additional rich information from the Kohn-Sham orbitals, and absorb the structure factor that makes them ideal for a direct comparison with angular resolved photoemission spectroscopy experiments. With negligible computational expense via the use of Wannier functions, this simple method has great practical value in the studies of a wide range of materials containing impurities, vacancies, lattice distortions, or spontaneous long-range orders.'
author:
- 'Wei Ku (顧威 )'
- Tom Berlijn
- 'Chi-Cheng Lee (李啟正 )'
bibliography:
- 'refs.bib'
title: 'Unfolding first-principles band structures'
---
The electronic band structure is no doubt one of the most widely applied analysis tools in the first-principles electronic structure calculations of crystals, especially within the Kohn-Sham framework [@KohnSham] of density functional theory [@HohenbergKohn]. It contains the basic ingredients to almost all the textbook descriptions of crystal properties (e.g. transport, optical and magnetic properties, and the semiclassical treatment [@AshcroftMermin]). Furthermore, the theoretical band structure, when formulated within the quasi-particle picture of the one-particle Green function, has a direct experimental connection with angular-resolved photoemission spectroscopy (ARPES).
However, the usefulness of the band structure, as well as the agreement with ARPES spectra, diminishes rapidly when a large “super cell” is involved. The use of super cells is a common practice in modern first-principles studies when the original periodicity of the system is modified via the introduction of “external” influences from impurities or lattice distortions. They are also widely applied in the presence of spontaneous translational symmetry breaking, say by a charge density wave, a spin density wave, or an orbital ordering. As illustrated in Fig. \[fig:fig1\], when the period of the super cell grows longer, the corresponding first Brillouin zone of the super cell (SBZ) shrinks in size. In turn, bands in the first Brillouin zone of the normal cell (NBZ) get “folded” into the SBZ. For a very large super cell, the resulting SBZ can be tiny in size but contain a large number of “horizontal” looking bands that no longer resemble the original band structure or the experimental ARPES spectra, and cease to be informative beyond giving a rough visualization of the density of states (DOS). The information is now hidden in the Kohn-Sham orbitals, instead of the dispersion of the bands.
![\[fig:fig1\] (color online) Illustration of band folding in the super cell calculations: (a) band structure of a 2D one-band first-neighbor tight-binding model, (b) the same obtained from a 4x4 super-cell calculation, and (c) the same obtained from a 16x16 super-cell calculation. Panel (d) shows the DOS. ](fig1){width="0.9\columnwidth"}
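The folding illustrated in Fig. \[fig:fig1\] is easy to reproduce numerically. The following is a minimal one-dimensional sketch (our own illustrative analogue, not the paper's 2D model; the hopping $t$ and the Bloch-phase convention are arbitrary choices): the eigenvalues of the $L$-site supercell Bloch Hamiltonian at supercell momentum $K$ coincide with the normal-cell band $\epsilon(k)=-2t\cos k$ evaluated at the $L$ folded momenta $k=K+2\pi m/L$.

```python
import numpy as np

def supercell_h(K, L, t=1.0):
    """Bloch Hamiltonian of an L-site supercell of a 1D nearest-neighbor chain."""
    H = np.zeros((L, L), dtype=complex)
    for j in range(L - 1):
        H[j, j + 1] = H[j + 1, j] = -t          # intra-supercell hopping
    H[0, L - 1] += -t * np.exp(-1j * K * L)     # hopping across the supercell
    H[L - 1, 0] += -t * np.exp(1j * K * L)      # boundary, carrying the Bloch phase
    return H

L, t, K = 4, 1.0, 0.3
folded = np.sort(np.linalg.eigvalsh(supercell_h(K, L, t)))
# the same energies, read off the normal-cell band at the unfolded momenta
unfolded = np.sort([-2 * t * np.cos(K + 2 * np.pi * m / L) for m in range(L)])
assert np.allclose(folded, unfolded)
```

All $L$ bands pile up at the single supercell momentum $K$; the unfolding question is which normal-cell momentum each of them came from.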
In this Letter, by explicitly utilizing these Kohn-Sham orbitals, we present a method to unfold the band structure of the SBZ back to the larger NBZ with proper spectral weight. Making use of the corresponding Wannier functions, the method can be greatly simplified to negligible computational cost. The resulting unfolded band structure incorporates explicitly the structure factor and thus facilitates significantly a direct comparison with ARPES experiments. Furthermore, the unfolded band structure illustrates very clearly the influence of the symmetry breaker (e.g.: impurities, vacancies, dopants, lattice distortions) via direct comparison with the nominal normal-cell band structure. In the case of spontaneous symmetry breaking, it gives a direct visualization of the strength of each band’s coupling to the order parameters. In light of the amazingly rich information, we expect countless applications of this simple method to a wide range of studies employing super cells, including systems with charge density wave, spin density wave, or orbital ordering, and in the studies of impurities and lattice distortions, to name a few.
Theoretically, the folding of the bands results from the introduction of additional coupling, $V_{kj,k\prime j\prime}$, between the originally uncoupled Kohn-Sham orbitals $|kj\rangle$ and $|k\prime j\prime\rangle$ in the NBZ. (Here $k$ and $j$ denote the crystal momentum and the band index.) This coupling extends the period of Kohn-Sham orbitals to a longer one compatible with the size of the super cell. Equivalently, this coupling, no matter how small it is, mixes the original orbitals of different normal-cell crystal momentum $k$ and forces us to label them with a super-cell crystal momentum $K$ as the new quantum number in the SBZ. (In the following, upper-/lower-case symbols refer to variables corresponding to the super/normal cell, respectively.) Our method is based on the simple idea that unless $V$ is extremely strong, it is much more convenient and informative to represent the band structure, or more precisely the spectral function $A = - {\rm{Im}} G / \pi$ of the retarded one-particle Green function $G$, not in the new eigen-orbital $|KJ\rangle$ basis, but in the $|kj\rangle$ basis of the normal cell instead: $$\label{eqn:eqn1}
G_{kj,k\prime j\prime}^{-1}(\omega) = G_{0 kj,kj}^{-1}(\omega) \delta_{k,k\prime} \delta_{j,j\prime} - V_{kj,k\prime j\prime},$$ where $G_0$ represents a conceptual system with the period of the normal cell before $V$ is applied. Clearly, $G$ smoothly recovers the original period of $G_0$ as $V$ approaches zero. Thus, $$\begin{aligned}
\label{eqn:eqn2}
A_{kj,kj}(\omega) = \sum_{KJ}|\langle kj|KJ\rangle|^2 A_{KJ,KJ}(\omega)\end{aligned}$$ should resemble the band structure of the normal cell with deviations in both the dispersion and in the spectral weight that reflect the effects of $V$. Note that while the coupling $V$ introduces non-diagonal elements of $A_{kj,k\prime j\prime}(\omega)$, we focus only on the diagonal elements here for simplicity, without loss of generality. It is straightforward to show that in the case of $V=0$, the weight of the bands that follow the bands of the normal cell is exactly one, and that of the rest of the folded bands vanishes. One thus recovers exactly the original band structure of the normal cell as expected. That is, *the unfolded band structure is invariant against any arbitrary choice of super cell.*
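The invariance statement can be checked directly in the simplest setting. The sketch below (a self-contained, illustrative 1D nearest-neighbor chain with $V=0$; all conventions are our own, not the paper's) computes the weights $|\langle kj|KJ\rangle|^2$ entering Eq. (\[eqn:eqn2\]) and verifies that unit weight falls on the normal-cell band $-2t\cos k$ while every other folded band carries zero weight:

```python
import numpy as np

L, t, K = 4, 1.0, 0.3

# supercell Bloch Hamiltonian of a 1D nearest-neighbor chain (V = 0)
H = np.zeros((L, L), dtype=complex)
for j in range(L - 1):
    H[j, j + 1] = H[j + 1, j] = -t
H[0, L - 1] += -t * np.exp(-1j * K * L)
H[L - 1, 0] += -t * np.exp(1j * K * L)
E, psi = np.linalg.eigh(H)                   # psi[:, J] is the eigenstate |KJ>

for m in range(L):                           # normal-cell momenta folding onto K
    k = K + 2 * np.pi * m / L
    bloch = np.exp(1j * k * np.arange(L)) / np.sqrt(L)   # normal-cell state |k>
    w = np.abs(bloch.conj() @ psi) ** 2      # unfolding weights |<k|KJ>|^2
    assert np.isclose(w.sum(), 1.0)          # completeness over J
    J = int(np.argmax(w))
    # all weight sits on the single band with the normal-cell energy -2t cos k
    assert np.isclose(w[J], 1.0) and np.isclose(E[J], -2 * t * np.cos(k))
```

Turning on any nonzero $V$ in this sketch would redistribute the weights between 0 and 1, which is exactly the information the unfolded band structure displays.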
In addition, it is often desirable to also measure in each band the contribution of local orbitals with well-defined characters (e.g.: $p_x$, $d_{xz}$, $e_g$, or bonding/anti-bonding). This can be achieved rigorously via the use of local Wannier orbitals $|rn\rangle$: $$\label{eqn:eqn3}
A_{kn,kn}(\omega) = \sum_{KJ}|\langle kn|KJ\rangle |^2 A_{KJ,KJ}(\omega),$$ where $|kn\rangle = \sum_{r} |rn\rangle \langle rn|kn\rangle = \sum_{r} |rn\rangle e^{ikr} / \sqrt{l}$ are the Fourier transform of the Wannier orbitals $|rn\rangle$ of orbital index $n$ and associated with the lattice vector $r$. (Here $l$ denotes the number of k-points in the NBZ.) Given a consistent definition of the Wannier functions of the super-cell calculation that maps $|RN\rangle$ of the super cell to $|R+r,n\prime\rangle$ of the normal cell, where $r=r(N)$ is a normal-cell lattice vector within the first super cell, and $n\prime=n\prime(N)$ is the corresponding normal-cell orbital index, the use of Wannier function also reduces dramatically the computational expense by turning the factor $$\begin{aligned}
\label{eqn:eqn4}
\langle kn|KJ\rangle &=& \sum_{RN} \langle kn|RN\rangle \langle RN|KN\rangle
\langle KN|KJ\rangle \nonumber\\
&=& \sum_{RN} \langle kn|R+r(N),n\prime(N)\rangle \langle RN|KN\rangle
\langle KN|KJ\rangle \nonumber \\
&=& \sqrt{1/Ll}\sum_{RN} e^{i(K-k)\cdot R} e^{-ik\cdot r(N)} \delta_{n,n\prime(N)}
\langle KN|KJ\rangle \nonumber \\
&=& \sqrt{L/l}\sum_{N} e^{-ik\cdot r(N)} \delta_{n,n\prime(N)}\delta_{[k],K} \langle KN|KJ\rangle\end{aligned}$$ into merely a structure factor that is a sum of coefficients of the eigen-orbital $|KJ\rangle $ of the super cell in the Wannier function basis, modulated by the proper phase that encapsulates the internal position in the super cell. Here $[k]$ denotes the k-point folded into the SBZ from $k$. Since $A_{KJ,KJ}$ is just a delta function at the eigenvalue $\delta(\omega-\epsilon_{KJ})$, this final expression in essence requires only a simple coding to plot all the eigenvalues of the super cell in the larger NBZ with a proper weight.
Of course, the above definition only makes sense when the Wannier functions $|RN\rangle \leftrightarrow |rn\rangle $ and $|RN\prime\rangle \leftrightarrow |r\prime n\rangle $ (that are translationally symmetric in the normal-cell sense: same $n$, different $r$) are approximately identical. Therefore, the “gauge” [@Vanderbilt] of constructing $|RN\rangle $ and $|RN\prime\rangle$ *with the same $n$* must be controlled accordingly. In the presence of a potential that breaks the translational symmetry of the normal cell, for example, coming from a CDW, lattice distortions, impurities, etc., the commonly employed [@Thygesen; @Wang; @Eiguren] maximally localized Wannier functions [@Vanderbilt] and other minimization-based methods [@Gygi; @Giustino] risk defining the gauge differently in the super cell in favor of better localization, and thus should be used with extreme caution. We found that a maximum projection method [@Andersen; @Ku; @Anisimov] with consistent projection between the normal and the super cells works well to satisfy this requirement. Equations (\[eqn:eqn3\]) and (\[eqn:eqn4\]) should in principle also be applicable in many existing codes employing atom-centered local orbitals as basis [@Koepernik; @Scheffler], as long as the non-orthogonal nature of those bases is taken into account. Of course, these methods do not benefit from the energy resolution of the Wannier functions that allows unfolding only the bands within the physically relevant energy range.
The unfolded band structure also has an important direct connection to the ARPES measurement. For systems with enlarged unit cells due to weak symmetry breaking, the ARPES spectra typically show different band structures in different Brillouin zones of the super cell, distinctly different from the results of first-principles calculations, which have all the bands in the SBZ. In some cases, the observed ARPES spectra might even appear ignorant about the SBZ [@Xu]. This significant mismatch is typically regarded as the effect of the “matrix element” and left unaddressed by both theorists and experimentalists, making a direct comparison very difficult. Within the “sudden approximation”, the ARPES intensity is proportional to [@Caroli; @Bansil] $$\begin{aligned}
\label{eqn:eqn5}
& & \sum_{KJ} | \textbf{e}\cdot \langle f|\textbf{p}|KJ\rangle |^2 A_{KJ,KJ}(\omega) \nonumber \\
&\sim& \sum_{KJkn} |\textbf{e}\cdot \langle f|\textbf{p}|kn\rangle |^2 | \langle kn|KJ\rangle |^2
A_{KJ,KJ}(\omega) \nonumber \\
&=& \sum_{kn} | \textbf{e}\cdot \langle f|\textbf{p}|kn\rangle |^2 A_{kn,kn}(\omega), \end{aligned}$$ where $\textbf{e}$ denotes the polarization vector of light, and $|f\rangle$ the “final state” of the photoelectron. Clearly, except for the polarization-dependent dipole matrix element, $|\textbf{e}\cdot \langle f|\textbf{p}|kn\rangle |^2$, the unfolded spectral function, $A_{kn,kn}(\omega)$, contains almost the full information of the experimental spectrum by absorbing the additional structure factor $\langle kn|KJ\rangle$, absent in the typical super-cell solution, $A_{KJ,KJ}(\omega)$. Obviously, the inclusion of this additional matrix element would facilitate significantly the comparison between theory and the ARPES experiment.
As an example, let’s consider the effect of Na impurities in Na-doped cobaltates, Na$_x$CoO$_2$ at $x=1/3$. In typical first-principles studies[@Singh; @Johannes], the impurity is incorporated via a super cell as demonstrated in Fig. \[fig:fig2\](b) in comparison with the undoped normal cell shown in Fig. \[fig:fig2\](a). Fig. \[fig:fig2\](d) and (c) show the corresponding band structures obtained with standard DFT calculations. Since in this example the super cell is three times larger than the normal cell, the corresponding SBZ is three times smaller and contains three times more bands. Clearly, even for such a small super cell, the change of the size/orientation of the SBZ and more importantly the large number of folded bands, make it practically impossible to cleanly compare with the band structure in the NBZ of the undoped parent compound. In fact, to many untrained eyes, these two band structures may appear entirely unrelated.
![\[fig:fig2\] (color online). Lattice structures of (a) Co$_2$O$_4$ (normal cell) and (b) Na$_2$Co$_6$O$_{12}$ (super cell), the corresponding band structure of (c) the normal cell and (d) super cell calculation, and (e) the unfolded band structure of the super cell. Inset illustrates the effects of weak translational symmetry breaking via spectral functions over the region \[-4.6eV,-4.2eV\] and \[$\frac{2}{5}\Gamma M$, $\frac{1}{5}\Gamma M$\]. ](fig2){width="0.9\columnwidth"}
By contrast, the unfolded band structure shown in Fig. \[fig:fig2\](e) demonstrates a strong resemblance to the band structure of the undoped compound. This allows a clear visualization of the effects of the (periodic) Na impurities on the original Co and O bands. Specifically, besides the introduction of additional Na-$s$ bands, one observes shifts in band energies, gap openings and the nearby “shadow bands”, all of which reflect the influence of the Na impurity on these bands. What is really nice here is the cleanness of the unfolded band structure in general, owing to the weak intensity of the shadow bands. As expected, the influence of the Na impurity is only minor on most Co-$d$ and O-$p$ bands, while the Na-$s$ bands themselves show sizable effects of broken translational symmetry. The size of the gap opening and the intensity of the shadow bands actually reflect directly the strength of each band’s coupling to the broken translational symmetry of the normal cell (in this specific case, to the charge-density-wave order parameter introduced by the periodic presence of Na atoms.) Of course, for a simulation of randomly positioned impurities, these CDW-related features are entirely artificial, and the unfolded band structure makes apparent the alarming limitation of the common practice of using small super cells in the study of impurities. On the other hand, in many other cases, for example the super modulation of the lattice, these features would actually correspond to a physical order parameter and provide valuable information.
![\[fig:fig3\] (color online). Lattice structures of (a) normal cell and (b) doubled-size supercell of A-type anti-ferromagnetic ordered LaMnO$_3$, without and with the orbital-ordering, respectively, the corresponding band structure of the normal cell (c) and the super cell (d), and the unfolded band structure of the super cell (e) indicating the orbital ordering gap, $\Delta_{OO}$. Red and blue bands denote the $z^{2}$ and $x^{2}-y^{2}$ orbital characters of the spin-majority channel, and green bands give both $e_g$ characters of the spin-minority channel. ](fig3){width="0.9\columnwidth"}
As another example, let’s consider a spontaneous orbital ordering in A-type anti-ferromagnetic LaMnO$_3$. Figure \[fig:fig3\] (c) and (d) show the similar comparison of band structures without and with the long-range staggered orbital order, corresponding to unit cells shown in Fig. \[fig:fig3\] (a) (normal cell) and Fig. \[fig:fig3\] (b) (super cell), respectively. Both results are obtained via the LSDA+$U$ ($U$=8eV, $J_H$=0.88eV) approximation without lattice relaxation for simplicity, without loss of generality. By comparing the band structures with and without the orbital order on an equal footing, the detailed information of the spontaneous orbital order can be visualized explicitly.
Just like in the Na$_x$CoO$_2$ case, the straightforward results of the orbital ordered (OO) band structure (Fig. \[fig:fig3\] (d)) of the super cell calculation [@Pickett] can hardly be compared with the non-OO one (Fig. \[fig:fig3\] (c)). By contrast, the unfolded band structure (Fig. \[fig:fig3\] (e)) of the OO case resembles strongly the non-OO case. In fact, one finds that only those bands of Mn-$e_g$ character (red, blue, and green) show strong coupling to the OO order parameter with large gap openings and intense shadow bands, while the rest of the bands are basically uncoupled to the orbital order. In addition, from the significant energy gain corresponding to the large OO gap ($\Delta_{OO}$) near the Fermi level of the red and blue bands, it is apparent that the orbital order is driven essentially only by the spin-majority $e_g$-orbitals (red and blue). All these effects are of course entirely consistent with the existing “electronic interaction assisted Jahn-Teller picture” [@Dagotto; @Yin; @Volja], in which the degenerate Mn-$e_g$ orbitals split to gain energy and stabilize the system at low temperature. However, this unfolded band structure represents probably the best visualization of such physics in real materials with details of first-principles calculations.
In conclusion, a simple method for unfolding first-principles band structures of super cell calculations is presented. Proper spectral weights are obtained with negligible computational cost by making use of the Kohn-Sham orbitals with the help of carefully chosen Wannier functions. The inclusion of the structure factor in the resulting unfolded band structure makes it ideal for direct comparison with the ARPES measurement. The resulting unfolded band structures allow an easy visualization of each band’s coupling to the order parameter of spontaneous broken translational symmetry, as well as their couplings to the external symmetry breakers like the impurities and lattice distortions. Our method should prove valuable in the study of a wide range of problems requiring the use of super cells, including systems with impurities, vacancies, and lattice distortions, and broken symmetry phases of strongly correlated materials, to name a few.
This work was supported by the U.S. Department of Energy, Office of Basic Energy Science, under Contract No. DE-AC02-98CH10886, and DOE-CMSN.
---
abstract: 'The database community has long recognized the importance of graphical query interface to the usability of data management systems. Yet, relatively less has been done. We present [[$\mathsf{Orion}$]{}]{}, a visual interface for querying ultra-heterogeneous graphs. It iteratively assists users in query graph construction by making suggestions via machine learning methods. In its active mode, [[$\mathsf{Orion}$]{}]{} automatically suggests top-$k$ edges to be added to a query graph. In its passive mode, the user adds a new edge manually, and [[$\mathsf{Orion}$]{}]{} suggests a ranked list of labels for the edge. [[$\mathsf{Orion}$]{}]{}’s edge ranking algorithm, Random Decision Paths (RDP), makes use of a query log to rank candidate edges by how likely they will match the user’s query intent. Extensive user studies using Freebase demonstrated that [[$\mathsf{Orion}$]{}]{} users have a 70% success rate in constructing complex query graphs, a significant improvement over the 58% success rate by the users of a baseline system that resembles existing visual query builders. Furthermore, using active mode only, the RDP algorithm was compared with several methods adapting other machine learning algorithms such as random forests and naive Bayes classifier, as well as class association rules and recommendation systems based on singular value decomposition. On average, RDP required 40 suggestions to correctly reach a target query graph (using only its active mode of suggestion) while other methods required 1.5–4 times as many suggestions.'
author:
- |
[Nandish Jayaram Rohit Bhoopalam Chengkai Li Vassilis Athitsos ]{}\
*University of Texas at Arlington*
title: 'Orion: Enabling Suggestions in a Visual Query Builder for Ultra-Heterogeneous Graphs'
---
Introduction {#sec:intro}
============
The database community has long recognized the importance of graphical query interfaces to the usability of data management systems [@lagunareport89]. Yet, relatively little has been done and there remains a pressing need for investigation in this area [@usability; @beckmanreport14]. Nevertheless, a few important ideas (e.g., Query-By-Example [@qbe]) and systems (e.g., Microsoft SQL Query Builder) have been developed for querying relational databases [@visual-tkde02], web services [@clide] and XML [@xqbe; @qursed].
For querying graph data, existing systems [@blau-etal-tr02-37; @graphite; @gblender; @prague; @vogue-cidr13; @quble] allow users to build queries by visually drawing nodes and edges of query graphs, which can then be translated into underlying representations such as SPARQL and SQL queries. While focusing on blending query processing with query formulation [@graphite; @gblender; @prague; @vogue-cidr13; @quble], existing visual query builders do not offer suggestions to users regarding what nodes/edges to include into query graphs. At every step of visual query formulation, after adding a new node or a new edge into the query graph, a user would need to choose from a list of candidate *labels*—names and types for a node or types for an edge. The user, when knowing what label to use, can search the list of labels by keywords or sift through alphabetically sorted options using binary search. But, oftentimes the user does not know the label due to lack of knowledge of the data and the schema. In such a scenario, the user may need to sequentially comb the option list. Furthermore, the user may not have a clear label in mind due to her vague query intent.
The lack of query suggestion presents a substantial usability challenge when the graph data require a long list of options, i.e., many different types and instances of nodes and edges. The aforementioned systems [@blau-etal-tr02-37; @graphite; @gblender; @prague; @vogue-cidr13; @quble] were all deployed on relatively small graphs. The crisis is exacerbated by the proliferation of *ultra-heterogeneous graphs* which have thousands of node/edge types and millions of node/edge instances. Widely-known ultra-heterogeneous graphs include Freebase [@Bollacker+08freebase], DBpedia [@AuerBK+07], YAGO [@SuchanekKW07], Probase [@probase], and the various RDF datasets in the “linked open data” [^1]. Users would be better served if graph query builders provided suggestions during query formulation. In fact, query suggestion has been identified as an important feature-to-have among the desiderata of next-generation visual query interfaces [@dbhci-dexa14].
This paper presents [[$\mathsf{Orion}$]{}]{}, a visual query builder that provides suggestions, iteratively, to assist users formulate queries on ultra-heterogeneous graphs. [[$\mathsf{Orion}$]{}]{}’s graphical user interface allows users to construct query graphs by drawing nodes and edges onto a canvas using simple mouse actions. To allow schema-agnostic users to specify their exact query intent, [[$\mathsf{Orion}$]{}]{} suggests candidate edge types by ranking them on how likely they will be of interest to the user, according to their relevance to the existing edges in the partially constructed query graph. The relevance is based on the correlation of edge occurrences exhibited in a query log. To the best of our knowledge, [[$\mathsf{Orion}$]{}]{} is the first visual query formulation system that automatically makes ranked suggestions to help users construct query graphs. The demonstration proposal for an early prototype of [[$\mathsf{Orion}$]{}]{} [@viiq] was based on a subset of the ideas in this paper.
[[$\mathsf{Orion}$]{}]{} supports both an *active* and a *passive* operation mode. (1) If the canvas contains a partially constructed query graph, [[$\mathsf{Orion}$]{}]{} operates in the active mode by default. The system automatically recommends top-$k$ new edges that may be relevant to the user’s query intent, without being triggered by any user actions. Figure \[fig:interface-viiq\](a) shows the snapshot of a partially constructed query graph, with nodes and edges suggested in the active mode. The white nodes and the edges incident on them are newly suggested. The user can select some of the suggested edges by clicking on them, and a mouse click on the canvas adds the selected edges to the partial query graph, and ignores the unselected edges. (2) The passive mode is triggered when the user adds new nodes or edges to the partial query graph using simple mouse actions. For a newly added edge, the suggested edge types are ranked based on their relevance to the user’s query intent. Figure \[fig:interface-viiq\](c) shows the ranked suggestions for the newly added edge between the two nodes of types [<span style="font-variant:small-caps;">Person</span>]{} and [<span style="font-variant:small-caps;">Film</span>]{}, displayed in a pop-up box. For a newly added node, labels are suggested for its type, the domain of its type, and its name if the node is to be matched with a specific entity. The suggested labels are displayed in a pop-up box, as shown in Figure \[fig:interface-viiq\](b), where type [<span style="font-variant:small-caps;">Person</span>]{} is chosen as the label for the node.
The query construction process of a user can be summarized as a query session, consisting of positive and negative edges that correspond to edge suggestions accepted and ignored by the user, respectively. At every step of the iterative process, based on the partially constructed query graph so far and the corresponding query session, [[$\mathsf{Orion}$]{}]{}’s edge ranking algorithm—Random Decision Paths (RDP)—ranks candidate edges using a query log of past query sessions. RDP ranks the candidate edges by how likely they will be of interest to the user, according to their correlation with the current query session’s edges. RDP constructs multiple decision paths using different random subsets of edges in the query session. This idea is inspired by the ensemble learning method of random forests, which uses multiple decision trees. Entries in the query log that subsume the edges of a decision path are used to find the “support” score of each candidate edge. For each candidate, its support scores over all random decision paths are aggregated into its final score. Section \[sec:rcp\] describes this ranking method in detail. We also implemented several other edge ranking methods by adapting machine learning algorithms such as random forests (RF) and naïve Bayes classifier (NB), as well as class association rules (CAR) and recommendation systems based on singular value decomposition (SVD). Section \[sec:baselineMethods\] describes these techniques in detail.
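The core of this scheme can be sketched compactly. The following is our simplified reconstruction from the description above (random subsets of session edges as decision paths, subsumption matching against the log, aggregated support scores); the function name, log format, and scoring details are illustrative assumptions, not the authors' implementation, and negative edges are ignored for brevity:

```python
import random
from collections import Counter

def rdp_rank(session_pos, query_log, candidates, n_paths=20, seed=0):
    """Sketch of Random Decision Paths: score candidate edges by their
    support in log entries that subsume random subsets ("paths") of the
    positive edges in the current query session."""
    rng = random.Random(seed)
    scores = Counter()
    pos = list(session_pos)
    for _ in range(n_paths):
        # a random decision path: a random nonempty subset of session edges
        path = set(rng.sample(pos, rng.randint(1, len(pos))))
        matching = [entry for entry in query_log if path <= entry]
        if not matching:
            continue
        for cand in candidates:
            support = sum(cand in entry for entry in matching)
            scores[cand] += support / len(matching)
    return [c for c, _ in scores.most_common()]

# toy query log: each entry is the set of edge types of one past session
log = [{"directed", "acted_in"}, {"directed", "produced"},
       {"directed", "acted_in", "produced"}, {"born_in", "lives_in"}]
ranked = rdp_rank({"directed"}, log, ["acted_in", "produced", "born_in"])
assert ranked[-1] == "born_in"   # never co-occurs with "directed"
```

In the toy log, `born_in` never co-occurs with `directed`, so it is ranked last regardless of the sampled paths.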
To the best of our knowledge, there exists no publicly available real-world graph query log in the aforementioned form. Existing visual query builders, possibly due to lack of users, do not have publicly available logs from their usage either. The DBpedia SPARQL query benchmark [@dbpedia-sparql] records queries posed by real users through the SPARQL query interface on DBpedia. This can represent the positive edges in query sessions. However, this query log may offer little help to [[$\mathsf{Orion}$]{}]{}, due to two limitations: 1) It is applicable to DBpedia only and no other data graph, and 2) Only a third of the edge types present in DBpedia are used in the query log. Hence, in addition to experimenting with this query log, we also simulated query logs for both Freebase and DBpedia data graphs using Wikipedia. The premise is that the various relationships between entities, implied in the sentences of Wikipedia articles, represent co-occurring properties that simulate the positive edges in a query session. Section \[sec:workload\] describes various ways of finding such positive edges and injecting negative edges, in order to simulate query logs. Once [[$\mathsf{Orion}$]{}]{} is in use, query sessions collected by it would result in a real-world query log that might be useful to the community in this line of research.
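A minimal sketch of such a simulation, assuming groups of co-occurring edge types (e.g. relations implied within one Wikipedia sentence) have already been extracted; the session format, helper name, and negative-injection rule are our own illustrative choices, not the paper's exact procedure:

```python
import random

def simulate_sessions(cooccurring_edges, all_edge_types, neg_per_pos=1, seed=0):
    """Sketch of query-log simulation: each group of co-occurring edge
    types becomes the positive edges of one session; random edge types
    that do not co-occur with them are injected as negatives."""
    rng = random.Random(seed)
    sessions = []
    for pos in cooccurring_edges:
        neg_pool = [e for e in all_edge_types if e not in pos]
        n_neg = min(neg_per_pos * len(pos), len(neg_pool))
        sessions.append({"positive": set(pos),
                         "negative": set(rng.sample(neg_pool, n_neg))})
    return sessions

sessions = simulate_sessions(
    [{"directed", "acted_in"}, {"born_in", "lives_in"}],
    ["directed", "acted_in", "born_in", "lives_in", "produced"])
assert all(s["positive"].isdisjoint(s["negative"]) for s in sessions)
```

Each simulated session then plays the role of one row of the query log consumed by the edge ranking algorithm.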
We conducted extensive user studies over the Freebase data graph, using 30 graduate students from the authors’ institution, to compare [[$\mathsf{Orion}$]{}]{} with a baseline system resembling existing visual query builders. 15 students worked on [[$\mathsf{Orion}$]{}]{}, and the other 15 on the baseline system. A total of 105 query tasks were performed by users of each system. It was observed that [[$\mathsf{Orion}$]{}]{} users had a 70% success rate in constructing complex query graphs, significantly better than the 58% success rate of the baseline system’s users. We also conducted experiments on both Freebase and DBpedia data graphs to compare RDP with other edge ranking methods—RF, NB, CAR and SVD. The experiments were executed on the computing resources of the Texas Advanced Computing Center (TACC), [^2] to accommodate memory-intensive methods such as RF, SVD and CAR, which required between 40 GB and 100 GB of memory. On average, the other methods required 1.5–4 times more suggestions to complete a query graph, compared to RDP’s 40 suggestions. The wall-clock time required to complete query graphs by RDP was mostly comparable with that of RF and NB, and significantly less than that of SVD and CAR. We also performed experiments to study the effectiveness of the various query logs simulated. RDP attained higher efficiency with the Wikipedia-based query log compared to the query logs simulated using other ways discussed in Section \[sec:workload\].
We summarize the contributions of this paper as follows:
- We present [[$\mathsf{Orion}$]{}]{}, a visual query builder that helps schema-agnostic users construct query graphs by making automatic edge suggestions. To the best of our knowledge, none of the existing visual query builders for graphs offers suggestions.
- To help users quickly construct query graphs, [[$\mathsf{Orion}$]{}]{} uses a novel edge ranking algorithm, Random Decision Paths (RDP), which ranks candidate edges by how likely they are to be relevant to the user’s query intent. RDP is trained using a query log containing past query sessions.
- No such real-world query logs are publicly available. We thus designed several ways of simulating query logs. Once [[$\mathsf{Orion}$]{}]{} is in use, the real-world query log collected by it will become a valuable resource to the community.
- We conducted user studies on the Freebase data graph to compare [[$\mathsf{Orion}$]{}]{} with a baseline system resembling existing visual query builders. [[$\mathsf{Orion}$]{}]{} had a 70% success rate of constructing complex query graphs, significantly better than the baseline system’s 58%.
- We also performed extensive experiments comparing RDP with several other machine learning based methods, on the Freebase and DBpedia data graphs. Other methods required 1.5–4 times more suggestions than RDP, in order to complete query graphs.
Related Work {#sec-related}
============
The unprecedented proliferation of linked data and large, heterogeneous graphs has sparked extensive interest in building knowledge-intensive applications. The usability challenges in building such applications are widely recognized—declarative query languages such as SPARQL present a steep learning curve, as forming queries requires expertise in these languages and knowledge of data schema. To tackle the challenges, a number of alternate querying paradigms for graph data have been proposed recently, including keyword search [@blinks; @freeq], query-by-example [@gqbe-icde14demo; @gqbe-tkde; @lim_edbt13; @exemplarQueries], natural language query [@naturallang], and faceted browsing [@facet; @facet1; @facet2].
Visual query builders [@graphite; @VISAGEiui15; @gblender; @prague; @vogue-cidr13; @quble] provide an intuitive and simple approach to query formulation. Except for [@quble; @graphite; @VISAGEiui15], these systems deal with querying a graph database rather than a single large graph. Firstly, it is unclear how to directly apply the techniques proposed by such systems to a single large graph, because their solutions work best on a data model comprising many small graphs. Secondly, these systems do not assist the user in query formulation by automatically suggesting the top-$k$ most relevant new edges.
[[$\mathsf{QUBLE}$]{}]{} [@quble], [[$\mathsf{GRAPHITE}$]{}]{} [@graphite] and [@VISAGEiui15] provide visual query interfaces for querying a single large graph. However, they focus on efficient query processing, and only facilitate query graph formulation by giving options to quickly draw various components of the query graph. Instead of recommending query components that a user might be interested in, they alphabetically list all possible options for node labels (which may be extended to edge labels similarly). They also deal with smaller data graphs. For instance, the graph considered by [[$\mathsf{QUBLE}$]{}]{} contains only around 10 thousand nodes with 300 distinct node types, and they do not consider edge types. [[$\mathsf{Orion}$]{}]{}, on the other hand, considers large graphs such as Freebase, which has over 30 million distinct node types and 5 thousand distinct edge types. With such large graphs, it is impractical to expect users to browse through all options alphabetically to select the most appropriate edge to add to a query graph. Ranking these edges by their relevance to the user’s query intent is a necessity, for which [[$\mathsf{Orion}$]{}]{} is designed.
System Overview {#sec:solution}
===============
Data Model and Query Model {#sec:prelim}
--------------------------
An ultra-heterogeneous graph $G_d$, also called the data graph, is a connected, directed multi-graph with node set $V(G_d)$ and edge set $E(G_d)$. A node is an entity [^3] and an edge represents a relationship between two entities. The nodes and edges belong to a set of *node types* $T_V$ and a set of *edge types* $T_E$, respectively. Each node (edge) type has a number of node (edge) instances. Each node $v \in V(G_d)$ has a unique identifier, a name, [^4] and one or more node types $\mathrm{vtype}(v) \subseteq T_V$. Each edge $e = (v_i,v_j) \in E(G_d)$, denoting a relationship from node $v_i$ to node $v_j$, belongs to a single *edge type* $\mathrm{etype}(e) \in T_E$.
For example, and are instances of node type [<span style="font-variant:small-caps;">Film Actor</span>]{}. They are also instances of node type [<span style="font-variant:small-caps;">Person</span>]{}. There exist an edge (, ) and another edge (, ), both of type .
The type of an edge constrains the types of the edge’s two end nodes. For instance, given any edge $e=(v_i,v_j)$ of edge type [<span style="font-variant:small-caps;">starring</span>]{}, it is implied that $v_i$ is an instance of node type [<span style="font-variant:small-caps;">Film Actor</span>]{} and $v_j$ is an instance of node type [<span style="font-variant:small-caps;">Film</span>]{}. In other words, [<span style="font-variant:small-caps;">Film Actor</span>]{}$ \in \mathrm{vtype}(v_i)$ and [<span style="font-variant:small-caps;">Film</span>]{} $\in \mathrm{vtype}(v_j)$.
Given a data graph, users can specify their query intent through query graphs. The concept of a query graph is formalized in Definition \[def:qgraph\]. The nodes in a query graph are labeled by either names of specific nodes or node types. Each answer graph to the query graph is a subgraph of the data graph and is edge-isomorphic to the query graph. In the answer graph, a node of the query graph is matched by a node of the specified name or any node of the specified type. For instance, the query graph in Step 3 of Figure \[fig:querygraphs\] finds all educated film actors who starred in films featuring . In Figure \[fig:querygraphs\] and other query graphs in this paper, the all-capitalized node labels represent node types, while others represent node names.
\[def:qgraph\] A query graph $G_q$ is a connected, directed multi-graph with node set $V(G_q)$ that may consist of both names and types, and edge set $E(G_q)$, such that:
[$\bullet$]{} [ ]{}
$V(G_q) \subseteq T_V \cup V(G_d)$.
$\forall e \in E(G_q), \mathrm{etype}(e) \in T_E$.
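Definition \[def:qgraph\] lends itself to a direct programmatic check. The following is a minimal sketch (the function name and all toy labels are ours, not part of the system):

```python
def is_valid_query_graph(query_nodes, query_edges, T_V, V_Gd, T_E):
    """Check Definition [def:qgraph]: every node label of G_q is either a
    node type in T_V or a concrete data-graph node in V(G_d), and every
    edge of G_q carries an edge type from T_E."""
    nodes_ok = all(v in T_V or v in V_Gd for v in query_nodes)
    edges_ok = all(etype in T_E for (_src, _dst, etype) in query_edges)
    return nodes_ok and edges_ok

# Toy instance with illustrative placeholder labels.
T_V = {"FILM ACTOR", "FILM"}
V_Gd = {"node_1"}
T_E = {"starring"}

ok = is_valid_query_graph(
    {"FILM ACTOR", "node_1"},
    [("FILM ACTOR", "node_1", "starring")],
    T_V, V_Gd, T_E)
bad = is_valid_query_graph({"UNKNOWN TYPE"}, [], T_V, V_Gd, T_E)
```

A query graph mixing a node type and a concrete node name passes the check, while an unknown label fails it, mirroring the two conditions of the definition.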
\[fig:querygraphs\]
\[fig:interface-viiq\]
User Interface for Providing Suggestions {#sec:ui-overview}
----------------------------------------
[[$\mathsf{Orion}$]{}]{} helps users interactively and iteratively grow a partial query graph $G_p$ into a target query graph $G_t$. It suggests edges to a user and solicits the user’s response on the edges’ relevance, in order to obtain a $G_t$ that satisfies the user’s query intent. The query session ends when either the user is satisfied with the constructed query graph or the user aborts the process. The goal is to minimize the number of suggestions required to construct the target query graph.
Figure \[fig:querygraphs\] shows an example sequence of steps to construct a query graph. The user starts by forming the initial partial query graph $G_p$ consisting of a single node. Step 1 in Figure \[fig:querygraphs\] shows one such $G_p$ with a node of type [<span style="font-variant:small-caps;">Film Actor</span>]{}. New edges are then suggested to the user, who can choose to accept some of the suggestions. For instance, step 2 in Figure \[fig:querygraphs\] shows the modified partial query graph obtained after adding two edges (together with two new nodes incident on the edges). Without taking the suggested edges, the user can also directly add a new node or a new edge. The system provides a ranked list of suggestions on the label of the new node/edge, for the user to choose from. Step 3 in Figure \[fig:querygraphs\] shows the example target query graph obtained after adding the edge between and [<span style="font-variant:small-caps;">Film</span>]{}. In general, to arrive at the target query graph $G_t$, the user continues the aforementioned process iteratively.

Figure \[fig:interface-viiq\](a) shows the user interface of [[$\mathsf{Orion}$]{}]{}. It consists of a query canvas where the query graph is constructed. In its active mode, [[$\mathsf{Orion}$]{}]{} automatically suggests and displays top-$k$ new edges to add to the partial query graph. In its passive mode, users use simple mouse actions on the query canvas to add new nodes and new edges. [[$\mathsf{Orion}$]{}]{} ranks candidate node and edge labels and displays them using drop-down lists in pop-up windows as shown in Figures \[fig:interface-viiq\](b) and (c). [[$\mathsf{Orion}$]{}]{} also offers dynamic tips which list all allowable user actions at any given moment of the query construction process, as shown in Figure \[fig:interface-viiq\](a).
**Active Mode:** An [[$\mathsf{Orion}$]{}]{} user begins the query construction process by adding a single node into the empty canvas. Once the canvas contains a partial query graph consisting of at least a node, [[$\mathsf{Orion}$]{}]{} automatically operates in its active mode and suggests top-$k$ new edges. Each suggested new edge is between two existing nodes or between an existing node and a new node. Figure \[fig:interface-viiq\](a) shows a partial query graph composed of the four dark nodes and the edges between them. The system suggests top-$3$ new edges, each of which is between an existing node (dark color) and a new node (white or light color). The user can click on some white nodes (which then become light colored, e.g., [<span style="font-variant:small-caps;">Location</span>]{} in Figure \[fig:interface-viiq\](a)) to add them to the query graph, and ignore others. The unselected white nodes are removed from display with a mouse click on the canvas, and the next set of new suggestions is automatically displayed. If the user does not want to select any white nodes, a new set of suggestions can be manually triggered by clicking the “Refresh Suggestions” button on the query canvas.
**Passive Mode:** At any moment in the query construction process, a user can add a node or an edge using simple mouse actions, which triggers [[$\mathsf{Orion}$]{}]{} to suggest labels for the newly added node/edge, i.e., it operates in the passive mode. **1)** To add a new edge between two existing nodes in the partial query graph, the user clicks on one node and drags the mouse to the destination node. The possible edge types for the newly added edge are displayed using a drop-down list in a pop-up suggestion panel, as shown in Figure \[fig:interface-viiq\](c). The edge types are ranked by their relevance to the query intent. **2)** To add a new node, the user can click on any empty part of the canvas. A suggestion panel pops up, as shown in Figure \[fig:interface-viiq\](b). It helps the user select either a name or a type for the node. The options in the two drop-down lists in Figure \[fig:interface-viiq\](b), one for selecting names and the other for types, are sorted alphabetically. [^5] To help the user find the desired node name or type, the suggestion panel is organized in a 3-level hierarchy. Node types are grouped into domains. The user can choose a domain first, followed by a node type in the domain and, if desired, the name of a specific node belonging to the chosen type. The panel also allows the user to search for the desired node name or type using keywords. Right after a new node is added, it is not yet connected to the rest of the partial query graph; this is the only moment at which [[$\mathsf{Orion}$]{}]{} allows the partial query graph to be disconnected. Hence, no other operation is allowed until the user adds an edge connecting the newly added node with some existing node, using the aforementioned step **1)**.
Candidate Edges
---------------
[[$\mathsf{Orion}$]{}]{} assists users in query construction by suggesting edge types to add to the partial query graph $G_p$, in both active and passive modes. In its passive mode, a new edge is drawn between nodes $v$ and $v'$ by clicking the mouse on one node and dragging it to the other. The set of candidate edges in the passive mode, $C_P$, consists of all possible edge types between $v$ and $v'$. The set of candidate edges in the active mode, $C_A$, consists of any edge that can be incident on any node in $V(G_p)$, subject to the schema of the underlying data graph. A candidate edge can be either between two existing nodes in $G_p$, or between a node in $G_p$ and a new node automatically suggested along with the edge.
\[def:incidentEdges\] Given a data graph $G_d$, the incident edges $\mathrm{IE}(v)$ of a node $v \in V(G_d)$, is the set of types of the edges in $E(G_d)$ that are incident on node $v$. I.e., $\mathrm{IE}(v) = \{\mathrm{etype}(e) \lvert e=(v, v_i) \text{ or } e=(v_i, v), e \in E(G_d) \}$.
\[def:incident-edges\] Given a partial query graph $G_p$, the neighboring candidate edges $\mathrm{NE}(v)$ of any node $v \in V(G_p)$, is the set of edge types defined as follows, depending on whether $v$ is a specific node name or a node type (cf. Definition \[def:qgraph\]):\
1) if $v \in V(G_d), \mathrm{NE}(v) = \mathrm{IE}(v)$;\
2) if $v \in T_V, \mathrm{NE}(v) = \bigcup\{\mathrm{IE}(v') \lvert v' \in V(G_d) \text{, } v \in \mathrm{vtype}(v')\}$.
When a new edge is added between two nodes $v$ and $v'$ in passive mode, $C_P = \mathrm{NE}(v) \cap \mathrm{NE}(v')$, and the set of candidate edges in active mode is $C_A = \bigcup_{v\in V(G_p)}\{e \lvert e\in \mathrm{NE}(v) \}$.
\[def:candidateEdges\] The set of candidate edges $C$ consists of all edges that can be added to the partial query graph $G_p$ at any given moment in the query construction process: $$\begin{gathered}
\label{eq:candidateEdges}
C =
\begin{cases}
C_P & \text{in passive mode} \\
C_A & \text{in active mode}
\end{cases}\end{gathered}$$
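The constructions $\mathrm{IE}(v)$, $\mathrm{NE}(v)$, $C_P$ and $C_A$ can be sketched over a toy data graph as follows (function names and labels are illustrative assumptions, not the system’s actual implementation):

```python
from collections import defaultdict

def incident_edge_types(data_edges):
    """IE(v) of Definition [def:incidentEdges]: for each data-graph node,
    the set of types of the edges incident on it.
    data_edges is a list of (src, dst, etype) triples."""
    ie = defaultdict(set)
    for src, dst, etype in data_edges:
        ie[src].add(etype)
        ie[dst].add(etype)
    return ie

def neighboring_edge_types(v, ie, vtype):
    """NE(v) of Definition [def:incident-edges]: IE(v) if v is a concrete
    node; otherwise the union of IE(v') over all instances v' of type v."""
    if v in ie:
        return set(ie[v])
    out = set()
    for node, types in vtype.items():
        if v in types:
            out |= ie[node]
    return out

def candidate_edges(mode, Gp_nodes, ie, vtype, v=None, v2=None):
    """C_P = NE(v) ∩ NE(v') in passive mode;
    C_A = union of NE(v) over all v in V(G_p) in active mode."""
    if mode == "passive":
        return neighboring_edge_types(v, ie, vtype) & neighboring_edge_types(v2, ie, vtype)
    return set().union(*(neighboring_edge_types(u, ie, vtype) for u in Gp_nodes))

# Toy data graph: actor a1 stars in film f1 and was educated at u1.
ie = incident_edge_types([("a1", "f1", "starring"), ("a1", "u1", "education")])
vtype = {"a1": {"FILM ACTOR"}, "f1": {"FILM"}, "u1": {"SCHOOL"}}
```

In passive mode, drawing an edge between a [<span style="font-variant:small-caps;">Film Actor</span>]{} node and a [<span style="font-variant:small-caps;">Film</span>]{} node yields only the edge types their instances share.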
In Section \[sec:ranking\] we discuss how to rank candidate edges and thus make suggestions to users in the query construction process.
Ranking Candidate Edges {#sec:ranking}
=======================
A simple method to rank candidate edges is to order them alphabetically. A more sophisticated method is to rank them by using statistics such as frequency in the data graph. Such a method ignores information regarding users’ intent. A query log naturally captures different users’ query intent. It contains past query sessions which indicate what edges have been used together by users. Such co-occurrence information gives evidence useful to rank candidate edges by their relevance to the user’s query intent.
In a user’s query session, edges found relevant, accepted and added to the query graph by the user are called *positive* edges. In [[$\mathsf{Orion}$]{}]{}’s active mode, suggested edges that are not accepted by the user are called *negative* edges. Both positive and negative edges play an important role in gauging the user’s query intent, as evidenced by our experiments. At any given moment in the query formulation process, the set of all positive and negative edges seen so far forms a query session.
\[def:querysession\] A query log $W$ is a set of query sessions. A query session $Q$ is defined as a set of positive and negative edges. $T_E$ (cf. Section \[sec:prelim\]) is the set of all possible positive edges for a data graph $G_d$. The set of all possible negative edges, denoted $\overline{T_E}$, is defined as $\overline{T_E} = \cup_{e \in T_E} \{\overline{e}\}$. If an edge $e \in T_E$ appears as a negative edge in a query session, it is represented as $\overline{e}$. Let $T = T_E \cup \overline{T_E}$. A query session $Q \in \mathcal{P}(T)$, where $\mathcal{P}(T)$ is the power set of $T$.
Table \[tab:querylog\] shows an example query log containing 8 query sessions, one per line. For instance, $w_4$ is a query session where the suggested edges $\overline{{\textsf{\emph{\scriptsize artist}}}}$ and $\overline{{\textsf{\emph{\scriptsize title}}}}$ were not accepted by the user, while edges and were accepted.
[**Problem Statement:**]{} Given a query log $W$, an ongoing query session $Q$ and a set of candidate edges $C$ (cf. Equation \[eq:candidateEdges\]), the problem is to rank the edges in $C$ by a scoring function that captures the likelihood that the user would find them relevant.
In Section \[sec:baselineMethods\], we describe several baseline methods to rank candidate edges using query logs. In Section \[sec:rdp\] we propose a novel method inspired by random forests. Section \[sec:workload\] discusses several ways of obtaining a query log.
Baseline Methods {#sec:baselineMethods}
----------------
Several machine learning algorithms can be adapted to rank candidate edges. For instance, the task can be cast as a recommendation problem. One can also use a na[ï]{}ve Bayes classifier or a random forest based classifier to find the probability that an edge $e$ is the *class* associated with the ongoing query session $Q$, given by $P(e\lvert Q)$. The query log $W$ can be used to learn such models off-line. We implemented several baseline methods by adapting random forests (RF) and na[ï]{}ve Bayes classifiers (NB), as well as class association rules (CAR) [@car] and recommendation systems based on singular value decomposition (SVD) [@svd-reco]. Below we provide a brief sketch of these methods.
For RF and NB, we used a modified version of the query log $W$ as the training data. A query session with $t$ positive edges and $t'$ negative edges was converted to $t$ training instances, with a different positive edge as the class of each training instance containing $t-1+t'$ attributes. For instance, $w_1$ in Table \[tab:querylog\] was converted to $\langle ({\textsf{\emph{\scriptsize education}}}, \overline{{\textsf{\emph{\scriptsize nationality}}}}), ({\textsf{\emph{\scriptsize founder}}})\rangle$ and $\langle ({\textsf{\emph{\scriptsize founder}}}, \overline{{\textsf{\emph{\scriptsize nationality}}}}), ({\textsf{\emph{\scriptsize education}}})\rangle$, where is the class of the first instance and the class for the second instance. Multi-class classification models were learnt for RF and NB, wherein the number of classes equals the number of distinct positive edge types found in $W$.
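This conversion can be sketched as follows (the `~` prefix marking negative edges is our notation, not the paper’s):

```python
def sessions_to_instances(query_log):
    """Convert each query session (positives, negatives) into t training
    instances for RF/NB: each positive edge in turn becomes the class label,
    while the remaining t-1 positives plus all t' negatives (prefixed with
    "~") form the instance's t-1+t' attributes."""
    instances = []
    for positives, negatives in query_log:
        for cls in sorted(positives):
            attrs = {p for p in positives if p != cls} | {"~" + n for n in negatives}
            instances.append((frozenset(attrs), cls))
    return instances

# Session w1 of Table [tab:querylog]: positives {education, founder},
# negative {nationality}.
inst = sessions_to_instances([({"education", "founder"}, {"nationality"})])
```

A session with two positives thus yields exactly two instances, one per class label, as in the $w_1$ example above.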
For CAR, $W$ was modified to generate multiple rules. The query sessions in $W$ are itemsets. For a query session with $t$ positive edges and $t'$ negative edges, we generated $t$ association rules. The antecedent (left hand side) of each rule contains $t-1+t'$ attributes, while the consequent (right hand side) contains exactly one positive edge. For instance, $w_1$ in Table \[tab:querylog\] was converted to rules $\langle {\textsf{\emph{\scriptsize education}}}, \overline{{\textsf{\emph{\scriptsize nationality}}}} \rightarrow {\textsf{\emph{\scriptsize founder}}}\rangle$ and $\langle {\textsf{\emph{\scriptsize founder}}}, \overline{{\textsf{\emph{\scriptsize nationality}}}} \rightarrow {\textsf{\emph{\scriptsize education}}}\rangle$. If the antecedent of a rule and the ongoing session $Q$ overlap, the rule’s consequent can be suggested to the user, weighted by the degree of overlap together with the commonly used measures of support and confidence in association rule mining.
For SVD, $W$ was converted to a $\lvert W \lvert \times \lvert T \lvert$ matrix, with one row per query session and one column per edge in $T$. Each element was set to 1 if the corresponding edge occurs in the corresponding query session, and to 0 otherwise. For example, for query log $W$ in Table \[tab:querylog\], in the first row of the matrix, the columns corresponding to ${\textsf{\emph{\scriptsize education}}}$, ${\textsf{\emph{\scriptsize founder}}}$ and $\overline{{\textsf{\emph{\scriptsize nationality}}}}$ were set to 1, while the rest were set to 0.
[**Id**]{} [**Query Session**]{}
------------ ---------------------------------------------------------------------------------------------------------------
$w_1$ , , $\overline{{\textsf{\emph{\scriptsize nationality}}}}$
$w_2$ , $\overline{{\textsf{\emph{\scriptsize music}}}}$,
$w_3$ , $\overline{{\textsf{\emph{\scriptsize education}}}}$, , $\overline{{\textsf{\emph{\scriptsize starring}}}}$
$w_4$ $\overline{{\textsf{\emph{\scriptsize artist}}}}$, $\overline{{\textsf{\emph{\scriptsize title}}}}$, ,
$w_5$ $\overline{{\textsf{\emph{\scriptsize director}}}}$, ,
$w_6$ , $\overline{{\textsf{\emph{\scriptsize editor}}}}$,
$w_7$ $\overline{{\textsf{\emph{\scriptsize award}}}}$, , , $\overline{{\textsf{\emph{\scriptsize genre}}}}$
$w_8$ , , $\overline{{\textsf{\emph{\scriptsize nationality}}}}$
: Example Query Log $W$[]{data-label="tab:querylog"}
Random Decision Paths (RDP) {#sec:rdp}
---------------------------
Here we describe random decision paths (RDP), a novel method for measuring the relevance of a candidate edge. The RDP formulation is motivated by random forests [@breiman_ml2001]. However, RDP has important differences from the standard definition and application of random forests, and significantly outperforms standard random forests in our experiments.
### Motivation: from Random Forests to Random Decision Paths {#sec:rdp_motivation}
To better understand the similarities and differences between RDP and random forests, it is useful to briefly review decision trees and random forests. In a general classification setting, a decision tree $D$ defines a probability function $P_D(y | x)$, where $x$ is a pattern, and $y$ is the class of that pattern. The decision tree $D$ can also be seen as a classifier that maps patterns to classes: $D(x) = \operatorname*{arg\,max}_{y}P_D(y | x)$. The output of tree $D$ on a pattern $x$ is computed by applying to $x$ a test defined at the root of $D$, and using the result of the test to direct $x$ to one of the children of the root. Each child of the root is a decision tree in itself, and thus $x$ moves recursively along a path from the root to a leaf, based on results of tests applied at each node. A leaf node $L$ stores precomputed probabilities $P_L(y)$ for each class $y$. If pattern $x$ ends up on a leaf $L$ of $D$, then the tree outputs $P_D(y | x) = P_L(y)$.
A random forest $F$ is a set of decision trees. A forest $F$ defines a probability $P_F(y | x)$, as the average $P_D(y | x)$ over all trees $D \in F$. To construct a random forest, trees are built by choosing a random feature to test at each node, until a predetermined number of trees has been built. The probability values stored at the leaves of each tree are computed using a set of training patterns, for each of which the true class is known.
Random forests can be applied to our problem, but have certain undesirable properties. Each pattern is a query session, consisting typically of a few (or a few tens of) positive and negative edges. The total number of edge types can reach thousands (it equals 5253 in one of our experimental datasets). The test applied at each node of a decision tree simply checks if a certain edge (positive or negative) is present in the query session. Since query sessions contain relatively few edges compared to the number of edge types, for most tests the result is overwhelmingly “no”, meaning that the query session does not contain the edge specified in the test. This leads to highly unbalanced trees, where the path corresponding to all “no” results gets the majority of training examples, and paths corresponding to more than one or two “yes” results frequently receive no training examples. At classification time, the input pattern $x$ ends up on the all-no path most of the time, and thus the class probabilities $P_D(y | x)$ do not vary much from the priors $P(y)$ averaged over all training examples.
Our solution to this problem is mathematically equivalent to constructing a random forest on the fly, given a query session $Q$ to classify. This random forest is explicitly constructed to classify $Q$, and is discarded afterwards; a new forest is built for every $Q$. The tests that we use for tree nodes in that forest consider exclusively edges that appear in $Q$. This way, the probabilities stored at leaf nodes are computed from training examples that are similar to $Q$ in a sense, as they share at least some edges with $Q$. This is why we expect these probabilities to be more accurate compared to the probabilities obtained from a random forest constructed offline, without knowledge of $Q$. This expectation is validated in the experimental results.
At the same time, since we know $Q$, constructing full random forests is not necessary, and we can save significant computational time by exploiting that fact. The key idea is that, for any decision tree $D$ that we may build, since we know $Q$, we know the path that $Q$ is going to take within that tree. Computing the output for any other paths of $D$ is useless, since $D$ is constructed for the sole purpose of being applied to $Q$. Therefore, out of every tree in the random forest, we only need to compute and store a single path. Consequently, our random forest is reduced to a set of decision paths, and this set is what we call “random decision paths” (RDP).
### Formulation of Random Decision Paths {#sec:rcp}
We measure the relevance of a candidate edge $e$ to query session $Q$, by aggregating the relevance of $e$ to several different subsets of edges in $Q$. We estimate the relevance of an edge $e$ to each such subset of $Q$ using the query log $W$. We define a support function $\mathrm{supp}(e, Q_i, W)$ to estimate the relevance of an edge $e$ to $Q_i \subseteq Q$: $$\begin{aligned}
\label{eq:support}
\mathrm{supp}(e, Q_i, W) = \frac{\lvert \{w \lvert w\in W \textsf{, } Q_i \cup \{ e\} \subseteq w \}\lvert }
{\lvert \{w \lvert w\in W \textsf{, } Q_i \subseteq w \}\lvert }\end{aligned}$$ The intuition behind using multiple subsets of $Q$ to measure the relevance of an edge $e$ to the query session $Q$, instead of using the entire query session $Q$ alone, is the following: if $Q$ is long, [i.e.,]{} the query session contains a large number of positive and negative edges, $\mathrm{supp}(e, Q_i, W)$ might be equal to 0 for every candidate edge $e$. This is because it is unlikely to find any query session in the query log that is a super-set of $Q$.
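Equation \[eq:support\] reads off directly into code (with the convention, assumed here, that the support is 0 when no log session contains $Q_i$; all edge labels below are toy examples):

```python
def supp(e, Q_i, W):
    """supp(e, Q_i, W) of Equation [eq:support]: among query-log sessions
    that contain every edge of Q_i, the fraction that also contain edge e.
    Sessions and Q_i are represented as sets of edge labels."""
    containing = [w for w in W if Q_i <= w]
    if not containing:
        return 0.0  # assumed convention for an empty denominator
    return sum(1 for w in containing if e in w) / len(containing)

# Toy query log.
W = [{"education", "founder", "~nationality"},
     {"education", "founder"},
     {"education", "starring"}]
```

For the toy log, two of the three sessions containing both `education` and `founder` exist, so growing $Q_i$ quickly shrinks the denominator, which is exactly why RDP works with many small random subsets of $Q$ rather than with $Q$ itself.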
If $\mathcal{P}(Q)$ is the power set of query session $Q$, we propose to build a set of random decision paths $\Re$, that is: 1) a set of decision paths based only on the edges in query session $Q$, and 2) a subset of $\mathcal{P}(Q)$ such that $\lvert \Re \lvert \ll \lvert \mathcal{P}(Q) \lvert$. We do not attempt to pre-learn a set of decision paths using query log $W$ that are used to rank edges for any arbitrary query session (like learning a decision tree or rules for a classification model). Instead, given a query session $Q$, we only build random decision paths specific to $Q$, that measure the correlation of a candidate edge $e$ with different random subsets of edges in $Q$. In other words, we assume the presence of a virtual space of all possible decision paths, but only instantiate and use a few random paths specific to $Q$.
\[def:correlationPath\] Given a set of edges $O$, a decision path $\overrightarrow{O}$ is an ordered sequence of the edges in $O$.
The positive and negative edges in a query session $Q$ reflect the relevance and irrelevance of the edges to the user’s query intent. An example order for the decision path $\overrightarrow{Q}$ corresponding to query session $Q$ is the order of the edge suggestion sequence. There can be several such ordered sequences for a query session. For any query session $O \in \mathcal{P}(T)'$, the number of possible orders is equal to the total number of permutations of $O$, which is equal to $\lvert O \lvert !$. Given the set of all query sessions $\mathcal{P}(T)'$, we define $\overrightarrow{\mathcal{P}(T)'}$ as the set of all possible decision paths. $\overrightarrow{\mathcal{P}(T)'} = \bigcup_{O \in \mathcal{P}(T)'}\{\overrightarrow{O_i} \lvert \forall i, 1 \leq i \leq \lvert O \lvert !\}$, and $\lvert \overrightarrow{\mathcal{P}(T)'} \lvert$ is prohibitively large in practice.
A decision path $\overrightarrow{O}$ has a prefix path associated with it. For instance, the prefix of a decision path $\overrightarrow{O}$, denoted by $\mathrm{prefix}(\overrightarrow{O})$, is the path before adding the last edge that formed $\overrightarrow{O}$. If $\overrightarrow{O} = (e_1, e_2, \ldots, e_{k-1}, e_k)$, then $\mathrm{prefix}(\overrightarrow{O}) = (e_1, e_2, \ldots, e_{k-1})$. The support for a decision path $\overrightarrow{O}$ is given by $\mathrm{count}(\overrightarrow{O})$, defined as $$\begin{aligned}
W_{\overrightarrow{O}} = \{w \lvert w \in W, O \subseteq w\},
\mathrm{count}(\overrightarrow{O}) = \lvert W_{\overrightarrow{O}} \lvert\end{aligned}$$ For a single-edge query session, [i.e.,]{} if $\lvert O \lvert = 1$, the support of the corresponding prefix path is $\mathrm{count}(\mathrm{prefix}(\overrightarrow{O})) = \lvert W \lvert$.
Given the query session $Q$, we define $\mathcal{Q} \subseteq \overrightarrow{\mathcal{P}(T)'}$, the set of all decision paths that can be formed using subsets of edges in $Q$, whose support is no more than a threshold $\tau$. More formally, $$\begin{aligned}
\label{eq:allRandPaths}
\hspace{-2mm}\mathcal{Q} = \{\overrightarrow{Q_i} \lvert Q_i \subseteq Q, \mathrm{count}(\overrightarrow{Q_i}) \leq \tau, \mathrm{count}(\mathrm{prefix}(\overrightarrow{Q_i})) > \tau \}\end{aligned}$$
We propose to build a random set of decision paths $\Re \subseteq \mathcal{Q}$, such that $\lvert \Re \lvert = N$, consisting of only decision paths that are based on the current query session $Q$, and whose support is no more than $\tau$. A random decision path $\overrightarrow{Q_i}$ is grown using edges in $Q$ until either $\mathrm{count}(\overrightarrow{Q_i}) \leq \tau$, or all the edges in $Q$ are exhausted, whichever comes first. Note that in case all edges in $Q$ are exhausted before we obtain a path $\overrightarrow{Q_i} \in \mathcal{Q}$, then $\mathcal{Q} = \phi$. The final score of an edge $e \in C$ for query session $Q$ is given by $$\begin{aligned}
\label{eq:totalScore}
\mathrm{score}(e) = \frac{1}{\lvert \Re \lvert} \times \sum_{\overrightarrow{Q_i} \in \Re} \mathrm{supp}(e, Q_i, W)\end{aligned}$$
\[alg:decisionPaths\] Initialize $E_{sugg} \leftarrow \phi$, $i \leftarrow 0$; instantiate and grow $N$ random decision paths; return the candidate edges in decreasing order of $\mathrm{score}(\cdot)$.
Algorithm \[alg:decisionPaths\] explains the random decision paths based edge ranking algorithm in detail. Given a set of candidate edges $C$ and a query session $Q$, we instantiate $N$ random decision paths (line \[line:decisionPaths-instantiate-start\]). The next edge of the path is chosen uniformly at random without replacement from $Q$ (line \[line:decisionPaths-randsplit\]). The new edge chosen in the path is used to obtain a subset of entries from the query log $W$. Only those entries in $W$ that contain all the positive and negative edges in the decision path $\overrightarrow{Q_i}$ are chosen to be present in $W_{Q_i}$ (line \[line:decisionPaths-path-start\]). A decision path $\overrightarrow{Q_i}$ is grown until $W_{Q_i}$ contains no more than $\tau$ entries in it (or there are no more edges to be randomly chosen from in $Q$). The support for each candidate edge $e\in C$ is computed for each decision path (line \[line:decisionPaths-suppcnt-start\]). The support for each candidate edge is averaged across all the decision paths and the edges are ranked based on the final score obtained using Equation \[eq:totalScore\] (line \[line:decisionPaths-choosebest\]).
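A minimal sketch of this procedure follows (simplified from Algorithm \[alg:decisionPaths\]; the parameter defaults, fixed seed, and toy edge labels are our assumptions):

```python
import random

def rdp_rank(C, Q, W, N=10, tau=2, seed=0):
    """Random Decision Paths: grow N paths by drawing edges of the ongoing
    session Q uniformly at random without replacement, stopping a path once
    at most tau log sessions still match it (or Q is exhausted); then score
    each candidate edge by its average support over the paths (Equation
    [eq:totalScore]) and return candidates in decreasing score order."""
    rng = random.Random(seed)
    paths = []
    for _ in range(N):
        edges = list(Q)
        rng.shuffle(edges)          # random split: next path edge chosen from Q
        matching = list(W)          # W_{Q_i}: log sessions matching the path so far
        for e in edges:
            matching = [w for w in matching if e in w]
            if len(matching) <= tau:
                break
        paths.append(matching)
    scores = {}
    for e in C:
        total = 0.0
        for matching in paths:
            if matching:            # supp is 0 when no session matches the path
                total += sum(1 for w in matching if e in w) / len(matching)
        scores[e] = total / N
    return sorted(C, key=lambda e: -scores[e]), scores

# Toy query log and session (hypothetical edge labels).
W = [{"education", "founder", "award"},
     {"education", "founder", "award"},
     {"education", "starring"}]
ranked, scores = rdp_rank({"award", "starring"}, {"education", "founder"}, W, N=5, tau=2)
```

Here every path narrows the log down to the two sessions containing both session edges, so `award` (present in both) outranks `starring` (present in neither).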
Figure \[fig:random-forest\] shows an example of using random decision paths to rank the candidate edges. If the set of candidate edges is $C$ = $\{$, , $\}$ and query session $Q$ contains edges , $\overline{{\textsf{\emph{\scriptsize education}}}}$, , $\overline{{\textsf{\emph{\scriptsize nationality}}}}$, and , $\overrightarrow{path_1}$ through $\overrightarrow{path_N}$ are examples of various random decision paths. For instance, decision path $\overrightarrow{path_2}$ consists of edges and $\overline{{\textsf{\emph{\scriptsize nationality}}}}$, which leads to query log subset $W_{path_2}$ where $\lvert W_{path_2} \lvert \leq \tau$. In each decision path $\overrightarrow{path_i}$, the support of every candidate edge $e \in C$ is computed over the query log subset $W_{path_i}$. The support for each candidate across all the decision paths is aggregated to rank edges in $C$.
Simulating Query Logs {#sec:workload}
=====================
All the baseline methods and the random decision paths rely on a query log. However, to the best of our knowledge, no query log for large graphs is publicly available, except for a SPARQL query log [@dbpedia-sparql], which applies only to the DBpedia data graph. We thus simulate and bootstrap a query log. We first find correlated positive edges, using three different methods: 1) using Wikipedia and the data graph, 2) using only the data graph, and 3) using the aforementioned SPARQL query log. Then negative edges, which indicate edge suggestions that were not accepted by the user, are injected into the simulated query sessions. If positive edges $e_1$ and $e_2$ are in query session $Q_i$, and another query session $Q_j$ contains $e_1$ but not $e_2$, then $e_2$ is injected into $Q_j$ as a negative edge.
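The injection rule can be sketched as a simple pass over the simulated sessions (a simplified sketch; the function name and toy labels are ours):

```python
from collections import defaultdict
from itertools import combinations

def inject_negatives(sessions):
    """For every pair of positive edges (e1, e2) co-occurring in some
    session, add e2 as a negative edge to each session that contains e1 but
    not e2 (and symmetrically), simulating rejected suggestions."""
    cooc = defaultdict(set)
    for s in sessions:
        for e1, e2 in combinations(s, 2):
            cooc[e1].add(e2)
            cooc[e2].add(e1)
    result = []
    for s in sessions:
        negatives = set()
        for e in s:
            negatives |= cooc[e] - s   # co-occurring edges missing from s
        result.append((set(s), negatives))
    return result

out = inject_negatives([{"education", "founder"}, {"education", "starring"}])
```

Each session keeps its positive edges and gains, as negatives, exactly those edges that co-occurred with one of its positives elsewhere in the log.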
**Positive edges using Wikipedia and data graph ([[$\mathsf{WikiPos}$]{}]{}):** Each Wikipedia article describes an entity in detail and refers to other Wikipedia entities by wikilinks. Given a sentence in a Wikipedia article (or a window of consecutive sentences), the multiple entities mentioned in it can be considered related in some way. We discover the pairwise relationships between these entities. Our premise is that these co-occurring relationships simulate the positive edges of a query session. The intuition is that such consecutive sentences describe closely related facts, and an [[$\mathsf{Orion}$]{}]{} user may also have such closely related facts as their query intent.
To find co-occurring positive edges, we map entities mentioned in Wikipedia articles to nodes in the data graph. Data graphs such as Freebase and DBpedia provide a straightforward mapping of their nodes to Wikipedia entities. Given a sentence window, all edges found in the data graph between the mapped entities approximate the co-occurring positive edges of a query session in $W$. We consider all edges between the mapped entities in the data graph, while only a subset of these might actually be mentioned in the corresponding Wikipedia article. Thus, the co-occurring positive edges identified using this method might be noisy. We therefore filter out co-occurring positive edges with low support: every session in the query log is viewed as an itemset, and we use the Apriori algorithm to generate frequent itemsets, subject to a support threshold $\rho_w$. The resulting frequent itemsets form query sessions with only positive edges.
**Positive edges using the data graph ([[$\mathsf{DataPos}$]{}]{}):** Another way of finding co-occurring positive edges is to use statistics based on the data graph $G_d$ alone. For every node $v \in V(G_d)$, an itemset is created which includes all edges incident on $v$ in $G_d$. This way, the graph $G_d$ is converted into $\lvert V(G_d) \rvert$ itemsets. Here too, we apply the Apriori algorithm to find all frequent itemsets, using a support threshold $\rho_d$.
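As an illustration of the [[$\mathsf{DataPos}$]{}]{} idea, the following sketch converts a toy edge list into per-node itemsets and keeps the frequent itemsets as simulated positive-edge sessions. A real run would use a full Apriori implementation over millions of itemsets; the brute-force enumeration of small subsets here is only meant for toy examples, and the function name and signature are our own.

```python
from itertools import combinations
from collections import defaultdict

def datapos_sessions(edges, min_support=2, max_size=3):
    """Toy sketch of DataPos: one itemset per node (its incident edge
    types), then frequent itemsets become simulated query sessions.

    edges: list of (u, v, etype) triples from the data graph.
    """
    # One itemset per node: the edge types incident on it.
    itemsets = defaultdict(set)
    for u, v, etype in edges:
        itemsets[u].add(etype)
        itemsets[v].add(etype)
    # Count the support of every candidate subset up to max_size.
    # (A real Apriori pass would prune candidates level by level.)
    counts = defaultdict(int)
    for items in itemsets.values():
        for k in range(2, max_size + 1):
            for combo in combinations(sorted(items), k):
                counts[combo] += 1
    # Frequent itemsets form sessions with only positive edges.
    return [set(c) for c, n in counts.items() if n >= min_support]
```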
**Positive edges using SPARQL query log ([[$\mathsf{SparqlPos}$]{}]{}):** The DBpedia SPARQL query log [@dbpedia-sparql] contains benchmark queries posed by users on DBpedia through its SPARQL query interface. We extract co-occurring positive edges using the properties specified in the WHERE clause of the queries. Since this is a real query log, every set of positive edges found in each WHERE clause is used as is, without applying any pruning as in [[$\mathsf{WikiPos}$]{}]{} and [[$\mathsf{DataPos}$]{}]{}.
**Injecting negative edges to query log ([[$\mathsf{InjectNeg}$]{}]{}):** The aforementioned methods only generate query sessions with positive edges. But it is crucial to simulate edges that were not accepted by users, since we must rank candidate edges that are correlated with both accepted and ignored edges in a query session. A simple but effective strategy is used to introduce negative edges into the query logs. Consider a query log which has only positive edges, as produced by the aforementioned methods. For a query session $w \in W$, $T(w)$ is defined as the set of node types of the end nodes of all edges in $w$. I.e., $T(w) = \{t \mid t\in T_V, \exists e\textsf{=}(u,v) \in E(G_d), \mathrm{etype}(e) \in w \textrm{ s.t. } t \in \mathrm{vtype}(u) \textrm{ or } t \in \mathrm{vtype}(v)\}$. The set of negative edges added to $w$, denoted $\overline{w}$, is the set of all edges incident on the node types in $T(w)$. I.e., $\overline{w} = \{\overline{e} \mid e\textsf{=}(u,v) \in E(G_d), \mathrm{vtype}(u) \in T(w) \textrm{ or } \mathrm{vtype}(v) \in T(w), \mathrm{etype}(e)\notin w\}$. The new entry for every $w \in W$ consists of $w \cup \overline{w}$, which is then used as the final query log by the various candidate edge ranking methods in Section \[sec:ranking\].
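A minimal sketch of [[$\mathsf{InjectNeg}$]{}]{} under simplifying assumptions (sessions hold edge types rather than concrete edges, and node types come from a plain dict); the function name and data layout are ours, not the paper's:

```python
def inject_negatives(session, graph_edges, vtype):
    """Sketch of InjectNeg: add as negatives every edge type incident
    on a node type already touched by the session's positive edges.

    session: set of positive edge types; graph_edges: (u, v, etype)
    triples; vtype: node -> set of node types.
    Returns (positives, negatives).
    """
    # T(w): node types of the end nodes of the session's edges.
    touched = set()
    for u, v, etype in graph_edges:
        if etype in session:
            touched |= vtype[u] | vtype[v]
    # Negatives: edge types incident on a touched node type,
    # excluding the session's own positive edges.
    negatives = {etype for u, v, etype in graph_edges
                 if etype not in session
                 and (vtype[u] & touched or vtype[v] & touched)}
    return session, negatives
```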
Experiments {#sec:experiments}
===========
Setup
-----
We conducted user studies on a double quad-core 2.0 GHz Xeon server with 24 GB memory. Furthermore, RDP was compared with other edge ranking algorithms (RF, NB, CAR and SVD) on the Lonestar Linux cluster of TACC,[^6] which consists of five Dell PowerEdge R910 server nodes, each with four Intel Xeon E7540 2.0GHz 6-core processors, and a total of 1TB memory.
----------------- ------------------ ----------------- ------------------- ----------------------------------
[**Freebase**]{} [**DBpedia**]{} [**Wikipedia**]{} [**SPARQL**]{} [@dbpedia-sparql]
[**Wiki-FB**]{} Yes - Yes -
[**Data-FB**]{} Yes - - -
[**Wiki-DB**]{} - Yes Yes -
[**Data-DB**]{} - Yes - -
[**QLog-DB**]{} - - - Yes
----------------- ------------------ ----------------- ------------------- ----------------------------------
: Query Logs Simulated[]{data-label="tab:querylogs"}
[**Query Type**]{} [**Query Task**]{}
-------------------- --------------------
[**Easy**]{}
[**Medium**]{}
[**Hard**]{}
: Sample Query Tasks From User Studies[]{data-label="tab:samplequeries"}
-- ------------- ----------------------- -------------- -------------------
1  Very Poorly   Very Hard               Unacceptable   Strongly Disagree
2  Poorly        Hard                    Poor           Disagree
3  Adequately    Neither Easy Nor Hard   Satisfactory   Uncertain
4  Well          Easy                    Good           Agree
5  Very Well     Very Easy               Excellent      Strongly Agree
-- ------------- ----------------------- -------------- -------------------

: Survey Response Options and Likert Scores[]{data-label="table:survey"}
**Datasets:** We used two large real-world data graphs: the 2011 version of Freebase [@Bollacker+08freebase], and the 2015 version of DBpedia [@AuerBK+07]. We pre-processed the graphs to keep only nodes that are named entities ([e.g.,]{}), while pruning out nodes corresponding to constant values such as integers and strings among others. In the original Freebase dataset, every relationship has an inverse relationship in the opposite direction. For instance, the relationship has in the opposite direction. All such edges in the opposite direction were deleted, since they are redundant. The resulting Freebase graph contains 30 million nodes, 33 million edges, and 5253 edge types. After similar pre-processing, the DBpedia graph obtained contains 4 million nodes, 12 million edges and 647 edge types.
**Query Logs:** Table \[tab:querylogs\] lists the various query logs simulated using the techniques described in Section \[sec:workload\]. One can find positive edges of a query session using different methods, and inject negative edges into them using the method [[$\mathsf{InjectNeg}$]{}]{} in Section \[sec:workload\]. We simulated two different query logs for Freebase: Wiki-FB and Data-FB. The positive edges for Wiki-FB were simulated using both Wikipedia (September 2014 version) and the Freebase data graph, and the positive edges for Data-FB were simulated using only the Freebase data graph, by methods [[$\mathsf{WikiPos}$]{}]{} and [[$\mathsf{DataPos}$]{}]{} in Section \[sec:workload\], respectively. We simulated three different query logs for DBpedia: Wiki-DB, Data-DB and QLog-DB. Wiki-DB and Data-DB were simulated via the same approach as Wiki-FB and Data-FB, except that DBpedia (instead of Freebase) was the data graph. For QLog-DB, the positive edges were simulated by [[$\mathsf{SparqlPos}$]{}]{} in Section \[sec:workload\].
**Systems Compared in User Studies:** To verify if [[$\mathsf{Orion}$]{}]{} indeed makes it easier for users to formulate query graphs, we conducted user studies with two different user interfaces: [[$\mathsf{Orion}$]{}]{}, and [[$\mathsf{Naive}$]{}]{}. [[$\mathsf{Orion}$]{}]{} operates in both passive and active modes (cf. Section \[sec:ui-overview\]). [[$\mathsf{Naive}$]{}]{} on the other hand does not make any automatic suggestions and only lets users manually add nodes and edges on the canvas. The various candidate edges are sorted alphabetically and presented to the user in a drop down list. This mimics the query formulation support offered in existing visual query systems such as [@quble].
**Methods Compared for Ranking Candidate Edges:** We compared the effectiveness of [[$\mathsf{Orion}$]{}]{}’s candidate edge ranking algorithm (RDP) with the baseline methods described in Section \[sec:baselineMethods\], including RF, NB, CAR and SVD.
User Studies {#sec:exp-userstudies}
------------
**User Study Set-up:** We conducted an extensive user study with 30 graduate students at the authors’ institution. The students neither had any expertise with graph query formulation, nor did they have exposure to the data graphs. None of these students were exposed to this research in any way other than participating in the user study. We conducted A/B testing using the two interfaces, [[$\mathsf{Orion}$]{}]{} and [[$\mathsf{Naive}$]{}]{}. The underlying data graph for both systems was Freebase, and both were hosted online on the aforementioned Xeon server. We arbitrarily chose 15 students to work with [[$\mathsf{Orion}$]{}]{}, and the other 15 students worked with [[$\mathsf{Naive}$]{}]{}. The users of [[$\mathsf{Orion}$]{}]{} were not exposed to [[$\mathsf{Naive}$]{}]{}, and vice versa. We created a pool of 21 query tasks with three levels of difficulty: 9 queries were *easy*, 6 were *medium* and 6 were *hard*. The target query graphs for the easy and medium query tasks had exactly one and two edges, respectively. The target query graphs for the hard query tasks had at least three and at most five edges. Table \[tab:samplequeries\] lists one sample query for each of the three categories. Figures \[fig:targetquery\](a), (b) and (c) depict the target query graphs for the query tasks listed in Table \[tab:samplequeries\].
We created 15 different query sheets, where each consisted of 3 easy, 2 medium and 2 hard query tasks, chosen from the pool of 21 queries designed. Each [[$\mathsf{Orion}$]{}]{} and [[$\mathsf{Naive}$]{}]{} user was given a query sheet as the task set to complete, which ensured that users of both systems worked on the same query tasks. Each user was given an initial 15-minute introduction by the moderators regarding the data graphs, graph query formulation, and the user interface. The users then spent 45 minutes working on their respective query sheets. The users were allowed to ask any clarification questions regarding the tasks during the user study. Each user was awarded a gift card worth $\$15.00$ for their participation in the user study. Since 15 users worked on 7 queries each, we obtained a total of 105 responses for both [[$\mathsf{Orion}$]{}]{} and [[$\mathsf{Naive}$]{}]{}.
**Survey Form:** The users were requested to fill in an online survey form at the end of each query task, thus resulting in 105 survey form responses for each user interface. The survey form had four questions: $Q1$, $Q2$, $Q3$ and $Q4$, as listed in Table \[table:survey\]. Each question had five options, specifying the level of agreement a user could have with the particular aspect of the interface measured by the question. We assign a score for every option in each question based on the Likert scale shown in Table \[table:survey\]. The least favourable experience with respect to each question is assigned a score of 1, and the most favourable experience is assigned a score of 5.
[**System**]{}             [**Queries**]{}   [**Conversion rate**]{}   [**z-value**]{}   [**p-value**]{}
-------------------------- ----------------- ------------------------- ----------------- -----------------
[[$\mathsf{Orion}$]{}]{}   All               $c_O$=0.74                0.92              0.1788
[[$\mathsf{Naive}$]{}]{}   All               $c_N$=0.68
[[$\mathsf{Orion}$]{}]{}   Medium and Hard   $c_O$=0.70                1.36              0.0869
[[$\mathsf{Naive}$]{}]{}   Medium and Hard   $c_N$=0.58
: Conversion Rates of [[$\mathsf{Naive}$]{}]{} and [[$\mathsf{Orion}$]{}]{}[]{data-label="tab:userstudy-conversion"}
### Efficiency Based on Conversion Rate
**Measure:** One of the popular metrics used to measure the effectiveness of the systems compared in A/B testing is conversion rate $c$, which is the percentage of tasks completed successfully by users. The conversion rate is defined over a set of $\mathrm{Tasks}$ as: $$\begin{aligned}
c = \frac{\sum_{\mathrm{task} \in \mathrm{Tasks}}{\mathrm{sim}(G_u,G_t)}}{\lvert \mathrm{Tasks} \rvert}\end{aligned}$$ where $\mathrm{task}$ is a query task assigned to the user, $G_u$ is the corresponding query graph constructed by the user, and $G_t$ is the actual target query graph corresponding to $\mathrm{task}$. The similarity measure $\mathrm{sim}(G_u,G_t)$ captures the notion of success, based on how similar $G_u$ is to $G_t$. Since we designed the query tasks, the target query graph for each query task was known to us a priori. The query graph constructed by each user was recorded by the interface during the user study. Intuitively, the similarity between $G_u$ and $G_t$ is based on the edge-preserving sub-graph isomorphic match between the two graphs. More formally, $\mathrm{sim}(G_u,G_t)$ is defined as: $$\begin{aligned}
\label{eq:ranking_function}
\mathrm{sim}(G_u, G_t) = \frac{\max_{f}{\sum_{\substack{ e=(u,v) \in E(G_u) \\ e'=(f(u), f(v)) \in E(G_t)}}} \mathrm{match}(e, e')}{\lvert E(G_t) \rvert}\end{aligned}$$ where $f:V(G_u) \rightarrow V(G_t)$ is a bijection, and $\mathrm{match}(e, e')$ is a matching function defined as: $$\begin{gathered}
\label{eq:match}
\hspace{-4mm} \mathrm{match}(e,e')\text{=}
\begin{cases}
1 & \text{if } u\text{=}f(u), v\text{=}f(v), \mathrm{etype}(e)=\mathrm{etype}(e')\\
0 & \text{otherwise}
\end{cases}\vspace{-2mm}\end{gathered}$$
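Under the definitions above, the similarity can be computed by brute force over node mappings, which is feasible for the small query graphs used in the study. This sketch simplifies the match function to an edge-type check on the mapped edge (it omits the node-identity test), so it approximates the measure rather than reproducing the exact implementation:

```python
from itertools import permutations

def sim(user_edges, target_edges):
    """Fraction of target edges reproduced by the user's graph under
    the best injective node mapping.  Edges are (u, v, etype) triples.

    Brute force over mappings: fine for query graphs of at most five
    edges, and assumes the user graph has no more nodes than the target.
    """
    if not target_edges:
        return 0.0
    u_nodes = sorted({n for u, v, _ in user_edges for n in (u, v)})
    t_nodes = sorted({n for u, v, _ in target_edges for n in (u, v)})
    t_set = set(target_edges)
    best = 0
    for image in permutations(t_nodes, len(u_nodes)):
        f = dict(zip(u_nodes, image))
        # match(e, e') simplified: the mapped edge exists in the
        # target with the same edge type.
        matched = sum(1 for u, v, et in user_edges
                      if (f[u], f[v], et) in t_set)
        best = max(best, matched)
    return best / len(target_edges)
```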
**Results:** Table \[tab:userstudy-conversion\] summarizes the conversion rates of [[$\mathsf{Orion}$]{}]{} and [[$\mathsf{Naive}$]{}]{} over the set of all query tasks (easy, medium and hard query tasks), and also over only the medium and hard query tasks. We observe that [[$\mathsf{Orion}$]{}]{} has a better conversion rate than [[$\mathsf{Naive}$]{}]{} in both scenarios. But, on performing a two sample Z-test with significance level $\alpha$=0.1, only the observation that [[$\mathsf{Orion}$]{}]{} has a better conversion rate than [[$\mathsf{Naive}$]{}]{} for medium and hard queries is statistically significant. We next describe the hypothesis testing of the two scenarios in detail.
The conversion rate of [[$\mathsf{Orion}$]{}]{}, $c_O$, over all the 105 query tasks is 0.74, and the conversion rate of [[$\mathsf{Naive}$]{}]{}, $c_N$, for the same set of tasks is 0.68. On average, [[$\mathsf{Orion}$]{}]{} users had a higher chance of formulating the correct query graph compared to the [[$\mathsf{Naive}$]{}]{} users. We assume that constructing a query graph follows a Bernoulli trial, with the probability of successfully constructing the target query graph on [[$\mathsf{Orion}$]{}]{} and [[$\mathsf{Naive}$]{}]{} as $p_O = c_O$ and $p_N = c_N$ respectively. Our hypothesis, $H_{A1}$, is that [[$\mathsf{Orion}$]{}]{} has a better conversion rate than [[$\mathsf{Naive}$]{}]{}: $H_{A1}$: $p_{O} > p_{N}$. The null hypothesis $H_{01}$ is given by $H_{01}$: $p_{O} \leq p_{N}$. For the aforementioned conversion rates of [[$\mathsf{Orion}$]{}]{} and [[$\mathsf{Naive}$]{}]{}, and a sample size of 105, $z = 0.92$. This results in a p-value of 0.1788. Since the p-value $ > \alpha$, the null hypothesis cannot be rejected as the data does not significantly support our hypothesis.
We dive deeper to investigate whether there are scenarios where [[$\mathsf{Orion}$]{}]{} does perform better than [[$\mathsf{Naive}$]{}]{}. The conversion rate over only the medium and hard query tasks (a total of 60 query tasks) is 0.70 for [[$\mathsf{Orion}$]{}]{} and 0.58 for [[$\mathsf{Naive}$]{}]{}, [i.e.,]{} $c_O = p_O = 0.70$ and $c_N = p_N = 0.58$. This indicates that [[$\mathsf{Orion}$]{}]{} users have a better chance of successfully constructing query graphs with two or more edges, compared to [[$\mathsf{Naive}$]{}]{} users. Our new hypothesis, $H_{A2}$, is that [[$\mathsf{Orion}$]{}]{} has a better conversion rate than [[$\mathsf{Naive}$]{}]{} for medium and hard queries: $H_{A2}$: $p_{O} > p_{N}$. The null hypothesis $H_{02}$ is given by $H_{02}$: $p_{O} \leq p_{N}$. For the aforementioned conversion rates of [[$\mathsf{Orion}$]{}]{} and [[$\mathsf{Naive}$]{}]{}, and a sample size of 60, $z = 1.36$, resulting in a p-value of 0.0869. Since the p-value $ < \alpha$, the data significantly supports our claim that [[$\mathsf{Orion}$]{}]{} users have a higher chance of successfully constructing complex query graphs containing two or more edges.
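The hypothesis tests above are standard one-sided two-sample z-tests for proportions; a sketch of the pooled-variance form is below. Small rounding differences from the reported z-values are expected, since the conversion rates themselves are rounded and are averages of fractional similarity scores rather than exact Bernoulli counts.

```python
from math import sqrt, erf

def two_proportion_ztest(p1, p2, n1, n2):
    """One-sided two-sample z-test for H_A: p1 > p2 (pooled variance).

    Returns (z, p_value), with the p-value from the upper tail of the
    standard normal distribution.
    """
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = 1 - 0.5 * (1 + erf(z / sqrt(2)))  # 1 - Phi(z)
    return z, p_value
```

For the medium-and-hard scenario ($p_O=0.70$, $p_N=0.58$, $n=60$ each), this yields $z \approx 1.37$ and a p-value below $\alpha = 0.1$; for the all-tasks scenario the p-value exceeds $0.1$, matching the conclusions drawn above.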
### Efficiency Based on Time
We next measure the time taken by a user to construct the query graph for a given query task: the time elapsed between the first time a user clicks on the query canvas for a new query task, and the time the user clicks on the “Submit” button of the interface. This was recorded in the background during the user study. Figure \[fig:userstudy-time-all\] shows the distribution of the time taken to complete a query task. We observe that half of the 105 query tasks were completed within 180 seconds by [[$\mathsf{Orion}$]{}]{} users, while [[$\mathsf{Naive}$]{}]{} users completed the same number of query tasks within 183.2 seconds. Around 26 query tasks were completed between 180 to 340.5 seconds, and between 183.2 to 325.7 seconds, by [[$\mathsf{Orion}$]{}]{} and [[$\mathsf{Naive}$]{}]{} users respectively. There were, however, a few query tasks that took a long time to complete, with a maximum of 1446.3 seconds for [[$\mathsf{Orion}$]{}]{} users and 1027.8 seconds for [[$\mathsf{Naive}$]{}]{} users. We further study the distribution of the time taken to complete query tasks based on the level of difficulty of the tasks. Figure \[fig:userstudy-time-easy\] compares the time taken for easy query tasks. We observe that around 23 of the 45 easy queries were completed within 135.5 and 130.3 seconds by [[$\mathsf{Orion}$]{}]{} and [[$\mathsf{Naive}$]{}]{} users respectively. Another 12 queries were completed between 135.5 to 202.3 seconds by [[$\mathsf{Orion}$]{}]{} users, and between 130.3 to 211.3 seconds by [[$\mathsf{Naive}$]{}]{} users. Figure \[fig:userstudy-time-med\] compares the time taken for medium query tasks. We observe that around 15 of the 30 medium queries were completed within 188.2 and 224.6 seconds by [[$\mathsf{Orion}$]{}]{} and [[$\mathsf{Naive}$]{}]{} users respectively. Another 7 queries were completed between 188.2 to 349.6 seconds by [[$\mathsf{Orion}$]{}]{} users, and between 224.6 to 296.2 seconds by [[$\mathsf{Naive}$]{}]{} users.
Finally, Figure \[fig:userstudy-time-hard\] compares the time taken for hard query tasks. We observe that around 15 of the 30 hard queries were completed within 296.1 and 259.6 seconds by [[$\mathsf{Orion}$]{}]{} and [[$\mathsf{Naive}$]{}]{} users respectively. Another 7 queries were completed between 296.1 to 540.4 seconds by [[$\mathsf{Orion}$]{}]{} users, and between 259.6 to 406.4 seconds by [[$\mathsf{Naive}$]{}]{} users. We observe that despite the steeper learning curve of [[$\mathsf{Orion}$]{}]{}, due to its larger set of features, the time taken to complete a majority of the query tasks is comparable with that of [[$\mathsf{Naive}$]{}]{}.
### Efficiency Based on Number of Iterations
We next measure the effectiveness of [[$\mathsf{Orion}$]{}]{} using the number of iterations involved in the query construction process: the number of times a ranked list of edges is presented to the user. The number of iterations is incremented in one of three ways: 1) the user selects one or more of the automatically suggested edges in active mode, and clicks on the canvas to get the next set of suggestions, 2) the user ignores all the suggestions made in active mode and clicks on “Refresh Suggestions” to get a new set of automatic suggestions, and 3) the user draws a new edge in passive mode. We do not measure this for [[$\mathsf{Naive}$]{}]{} since it makes no automatic ranked suggestions. Figure \[fig:userstudy-iters\] shows the distribution of the number of iterations required to construct query graphs. Overall, [[$\mathsf{Orion}$]{}]{} users needed no more than 13 iterations to complete around 79 of the 105 queries. Half of the easy, medium and hard queries required no more than 3, 10 and 14 iterations respectively. Another 11 easy queries required between 3 to 7 iterations, while 7 medium and 7 hard queries required between 10 to 15.5 and 14 to 23.5 iterations respectively. This indicates that the features offered by [[$\mathsf{Orion}$]{}]{} helped users formulate query graphs with few interactions with the interface.
\[fig:userstudy-survey\]
### User Experience Results
The user experience results are based on the answers to all the questions in the survey form by all the users. The overall user experience for each question of an interface is measured by averaging the score obtained for that question across all the users working on that interface. Figure \[fig:userstudy-survey-all\] shows the overall user response for all the questions, across all the 105 responses for both [[$\mathsf{Orion}$]{}]{} and [[$\mathsf{Naive}$]{}]{}. We observe that [[$\mathsf{Orion}$]{}]{} users report an improvement of 0.5 for $Q1$, 0.2 for $Q2$, 0.25 for $Q3$ and 0.3 for $Q4$ on the Likert scale, when compared to the [[$\mathsf{Naive}$]{}]{} users.
We further break down the average score over each question based on the difficulty level of the query task to study the difference in user experience between [[$\mathsf{Orion}$]{}]{} and [[$\mathsf{Naive}$]{}]{} in detail. Figure \[fig:userstudy-survey-easy\] shows the average score over only the easy query tasks (a total of 45 query tasks each for both [[$\mathsf{Orion}$]{}]{} and [[$\mathsf{Naive}$]{}]{}), which shows that [[$\mathsf{Orion}$]{}]{} users had a better experience than the [[$\mathsf{Naive}$]{}]{} users w.r.t $Q1$, while the [[$\mathsf{Naive}$]{}]{} users had a slightly better experience than [[$\mathsf{Orion}$]{}]{} users w.r.t $Q2$ and $Q3$. Both the sets of users had similar experience w.r.t $Q4$. Figure \[fig:userstudy-survey-med\] shows the average score over only the medium query tasks (a total of 30 query tasks each for both [[$\mathsf{Orion}$]{}]{} and [[$\mathsf{Naive}$]{}]{}), which shows that [[$\mathsf{Orion}$]{}]{} users had an improvement of 0.4 on Likert scale w.r.t $Q1$ and $Q4$ compared to the [[$\mathsf{Naive}$]{}]{} users. They also had an improvement close to 0.1 on Likert scale w.r.t both $Q2$ and $Q3$. Finally, Figure \[fig:userstudy-survey-hard\] shows the average score over only the hard query tasks (a total of 30 query tasks each for both [[$\mathsf{Orion}$]{}]{} and [[$\mathsf{Naive}$]{}]{}), which shows that [[$\mathsf{Orion}$]{}]{} users felt a significant improvement in the user experience across all four questions. [[$\mathsf{Orion}$]{}]{} users had an improvement of around 1.0 w.r.t $Q1$, 0.6 w.r.t $Q2$, and 0.7 w.r.t both $Q3$ and $Q4$. We thus observe that as the difficulty level of the query graph being constructed increases, the usability of [[$\mathsf{Orion}$]{}]{} seems significantly better than [[$\mathsf{Naive}$]{}]{}’s. [[$\mathsf{Naive}$]{}]{} users find the system uncomfortable to use when the target query graph contains two or more edges.
\[fig:algo-iters\]
\[fig:algo-time\]
Comparing Candidate Edge Ranking Methods {#sec:algoscompare}
----------------------------------------
We next compare the performance of RDP, [[$\mathsf{Orion}$]{}]{}’s edge ranking algorithm, with other machine learning algorithms: RF, NB, SVD and CAR. We compared the performance of these algorithms over two widely used real-world data graphs: Freebase and DBpedia. We used the Wiki-FB and Wiki-DB query logs for Freebase and DBpedia respectively. We had to perform these experiments on the TACC machine, because RF has high memory requirements. For instance, generating a random forest model with 80 trees, using a query log containing around 100,000 query sessions, requires 55 GB of RAM.
We created multiple target query graphs for each dataset, conforming with the schema of the underlying data graph. For a given target query graph, the input to each of the algorithms was an initial partial query graph containing exactly one edge. The task of each algorithm was to iteratively suggest exactly one edge at a time, given the partial query graph. If the suggested edge was present in the target query graph, it was added to the partial query graph and recorded as a positive edge. If not, the edge was ignored and recorded as a negative edge. The process was stopped either when the partial query graph had been grown completely into the target query graph, or after 200 suggestions had been made. For each target query graph $G_t$ with $\lvert E(G_t) \rvert$ edges, we internally converted it into $\lvert E(G_t) \rvert$ different instances of the target query graph, each starting from a different single-edge initial partial query graph as input to the algorithms.
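The evaluation protocol just described can be sketched as a simple loop; the `ranker` callback and the session encoding (edge type to $+1$/$-1$) are our own assumptions for illustration:

```python
def run_instance(ranker, start_edge, target_edges, max_suggestions=200):
    """Grow a one-edge partial query toward the target, one top-ranked
    suggestion at a time.

    ranker(session) -> the next suggested edge type, given the session
    so far (a dict: edge type -> +1 accepted / -1 ignored).  Returns
    the number of suggestions consumed (max_suggestions if the target
    was never reached).
    """
    session = {start_edge: 1}
    accepted = {start_edge}
    for n in range(1, max_suggestions + 1):
        suggestion = ranker(session)
        if suggestion in target_edges and suggestion not in accepted:
            session[suggestion] = 1      # positive edge: "user" accepts
            accepted.add(suggestion)
            if accepted == set(target_edges):
                return n                 # target graph completed
        else:
            session[suggestion] = -1     # negative edge: "user" ignores
    return max_suggestions
```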
We created 43 target query graphs for Freebase, consisting of 6 two-edged query graphs, 10 three-edged query graphs, 9 four-edged query graphs, 17 five-edged query graphs and 1 six-edged query graph. These 43 target query graphs were thus converted to 167 different input instances, creating a query set called *Freebase-Queries*. We created 33 target query graphs for DBpedia, consisting of 2 three-edged query graphs, 29 four-edged query graphs, and 2 five-edged query graphs. These 33 target query graphs were converted to 130 different input instances, creating a query set called *DBpedia-Queries*.
### Efficiency Based on Number of Suggestions
For a query graph completion system, we believe an important measure of efficiency is the number of suggestions required to successfully grow a partial query graph into its corresponding target query graph. If a system can help users construct the target query graph with fewer suggestions, it indicates that the suggestions made indeed captured the user’s query intent. Figure \[fig:algo-iters-fb\] shows the average number of suggestions required to complete each of the 167 input instances for Freebase. We observe that RDP significantly outperforms the other methods. RDP requires only 43.5 suggestions per query graph on average, nearly half the number of suggestions required to complete a query graph using RF and NB. It also requires only a quarter of the number of suggestions required by SVD, while CAR requires 67.8 suggestions. Figure \[fig:algo-iters-db\] shows the average number of suggestions required to complete each of the 130 input instances for DBpedia. We observe that RDP requires 126.6 suggestions on average to complete a query graph, performing slightly better than NB, which requires 134.3 suggestions. RDP also comfortably outperforms RF, SVD and CAR, which on average require 164, 150.7 and 157.9 suggestions per query graph respectively.
### Efficiency Based on Time
We next compare the efficiency of the various methods in terms of the time required to grow the initial partial query graph into its corresponding target query graph. Figure \[fig:algo-time-fb\] compares the average time required to complete a query task by each of the algorithms over Freebase. RDP, NB and RF significantly outperform SVD and CAR. RDP requires 7.7 seconds per query, higher than NB’s 3.9 seconds but better than RF’s 11.8 seconds, which is commendable especially since both the random forest and Bayesian classifiers are extremely efficient once the models are learnt. Figure \[fig:algo-time-db\] compares the average time required to complete a query task by each of the algorithms over DBpedia. SVD and CAR are inefficient, requiring 250.2 and 444.2 seconds per query respectively. NB requires 5.9 seconds, which is faster than both RF and RDP, which require 26.7 and 119.7 seconds per query respectively.
Effectiveness of Query Logs
---------------------------
We compare the effectiveness of the various query logs listed in Table \[tab:querylogs\]. We use RDP as the algorithm for edge suggestion, and the number of suggestions required to grow the initial partial query graph to the target query as the measure of effectiveness of the query logs. Freebase-Queries and DBpedia-Queries, described in Section \[sec:algoscompare\], were the sets of queries used to compare the various Freebase and DBpedia query logs respectively.
**Query Logs for Freebase:** Figure \[fig:algo-log-fb\] shows the distribution of the number of suggestions required to complete a query task using Wiki-FB and Data-FB query logs. We observe that 83 of the 167 input instances needed no more than 26 edge suggestions with the Wiki-FB query log, while it required at most 65 edge suggestions to complete the same number of queries using the Data-FB query log. Around 42 more input instances required between 26 to 47 suggestions with Wiki-FB, while it required between 65 to 200 suggestions with Data-FB. This indicates that the query log simulated using Wikipedia and the Freebase data graph using [[$\mathsf{WikiPos}$]{}]{} described in Section \[sec:workload\] is of superior quality compared to the one simulated using only the Freebase data graph. This suggests that positive edges established based on the context of human usage of the relationships is better than the positive edges established using only the data graph.
**Query Logs for DBpedia:** Figure \[fig:algo-log-db\] shows the average number of edge suggestions required to process the 130 different DBpedia input instances, using each of the three aforementioned query logs for DBpedia. We first observe that QLog-DB performs poorly compared to the other two query logs. This is because the DBpedia SPARQL query log is not comprehensive enough and is limited in the variety of relationships captured, making it ineffective. The second interesting observation is that the algorithm requires 120.3 suggestions on average using Data-DB, while it requires 126.6 suggestions with Wiki-DB. Data-DB performs slightly better than Wiki-DB due to the fact that DBpedia is a high quality data graph generated using the info-boxes in Wikipedia pages. The sets of positive edges in Wiki-DB are simulated using the text in Wikipedia and the DBpedia data graph. The two query logs are thus highly similar to each other, unlike the case in Freebase where we could see a significant difference between the performance of Wiki-FB and Data-FB.
Parameter Tuning for RDP
------------------------
We finally study a variation of RDP, and the effect of $N$ and $\tau$, the two parameters used in RDP. As described in Section \[sec:rcp\], given a query session $Q$, RDP builds $N$ different random decision paths. Each random decision path is grown incrementally, until either the support for the path is no more than a threshold $\tau$, or if all edges in $Q$ are exhausted. While building a random decision path, RDP considers both the positive and negative edges. To study if considering the negative edges indeed helps in better identifying the user’s query intent, we create a variation of RDP, called RDP-noneg, which does not include any negative edges in the random decision paths. Figures \[fig:algo-param-fb\] and \[fig:algo-param-db\] compare the average number of suggestions required to complete each query graph with different values of $N$ and $\tau$, for Freebase and DBpedia queries respectively. In both the cases, we observe that the average number of suggestions required per query decreases as we increase the number of random decision paths, and the threshold $\tau$. It saturates after we reach around 10 for both $N$ and $\tau$ in RDP. Figures \[fig:algo-param-fb\] and \[fig:algo-param-db\] also compare the average number of suggestions required to complete the query graphs using RDP and RDP-noneg. With the best parameter values of $N=25$ and $\tau=25$, RDP requires 44.2 suggestions while RDP-noneg requires 60.9 suggestions in Freebase. RDP also requires fewer suggestions in DBpedia with 128.5 suggestions compared to 141.5 suggestions required by RDP-noneg. We observe that RDP significantly outperforms its variation RDP-noneg, indicating that considering negative edges in query sessions is indeed helpful.
Conclusions
===========
We introduce [[$\mathsf{Orion}$]{}]{}, a visual query builder that helps schema-agnostic users construct complex query graphs by automatically suggesting new edges to add to the query graph. [[$\mathsf{Orion}$]{}]{}’s edge ranking algorithm, RDP, ranks candidate edges by how likely they are to be of interest to the user, using a query log. Since real-world query logs for large graphs are not publicly available, we propose several ways of simulating a query log. User studies show that [[$\mathsf{Orion}$]{}]{} has a 70% success rate at building complex query graphs, significantly better than the 58% success rate of a baseline system resembling existing visual query builders. We also compare RDP with several methods based on other machine learning algorithms and observe that, on average, those other methods require 1.5–4 times as many suggestions to complete query graphs.
[10]{}
D. Abadi et al. The beckman report on database research. , pages 61–70, 2014.
M. Arenas, B. Cuenca Grau, E. Kharlamov, S. Marciuska, and D. Zheleznyakov. Faceted search over ontology-enhanced [RDF]{} data. CIKM, pages 939–948, 2014.
S. Auer, C. Bizer, G. Kobilarov, J. Lehmann, R. Cyganiak, and Z. Ives. DBpedia: A nucleus for a [Web]{} of open data. ISWC, 2007.
N. H. Balkir, G. Özsoyoglu, and Z. M. Özsoyoglu. . , 2002.
P. A. Bernstein et al. Future directions in [DBMS]{} research - the laguna beach participants. , pages 17–26, 1989.
S. S. Bhowmick. Towards bridging the chasm between graph data management and [HCI]{}. DEXA, pages 1–11, 2014.
S. S. Bhowmick, B. Choi, and S. Zhou. : Towards [A]{} visual interaction-aware graph query processing framework. CIDR, 2013.
H. Blau, N. Immerman, and D. D. Jensen. A visual language for relational knowledge discovery. Technical Report UM-CS-2002-37, Department of Computer Science, University of Massachusetts, 2002.
K. Bollacker, C. Evans, P. Paritosh, T. Sturge, and J. Taylor. Freebase: a collaboratively created graph database for structuring human knowledge. SIGMOD, pages 1247–1250, 2008.
D. Braga, A. Campi, and S. Ceri. (xquery by example): A visual interface to the standard [XML]{} query language. , pages 398–443, 2005.
L. Breiman. Random forests. Machine Learning, 45(1):5–32, 2001.
D. H. Chau, C. Faloutsos, H. Tong, J. I. Hong, B. Gallagher, and T. Eliassi[-]{}Rad. visual query system for large graphs. ICDMW, pages 963–966, 2008.
E. Demidova, X. Zhou, and W. Nejdl. Efficient query construction for large scale data. SIGIR, pages 573–582, 2013.
H. He, H. Wang, J. Yang, and P. S. Yu. BLINKS: Ranked keyword searches on graphs. SIGMOD, pages 305–316, 2007.
M. Hildebrand, J. van Ossenbruggen, and L. Hardman. /facet: [A]{} browser for heterogeneous semantic web repositories. , 2006.
H. H. Hung, S. S. Bhowmick, B. Q. Truong, B. Choi, and S. Zhou. : Blending visual subgraph query formulation with query processing on large networks. SIGMOD, pages 1097–1100, 2013.
H. V. Jagadish, A. Chapman, A. Elkiss, M. Jayapandian, Y. Li, A. Nandi, and C. Yu. Making database systems usable. SIGMOD, pages 13–24, 2007.
N. Jayaram, S. Goyal, and C. Li. : Auto-suggestion enabled visual interface for interactive graph query formulation. , pages 1940–1943, 2015.
N. Jayaram, M. Gupta, A. Khan, C. Li, X. Yan, and R. Elmasri. querying knowledge graphs by example entity tuples. ICDE, pages 1250–1253, 2014.
N. Jayaram, A. Khan, C. Li, X. Yan, and R. Elmasri. Querying knowledge graphs by example entity tuples. , 27(10):2797–2811, 2015.
C. Jin, S. S. Bhowmick, B. Choi, and S. Zhou. : A practical framework for blending visual subgraph query formulation and query processing. ICDE, pages 222–233, 2012.
C. Jin, S. S. Bhowmick, X. Xiao, J. Cheng, and B. Choi. : Towards blending visual query formulation and query processing in graph databases. SIGMOD, pages 111–122, 2010.
L. Lim, H. Wang, and M. Wang. Semantic queries by example. EDBT, pages 347–358, 2013.
B. Liu, W. Hsu, and Y. Ma. Integrating classification and association rule mining. KDD, pages 80–86, 1998.
M. Morsey, J. Lehmann, S. Auer, and A.-C. N. Ngomo. benchmark: Performance assessment with real queries on real data. ISWC, pages 454–469, 2011.
D. Mottin, M. Lissandrini, Y. Velegrakis, and T. Palpanas. Exemplar queries: Give me an example of what you need. , 2014.
E. Oren, R. Delbru, and S. Decker. Extending faceted navigation for [RDF]{} data. ISWC, pages 559–572, 2006.
M. Petropoulos, A. Deutsch, and Y. Papakonstantinou. Interactive query formulation over web service-accessed sources. SIGMOD, pages 253–264, 2006.
M. Petropoulos, Y. Papakonstantinou, and V. Vassalos. Graphical query interfaces for semistructured data: The [QURSED]{} system. , pages 390–438, 2005.
R. Pienta, A. Tamersoy, H. Tong, A. Endert, and D. H. P. Chau. Interactive querying over large network data: Scalability, visualization, and interaction design. , pages 61–64, 2015.
X. Su and T. M. Khoshgoftaar. A survey of collaborative filtering techniques. , 2009.
F. M. Suchanek, G. Kasneci, and G. Weikum. Yago: a core of semantic knowledge unifying [WordNet]{} and [Wikipedia]{}. WWW, 2007.
W. Wu, H. Li, H. Wang, and K. Q. Zhu. Probase: a probabilistic taxonomy for text understanding. SIGMOD, pages 481–492, 2012.
M. Yahya, K. Berberich, S. Elbassuoni, M. Ramanath, V. Tresp, and G. Weikum. Natural language questions for the web of data. EMNLP-CoNLL, pages 379–390, 2012.
M. M. Zloof. Query by example. AFIPS, pages 1–24, 1975.
[^1]: Linking open data. <http://www.w3.org/wiki/SweoIG/TaskForces/CommunityProjects/LinkingOpenData>.
[^2]: <http://www.tacc.utexas.edu>.
[^3]: Atomic values such as integers are not supported in the current version of the system.
[^4]: Without loss of generality, we use a node’s name as its identifier in presenting examples, assuming the names are unique.
[^5]: [[$\mathsf{Orion}$]{}]{} currently ranks suggested edges by their relevance to users’ query intent, in both active and passive modes. How to rank node names/types based on query intent is an interesting future direction.
[^6]: <https://portal.tacc.utexas.edu/user-guides/lonestar>.
---
abstract: 'We find that a class of entanglement measures for bipartite pure state can be expressed by the average values of quantum operators, which are related to any complete basis of one partite operator space. Two specific examples are given based on two different ways to generalize Pauli matrices to $d$ dimensional Hilbert space and the case for identical particle system is also considered. In addition, applying our measure to mixed state case will give a sufficient condition for entanglement.'
author:
- 'Z. Xu'
- 'B. Zeng'
- 'D.L. Zhou'
title: Operator representations for a class of quantum entanglement measures and criterions
---
Introduction
============
Quantum entanglement is an essential physical resource to process quantum information and computation, which enables us to complete the tasks intractable in classical domain, such as quantum teleportation, quantum cryptography, Shor’s algorithm of factoring large numbers, and Grover’s quantum searching algorithm [@Ni].
In order to use this kind of resource efficiently, it is necessary to qualify the properties and quantify the degree of quantum entanglement of a given quantum state. In this direction, continuous progress has been made. To clarify the meaning and qualify the properties of entanglement, Werner defined separable states in terms of whether a state can be prepared classically; this definition has become the standard mathematical basis of entangled states [@Wer]. Next, Peres proposed a famous necessary condition for separability, namely positivity of the partially transposed density operator [@Per], and the Horodeckis proved that this criterion is also sufficient in the cases of $\mathcal{H}^2\otimes \mathcal{H}^2$ and $\mathcal{H}^2\otimes
\mathcal{H}^3$ [@Ho1].
In order to quantify this property, many entanglement measures have been proposed in the past years, both for pure states and mixed states [@Be; @Wo; @Ho]. However, only for bipartite pure states does the quantitative theory of entanglement satisfy all the *a priori* axioms of a good entanglement measure, mainly due to the existence of the celebrated Schmidt decomposition for these states. It is well known that the von Neumann entropy of the reduced density matrix $S_{E}$ is the unique measure for bipartite pure states in the sense that $S_{E}$ can be concentrated and diluted with unit asymptotic efficiency [@Be; @Ni1]. However, Vidal developed the concept of entanglement monotones and showed that to characterize the non-local properties of a finite number of bipartite pure states, $d-1$ independent measures are in fact needed, in the sense that there are $d-1$ Schmidt coefficients [@Vi]. Thus, although the entanglement monotones have different asymptotic properties than $S_{E}$, they are important for characterizing non-local properties under LOCC transformations. For the mixed state case, Wootters recently gave an explicit expression for the entanglement of formation in $\mathcal{H}^2 \otimes \mathcal{H}^2$ [@Woo]. However, there are still many open questions, especially for multipartite systems and mixed states.
As we know, entanglement measures are functionals of the density operator, whereas quantities in traditional quantum physics are quantum observables. In this sense, entanglement measures are not standard physical observables, nor are they the average values of some entanglement measure operators. In this article, we attempt to establish relations between entanglement measures and quantum observables. We achieve this end for a specific class of entanglement measures, which is analyzed in Section $2$, where the case of identical particle systems is also considered. In Sec. $3$, two specific examples are given based on different generalizations of the Pauli matrices, which preserve Hermiticity and unitarity, respectively. Finally, we apply these results to form a criterion for mixed state entanglement, and a short summary is given in Sec. $4$.
Operator space representations for a class of quantum entanglement measures
===========================================================================
For a bipartite pure state $|\psi_{AB}\rangle$ in Hilbert space $\mathcal{H}_A^d \otimes \mathcal{H}_B^{d^\prime} \quad (d\le
d^{\prime})$, a class of functions of reduced density operator $\rho_A$ can be defined as $$M_e(n)=1-\mbox{Tr} {\hat{\rho}_A}^n, \qquad (n\in N\ and\ n\ge 2)$$ where the reduced density operator $\hat{\rho}_A=\mbox{Tr}_B
(|\psi_{AB}\rangle \langle \psi_{AB}|)$. It is easy to show that the above class of functions are entanglement monotones, or entanglement measures, due to the fact that they depend only on the eigenvalues of the reduced density matrix $\rho_A$, or equivalently on the Schmidt numbers of the state $|\psi_{AB}\rangle$ [@Vi].
Denote the linear space of operators acting on the Hilbert space $\mathcal{H}_A^d$ as $\mathcal{M}_A^d$, which is a linear space of dimension $d\times d$, and denote an arbitrary operator $P\in
\mathcal{M}_A^d$ as $|P\rangle$ and $P^{\dag}\in \mathcal{M}_A^d$ as $\langle P|$. Define the inner product on $\mathcal{M}_A^d$ as $$\langle P | Q \rangle =\mbox{Tr} (P^\dagger Q), \qquad \forall P,Q
\in \mathcal{M}_A^d.$$ Then we can rewrite the class of entanglement measures in Eq. (1) as $$M_e(n)=1-\langle \rho_A| \rho_A^{n-2} | \rho_A \rangle.$$
For each entanglement measure $M_e(n)$, we take $n-1$ sets of complete operators $ S_C^m=\{ \mathcal{O}_i^m \} \quad
(m=1,2,\cdots,n-1) $, which satisfy $$\sum_i |\mathcal{O}^m_i\rangle \langle \mathcal{O}^m_i|=1.$$ Using the above relations, we rewrite the entanglement measures as $$\begin{aligned}
M_e(n)&=&1- \sum_{i_1,i_2,\cdots,i_{n-1}}\langle
\rho_A|\mathcal{O}^1_{i_1} \rangle \langle\mathcal{O}^1_{i_1}
|\rho_A |\mathcal{O}^2_{i_2}\rangle \cdots
\langle\mathcal{O}^{n-2}_{i_{n-2}}
|\rho_A|\mathcal{O}^{n-1}_{i_{n-1}}\rangle
\langle\mathcal{O}^{n-1}_{i_{n-1}}|\rho_A \rangle \nonumber\\
&=&1- \sum_{i_1,i_2,\cdots,i_{n-1}}\langle \mathcal{O}^1_{i_1}
\rangle \langle\mathcal{O}^2_{i_2}{\mathcal{O}^1_{i_1}}^\dagger
\rangle \cdots
\langle\mathcal{O}^{n-1}_{i_{n-1}}{\mathcal{O}^{n-2}_{i_{n-2}}}^\dagger
\rangle \langle{\mathcal{O}^{n-1}_{i_{n-1}}}^\dagger\rangle,
\label{mr}\end{aligned}$$ where $$\langle \mathcal{O} \rangle=\mbox{Tr} (\rho_A \mathcal{O}),$$ which is obviously also the expected value of the operator $\mathcal{O}$ in the state $|\psi_{AB}\rangle$.
Eq. (\[mr\]) is the main result of this paper: it relates the entanglement measures to physical observables. In other words, it tells us the following: if we obtain a series of expected values for some complete set of operators, the degree of entanglement can be evaluated by Eq. (\[mr\]), i.e., we can measure entanglement by measuring physical observables. It is worth noting that physical observables can be represented by unitary operators as well as Hermitian ones, in the sense that any unitary operator can be written as the exponential of a Hermitian operator.
In the case of $n=2$, Eq.(\[mr\]) takes a much simpler form: $$M_e(2)=1-\sum_i {\left| \langle \mathcal{O}_i \rangle
\right|}^2=\frac{1}{2}C_I^2.$$
where $C_I$ is the generalized concurrence, or $I$-concurrence, for two qudits. It is well known that among all the entanglement monotones, concurrence is important since it is related to the entanglement of formation for two qubits [@Wo]. It has also been found that there are many ways to define concurrence for bipartite pure states, which reveal different physical meanings [@Be; @Ab; @Ch]. Very recently, the concept of concurrence has been generalized to higher dimensions based on the “universal inverter" and on a mathematical point of view [@Ru; @Fei], although almost all the ways of defining concurrence for two qubits cannot be generalized to higher dimensions [@We]. It has been found that the generalized $I$-concurrence $C_I$, together with its mixed state counterpart, is useful in characterizing the non-local properties of bipartite states, both pure and mixed [@Ru2; @De]. For these reasons, we will concentrate our attention on this specific case and give explicit examples in the following section.
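As a numerical sanity check (illustrative, not from the paper) of the identity $M_e(2)=1-\sum_i|\langle\mathcal{O}_i\rangle|^2$, the following sketch uses the matrix units $|k\rangle\langle l|$, which form an orthonormal basis of $\mathcal{M}_A^d$ under the inner product $\mbox{Tr}(P^\dagger Q)$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Random bipartite pure state on H_A^3 (x) H_B^4, stored as psi[k, l] = <kl|psi>.
dA, dB = 3, 4
psi = rng.standard_normal((dA, dB)) + 1j * rng.standard_normal((dA, dB))
psi /= np.linalg.norm(psi)

rho_A = psi @ psi.conj().T          # reduced density operator Tr_B |psi><psi|

# Matrix units |k><l|: an orthonormal (though non-Hermitian) operator basis.
basis = [np.outer(np.eye(dA)[k], np.eye(dA)[l])
         for k in range(dA) for l in range(dA)]

Me2_direct = 1 - np.trace(rho_A @ rho_A).real                  # 1 - Tr rho_A^2
Me2_basis = 1 - sum(abs(np.trace(rho_A @ O)) ** 2 for O in basis)
```

The two evaluations agree to machine precision; $M_e(2)$ vanishes exactly when $|\psi_{AB}\rangle$ is a product state.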
Before going to concrete examples, we first consider a special case, *i.e.*, entanglement of identical particle systems. Although the theory of entanglement is well developed for systems of distinguishable particles, only very recently have the entanglement properties of identical particle systems begun to attract much attention [@slm; @sckll; @You; @lbll; @gf; @esbl] in the fields of quantum information and quantum computation. It has also been shown that for any $N$ identical particle pure state $|\psi_N\rangle$, all the information about the quantum correlations between one particle and the others is contained in the single particle density matrix [@fan]. Therefore our entanglement measure is not only suitable for the bipartite case here, but is also a measure (to see that this is indeed an entanglement measure here, see Ref. [@Bre]) for $N$ identical particle entanglement, *i.e.*,
$$M_e(2)=1-\sum\limits_{i=0}^{d^2-1}|\langle\Psi_N|O_{i}|\Psi_N\rangle|^2.$$
Realization of $M_e(2)$ with Pauli Operators and its high dimensional generalizations
=====================================================================================
In this section, we give examples of the realization of $M_e(2)$ with Pauli operators and with two different generalizations of the Pauli matrices to higher dimensional Hilbert spaces, which preserve Hermiticity and unitarity, respectively. We know that an arbitrary state of two qubits in the Hilbert space $H=H_{A}{\otimes}H_{B}$ (where $H_{A}=H_{B}=C^{2}$) can be written as
$$\Psi=\alpha_{1}|00\rangle+\alpha_{2}|01\rangle+\alpha_{3}|10\rangle+\alpha_{4}|11\rangle.$$
where $\sum_{i}|\alpha_{i}|^{2}=1$.
Let $s_i=\frac{1}{\sqrt{2}}\sigma_i$, where $\sigma_0=I$ and $\sigma_{i}\ (i=1,2,3)$ are the usual Pauli operators. Obviously $\{s_i\}$ forms an orthonormal basis for the space of $2\times 2$ operators, and thus
$$\begin{aligned}
M_e(2)&=& 1-\sum\limits_{i=0}^{3} \langle s_i \rangle^2=
1-\frac{1}{2}\sum\limits_{i=0}^{3}
\langle \sigma_i \rangle^2\nonumber\\
&=&\frac{1}{2}\left( 1-\sum\limits_{i=0}^{3} \langle \sigma_i
\rangle^2 \right)=\frac{1}{2}C^2,\end{aligned}$$
where
$$C=2|\alpha_{1}\alpha_{4}-\alpha_{2}\alpha_{3}|$$
is the usual concurrence.
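A corresponding numerical check (illustrative, not from the paper) that $M_e(2)=\frac{1}{2}C^2$ for a random two-qubit state:

```python
import numpy as np

rng = np.random.default_rng(1)

# Random state alpha_1|00> + alpha_2|01> + alpha_3|10> + alpha_4|11>.
alpha = rng.standard_normal(4) + 1j * rng.standard_normal(4)
alpha /= np.linalg.norm(alpha)

psi = alpha.reshape(2, 2)
rho_A = psi @ psi.conj().T          # reduced density matrix of qubit A

sigma = [np.eye(2, dtype=complex),
         np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]], dtype=complex)]
s = [m / np.sqrt(2) for m in sigma]  # orthonormal basis s_i = sigma_i / sqrt(2)

Me2 = 1 - sum(np.trace(rho_A @ si).real ** 2 for si in s)
C = 2 * abs(alpha[0] * alpha[3] - alpha[1] * alpha[2])   # usual concurrence
```

The agreement is exact: for two qubits $1-\mbox{Tr}\rho_A^2 = 2\det\rho_A = 2|\alpha_1\alpha_4-\alpha_2\alpha_3|^2 = \frac{1}{2}C^2$.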
For the qudit case, we demonstrate two kinds of commonly used “generalized" Pauli operators. The first kind is the so-called Gell-Mann matrices $\lambda_i$, which are the Hermitian generators of $SU(d)$. From the completeness relation of the $\lambda_i$
$$\sum\limits_{i=1}^{d^2-1}(\lambda_i)_{kl}(\lambda_i)_{pq}=2\left(\delta_{kq}\delta_{lp}-
\frac{1}{d}\delta_{kl}\delta_{pq}\right),$$
it is easy to show that
$$M_e(2)=\frac{(d-1)}{d}-\frac{1}{2}\sum\limits_{i=1}^{d^2-1}
\langle\Psi|\lambda_{i}|\Psi\rangle^2.$$
It is noticed that this result was in fact already obtained in Ref. [@Mah].
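The completeness relation fixes the purity identity $\mbox{Tr}\rho^2=\frac{1}{d}+\frac{1}{2}\sum_i\langle\lambda_i\rangle^2$ underlying this formula; the coefficient $\frac{1}{2}$ of the fluctuation term follows from the normalization $\mbox{Tr}(\lambda_i\lambda_j)=2\delta_{ij}$. A sketch (not from the paper) that checks this numerically for $d=3$, using a generic construction of the Gell-Mann matrices:

```python
import numpy as np

def gellmann(d):
    """Hermitian traceless generators of SU(d) with Tr(g_i g_j) = 2 delta_ij."""
    mats = []
    for k in range(d):
        for l in range(k + 1, d):
            sym = np.zeros((d, d), dtype=complex)
            sym[k, l] = sym[l, k] = 1
            antisym = np.zeros((d, d), dtype=complex)
            antisym[k, l], antisym[l, k] = -1j, 1j
            mats += [sym, antisym]
    for k in range(1, d):                      # diagonal generators
        diag = np.zeros((d, d), dtype=complex)
        diag[:k, :k] = np.eye(k)
        diag[k, k] = -k
        mats.append(diag * np.sqrt(2.0 / (k * (k + 1))))
    return mats

d = 3
rng = np.random.default_rng(2)
A = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
rho = A @ A.conj().T
rho /= np.trace(rho).real                      # random density matrix

lam = gellmann(d)
lhs = 1 - np.trace(rho @ rho).real             # M_e(2) = 1 - Tr rho^2
rhs = (d - 1) / d - 0.5 * sum(np.trace(rho @ g).real ** 2 for g in lam)
```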
Another kind of generalized Pauli operators are $Z^mX^n$, which are all unitary matrices. Here $Z$ and $X$ are the generators of the quantum plane algebra with $q^{d}=1$ [@Sun]. The $Z$-diagonal representations of $Z$ and $X$ are given by $$\begin{aligned}
Z &\equiv &\sum_{k=0}^{d-1}|k\rangle q_{d}^{k}\langle k|, \\
X &\equiv &\sum_{k=0}^{d-1}|k\rangle \langle k+1|,\end{aligned}$$ where $q_{d}=e^{i\frac{2\pi }{d}}$.
From the completeness relation of $Z^mX^n$
$$\frac {1} {d} \sum\limits_{m,n=0}^{d-1}|Z^mX^n\rangle\langle
Z^mX^n|=1,$$
it is easy to show that
$$M_e(2)=1-\frac {1} {d}
\sum\limits_{m,n=0}^{d-1}|\langle\Psi|Z^mX^n|\Psi\rangle|^2.$$
Applications to mixed state entanglement
========================================
Apparently the quantity defined in Eq. (1) cannot serve as an entanglement measure in the mixed state case. However, the technique developed above can help us derive criteria for mixed state entanglement.
The completeness relation Eq. (4) is equivalent to
$$\sum\limits_{i=1}^{d^2}O_{i}^{\dag}YO_i=(\mbox{tr}Y)\,I$$
for an arbitrary $d\times d$ operator $Y$. Therefore, if $Y=I$ and the $O_i$ are Hermitian, we have
$$\sum\limits_{i=1}^{d^2}O_i^2=d\,I.$$
So the sum of the uncertainties of the $O_i$ satisfies
$$\begin{aligned}
\sum\limits_{i=1}^{d^2}(\delta O_i)^2
&=&\sum\limits_{i=1}^{d^2}\mbox{tr}(\rho
O_i^{2})-(\mbox{tr}(\rho O_{i}))^2\nonumber\\
&=&d-\sum\limits_{i=1}^{d^2}(\mbox{tr}(\rho O_{i}))^2=d-\sum\limits_{i=1}^{d^2}\langle O_{i}\rangle^2\nonumber\\
&=&d-\mbox{tr}(\rho^2)\geq d-1.\end{aligned}$$
Then we can get a non-trivial sum uncertainty relation [@Hof1]
$$\sum\limits_{i=1}^{d^2}\Big(\delta\left(O_{iA}-O_{iB}\right)\Big)^2\geq
2(d-1)$$
so that violation of the above inequality provides a sufficient condition for entanglement. This entanglement criterion may be stronger than the Peres-Horodecki criterion, for it has been shown that some PPT states violate this criterion [@Hof2].
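As an illustrative single-system check of the identity $\sum_i(\delta O_i)^2 = d-\mbox{tr}(\rho^2)$ used above: for qubits, with $O_i=s_i=\sigma_i/\sqrt{2}$, the bound is $\sum_i(\delta O_i)^2\ge d-1=1$, saturated exactly by pure states.

```python
import numpy as np

sigma = [np.eye(2, dtype=complex),
         np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]], dtype=complex)]
s = [m / np.sqrt(2) for m in sigma]          # complete Hermitian basis

def sum_uncertainty(rho):
    """Sum of variances (delta O_i)^2 = <O_i^2> - <O_i>^2 over the basis."""
    return sum((np.trace(rho @ O @ O) - np.trace(rho @ O) ** 2).real for O in s)

pure = np.array([[1, 0], [0, 0]], dtype=complex)    # |0><0|, purity 1
mixed = np.eye(2, dtype=complex) / 2                # maximally mixed, purity 1/2
```

Here `sum_uncertainty(pure)` gives $1$ (the bound $d-1$) while `sum_uncertainty(mixed)` gives $3/2$ (that is, $d-\mbox{tr}\rho^2 = 2-\frac12$).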
This idea is also useful in the $N$-identical particle case, where it leads to an entanglement criterion based on the sum uncertainty of collective operators for many identical particles. For $N$ identical particles, the collective operators are defined as
$$O_i=\sum\limits_{K=1}^{N}O_{iK},\qquad (i=1,2,\ldots,d^2).$$
Correspondingly the sufficient condition for a $N$ identical particles state to be entangled is
$$\sum\limits_{i=1}^{d^2}(\delta O_{i})^2< N(d-1).$$
Usually, for $N$ identical qubits we choose $O_{iK}\, (i=0, 1, 2, 3)$ as $s_0,s_1,s_2,s_3$; then $O_i\, (i=1, 2, 3)$ will be the total spin components of the system apart from a constant multiplier $\frac{1}{\sqrt{2}}$. This criterion is analogous to the criteria defined by the squeezing parameters in the literature [@sq1; @sq2; @sq3].
In summary, we have shown that a class of entanglement measures for bipartite pure states can be expressed through the average values of quantum operators related to any complete basis of the operator space of one party, and we gave two specific examples based on two different ways of generalizing the Pauli matrices to a $d$ dimensional Hilbert space. In addition, applying our measure to the mixed state case gives a sufficient condition for entanglement, and the case of identical particle systems was also considered.
The authors would like to thank Prof. L. You for useful discussions. The work of Z. X is supported by CNSF (Grant No. 90103004, 10247002). The work of D. L. Z is partially supported by the National Science Foundation of China (CNSF) grant No. 10205022.
[99]{}
M. A. Nielsen and I. L. Chuang, *Quantum Computation and Quantum Information*, Cambridge University Press (2000). R. Werner, Quantum states with Einstein-Podolsky-Rosen correlations admitting a hidden-variable model, Phys. Rev. A **40**, 4277 (1989). A. Peres, Phys. Rev. Lett. **77**, 1413 (1996). M. Horodecki, P. Horodecki, and R. Horodecki, Phys. Lett. A **223**, 1 (1996). C. H. Bennett, D. P. DiVincenzo, J. A. Smolin et al., Phys. Rev. **A54**, 3824 (1996). W. K. Wootters, Phys. Rev. Lett. **80**, 2245 (1998). M. Horodecki, P. Horodecki, and R. Horodecki, Springer Tr. Mod. Phys. **173**, 151 (2001). M. A. Nielsen, Phys. Rev. **A61**, 064301 (2000). G. Vidal, J. Mod. Opt. **47**, 355 (2000). A. F. Abouraddy, B. E. A. Saleh, A. V. Sergienko et al., Phys. Rev. **A64**, 050101 (2001). J. L. Chen, L. Fu, A. A. Ungar and X. G. Zhao, Phys. Rev. **A65**, 044303 (2002). J. Schliemann, D. Loss and A. H. MacDonald, Phys. Rev. **B63**, 085311 (2001).
J. Schliemann, J. I. Cirac, M. Kus, M. Lewenstein and D. Loss, Phys. Rev. **A64**, 022303 (2001). P. Paskauskas and L. You, Phys. Rev. **A64**, 042310 (2001).
Y. S. Li, B. Zeng, X. S. Liu and G. L. Long, Phys. Rev. **A64**, 054302 (2001).
J. R. Gittings and A. J. Fisher, quant-ph/0202051.
K. Eckert, J. Schliemann, D. Bruß, and M. Lewenstein, Annals of Physics (New York) **299**, 88 (2002). A. Fang and Y. C. Zhang, Phys. Lett. **A311**, 443 (2003). C. H. Bennett, D. P. DiVincenzo, J. A. Smolin et al., Mixed State Entanglement and Quantum Error Correction, Phys. Rev. A 54, 3824 (1996). W. K. Wootters, Entanglement of formation of an arbitrary state of two qubits, Phys. Rev. Lett. 80, 2245 (1998). P. Rungta, V. Bužek, C. M. Caves et al., Phys. Rev. **A64**, 042315 (2001). S. Albeverio and S. M. Fei, J. Opt. **B3**, 223 (2001). K. G. H. Vollbrecht and R. F. Werner, J. Math. Phys. **41**, 6772 (2000). P. Rungta and C. M. Caves, quant-ph/0208002. A. Delgado and T. Tessier, quant-ph/0210153. G. K. Brennen, quant-ph/0305094. G. Mahler, V. A. Weberruß, *Quantum Networks: Dynamics of Open Nanostructures*, Springer-Verlag Berlin Heidelberg (1995). C. P. Sun, in “Quantum Group and Quantum Integrable Systems", ed. by M. L. Ge, World Scientific, 1992, p. 133; M. L. Ge, X. F. Liu, C. P. Sun, J. Phys. A: Math. Gen. **25** (10): 2907 (1992). Holger F. Hofmann and Shigeki Takeuchi, quant-ph/0305002. Holger F. Hofmann, quant-ph/0305003. A. S. Sørensen, L. M. Duan, J. I. Cirac, and P. Zoller, Nature **409**, 63 (2001). A. Messikh, Z. Ficek, and M. R. B. Wahiddin, quant-ph/0305166. J. K. Stockton, J. M. Geremia, A. C. Doherty, and H. Mabuchi, Phys. Rev. **A67**, 022112 (2003).
---
abstract: 'There is much current interest in modelling suspensions of algae and other micro-organisms for biotechnological exploitation, and many bioreactors are of tubular design. Using generalized Taylor dispersion theory, we develop a population-level swimming-advection-diffusion model for suspensions of micro-organisms in a vertical pipe flow. In particular, a combination of gravitational and viscous torques acting on individual cells can affect their swimming behaviour, which is termed gyrotaxis. This typically leads to local cell drift and diffusion in a suspension of cells. In a flow in a pipe, small amounts of radial drift across streamlines can have a major impact on the effective axial drift and diffusion of the cells. We present a Galerkin method to calculate the local mean swimming velocity and diffusion tensor based on local shear for arbitrary flow rates. This method is validated with asymptotic results obtained in the limits of weak and strong shear. We solve the resultant swimming-advection-diffusion equation using numerical methods for the case of imposed Poiseuille flow and investigate how the flow modifies the dispersion of active swimmers from that of passive scalars. We establish that generalized Taylor dispersion theory predicts an enhancement of gyrotactic focussing in pipe flow with increasing shear strength, in contrast to earlier models. We also show that biased swimming cells may behave very differently to passive tracers, drifting axially at up to twice the rate and diffusing much less.'
author:
- 'R. N. Bearon'
- 'M. A. Bees'
- 'O. A. Croze'
bibliography:
- 'biblio23\_02\_12.bib'
nocite: '[@*]'
title: 'Biased swimming cells do not disperse in pipes as tracers: a population model based on microscale behaviour'
---
Introduction
============
Swimming micro-organisms, such as algae and bacteria, have their own agenda; selective pressures lead cells to adopt strategies to optimize a combination of environmental conditions, such as illumination, nutrients or the exchange of genetic material. This can significantly impact the behaviour of suspensions of swimming micro-organisms, particularly in flows where biased motion across streamlines can lead to rapid transport. For example, various algae are gravitactic, that is they swim upwards on average in still fluid which can be beneficial for reaching regions of optimal light. For some species this is due to being bottom-heavy - the centre of gravity for these cells is offset from the centre of buoyancy, and the combination of the effects of gravity with the buoyancy force gives rise to a gravitational torque which serves to reorient the cell allowing it to swim upwards - whereas in others sedimentary torques lead to similar behaviour [@Roberts:2006]. However, in shear flow the cells may be reoriented from the vertical due to viscous torques [@Pedley:1992a]. For a vertical pipe containing downwelling fluid, gravitactic cells can accumulate near the centre [@Kessler:1985a], a phenomenon known as gyrotactic focussing. As recently predicted theoretically by @Bees:2010, such a modification of the spatial distribution of algae in tubes alters significantly the effective axial dispersion of the cells.
There is much current interest in employing micro-organisms for biotechnological purposes, from the production of biofuels[@Melis:2001; @Chisti:2007], such as hydrogen, biomass or lipids, to high-value products, such as $\beta$-carotene. Cells are grown either extensively on low value land or intensively to optimize growth. Intensive culture systems typically consist of arrays of tubes (vertical, horizontal or helical) and aim to maximize light and nutrient uptake. Bioreactors may be pumped or bubbled, in turbulent or laminar regimes. However, energy input may be energy wasted; efficient bioreactor designs might aim to make use of the swimming motion of the cells themselves, or accommodate the fact that swimming micro-organisms (where drift across streamlines is more important than axial motion) and nutrients are likely to drift and diffuse at different rates along the tubes.
In a still fluid, the swimming behaviour of individual gyrotactic phytoplankton has been usefully described as a biased random walk: the cell orientation is assumed to be a random variable that undergoes diffusion with drift [@Hill:1997]. At the population level the dynamics can be modelled with a swimming-diffusion equation for the cell concentration, where the cells swim in a preferred direction at a mean velocity and diffuse with an anisotropic diffusion tensor that represents the random component of swimming [@Bearon:2008]. Extending such population-level models to incorporate the effects of ambient flow is non-trivial. Although the orientation distribution and resultant mean swimming velocity of such cells in unbounded homogeneous shear flow have previously been computed [@Bees:1998a; @Almog:1998], the resultant diffusion tensor is more complicated. For homogeneous shear flow, subject to certain constraints on the form of the flow, @Hill:2002 and @Manela:2003 calculated expressions for the diffusion tensor using the theory of generalized Taylor dispersion (GTD). Because it accounts for shear-induced correlations in cell position, GTD is more rational than earlier approaches based on an orientation-only description using a Fokker-Planck equation and a diffusion tensor estimate (FP) [@Pedley:1990].
@Bearon:2011 compared two-dimensional individual-based simulations of swimming micro-organisms with swimming-advection-diffusion models for the whole population in situations where the flow is not homogeneous, that is, in flows in which the cells can experience a range of shear environments. Using GTD theory to calculate local expressions for the mean swimming direction and diffusion coefficients, the results of the individual and population models were generally in good agreement and successfully predicted the phenomenon of gyrotactic focussing. However, that work was restricted to two dimensions; both the swimming motions and the velocity field were confined to a vertical plane.
Here, we consider axisymmetric pipe flow, which locally can be described by planar shear, and consider swimming motions which are allowed to be fully three-dimensional. First, we develop a population-level swimming-advection-diffusion model where the mean swimming velocity and diffusion tensor are based on the local shear. Next, a Galerkin method is presented for calculating the mean swimming velocity and diffusion tensor based on the local shear, and asymptotic results are obtained in the limits of weak and strong shear. The resultant swimming-advection-diffusion equation is then solved numerically for the case of imposed Poiseuille flow. We contrast the GTD results with the FP approach. Finally, we investigate how the flow modifies qualitatively and quantitatively the dispersion of active swimmers from that of a passive scalar.
This paper represents an important link study that will facilitate the comparison of the exact long-time theoretical results of @Bees:2010 and the forthcoming experimental results by the authors on the transient dynamics.
Mathematical Model
==================
Vertical pipe flow {#sec:vert}
------------------
Consider axisymmetric fluid flow with velocity $\mathbf{u}$ through a vertical tube of circular cross-section, radius $a$, with axis parallel to the $z$-axis pointing in the downwards direction, such that $$\begin{aligned}
\label{eq:pipe_flow}
\mathbf{u}=u(r)\mathbf{e}_z=U(1+\chi(r/a))\mathbf{e}_z.\end{aligned}$$ Here, $U$ is the mean flow speed, $U\chi$ is the variation of the flow speed relative to the mean, $r$ is the radial distance from the centre of the tube and ($\mathbf{e}_r,\mathbf{e}_\psi,\mathbf{e}_z$) are right-handed orthonormal unit vectors that define the cylindrical co-ordinates. For flow subject to a uniform pressure gradient and no-slip boundary conditions on the walls, we have simple Poiseuille flow, $\chi(r)=1-2r^2$. In the fully coupled problem, where the negative buoyancy of the cells modifies the flow, $\chi(r)$ must be determined, as in @Bees:2010.
A population-level model for gyrotactic micro-organisms in [*homogeneous*]{} shear flow has previously been derived based on generalized Taylor dispersion theory[@Hill:2002; @Manela:2003] (GTD). Specifically, for particular types of flow and on timescales long compared to $1/d_r$, where $d_r$ is the rotational diffusivity due to the intrinsic randomness in cell swimming, the cell concentration $n(\mathbf{x},t)$ was shown to satisfy a swimming-advection-diffusion equation of the form $$\begin{aligned}
\label{eq:ad_diff_in_flow}
\frac{\partial n}{\partial t}+\nabla_\mathbf{x}.\left[\left(\mathbf{u}+V_s\mathbf{q}\right)n-\frac{V_s^2}{d_r}\mathbf{D}.\nabla_\mathbf{x}n\right]=0,\end{aligned}$$ where $V_s$ is the constant cell swimming speed, and $\mathbf{q}$ and $\mathbf{D}$ are the non-dimensional mean cell swimming direction and diffusion tensor, respectively. Explicit expressions for $\mathbf{q}$ and $\mathbf{D}$ as a function of the local shear strength will be given in section \[sec:GTD\]. Furthermore, @Bearon:2011 show that this population-level approach is a good approximation for flow fields more general than homogeneous shear. Therefore, we shall use (\[eq:ad\_diff\_in\_flow\]) to describe the cell concentration in a pipe flow with non-homogeneous shear. To solve the swimming-advection-diffusion equation numerically, it is convenient to non-dimensionalize lengths based on the pipe radius, $a$, and non-dimensionalize time on $a^2d_r/V_s^2$, a characteristic timescale for diffusion across the pipe. This reveals two non-dimensional parameters in the problem: the Péclet number which is given by $$\begin{aligned}
Pe&=&\frac{Ua d_r}{{V_s}^2},\end{aligned}$$ and $\beta$, the ratio of pipe radius to a typical correlation length-scale of the random walk in the absence of bias, defined as $$\begin{aligned}
\label{eq:beta}
\beta=\frac{ ad_r }{V_s}.\end{aligned}$$ An alternative interpretation of $\beta=aV_s/(V_s^2d_r^{-1})$ is as a ‘swimming Péclet number’. Equation (\[eq:ad\_diff\_in\_flow\]) in non-dimensional form thus becomes $$\begin{aligned}
\label{eq:nd_ad_diff_in_flow}
\frac{\partial n}{\partial t}+\nabla_\mathbf{x}.\left[(Pe[1+\chi(r)]\mathbf{e}_z+ \beta\mathbf{q})n-\mathbf{D}.\nabla_\mathbf{x} n\right]=0.\end{aligned}$$
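As a concrete numerical sketch of this non-dimensionalization (every dimensional value below is hypothetical and chosen only for illustration), the two groups $Pe$ and $\beta$ follow directly from their definitions:

```python
# Sketch: the two non-dimensional groups of the problem, computed from
# hypothetical dimensional parameters (illustration only).
U = 1.0e-3    # mean flow speed (m/s)
a = 1.0e-2    # pipe radius (m)
V_s = 1.0e-4  # cell swimming speed (m/s)
d_r = 0.1     # rotational diffusivity (1/s)

Pe = U * a * d_r / V_s**2   # Peclet number, Pe = U a d_r / V_s^2
beta = a * d_r / V_s        # swimming Peclet number, beta = a d_r / V_s
```

For these illustrative values $Pe=100$ and $\beta=10$, of the same order as the parameter regimes considered later in the paper.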
Generalized Taylor dispersion {#sec:GTD}
-----------------------------
The shear in the pipe flow given by (\[eq:pipe\_flow\]) can be locally described as a simple shear flow. Specifically, consider a Taylor expansion of the flow field near some reference point $\mathbf{R}_0$ which is at radial position $r=R_0$ $$\begin{aligned}
\mathbf{u}(\mathbf{R}) \approx \mathbf{u}(\mathbf{R_0})+(\mathbf{R}-\mathbf{R}_0).\mathbf{e}_r \frac{U}{a}\chi'(R_0/a)\mathbf{e}_z.\end{aligned}$$ We consider local co-ordinates relative to an origin located at $\mathbf{R}_0$ such that $\mathbf{k}$ is pointing vertically upwards and ($\mathbf{i},\mathbf{j},\mathbf{k}$) form a right-handed orthonormal set of unit vectors so that $$\begin{aligned}
\label{eq:local_co_ord1}
\mathbf{i} =\mathbf{e}_r,\quad \mathbf{j}=-\mathbf{e}_\psi,\quad \mathbf{k}=-\mathbf{e}_z.\end{aligned}$$ Defining the local position co-ordinate, $\mathbf{R}-\mathbf{R}_0=\xi \mathbf{i}+\eta\mathbf{j}+\zeta\mathbf{k}$, the flow field can then be written locally as simple shear, such that $$\begin{aligned}
\mathbf{u}(\mathbf{R})=\mathbf{u}(\mathbf{R}_0)+ G\xi\mathbf{k},\end{aligned}$$ where the shear strength $G$ is given by $-\frac{U}{a}\chi'$. With this choice of co-ordinates, the velocity gradient tensor, $\mathbf{G}$, defined such that $\mathbf{u}(\mathbf{R})=\mathbf{u}(\mathbf{R}_0)+(\mathbf{R}-\mathbf{R}_0).\mathbf{G}$, has the simple form $G_{ij}=G \delta_{i1}\delta_{j3}$.
The mean swimming direction, $\mathbf{q}$, and non-dimensional diffusion tensor $\mathbf{D}$ can be written as integrals over cell orientation, $\mathbf{p}$, in the form [@Hill:2002; @Manela:2003] $$\begin{aligned}
\label{eq:def_p}
\mathbf{q}&=&\int_\mathbf{p}\mathbf{p} f(\mathbf{p} ) d\mathbf{p} ,\\
\label{eq:pos_def_diffusion}
\mathbf{D}&=&\int_\mathbf{p} [\mathbf{b}\mathbf{p}+\frac{2\sigma}{ f(\mathbf{p})}\mathbf{b}\mathbf{b}.\hat{\mathbf{G}}]^{sym}d\mathbf{p}.\end{aligned}$$ Here $[\cdot]^{sym}$ denotes the symmetric part of the tensor, $\hat{\mathbf{G}}=\mathbf{i}\mathbf{k}$, and $\sigma$ is a non-dimensional measure of the shear, defined as $$\begin{aligned}
\label{eq:Pr_Pe_beta_relate}
\sigma=\frac{ G }{2d_r}=-\frac{Pe }{2 \beta^2} \chi'.\end{aligned}$$ We note that $\sigma$ varies with $r$ because the shear varies across the radius of the tube. However, in the theory of GTD the shear is assumed locally homogeneous, and so we calculate local expressions for the mean swimming and diffusion based on the local value of $\sigma$.
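Since the local value of $\sigma$ drives everything that follows, a minimal helper (assuming the Poiseuille profile $\chi(r)=1-2r^2$, so $\chi'(r)=-4r$) reads:

```python
# Sketch: local non-dimensional shear, sigma(r) = -Pe/(2 beta^2) * chi'(r).
# Assumes the Poiseuille profile chi(r) = 1 - 2 r^2, i.e. chi'(r) = -4 r,
# giving sigma(r) = 2 Pe r / beta^2.
def sigma(r, Pe, beta):
    chi_prime = -4.0 * r
    return -Pe / (2.0 * beta**2) * chi_prime

# At the wall (r = 1) with Pe = 50, beta = 10: sigma = 2*50/100 = 1.0
wall_shear = sigma(1.0, 50.0, 10.0)
```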
The equilibrium orientation distribution, $f(\mathbf{p})$, and vector $\mathbf{b}(\mathbf{p})$ satisfy [@Hill:2002; @Manela:2003] $$\begin{aligned}
\label{eq:f_eqn_with_flow}
\mathcal{L}f&=&0,\\
\label{eq:b_eqn_with_flow}
\mathcal{L}\mathbf{b}-2\sigma\mathbf{b}.\hat{\mathbf{G}}&=&f(\mathbf{p})(\mathbf{p}-\mathbf{q}),\end{aligned}$$ subject to the integral constraints $$\begin{aligned}
\label{eq:int_constraints}
\int_\mathbf{p} f d\mathbf{p}=1,\quad \label{eq:fnorm}
\int_\mathbf{p} \mathbf{b}d\mathbf{p}=0.\label{eq:bnorm}\end{aligned}$$ Here, the linear operator $\mathcal{L}$ for a spherical swimming cell is defined by $$\begin{aligned}
\label{eq:mathcalG}
\mathcal{L}f&=&
\nabla_\mathbf{p}.(
(\lambda(\mathbf{k}-(\mathbf{k}.\mathbf{p})\mathbf{p})-\sigma\mathbf{j} \wedge \mathbf{p}
)f-\nabla_\mathbf{p} f),\end{aligned}$$ the gyrotactic bias in swimming direction is represented by the non-dimensional parameter $$\begin{aligned}
\label{eq:def_lambda}
\lambda=\frac{1}{2d_rB},\end{aligned}$$ and $B = \mu\alpha_{\perp}/(2h\rho g)$ is the gyrotactic reorientation time scale, where $h$ is the distance between an average cell’s centre-of-mass and centre-of-buoyancy, $\alpha_{\perp}$ is the dimensionless resistance coefficient for rotation about an axis perpendicular to $\mathbf{p}$, $\mu$ and $\rho$ are the fluid viscosity and density respectively, and $g$ is the gravitational acceleration.
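For orientation, $\lambda$ can be evaluated from cell properties; every number below is hypothetical and serves only to show the arithmetic of $B=\mu\alpha_\perp/(2h\rho g)$ and $\lambda=1/(2d_rB)$:

```python
# Sketch: the bias parameter lambda = 1/(2 d_r B), with reorientation time
# B = mu * alpha_perp / (2 h rho g).  All parameter values are hypothetical,
# for illustration only.
mu, alpha_perp = 1.0e-3, 6.8     # fluid viscosity (Pa s), resistance coeff.
h, rho, g = 1.0e-7, 1.0e3, 9.81  # offset (m), density (kg/m^3), gravity (m/s^2)
d_r = 0.1                        # rotational diffusivity (1/s)

B = mu * alpha_perp / (2.0 * h * rho * g)   # reorientation time scale (s)
lam = 1.0 / (2.0 * d_r * B)                 # non-dimensional bias parameter
```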
To summarize, the non-dimensional mean swimming velocity and diffusion tensor are given as functions of two non-dimensional parameters: $\lambda$, which only depends on properties of the cell, and $\sigma$, which quantifies the strength of the shear. See equations (\[eq:def\_p\]-\[eq:def\_lambda\]). Furthermore, for the pipe flow considered in the previous section, we can express $\sigma$ as a simple function of the non-dimensional parameters $Pe$, the global Péclet number, and $\beta$, the swimming Péclet number, and the shear profile $\chi'(r)$ (Eq. \[eq:Pr\_Pe\_beta\_relate\]). The solution of the governing non-dimensional swimming-advection-diffusion equation (Eq. \[eq:nd\_ad\_diff\_in\_flow\]) can therefore be determined by specifying the three non-dimensional parameters $\lambda, Pe$ and $\beta$ and the non-dimensional flow profile $\chi (r)$.
When explicit calculations are presented in this paper we have assumed that the flow is Poiseuille, $\chi(r)=1-2r^2$, and take $\lambda=2.2$ so as to compare with previous work [@Hill:2002; @Bees:1998a] based on the algal species [*C. augustae*]{} (wrongly identified as [*C. nivalis*]{}[@Croze:2010]). In @Bearon:2011, good agreement was found in planar pipe flow between individual based simulations and the population-level model with $\beta=10$ (where the reciprocal of $\beta$ was defined as $\epsilon=0.1$ therein). Motivated by the pipe dimensions in experiments currently in progress, we also consider $\beta=2.34$. Note that $\beta$ represents the ratio of pipe radius to the correlation length-scale of the random walk in the absence of bias. Therefore, when modelling a random walk as a diffusion process, $\beta$ should be sufficiently large[@Bearon:2011]. However, we hypothesize that this restriction may be relaxed in the case of gyrotactic cells that are well-focussed by the flow along the axis of the tube and only suffer rare collisions with the wall.
Calculation of mean swimming velocity and diffusion
---------------------------------------------------
@Hill:2002 demonstrate that the GTD equations (\[eq:f\_eqn\_with\_flow\]) and (\[eq:b\_eqn\_with\_flow\]) for $f$ and $\mathbf{b}$, respectively, can in general be solved by expanding in spherical harmonics using a Galerkin method. The method is summarized for the flow employed in this paper in appendix \[eq:App\_Garlekin\].
To simplify the numerical solution of the swimming-advection-diffusion equation in pipe flow, we fit the rather complex algebraic expressions in $\sigma$ obtained using the Galerkin method for the mean swimming direction and diffusion tensor with the simpler curves $$\begin{aligned}
\label{eq:q_sigma_fit}
q^r(\sigma)&=&- \sigma P(\sigma; \mathbf{a}^r,\mathbf{b}^r),\quad q^z(\sigma)=-P(\sigma; \mathbf{a}^z, \mathbf{b}^z), \\
D^{rr}_G(\sigma)&=&P(\sigma; \mathbf{a}^{rr}_G, \mathbf{b}^{rr}_G),\quad
D^{rz}_G(\sigma)=- \sigma P(\sigma; \mathbf{a}^{rz}_G, \mathbf{b}^{rz}_G), \quad
D^{zz}_G(\sigma)=P(\sigma; \mathbf{a}^{zz}_G, \mathbf{b}^{zz}_G),\end{aligned}$$ where $$\begin{aligned}
\label{eq:P_rat_func}
P(\sigma; \mathbf{a},\mathbf{b})=\frac{a_0 +a_2 \sigma ^2+a_4\sigma^4}{1+b_2 \sigma^2+b_4 \sigma^4}.\end{aligned}$$ The choice of $ \mathbf{a}$ and $\mathbf{b}$ coefficients is described in table \[tab:func\_fits\] with reference to asymptotic results presented below. Please refer to appendix \[eq:App\_functional fits\] for the coefficients of fits to the full Galerkin solution. We also consider results using the simpler estimate for the diffusion tensor, which we describe as the Fokker-Planck approximation (or FP), discussed in detail in appendix \[eq:App\_FP\_diff\].
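The fitting form can be sketched as follows; the coefficient values here are illustrative placeholders, not the paper's fitted values (in practice $b_2$ and $b_4$ would be obtained by least-squares fitting against the Galerkin data):

```python
# Sketch: the rational function P(sigma; a, b) used for the functional fits.
# Coefficients are illustrative only.
def P(sigma, a, b):
    a0, a2, a4 = a
    b2, b4 = b
    s2 = sigma * sigma
    return (a0 + a2 * s2 + a4 * s2 * s2) / (1.0 + b2 * s2 + b4 * s2 * s2)

a, b = (0.45, 0.3, 0.02), (0.5, 0.1)
p_zero = P(0.0, a, b)     # small-sigma limit: P(0) = a0
p_large = P(1.0e6, a, b)  # large-sigma limit tends to a4/b4
```

By construction $P(0)=a_0$ and $P\to a_4/b_4$ as $\sigma\to\infty$, which is what lets the asymptotic results of the next subsection pin down the $\mathbf{a}$ coefficients in table \[tab:func\_fits\].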
|                     | $a_{0}$                 | $a_{2}$                             | $a_{4}$            |
|---------------------|-------------------------|-------------------------------------|--------------------|
| $\mathbf{a}^r$      | $\frac{J_1}{\lambda}$   | $\frac{2\lambda}{3} b^r_{4}$        | $0$                |
| $\mathbf{a}^z$      | $K_1$                   | $\frac{4\lambda}{3} b^z_{4}$        | $0$                |
| $\mathbf{a}^{rr}_G$ | $\frac{J_1}{\lambda^2}$ | $d_1 b^{rr}_{4,G}$                  | $0$                |
| $\mathbf{a}^{zz}_G$ | $\frac{L_1}{\lambda}$   | $d_4 b^{zz}_{4,G}+d_3 b^{zz}_{2,G}$ | $d_3 b^{zz}_{4,G}$ |
| $\mathbf{a}^{rz}_G$ | \*\*                    | $d_2 b^{rz}_{4,G}$                  | $0$                |
: \[tab:func\_fits\]In order to obtain the simplest functional fits whilst ensuring the asymptotic results are satisfied, the $\mathbf{a}$ coefficients are as specified. The free parameters $b_2$ and $b_4$ are obtained through least-squares optimization of each fit of velocity or diffusion component against $\sigma$. Because we are unable to obtain easily the coefficient of the $O(\sigma)$ correction to $D^{rz}_G$, in addition we allow $a^{rz}_{0,G}$ to vary. The fit coefficients for $\lambda=2.2$ are given explicitly in appendix \[eq:App\_functional fits\]. The subscript $G$ highlights that the results are for generalized Taylor dispersion (GTD), and \*\* indicates that the parameter is fitted.
In figure \[fg:diffusion\_fits\] we see that these simple functions are good approximations for the exact solutions for the mean swimming and diffusion, and it is evident how shear can significantly affect the mean swimming and diffusion. Furthermore, we note that the diffusion calculated via the GTD method is qualitatively different from that calculated via the simpler Fokker-Planck (FP) method. In particular, we note that the components of diffusion approach zero in the limit of large shear using the GTD method, whereas they approach a finite non-zero limit via the FP method (see @Hill:2002).
![Mean swimming and diffusion coefficients as a function of shear, $\sigma$, for $\lambda=2.2$. Points are calculated using the Galerkin method, solid lines are functional fits described in the text, and dashed lines are asymptotic results. For diffusion calculations, black lines are for GTD, whereas red (grey) lines indicate the FP estimate. []{data-label="fg:diffusion_fits"}](./images/Diffusion_fits.png){width="\textwidth"}
To provide confidence in the results from the Galerkin method, we have obtained asymptotic expressions for $\sigma \ll 1$ and $\sigma \gg 1$, as described in appendices \[eq:App\_small\_sigma\] and \[eq:App\_large\_sigma\].
Specifically, for $\sigma \ll 1$, the mean swimming direction with respect to coordinates ($\mathbf{e}_r,\mathbf{e}_\psi,\mathbf{e}_z$) correct to $O(\sigma)$ is given by $$\begin{aligned}
\label{eq:mean_swim_small_sigma}
\mathbf{q}&=&-(\frac{\sigma}{\lambda} J_1,0, K_1)^T,\end{aligned}$$ where the quantities $J_1$ and $K_1$ are specified functions of $\lambda$, coinciding with the results of @Pedley:1990 using the FP model. However, calculation of the diffusion tensor from (\[eq:pos\_def\_diffusion\]) reveals the new result that at leading order the diffusion tensor is diagonal with horizontal component $D^{rr}=\frac{J_1}{\lambda^2}$, and vertical component $D^{zz}=\frac{L_1}{\lambda}$, where $L_1$ is also a specified power series in $\lambda$ (appendix \[sec:small\_sigma\_D\]). For $\lambda=2.2$ we have that $K_1=0.57, J_1=0.45$ and $L_1=0.11$, see appendix \[eq:App\_small\_sigma\]. There is an $O(\sigma)$ correction to the off-diagonal term $D^{rz}$, but the second term in the definition of the diffusion tensor in (\[eq:pos\_def\_diffusion\]) does not allow for a simple closed form expression for these components. (This is in contrast to expressions obtained by @Pedley:1990 using the simpler orientation-only FP model with a diffusion estimate proportional to the variance of $\mathbf{p}$.)
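These leading-order expressions are straightforward to evaluate; a sketch using the closed form $K_1=\coth\lambda-1/\lambda$ together with the quoted values $J_1=0.45$ and $L_1=0.11$ for $\lambda=2.2$:

```python
import math

# Sketch: leading-order (sigma << 1) mean swimming and diffusion at
# lambda = 2.2.  K1 = coth(lambda) - 1/lambda is exact; J1 and L1 are the
# values quoted in the text.
lam = 2.2
K1 = 1.0 / math.tanh(lam) - 1.0 / lam   # ~0.57
J1, L1 = 0.45, 0.11

def q_small_sigma(s):
    # q = -(sigma*J1/lambda, 0, K1) in (e_r, e_psi, e_z) components
    return (-s * J1 / lam, 0.0, -K1)

D_rr = J1 / lam**2   # leading-order horizontal diffusion
D_zz = L1 / lam      # leading-order vertical diffusion
```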
For $\sigma \gg 1$, as in @Bees:1998a, the mean swimming direction correct to $O(1/\sigma^2)$ is $$\begin{aligned}
\mathbf{q}&=&-(\frac{2\lambda}{ 3\sigma},0,\frac{4\lambda}{ 3\sigma^2})^T.\end{aligned}$$ Here, using GTD theory, we have the new result that the non-zero coefficients of the diffusion tensor are $$\begin{aligned}
\mathbf{D}&=&
\left(
\begin{array}{ccc}
\frac{d_1}{\sigma^2}&0&-\frac{d_2}{ \sigma }\\
0&\frac{1}{6}-\frac{d_5}{\sigma^2}&0\\
-\frac{d_2}{ \sigma }&0&d_3+\frac{d_4}{\sigma^2}
\end{array}
\right),\end{aligned}$$ where the quantities $d_1,d_2,d_3,d_4,d_5$ are polynomials in $\lambda$, given in appendix \[eq:App\_large\_sigma\]. For $\lambda=2.2$, we find that $d_1=0.68, d_2=0.0060,d_3=0.0020, d_4=5.9, d_5=1.3$.
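A sketch assembling this asymptotic tensor from the quoted coefficients at $\lambda=2.2$, confirming its symmetry and the decay of its components with increasing shear:

```python
# Sketch: the large-sigma GTD diffusion tensor at lambda = 2.2, built from
# the coefficients d1..d5 quoted in the text.
d1, d2, d3, d4, d5 = 0.68, 0.0060, 0.0020, 5.9, 1.3

def D_large_sigma(s):
    return [[d1 / s**2,       0.0,                  -d2 / s],
            [0.0,             1.0/6.0 - d5 / s**2,  0.0],
            [-d2 / s,         0.0,                  d3 + d4 / s**2]]

D = D_large_sigma(100.0)
```

As $\sigma\to\infty$ the radial component decays to zero, the azimuthal component tends to $1/6$, and the vertical component tends to the small constant $d_3$, consistent with the qualitative GTD behaviour noted above.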
The asymptotic results are presented in figure \[fg:diffusion\_fits\], indicating excellent agreement with results from the Galerkin method and demonstrating correspondence with the functional fits described above.
Population-level numerical simulations
======================================
Numerical methods
-----------------
The governing swimming-advection-diffusion equation is solved using a spatially adaptive finite element method as described in @Bearon:2011. The cell concentration, $n$, is approximated using standard Lagrangian quadratic finite elements and the time derivative is approximated using an implicit second-order, backward difference scheme. The resulting discrete linear system is assembled using the C++ library `oomph-lib` [@Heil:2006] and solved by a direct solver, SuperLU [@Demmel:1999]. In unsteady simulations, a fixed time-step of $dt=10^{-3}$ is used. The results were validated by repeating selected simulations with smaller error tolerances and time-steps.
Steady gyrotactic focussing
---------------------------
First, we seek an equilibrium solution $n(r)$ of equation (\[eq:nd\_ad\_diff\_in\_flow\]) which represents gyrotactic focussing of cells towards the centre of the pipe. Imposing zero flux on the pipe wall, at $r=1$, we have that $$\begin{aligned}
\beta q^r n-D^{rr}\frac{d n}{dr}=0,\end{aligned}$$ which we can integrate to obtain $$\begin{aligned}
\label{eq:equil_gyro}
n=n_0 \exp\left(\int \frac{\beta q^r}{D^{rr}} dr\right),\end{aligned}$$ where the radial components of the mean swimming direction and diffusion tensor, $q^r$ and $D^{rr}$, respectively, are functions of the local shear. In particular, if we take simple Poiseuille flow, $\chi(r)=1-2r^2$, we have that $\sigma$, the non-dimensional measure of the shear, is given by $\sigma=-\frac{Pe }{2 \beta^2} \chi'=\frac{2Pe }{ \beta^2} r$. For $\sigma\ll1$, at leading order we have that $q^r/D^{rr}=-\sigma \lambda=-\frac{2 Pe \lambda }{\beta^2} r$ from which we predict the Gaussian distribution $$\begin{aligned}
\label{eq:Gauss_dist}
n=n_0 \exp\left(-\frac{Pe \lambda }{\beta} r^2\right).\end{aligned}$$ As demonstrated in figure \[fg:diffusion\_fits\](f), the leading order asymptotic solution $q^r/D^{rr}=-\sigma \lambda$ is an excellent approximation for $\sigma=O(1)$. It is important to note that the GTD and FP methods yield a qualitative difference in the behaviour of $q^r/D^{rr}$.
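The exponent in the equilibrium solution can be checked numerically: with $q^r/D^{rr}=-\sigma\lambda$ and $\sigma=2Pe\,r/\beta^2$, the integrand is linear in $r$, so the trapezoidal quadrature below reproduces the Gaussian exponent $-Pe\lambda r^2/\beta$ to rounding error (a sketch, using the parameter values from figure \[fg:equil\_gyro\_focus\]):

```python
# Sketch: numerical integration of the exponent int_0^r beta*(q^r/D^rr) ds,
# using the leading-order relation q^r/D^rr = -sigma*lambda with
# sigma(s) = 2 Pe s / beta^2 (Poiseuille flow assumed).
Pe, beta, lam = 20.0, 2.34, 2.2

def exponent(r, steps=1000):
    h = r / steps
    total = 0.0
    for i in range(steps + 1):
        s = i * h
        integrand = beta * (-(2.0 * Pe * s / beta**2) * lam)
        w = 0.5 if i in (0, steps) else 1.0
        total += w * integrand * h
    return total

r = 0.5
numeric = exponent(r)
closed_form = -Pe * lam * r**2 / beta   # Gaussian exponent of (eq:Gauss_dist)
```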
Example calculations of the equilibrium solution (Eq. \[eq:equil\_gyro\]) are shown in figure \[fg:equil\_gyro\_focus\]. For the given values of $Pe$ and $\beta$, we see that cells undergo gyrotactic focussing, and that the distribution predicted by GTD theory can be well-approximated by the Gaussian distribution (Eq. \[eq:Gauss\_dist\]) but shows a marked difference to that predicted by FP theory. Furthermore, whereas we see that GTD predicts an enhancement of gyrotactic focussing with increasing shear strength, the FP approximation predicts a reduction in gyrotactic focussing with increasing shear strength at sufficiently large shear.
![Equilibrium concentration (\[eq:equil\_gyro\]) for $Pe=20$ (solid line) and $Pe=50$ (dashed line) for swimming parameter $\beta=2.34$. Cell diffusion is calculated using GTD (black) and FP (red/grey) approaches. The dotted lines are the associated Gaussian distributions (\[eq:Gauss\_dist\]). The solutions are normalized so that there is unit total mass per unit length, $\int 2\pi r n(r) dr=1$. []{data-label="fg:equil_gyro_focus"}](./images/equil_gyro_focus){width="80.00000%"}
Vertical dispersion
-------------------
@Bees:2010 investigated how the average axial dispersion was modified for gyrotactic organisms compared with a passive solute. Specifically, using the method of moments and the FP approach they obtained long-time expressions for the vertical drift relative to the mean flow and the effective axial swimming diffusivity as a function of $Pe$ and a gyrotactic parameter. Here, we perform a similar calculation, using simulations and the GTD calculations for the diffusion tensor. We solve numerically the swimming-advection-diffusion equation (\[eq:nd\_ad\_diff\_in\_flow\]) with initial condition $$\begin{aligned}
n(r,z,0)=n_0 \exp\left(-\left(\frac{z-0.1L}{0.01 L}\right)^2-\left(\frac{r}{0.5}\right)^2 \right), \end{aligned}$$ representing a Gaussian blob of cells centred at $z=0.1L, r=0$. For the simulation domain we take $z\in (0,L)$, $r\in (0,1)$. Furthermore, we impose no-flux boundary conditions on the walls $r=1$, symmetry around the centreline, and periodic boundary conditions in the vertical direction, but take $L$ to be sufficiently large that boundary effects do not influence the vertical distribution. In the results presented in figures \[fg:unsteady\_comparison\] and \[fg:drift\_diffusion\_transient\], we take $Pe=50$, $L=1200$ and run the simulations for $t\in[0,8]$. In figure \[fg:unsteady\_comparison\] we see example plots of the early concentration distribution as a function of time for both gyrotactic cells and a passive solute. For the passive solute, we take $\mathbf{D}=\frac{1}{6}\mathbf{I}$, and $\mathbf{q}=\mathbf{0}$ in equation (\[eq:nd\_ad\_diff\_in\_flow\]). As shown in appendix \[eq:App\_small\_sigma\], this is equivalent to considering mean swimming and diffusion in the absence of gyrotactic bias, $\lambda=0$, and shear, $\sigma=0$. Whereas the gyrotactic cells are focussed towards the centre of the pipe, the passive solute diffuses radially.
![Concentration in region $z\in(0,600)$, $r\in (0,1)$ from $t=0$ to $t=1$ at intervals of $\delta t=0.1$ with $Pe=50$. Upper plots are for gyrotactic cells with $\lambda=2.2$, $\beta=10$. Lower plots are equivalent results for a passive solute. The colour scale is based on the initial concentration distribution, with red representing the maximal initial concentration at the centre of the blob of cells, and blue zero concentration[]{data-label="fg:unsteady_comparison"}](./images/compare_unsteady_Pe_50.png){width="\textwidth"}
Following @Bees:2010, we quantify dispersion in terms of cross-sectionally averaged axial moments of the concentration distribution. To compute the moments of the distribution, we first translate to a reference frame moving with the mean flow, $\hat{z}=z-Pe~t$. The cross-sectional average, $m_p(t)$, of the $p$th axial moment, $c_p$, is (dropping hats for clarity) $$\begin{aligned}
c_p(r,t) &=&\int z^pn (r, z,t) dz, \quad p=0,1,2, \\
m_p(t) &=&2 \int c_p (r,t) r dr, \quad p=0,1,2.\end{aligned}$$ The mean and variance, $m_1$ and $m_2-m_1^2$, of the distribution are plotted in figure \[fg:drift\_diffusion\_transient\]. The solution is normalised so that the total mass is unity, $m_0=1$.
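A minimal sketch of this moment computation on a uniform grid, exercised on a hypothetical axially Gaussian, radially uniform field (synthetic test data, not simulation output):

```python
import math

# Sketch: cross-sectionally averaged axial moments m_p (p = 0, 1, 2) of a
# field n(r, z) on a uniform grid via the trapezoidal rule:
# c_p(r) = int z^p n dz, then m_p = 2 int c_p(r) r dr.
def moments(n, r, z, pmax=2):
    dr, dz = r[1] - r[0], z[1] - z[0]
    m = []
    for p in range(pmax + 1):
        c = []
        for i in range(len(r)):
            tot = 0.0
            for k in range(len(z)):
                w = 0.5 if k in (0, len(z) - 1) else 1.0
                tot += w * z[k]**p * n[i][k] * dz
            c.append(tot)
        mp = 0.0
        for i in range(len(r)):
            w = 0.5 if i in (0, len(r) - 1) else 1.0
            mp += w * c[i] * r[i] * dr
        m.append(2.0 * mp)
    return m

# Synthetic check: unit-mass axial Gaussian (mean z0, variance w0^2/2),
# uniform in r.
w0, z0 = 1.0, 1.0
r = [i / 100.0 for i in range(101)]
z = [-9.0 + 0.05 * k for k in range(401)]
n = [[math.exp(-((zk - z0) / w0)**2) / (w0 * math.sqrt(math.pi)) for zk in z]
     for _ in r]
m = moments(n, r, z)
```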
![The mean and variance, $m_1$ and $m_2-m_1^2$ of the distribution as a function of time, $t$ for $Pe=50$. Upper plots are for gyrotactic cells with $\lambda=2.2$, $\beta=10$. Lower plots are equivalent results for a passive solute. Open circles are results from numerical simulation, solid lines are linear regressions for $t\in[4,8]$. []{data-label="fg:drift_diffusion_transient"}](./images/drift_diffusion_transient.png){width="\textwidth"}
From the calculations of $m_1$ and $m_2$, we then define the axial drift and effective axial diffusion to be $$\begin{aligned}
\Lambda_0 &=&\lim_{t\to\infty} \frac{d}{dt}m_1, \\
D_e &=&\lim_{t\to\infty} \frac{1}{2}\frac{d}{dt}(m_2-m_1^2).\end{aligned}$$
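In practice these limits are estimated from late-time slopes; the sketch below applies least-squares linear regression to synthetic data whose slopes are seeded with the values quoted in this section (35.2 and 20.0), purely to illustrate the estimation step:

```python
# Sketch: estimating Lambda_0 and D_e from the late-time slopes of m1(t) and
# the variance m2 - m1^2, via least-squares linear regression.  The data here
# are synthetic lines with known slopes (illustration only).
def slope(t, y):
    n = len(t)
    tm, ym = sum(t) / n, sum(y) / n
    num = sum((ti - tm) * (yi - ym) for ti, yi in zip(t, y))
    den = sum((ti - tm)**2 for ti in t)
    return num / den

t = [4.0 + 0.1 * k for k in range(41)]      # regression window t in [4, 8]
m1 = [35.2 * ti + 3.0 for ti in t]          # drift: slope -> Lambda_0
var = [2.0 * 20.0 * ti + 1.0 for ti in t]   # variance: slope -> 2 D_e

Lambda0 = slope(t, m1)
D_e = 0.5 * slope(t, var)
```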
As depicted in figure \[fg:drift\_diffusion\_transient\], for $Pe=50$, performing a linear regression over the interval $t\in[4,8]$ we obtain $\Lambda_0 =35.2$ for the gyrotactic cells with parameters $\lambda=2.2$, $\beta=10$, compared to the long-time limit of $\Lambda_0=0$ for a passive scalar predicted from classic Taylor dispersion theory. This occurs because gyrotactic cells are focussed towards the centre of the tube where the flow is fastest and, hence, they are transported more rapidly than the mean flow. Noting that $\mathbf{D}=\frac{1}{6}\mathbf{I}$ for the passive solute, for $Pe=50$ the classical Taylor dispersion result predicts that $D_e =1/6+6Pe^2/48=313$, which compares well with the numerical calculation of $D_e=312$. For the gyrotactic cells with parameters $\lambda=2.2$, $\beta=10$, we see a much reduced axial dispersion, with an estimate of $D_e=20.0$. As discussed by @Bees:2010, this reduction in axial dispersion can be explained due to gyrotactic focussing: by self-concentrating towards the axis of the tube, cells undergo a much reduced sampling of radial space and thus sidestep classical shear-induced Taylor dispersion. Furthermore, preliminary calculations based on equations 6.1 and 6.2 of @Bees:2010 using the GTD values for the components of $\mathbf{q}$ and $\mathbf{D}$ give excellent agreement with these numerical computations. Specifically, the calculations yield $\Lambda_0 =35.2$ and $D_e=20.6$ for gyrotactic cells with parameters $\lambda=2.2$, $\beta=10$.
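The passive-solute benchmark quoted above is easy to reproduce from the classical Taylor dispersion formula with $\mathbf{D}=\frac{1}{6}\mathbf{I}$:

```python
# Sketch: the classical Taylor dispersion estimate for a passive solute with
# D = I/6, as quoted in the text: D_e = 1/6 + 6 Pe^2 / 48.
def taylor_De(Pe):
    return 1.0 / 6.0 + 6.0 * Pe**2 / 48.0

De_50 = taylor_De(50.0)   # ~312.7, i.e. ~313 as quoted
```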
Discussion
==========
Here, we have considered the spatial distribution of gyrotactic algae in axisymmetric pipe flow. We have computed a population-level swimming-advection-diffusion model where the mean swimming velocity and diffusion tensor are based on the local shear using the theory of generalized Taylor dispersion. We have shown how shear modifies the mean swimming velocity and diffusion tensor and, furthermore, demonstrated how the diffusion tensor differs qualitatively from previous simpler models, such as the “Fokker-Planck” approach for which the diffusion tensor is estimated to be the product of the variance of the orientation distribution and a correlation timescale. We have demonstrated that the shear-induced modification to mean swimming velocity and diffusion results in gyrotactic focussing and have quantified how the axial drift and diffusion of a population of cells is modified from that predicted for a passive scalar.
In this paper, we only considered unidirectional coupling between flow field and cell concentration. However, actively swimming cells, that are typically denser than the fluid, will modify the flow field. In a dilute suspension, where direct cell-cell hydrodynamic coupling can be neglected, negatively buoyant cells modify the flow field from Poiseuille flow [@Bees:2010], which results in a change in local shear and thus a modification of the mean swimming velocity and diffusion tensor. Furthermore, direct hydrodynamic interactions between cells, and stresses induced by the swimming motions, may also alter the flow field [@Ishikawa:2009].
Work in progress by the authors aims to incorporate the population-level model derived here from generalized Taylor dispersion in Bees and Croze’s [@Bees:2010] modification of the classical Taylor-Aris theory in order to predict the axial drift and diffusion. Furthermore, both these predictions of long-time dispersion and the transient results presented in this paper will be compared with experimental observations of axial drift and diffusion in dyed suspensions of the alga [*Dunaliella salina*]{} in vertical tubes subject to imposed flow. Finally, work is in progress by the authors to use direct numerical simulations to study the dispersion of active swimmers in laminar and turbulent flows, comparing statistical measures of dispersion from simulations with analytical predictions using the GTD expressions derived in this paper.
R.N.B. acknowledges assistance from A.L. Hazel to implement the C++ library `oomph-lib`. M.A.B. and O.A.C. gratefully acknowledge support from EPSRC (EP/D073398/1) and the Carnegie Trust.
Galerkin method {#eq:App_Garlekin}
===============
To implement the Galerkin method, we follow the approach of @Hill:2002 who considered the flow field $\mathbf{u}=G\zeta \mathbf{i}$. Here, for the flow field $\mathbf{u}=G\xi \mathbf{k}$ (see Sec. \[sec:vert\]) we further extend the method to establish results for the full positive-definite diffusion tensor in (\[eq:pos\_def\_diffusion\]). We parameterize cell orientation in terms of spherical-polar co-ordinates ($\theta,\phi$) $$\begin{aligned}
\mathbf{p}=\sin\theta\cos\phi\mathbf{i}+\sin\theta\sin\phi\mathbf{j}+\cos\theta\mathbf{k}.\end{aligned}$$ Note that the direction $\theta=0$ corresponds to cells directed vertically upwards.
Equations (\[eq:f\_eqn\_with\_flow\]) and (\[eq:b\_eqn\_with\_flow\]) are solved by expanding $f$ and $b_j$, $j=1, 2, 3$, in spherical harmonics: $$\begin{aligned}
\label{eq:gen_sigma_f_expan}
f&=&\sum_{n=0}^\infty\sum_{m=0}^nA_n^m\cos m\phi P_n^m(\cos\theta),\\
\label{eq:gen_sigma_b_expan}
b_j&=&\sum_{n=0}^\infty\sum_{m=0}^n(\beta_{nj}^m\cos m\phi +\gamma_{nj}^m\sin m\phi)P_n^m(\cos\theta).\end{aligned}$$ Defining $$\begin{aligned}
F_{nj}^m&\equiv& R_{nj}^m(\phi)P_n^m(\cos\theta),\mbox{~~~}j=0,1,2,3,\\
%B_{nj}^m&\equiv& R_{nj}^m(\phi)P_n^m(\cos\theta)
R_{nj}^m& =& \left\{ \begin{array}{ll} A_{n}^m \cos(m\phi), & j=0 \\
\beta_{nj}^m \cos(m\phi) + \gamma_{nj}^m \sin(m\phi), \mbox{~~~} & j=1,2,3, \end{array} \right.\end{aligned}$$ equations (\[eq:f\_eqn\_with\_flow\]) and (\[eq:b\_eqn\_with\_flow\]) then yield $$\begin{aligned}
\sum_{n=0}^\infty\sum_{m=0}^n \left\{n(n+1) F_{nj}^m
+\lambda \sin^2\theta R_{nj}^m {P_n^m}'
+\sigma (\cos\phi \sin\theta R_{nj}^m {P_n^m}'
+\cot\theta\sin\phi {R_{nj}^m}' P_n^m) \right. \nonumber \\ \left.
-2\lambda \cos\theta F_{nj}^m \right\}
=\begin{cases}
0, & j=0, \\
\sum_{n=0}^\infty\sum_{m=0}^n (\sin\theta\cos\phi-(4\pi/3)A_1^1)F_{n0}^m, &j=1,\\
\sum_{n=0}^\infty\sum_{m=0}^n \sin\theta\sin\phi F_{n0}^m &j=2,\\
\sum_{n=0}^\infty\sum_{m=0}^n \left\{ 2\sigma F_{n1}^m +(\cos\theta-(4\pi/3)A_1^0)F_{n0}^m \right\}, &j=3,
\end{cases}\end{aligned}$$ where primes denote differentiation with respect to their arguments. Note that the normalization condition (Eq. \[eq:int\_constraints\]) requires that $A_0^0=1/(4\pi), \beta_{0j}^0=0$, and from equation (\[eq:def\_p\]), we calculate the mean swimming direction to be $\mathbf{q}=(4\pi/3)(A_1^1,0,A_1^0)^T$. These equations can be simplified using identities for spherical harmonics so that inner products with other harmonics can be calculated. Finally, the resulting equations can be approximated by truncating the above series solutions to give a set of simultaneous equations that may be solved for the coefficients $A_n^m, \beta_{nj}^m$ and $\gamma_{nj}^m$.
The first term for the positive-definite diffusion tensor in equation (\[eq:pos\_def\_diffusion\]) is given in part by equation $(52)$ of @Hill:2002, which only depends on the first few terms in the expansion (i.e. $\beta_{1j}^m,\gamma_{1j}^m$, for $m=0,1$). The second term cannot be written in such simple terms but can be approximated directly using all available coefficients.
Small $\sigma$ asymptotics {#eq:App_small_sigma}
==========================
The calculation of the mean swimming direction, $\mathbf{q}$, is the same for both the generalized Taylor dispersion theory and the Fokker-Planck approach [@Hill:2002]. Hence, we follow @Pedley:1990 to compute $f$ for the small-vorticity case (note that their small parameter $\epsilon$ is related to $\sigma$ via $\sigma=\epsilon \lambda$).
Calculation of equilibrium distribution, $f$, and mean swimming, $\mathbf{q}$
-----------------------------------------------------------------------------
At leading order, $\sigma=0$, (\[eq:f\_eqn\_with\_flow\]) becomes $$\begin{aligned}
-\mathcal{L}_0f=
\frac{1}{\sin\theta}\frac{\partial}{\partial \theta}
\left(\sin\theta
\frac{\partial f}{\partial \theta}
\right)
+\frac{1}{\sin^2\theta}
\frac{\partial^2 f}{\partial \phi^2}
+\frac{\lambda}{\sin\theta}
\frac{\partial}{\partial \theta}
\left(
\sin^2\theta f
\right)=0.\end{aligned}$$ Looking for a solution independent of $\phi$, we obtain the von Mises distribution, $$\begin{aligned}
f=f^{(0)}(\theta)=\mu e^{\lambda\cos\theta},\end{aligned}$$ where (\[eq:fnorm\]) yields the normalization constant $\mu=\lambda/(4\pi\sinh\lambda)$.
From equation (\[eq:def\_p\]), the mean swimming velocity is computed to be $$\begin{aligned}
\mathbf{q}^{(0)}=\int_0^{2\pi} \int_0^\pi \mathbf{p} f^{(0)}(\theta) \sin\theta d\theta d\phi =(0,0,K_1),\end{aligned}$$ where $K_1=\coth\lambda -1/\lambda$. For $\lambda=2.2$ we have $K_1=0.57$.
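A quadrature check of this leading-order solution (midpoint rule in $\theta$): the distribution integrates to one over the sphere, and its first moment recovers $K_1=\coth\lambda-1/\lambda$:

```python
import math

# Sketch: numerical verification that f0 = mu * exp(lambda cos(theta)), with
# mu = lambda/(4 pi sinh(lambda)), is normalized on the sphere and gives
# q_z = coth(lambda) - 1/lambda.  Midpoint rule in theta (phi is trivial).
lam = 2.2
mu = lam / (4.0 * math.pi * math.sinh(lam))

N = 20000
dth = math.pi / N
norm = qz = 0.0
for i in range(N):
    th = (i + 0.5) * dth
    f0 = mu * math.exp(lam * math.cos(th))
    norm += 2.0 * math.pi * f0 * math.sin(th) * dth
    qz += 2.0 * math.pi * math.cos(th) * f0 * math.sin(th) * dth

K1 = 1.0 / math.tanh(lam) - 1.0 / lam
```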
To find the $O(\sigma)$ correction, put $$\begin{aligned}
f=f^{(0)}(\theta)+\sigma f^{(1)},\end{aligned}$$ to obtain $$\begin{aligned}
-\mathcal{L}_0 f^{(1)}=
\frac{1}{\sin\theta}\frac{\partial}{\partial \theta}
\left(\sin\theta
\frac{\partial f^{(1)}
}{\partial \theta}
\right)
+\frac{1}{\sin^2\theta}
\frac{\partial^2 f^{(1)}
}{\partial \phi^2}
+\frac{\lambda}{\sin\theta}
\frac{\partial}{\partial \theta}
\left(
\sin^2\theta f^{(1)}
\right)=\lambda \cos\phi \sin\theta f^{(0)}.\end{aligned}$$ Looking for a solution of the form $f^{(1)}=\cos\phi F(\theta)$, we obtain the ODE $$\begin{aligned}
\label{eq:ODE_F}
\frac{1}{\sin\theta}\frac{d}{d \theta}
\left(\sin\theta
\frac{d F}{d \theta}
\right)
-\frac{F}{\sin^2\theta}
+\frac{\lambda}{\sin\theta}
\frac{d}{d \theta}
\left(
\sin^2\theta F
\right)= \lambda \sin\theta f^{(0)}.\end{aligned}$$ Defining $x=\cos\theta$, and letting $$\begin{aligned}
F=-\mu g_1(x),\end{aligned}$$ we obtain equation (3.4) of @Pedley:1990, which has a power series solution $$\begin{aligned}
\label{eq:lambda_P11_exp}
g_1(x)=\sum_{n=1}^\infty \lambda^n A_n(x),\quad
A_n(x)=\sum_{r=1}^n a_{n,r} P^1_r(x),\end{aligned}$$ where the $P^1_r$ are the associated Legendre functions and the coefficients $a_{n,r}$ satisfy $$\begin{aligned}
a_{n+1,r}=-a_{n,r+1}\frac{r+2}{(r+1)(2r+3)}+a_{n,r-1}\frac{(r-1)}{r(2r-1)}+\frac{e_{n+1,r}}{r(r+1)},\end{aligned}$$ where $$\begin{aligned}
e_{n+1,r}=\frac{2r+1}{n!2r(r+1)}\int_{-1}^1 (1-x^2)^{1/2} x^nP_r^1(x)dx.\end{aligned}$$
The first order correction to the mean swimming is then given by $$\begin{aligned}
\mathbf{q}^{(1)}=\int_0^{2\pi} \int_0^\pi \mathbf{p} f^{(1)} \sin\theta d\theta d\phi = \int_0^{2\pi} \int_0^\pi \mathbf{p}\cos\phi F(\theta)\sin\theta d\theta d\phi=(-\frac{J_1}{\lambda},0,0),\end{aligned}$$ where $$\begin{aligned}
J_1 =\frac{4\pi}{3}\lambda \mu \sum_{l=0}^\infty \lambda^{2l+1} a_{2l+1,1}.\end{aligned}$$ With $\lambda=2.2$ a calculation using Maple provides $J_1=0.45$.
Thus the mean swimming correct to $O(\sigma)$ with respect to $\mathbf{i}, \mathbf{j}, \mathbf{k}$ unit vectors is given by $\mathbf{q}=(-\frac{\sigma}{\lambda} J_1,0, K_1)^T.$ Recalling the relationship between local and global co-ordinate vectors, $$\begin{aligned}
\mathbf{i} =\mathbf{e}_r,\quad \mathbf{j}=-\mathbf{e}_\psi,\quad \mathbf{k}=-\mathbf{e}_z,\end{aligned}$$ the mean swimming direction, with respect to $\mathbf{e}_r, \mathbf{e}_\psi,\mathbf{e}_z$ is (see Eq. \[eq:mean\_swim\_small\_sigma\]) $$\begin{aligned}
\mathbf{q}&=&-(\frac{\sigma}{\lambda} J_1,0, K_1)^T.\end{aligned}$$
Calculation of $\mathbf{b}$ and the diffusion tensor $\mathbf{D}$ {#sec:small_sigma_D}
----------------------------------------------------------------
At leading order, setting $\sigma=0$ in equation (\[eq:b\_eqn\_with\_flow\]) yields $$\begin{aligned}
-\mathcal{L}_0 \mathbf{b}=
\frac{1}{\sin\theta}\frac{\partial}{\partial \theta}
\left(\sin\theta
\frac{\partial \mathbf{b}}{\partial \theta}
\right)
+\frac{1}{\sin^2\theta}
\frac{\partial^2 \mathbf{b}}{\partial \phi^2}
+\frac{\lambda}{\sin\theta}
\frac{\partial}{\partial \theta}
\left(
\sin^2\theta \mathbf{b}
\right)=f^{(0)}(K_1\mathbf{k}-\mathbf{p}).\end{aligned}$$ By inspection, consider $$\begin{aligned}
b_\xi=B_H(\theta)\cos\phi\\
b_\eta=B_H(\theta)\sin\phi\\
b_\zeta=B_V(\theta).\end{aligned}$$ $B_H$ then satisfies the ODE $$\begin{aligned}
\label{ref:hor_b}
\frac{1}{\sin\theta}\frac{d}{d \theta}
\left(\sin\theta
\frac{d B_H}{d \theta}
\right)
-\frac{B_H}{\sin^2\theta}
+\frac{\lambda}{\sin\theta}
\frac{d}{d \theta}
\left(
\sin^2\theta B_H
\right)
&=&-\sin\theta f^{(0)}.\end{aligned}$$ On comparing equations (\[ref:hor\_b\]) and (\[eq:ODE\_F\]), we can write $$\begin{aligned}
B_H=-\frac{1}{\lambda}F.\end{aligned}$$
From equation (\[eq:pos\_def\_diffusion\]), the leading order expression for the non-dimensional horizontal component of diffusion ($D^{\xi\xi}=D^{\eta\eta}$) can thus be written as $$\begin{aligned}
D^{\xi\xi}=\int_0^{2\pi} \int_0^\pi p_\xi \cos\phi B_H(\theta) \sin\theta d\theta d\phi =-\frac{1}{\lambda}\int_0^{2\pi} \int_0^\pi p_\xi \cos\phi F(\theta) \sin\theta d\theta d\phi =\frac{J_1}{\lambda^2}.\end{aligned}$$
The function $B_V$ satisfies the ODE $$\begin{aligned}
\frac{1}{\sin\theta}\frac{d}{d \theta}
\left(\sin\theta
\frac{d B_V}{d \theta}
\right)
+\frac{\lambda}{\sin\theta}
\frac{d}{d \theta}
\left(
\sin^2\theta B_V
\right)
&=&(K_1-\cos\theta)f^{(0)}.\end{aligned}$$ To solve this, as for $F$, we define $x=\cos\theta$ and let $$\begin{aligned}
B_V=\frac{\mu}{\lambda} h_1(x),\end{aligned}$$ and seek power series solutions $$\begin{aligned}
h_1(x)=\sum_{n=1}^\infty \lambda^n B_n(x),\\
B_n(x)=\sum_{r=1}^n b_{n,r} P^0_r(x).\end{aligned}$$
By utilizing properties of Legendre polynomials we obtain the following recurrence relationship for the $b_{n,r}$: $$\begin{aligned}
b_{n+1,r}=-\frac{b_{n,r+1}}{2r+3}+\frac{b_{n,r-1}}{2r-1}+\frac{f_{n+1,r}}{r(r+1)},\end{aligned}$$ where $$\begin{aligned}
f_{n+1,r}=\frac{2r+1}{2n!}\int_{-1}^1 (x-K_1) x^nP_r^0(x)dx.\end{aligned}$$
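This recurrence is straightforward to iterate symbolically. The following SymPy sketch is our independent illustration (the original calculations used Maple); it assumes $b_{0,r}=0$ and $b_{n,0}=0$, consistent with the series for $B_n$ starting at $r=1$, and leaves $K_1$ as a free symbol since its value is fixed elsewhere in the paper.

```python
# Sketch: iterate the recurrence for b_{n,r} symbolically.
# Assumptions: b_{0,r} = 0 and b_{n,0} = 0 (the series for B_n starts at r = 1);
# K1 is kept as a free symbol.
from sympy import symbols, integrate, legendre, factorial, Rational, S

x, K1 = symbols('x K1')

def f_coeff(n1, r):
    """f_{n+1,r} = (2r+1)/(2 n!) * Int_{-1}^{1} (x - K1) x^n P_r(x) dx, with n1 = n + 1."""
    n = n1 - 1
    return Rational(2*r + 1, 2)/factorial(n) * integrate(
        (x - K1)*x**n*legendre(r, x), (x, -1, 1))

def b_coeffs(n_max):
    """Return {(n, r): b_{n,r}} for 1 <= n <= n_max via
    b_{n+1,r} = -b_{n,r+1}/(2r+3) + b_{n,r-1}/(2r-1) + f_{n+1,r}/(r(r+1))."""
    b = {}
    get = lambda n, r: b.get((n, r), S.Zero)
    for n in range(n_max):
        for r in range(1, n + 2):
            b[(n + 1, r)] = (-get(n, r + 1)/(2*r + 3)
                             + get(n, r - 1)/(2*r - 1)
                             + f_coeff(n + 1, r)/(r*(r + 1)))
    return b
```

Iterating from these seeds gives $b_{1,1}=1/2$ (quoted later in the text) and $b_{2,1}=-K_1/2$, so the partial sum for $L_1$ begins as $\frac{4\pi}{3}\mu\lambda\left(\frac{1}{2}-\frac{K_1\lambda}{2}+\dots\right)$.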
The leading order expression for the non-dimensional vertical component of diffusion can thus be written as $$\begin{aligned}
D^{\zeta\zeta}=\int_0^{2\pi} \int_0^\pi p_\zeta B_V(\theta) \sin\theta d\theta d\phi = \frac{L_1}{\lambda},\end{aligned}$$ where $$\begin{aligned}
L_1 =\frac{4\pi}{3}\mu \sum_{n=1}^\infty \lambda^{n} b_{n,1}.\end{aligned}$$ For $\lambda=2.2$ a computation employing Maple reveals that $L_1=0.11$.
The off-diagonal terms, $D^{\xi\zeta}$, etc., are all zero at leading order.
When comparing with a passive solute, we note that if we set $\sigma=0$, at leading order in $\lambda$ we have that $$\begin{aligned}
\mu=1/4\pi, \quad J_1 =\frac{1}{3}\lambda^2 a_{1,1}, \quad L_1 =\frac{1}{3} \lambda b_{1,1}.\end{aligned}$$ Noting that $a_{1,1}=b_{1,1}=1/2$, it is clear that in the limit of $\lambda\to0$ the diffusion tensor tends to the isotropic tensor $\mathbf{I}/6$. This result can be obtained directly by considering equations (\[eq:f\_eqn\_with\_flow\]-\[eq:mathcalG\]), which have solution $f=1/4\pi, \mathbf{b}= \mathbf{p}/8\pi$.
Large $\sigma$ asymptotics {#eq:App_large_sigma}
==========================
For the calculation in the limit of large $\sigma$, it is convenient to follow @Manela:2003 and @Brenner:1972 and define local co-ordinates so that the vorticity vector is in the direction of $\hat{ \mathbf{k}}$: $$\begin{aligned}
\label{eq:local_co_ord2}
\hat{\mathbf{i}} =\mathbf{e}_r,\quad \hat{\mathbf{j}}=-\mathbf{e}_z,\quad\hat{ \mathbf{k}}=\mathbf{e}_\psi.\end{aligned}$$
Defining the local position co-ordinate, $\mathbf{R}-\mathbf{R}_0=\xi\hat{\mathbf{i}} +\eta\hat{\mathbf{j}} +\zeta\hat{\mathbf{k}} $, the flow field can then be written locally as simple shear: $$\begin{aligned}
\mathbf{u}(\mathbf{R})=\mathbf{u}(\mathbf{R}_0)+ G\xi\hat{\mathbf{j}},\end{aligned}$$ where the shear strength $G$ is given as before by $-\frac{U}{a}\chi'$. For this flow field, the velocity gradient tensor has the simple form $G_{ij}=G \delta_{i1}\delta_{j2}$.
Writing the orientation vector as $$\begin{aligned}
\mathbf{p}=\sin\theta\cos\phi\hat{\mathbf{i}}+\sin\theta\sin\phi\hat{\mathbf{j}}+\cos\theta\hat{\mathbf{k}}\end{aligned}$$ we can write the governing equation (\[eq:f\_eqn\_with\_flow\]) as $$\begin{aligned}
\label{eq:large_sigma_gov_eq}
\mathcal{L}f=
\sigma\frac{\partial f}{\partial \phi}
-\mathcal{L}_s f=0\end{aligned}$$ where the linear operator independent of $\sigma$ is given by $$\begin{aligned}
\mathcal{L}_s f=-\lambda\left(
\frac{1}{\sin\theta}
\frac{\partial}{\partial \theta}
\left(
\cos\theta\sin\theta\sin\phi f
\right)
+
\frac{\partial}{\partial \phi}
\left(\frac{\cos\phi}{\sin\theta} f
\right)\right)
+\nabla^2_{\mathbf{p}} f\end{aligned}$$ and where, in spherical polar coordinates, the Laplacian is given by $$\begin{aligned}
\nabla^2_{\mathbf{p}}f=
\frac{1}{\sin\theta}\frac{\partial}{\partial \theta}
\left(\sin\theta
\frac{\partial f}{\partial \theta}
\right)
+\frac{1}{\sin^2\theta}
\frac{\partial^2 f}{\partial \phi^2}.\end{aligned}$$
For $\sigma\gg1$, we consider the following perturbation expansions for $f$ and $\mathbf{b}$: $$\begin{aligned}
f=\frac{1}{4\pi}\left(f^{(0)}+\frac{1}{\sigma}f^{(1)}+\left(\frac{1}{\sigma}\right)^2f^{(2)}+\dots\right)\\
\mathbf{b}=\frac{1}{4\pi}\left(\mathbf{b}^{(0)}+\frac{1}{\sigma}\mathbf{b}^{(1)}+\left(\frac{1}{\sigma}\right)^2\mathbf{b}^{(2)}+\dots\right).\end{aligned}$$
Calculation of equilibrium distribution, $f$, and mean swimming, $\mathbf{q}$
-----------------------------------------------------------------------------
Substituting the expansion for $f$ into equation (\[eq:large\_sigma\_gov\_eq\]) we obtain an explicit iterative scheme for computing the expansion: $$\begin{aligned}
\label{eq:it_scheme}
\frac{\partial f^{(k+1)}}{\partial \phi}
&=&\mathcal{L}_s f^{(k)}.\end{aligned}$$
At leading order: $$\begin{aligned}
\frac{\partial f^{(0)}}{\partial \phi}
&=&0.\end{aligned}$$ Hence $ f^{(0)}= f^{(0)}(\theta)$ subject to $\int_0^\pi f^{(0)}(\theta) \sin\theta d\theta=2$.
At $O(\frac{1}{\sigma})$: $$\begin{aligned}
\frac{\partial f^{(1)}}{\partial \phi}
&=&\mathcal{L}_s f^{(0)}(\theta).\end{aligned}$$ Because $ f^{(1)}$ must be periodic in $\phi$ with period $2\pi$, integrating this equation with respect to $\phi$ from $0$ to $2\pi$ gives: $$\begin{aligned}
\frac{1}{\sin\theta}\frac{d}{d \theta}
\left(\sin\theta
\frac{df^{(0)}(\theta)}{d\theta}\right)
=0.\end{aligned}$$ Excluding singular solutions, and given that $\int_0^\pi f^{(0)}(\theta) \sin\theta d\theta=2$, we obtain the solution $ f^{(0)}=1$.
We can summarize the general iteration algorithm for the terms $k\geq1$:
1. [ Integrate equation (\[eq:it\_scheme\]) $$\begin{aligned}
f^{(k+1)}&=&\int_0^\phi \mathcal{L}_s f^{(k)}d\phi +F^{(k+1)}(\theta).\end{aligned}$$ ]{}
2. [ Impose periodicity of $f^{(k+2)}$ & integral constraint $$\begin{aligned}
\int_0^{2\pi} \mathcal{L}_s f^{(k+1)}d\phi &=&0,\\
\int_0^{2\pi} \int_0^{\pi} f^{(k+1)} d\theta d\phi &=&0,\end{aligned}$$ to determine non-singular solutions for $F^{(k+1)}(\theta)$. ]{}
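One pass of this algorithm can be carried out symbolically. The SymPy sketch below is our independent illustration (not the authors' Maple code): it produces $f^{(1)}$ from $f^{(0)}=1$ by integrating in $\phi$ and then removing the $\phi$-mean, which at this first order reproduces the homogeneous part $F^{(1)}(\theta)$, since the periodicity ODE only admits a constant non-singular solution and the integral constraint sets that constant to zero.

```python
# Sketch: one step of the large-sigma iteration, producing f^(1) from f^(0) = 1.
from sympy import symbols, sin, cos, diff, integrate, simplify, pi, S

theta, phi, lam = symbols('theta phi lambda', positive=True)
phip = symbols('phip')

def L_s(f):
    """L_s f = -lambda[(1/sin t) d_t(cos t sin t sin p f) + d_p(cos p f / sin t)] + Lap_p f."""
    adv = (diff(cos(theta)*sin(theta)*sin(phi)*f, theta)/sin(theta)
           + diff(cos(phi)*f/sin(theta), phi))
    lap = (diff(sin(theta)*diff(f, theta), theta)/sin(theta)
           + diff(f, phi, 2)/sin(theta)**2)
    return -lam*adv + lap

f0 = S.One
rhs = simplify(L_s(f0))                    # = 2*lambda*sin(theta)*sin(phi)
particular = integrate(rhs.subs(phi, phip), (phip, 0, phi))
# At this order the homogeneous part F^(1)(theta) reduces to a constant fixed by
# the integral constraint, so subtracting the phi-mean completes the solution.
f1 = simplify(particular - integrate(particular, (phi, 0, 2*pi))/(2*pi))
```

The result is $f^{(1)}=-2\lambda\cos\phi\sin\theta$, in agreement with the expression quoted below.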
Specifically, the first two terms in the expansion are given by $$\begin{aligned}
f^{(1)}&=&-2\lambda\cos\phi\sin\theta, \\
f^{(2)}
&=&\frac{2}{3}\lambda^2(1-3\cos^2\theta)
+4\lambda\sin\theta\sin\phi
+\frac{3}{2}\lambda^2\cos 2\phi \sin^2\theta.\end{aligned}$$
From equation (\[eq:def\_p\]), we thus can compute mean swimming at large $\sigma$ correct to $O(1/\sigma^2)$: $$\begin{aligned}
q^\xi&=&\int_{\phi=0}^{2\pi} \int_{\theta=0}^{\pi} f \sin^2\theta\cos\phi d\theta d\phi=-\frac{2\lambda}{ 3\sigma},\\
q^\eta&=&\int_{\phi=0}^{2\pi} \int_{\theta=0}^{\pi} f \sin^2\theta\sin\phi d\theta d\phi=\frac{4\lambda}{ 3\sigma^2},\\
q^\zeta&=&\int_{\phi=0}^{2\pi} \int_{\theta=0}^{\pi} f \cos\theta \sin\theta d\theta d\phi=0.\end{aligned}$$ When converting back to the global co-ordinates we note that $$\begin{aligned}
\hat{\mathbf{i}} =\mathbf{e}_r,\quad \hat{\mathbf{j}}=-\mathbf{e}_z,\quad\hat{ \mathbf{k}}=\mathbf{e}_\psi,\end{aligned}$$ and so with respect to $\mathbf{e}_r, \mathbf{e}_\psi,\mathbf{e}_z$ unit vectors the mean swimming is given by $$\begin{aligned}
\mathbf{q}&=&-(\frac{2\lambda}{ 3\sigma},0,\frac{4\lambda}{ 3\sigma^2})^T.\end{aligned}$$
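These angular integrals are elementary, and the quoted coefficients can be verified directly from the stated $f^{(1)}$ and $f^{(2)}$; the following SymPy sketch is our consistency check, not part of the original calculation.

```python
# Sketch: verify q^xi, q^eta and q^zeta from the stated expansion terms of f.
from sympy import symbols, sin, cos, integrate, pi, Rational, simplify

theta, phi, lam, sigma = symbols('theta phi lambda sigma', positive=True)

f1 = -2*lam*cos(phi)*sin(theta)
f2 = (Rational(2, 3)*lam**2*(1 - 3*cos(theta)**2)
      + 4*lam*sin(theta)*sin(phi)
      + Rational(3, 2)*lam**2*cos(2*phi)*sin(theta)**2)
# f = (1/4pi)(f^(0) + f^(1)/sigma + f^(2)/sigma^2 + ...), with f^(0) = 1
f = (1 + f1/sigma + f2/sigma**2)/(4*pi)

q_xi = integrate(integrate(f*sin(theta)**2*cos(phi), (theta, 0, pi)), (phi, 0, 2*pi))
q_eta = integrate(integrate(f*sin(theta)**2*sin(phi), (theta, 0, pi)), (phi, 0, 2*pi))
q_zeta = integrate(integrate(f*cos(theta)*sin(theta), (theta, 0, pi)), (phi, 0, 2*pi))
```

The integrals reproduce $q^\xi=-2\lambda/(3\sigma)$, $q^\eta=4\lambda/(3\sigma^2)$ and $q^\zeta=0$ as stated.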
Calculation of $\mathbf{b}$, and diffusion tensor, $\mathbf{D}$. {#calculation-of-mathbfb-and-diffusion-tensor-mathbfd.}
----------------------------------------------------------------
We now apply similar techniques to calculate the vector field $\mathbf{b}$. Substituting the expansion for $\mathbf{b}$ into equation (\[eq:b\_eqn\_with\_flow\]) we obtain the following iterative scheme for computing the expansion: $$\begin{aligned}
\label{eq:b_vector_it}
\frac{\partial \mathbf{b}^{(k+1)}}{\partial \phi}-2\mathbf{b}^{(k+1)}\cdot\hat{\mathbf{G}}
&=&\left( 4\pi (\mathbf{p}-\mathbf{q}) f \right)^{(k)}
+\mathcal{L}_s \mathbf{b}^{(k)}.\end{aligned}$$ For the simple shear flow with $\hat{G}_{ij}= \delta_{i1}\delta_{j2}$, taking the dot product with $\hat{\mathbf{i}}$ yields: $$\begin{aligned}
\label{eq:bx_it}
\frac{\partial b_\xi^{(k+1)}}{\partial \phi}&=&\left( 4\pi (\sin\theta\cos\phi-q^\xi) f \right)^{(k)}
+\mathcal{L}_s b_\xi^{(k)}.\end{aligned}$$
The method follows as for $f$:
1. [ Integrate equation (\[eq:bx\_it\]) $$\begin{aligned}
b_\xi^{(k+1)}&=&\int _0^\phi \left( 4\pi (\sin\theta\cos\phi-q^\xi) f \right)^{(k)}
+\mathcal{L}_s b_\xi^{(k)}d\phi + B_\xi^{(k+1)}(\theta).\end{aligned}$$ ]{}
2. [ Impose periodicity of $b_\xi^{(k+2)}$ & integral constraint $$\begin{aligned}
\int_0^{2\pi}\left(4\pi (\sin\theta\cos\phi-q^\xi) f \right)^{(k+1)} +\mathcal{L}_s b_\xi^{(k+1)}d\phi &=&0,\\
\int_0^{2\pi} \int_0^{\pi} b_\xi^{(k+1)} d\theta d\phi &=&0,\end{aligned}$$ to determine non-singular solutions for $B_\xi^{(k+1)}(\theta)$. ]{}
We obtain the following expressions: $$\begin{aligned}
b_\xi^{(0)}&=&0,\\
b_\xi^{(1)}&=&\lambda(1-3\cos^2\theta)/36+\sin\theta\sin\phi,\\
b_\xi^{(2)}&=&B_\xi^{(21)}(\theta)\cos\phi+B_\xi^{(22)}(\theta)\sin 2\phi.\end{aligned}$$
Taking the dot product of Eq. \[eq:b\_vector\_it\] with $\hat{\mathbf{j}}$ yields: $$\begin{aligned}
\label{eq:by_it}
\frac{\partial b_\eta^{(k+1)}}{\partial \phi}&=&2b_\xi^{(k+1)}+\left( 4\pi (\sin\theta\sin\phi-q^\eta) f \right)^{(k)}
+\mathcal{L}_s b_\eta^{(k)}.\end{aligned}$$
1. [ Integrate Eq. \[eq:by\_it\] $$\begin{aligned}
b_\eta^{(k+1)}&=&\int _0^\phi 2b_\xi^{(k+1)}+\left( 4\pi (\sin\theta\sin\phi-q^\eta) f \right)^{(k)}
+\mathcal{L}_s b_\eta^{(k)}d\phi + B_\eta^{(k+1)}(\theta).\end{aligned}$$ ]{}
2. [ Impose periodicity of $b_\eta^{(k+2)}$ & integral constraint $$\begin{aligned}
\int_0^{2\pi}2b_\xi^{(k+2)}+\left(4\pi (\sin\theta\sin\phi-q^\eta) f\right)^{(k+1)} + \mathcal{L}_s b_\eta^{(k+1)}d\phi &=&0,\\
\int_0^{2\pi} \int_0^{\pi} b_\eta^{(k+1)} d\theta d\phi &=&0,\end{aligned}$$ to determine non-singular solutions for $B_\eta^{(k+1)}(\theta)$. Note that calculation of $b_\eta^{(2)}$ requires calculation of $b_\xi^{(3)}$ which requires calculation of $f^{(3)}$. These calculations were performed using Maple, and files are available from the authors on request. ]{}
We obtain the following expressions: $$\begin{aligned}
b_\eta^{(0)}&=&\lambda(1-3\cos^2\theta)/108\\
b_\eta^{(1)}&=&B_\eta^{(11)}(\theta)\cos\phi\\
b_\eta^{(2)}&=&B_\eta^{(10)}(\theta)+B_\eta^{(21)}(\theta)\sin\phi+B_\eta^{(22)}(\theta)\cos 2\phi\end{aligned}$$
Taking the dot product of equation (\[eq:b\_vector\_it\]) with $\hat{\mathbf{k}}$ yields: $$\begin{aligned}
\label{eq:bz_it}
\frac{\partial b_\zeta^{(k+1)}}{\partial \phi}&=&\left( 4\pi f\cos\theta \right)^{(k)} +\mathcal{L}_s b_\zeta^{(k)},\end{aligned}$$ which, when combined with periodicity and the integral constraint, yields the following expressions: $$\begin{aligned}
b_\zeta^{(0)}&=&\frac{1}{2}\cos\theta\\
b_\zeta^{(1)}&=&-\frac{3}{4}\lambda\sin(2\theta)\cos\phi\\
b_\zeta^{(2)}&=&B_\zeta^{(10)}(\theta)+B_\zeta^{(21)}(\theta)\sin\phi+B_\zeta^{(22)}(\theta)\cos 2\phi\end{aligned}$$
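Since $p_\zeta=\cos\theta$ and $q^\zeta=0$, the $\hat{\mathbf{k}}$-component of the iteration has source $(4\pi p_\zeta f)^{(k)}=\cos\theta\, f^{(k)}$ (the $4\pi$ prefactor cancels against the $1/4\pi$ in the expansion of $f$), and the quoted terms can be checked directly. The SymPy sketch below is our independent verification.

```python
# Sketch: check that b_zeta^(0) = cos(theta)/2 and b_zeta^(1) = -(3/4) lambda sin(2 theta) cos(phi)
# satisfy d b_zeta^(k+1)/d phi = cos(theta) f^(k) + L_s b_zeta^(k), with f^(0) = 1.
from sympy import symbols, sin, cos, diff, simplify, Rational

theta, phi, lam = symbols('theta phi lambda', positive=True)

def L_s(f):
    adv = (diff(cos(theta)*sin(theta)*sin(phi)*f, theta)/sin(theta)
           + diff(cos(phi)*f/sin(theta), phi))
    lap = (diff(sin(theta)*diff(f, theta), theta)/sin(theta)
           + diff(f, phi, 2)/sin(theta)**2)
    return -lam*adv + lap

f0 = 1
bz0 = cos(theta)/2
bz1 = -Rational(3, 4)*lam*sin(2*theta)*cos(phi)

# Residual of the k = 0 step of the iteration; it should vanish identically.
residual = diff(bz1, phi) - (cos(theta)*f0 + L_s(bz0))
```

The residual simplifies to zero, confirming the first two terms quoted above.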
From equation (\[eq:pos\_def\_diffusion\]) we can now compute the diffusion tensor correct to $O(1/\sigma^2)$ : $$\begin{aligned}
D^{\xi\xi}&=&\frac{1}{\sigma^2}(\frac{2}{3}+\frac{1}{270 }\lambda^2)\\
D^{\eta\eta}&=&\frac{\lambda^2}{2430}+\frac{1}{\sigma^2}(6-\frac{2\lambda^2}{243}-\frac{41\lambda^4}{25515})\\
D^{\zeta\zeta}&=&\frac{1}{6}-\frac{5}{18\sigma^2}\lambda^2\\
D^{\xi\eta}=D^{\eta\xi}&=&\frac{1}{810 \sigma}\lambda^2,\end{aligned}$$ and all other entries are zero. When converting back to the global co-ordinates we note that $$\begin{aligned}
\hat{\mathbf{i}} =\mathbf{e}_r,\quad \hat{\mathbf{j}}=-\mathbf{e}_z,\quad\hat{ \mathbf{k}}=\mathbf{e}_\psi,\end{aligned}$$ and so with respect to $\mathbf{e}_r, \mathbf{e}_\psi,\mathbf{e}_z$ unit vectors the diffusion tensor is given by $$\begin{aligned}
\mathbf{D}&=&
\left(
\begin{array}{ccc}
\frac{d_1}{\sigma^2}&0&-\frac{d_2}{ \sigma }\\
0&\frac{1}{6}-\frac{d_5}{\sigma^2}&0\\
-\frac{d_2}{ \sigma }&0&d_3+\frac{d_4}{\sigma^2}
\end{array}
\right),\end{aligned}$$ where $$\begin{aligned}
d_1&=&\frac{2}{3}+\frac{1}{270 }\lambda^2\\
d_2&=&\frac{\lambda^2}{810 }\\
d_3&=&\frac{\lambda^2}{2430}\\
d_4&=&6-\frac{2\lambda^2}{243}-\frac{41\lambda^4}{25515}\\
d_5&=&\frac{5}{18}\lambda^2.\end{aligned}$$
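The $O(1)$ part of the vertical diffusion follows from the leading term $b_\zeta^{(0)}=\frac{1}{2}\cos\theta$ alone, using the same integral structure as in the small-$\sigma$ calculation of $D^{\zeta\zeta}$ together with the $1/4\pi$ prefactor in the expansion of $\mathbf{b}$. The following SymPy snippet is our sanity check that this reproduces the $1/6$ appearing in the $(2,2)$ entry of $\mathbf{D}$.

```python
# Sketch: leading-order vertical diffusion,
# D^{zeta zeta} = Int p_zeta b_zeta sin(theta) dtheta dphi,
# with p_zeta = cos(theta) and b_zeta ~ (1/4pi) cos(theta)/2.
from sympy import symbols, sin, cos, integrate, pi, Rational

theta, phi = symbols('theta phi', positive=True)

bz0 = cos(theta)/2 / (4*pi)
D_zz0 = integrate(integrate(cos(theta)*bz0*sin(theta),
                            (theta, 0, pi)), (phi, 0, 2*pi))
```

This gives $D^{\zeta\zeta}\to1/6$ as $\sigma\to\infty$ at leading order.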
Fokker-Planck calculation of diffusion {#eq:App_FP_diff}
======================================
For the Fokker-Planck approximation,[@Pedley:1990] the diffusion tensor non-dimensionalised on $V_s^2/d_r$ is given by $$\begin{aligned}
\mathbf{D}_{F}=\tau d_r \int_\mathbf{p}(\mathbf{p}-\mathbf{q})^2 f(\mathbf{p} ) d\mathbf{p},\end{aligned}$$ where $\tau$ is a directional correlation time estimated from experimental data. Although the quantity $\tau$ may vary with both $\lambda$ and the shear, $\sigma$, for simplicity it is typically assumed to be independent of the shear. Asymptotic results for the diffusion tensor for weak shear[@Pedley:1990] ($\sigma\ll1$) and strong shear [@Bees:1998a] ($\sigma\gg1$) are available. With this choice of non-dimensionalisation and using the notation of this paper, the $\sigma\ll1$ result correct to $O(\sigma^2)$ is given by $$\begin{aligned}
D^{rr}_{F}&=&\tau d_r \frac{K_1}{\lambda } ,\quad
D^{rz}_{F}=\tau d_r\frac{J_2-K_1J_1}{\lambda} \sigma, \quad
D^{zz}_{F}=\tau d_r K_2,\end{aligned}$$ where $K_2$ and $J_2$ are specified functions of $\lambda$ [@Pedley:1990]. The $\sigma\gg1$ result correct to $O(1/\sigma^3)$ is given by $$\begin{aligned}
D^{rr}_{F}&=&\tau d_r \left(\frac{1}{3}-\frac{\lambda^2}{5 \sigma^2} \right),\quad
D^{rz}_{F}=0, \quad
D^{zz}_{F}=\tau d_r \left(\frac{1}{3}-\frac{7\lambda^2}{45 \sigma^2} \right) .\end{aligned}$$
To compare the Fokker-Planck approximation to the generalized Taylor method we choose $\tau$ so that the two alternative calculations for the horizontal component of the diffusion agree when the shear is zero. Specifically, when $\sigma=0$ the generalized Taylor method yields $D^{rr}_G=\frac{J_1}{\lambda^2}$ and thus for the horizontal component of diffusion to agree we take $\tau d_r= \frac{J_1}{\lambda K_1}$. For the specific gyrotactic bias $\lambda=2.2$, this yields $\tau d_r =0.36$. Taking this value of $\tau$ also provides a value for the vertical component of diffusion, $D^{zz}_{F}=\tau d_r K_2 =0.056 $, which is only a slight deviation from the generalized Taylor method, $D^{zz}_G= \frac{L_1}{\lambda}= 0.050$. Clearly, by this careful choice of $\tau$, the Fokker-Planck and generalized Taylor dispersion methods should agree when the shear is weak. However, as the shear increases we expect the two theories to give diverging predictions because, for example, in the FP approach $D^{rr}_F$ approaches $\frac{1}{3}\tau d_r$, whereas in the GTD approach, $D^{rr}_G$ tends to zero at large shear.
As for the generalized Taylor method, we fit simple functions to the curves of diffusion against $\sigma$ obtained with the Fokker-Planck method[@Pedley:1990; @Bees:1998a]. The specific functions were given by $$\begin{aligned}
\label{eq:q_sigma_fit_FP}
D^{rr}_{F}(\sigma)&=&P(\sigma; \mathbf{a}^{rr}_{F}, \mathbf{b}^{rr}_{F}),\quad
D^{rz}_{F}(\sigma)=- \sigma P(\sigma; \mathbf{a}^{rz}_{F}, \mathbf{b}^{rz}_{F}),\quad
D^{zz}_{F}(\sigma)=P(\sigma; \mathbf{a}^{zz}_{F}, \mathbf{b}^{zz}_{F}),\end{aligned}$$ where the rational function $P(\sigma; \mathbf{a},\mathbf{b})$ is defined by equation \[eq:P\_rat\_func\] and the choice of $ \mathbf{a}$ coefficients is described in table \[tab:func\_fits\_FP\].
$a_{0}$ $a_{2}$ $a_{4}$
------------------------ ---------------------------------------- ------------------------------------------------------------------------------ ---------------------------------------
$ \mathbf{a}^{rr}_{F}$ $\frac{ J_1}{\lambda^2}$ $-\frac{J_1\lambda}{5K_1}b^{rr}_{4,F}+\frac{J_1}{3K_1\lambda}b^{rr}_{2,F}$ $\frac{J_1}{3K_1\lambda}b^{rr}_{4,F}$
$ \mathbf{a}^{zz}_{F}$   $\frac{ K_2J_1}{K_1\lambda}$             $-\frac{7J_1\lambda}{45K_1}b^{zz}_{4,F}+\frac{J_1}{3K_1\lambda}b^{zz}_{2,F}$   $\frac{J_1}{3K_1\lambda}b^{zz}_{4,F}$
$ \mathbf{a}^{rz}_{F}$ $\frac{(K_1J_1-J_2)J_1}{K_1\lambda^2}$ $0$ $0$
: \[tab:func\_fits\_FP\] In order to obtain the simplest functional fits whilst ensuring the asymptotic results are satisfied, the $ \mathbf{a}$ coefficients are as specified.
Coefficients of functional fits {#eq:App_functional fits}
===============================
The fit coefficients for $\lambda=2.2$ for the mean swimming and diffusion are given by:
$a_{0}$ $a_{2}$ $a_{4}$ $b_{2}$ $b_{4}$
------------------------ ----------------------- ----------------------- ----------------------- ------------------------- ---------------------------
$\mathbf{a}^r$ $2.05 \times 10^{-1}$ $1.86 \times 10^{-2}$ $0$ $1.74 \times10 ^{-1}$ $1.27 \times 10^{-2}$
$\mathbf{a}^z$ $5.7 \times 10^{-1}$ $3.66 \times 10^{-2}$ $0$ $1.75 \times 10 ^{-1}$ $1.25 \times 10^{-2}$
$ \mathbf{a}^{rr}_G$ $9.30 \times 10^{-2}$ $1.11 \times 10^{-4}$ $0$ $1.19\times 10 ^{-1}$ $1.63 \times 10^{-4}$
$ \mathbf{a}^{zz}_G$ $5.00 \times 10^{-2}$ $1.11 \times 10^{-1}$ $3.71 \times 10^{-5}$ $1.01 \times 10 ^{-1}$ $ 1.86 \times 10^{-2}$
$ \mathbf{a}^{rz}_G$ $9.17\times 10^{-2}$ $1.56\times 10^{-4}$ $0$ $2.81 \times 10 ^{-1}$ $2.62\times 10^{-2} $
$ \mathbf{a}^{rr}_{F}$ $9.30 \times 10^{-2}$ $5.73\times 10^{-4}$ $1.85\times 10^{-3}$ $4.96\times 10 ^{-2}$ $1.54 \times 10^{-2}$
$ \mathbf{a}^{zz}_{F}$ $5.60 \times 10^{-2}$ $3.23 \times 10^{-2}$ $1.70 \times 10^{-5}$ $2.70\times10 ^{-1}$ $1.42 \times 10^{-4}$
$ \mathbf{a}^{rz}_{F}$ $1.58 \times 10^{-2}$ $0$ $0$ $9.61\times 10^{-2}$ $7.88 \times 10^{-2}$
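Several of the $a_0$ entries in this table are fixed by the zero-shear values quoted earlier in the text ($J_1=0.45$ and $L_1=0.11$ for $\lambda=2.2$, and $\tau d_r K_2=0.056$ for the Fokker-Planck fit). A quick numerical cross-check (our sketch):

```python
# Sketch: cross-check the sigma -> 0 intercepts a_0 against values quoted in the text.
J1, L1, lam = 0.45, 0.11, 2.2
tau_dr_K2 = 0.056            # = tau*d_r*K_2, the FP vertical diffusion at zero shear

assert abs(J1/lam**2 - 9.30e-2) < 5e-4    # a_0 for a^rr_G and a^rr_F
assert abs(L1/lam - 5.00e-2) < 5e-4       # a_0 for a^zz_G
assert abs(tau_dr_K2 - 5.60e-2) < 5e-4    # a_0 for a^zz_F
```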
---
abstract: 'We introduce a two-parameter family of strongly-correlated wave functions for bosons and fermions in lattices. One parameter, $q$, is connected to the filling fraction. The other one, $\eta$, allows us to interpolate between the lattice limit ($\eta=1$) and the continuum limit ($\eta\to 0^+$) of families of states appearing in the context of the fractional quantum Hall effect or the Calogero-Sutherland model. We give evidence that the main physical properties along the interpolation remain the same. Finally, in the lattice limit, we derive parent Hamiltonians for those wave functions and in 1D, we determine part of the low energy spectrum.'
author:
- 'Hong-Hao Tu'
- 'Anne E. B. Nielsen'
- 'J. Ignacio Cirac'
- 'Germ[á]{}n Sierra'
title: 'Lattice Laughlin States of Bosons and Fermions at Filling Fractions $1/q$'
---
*Introduction.–* The fractional quantum Hall (FQH) effect has attracted a longstanding interest in physics. 2D electrons displaying such an effect form incompressible quantum liquids with a bulk gap, gapless edge states, and quasiparticle excitations with fractional charge and fractional statistics. Their properties are not amenable to the conventional Ginzburg-Landau theory; however, they can be thoroughly analyzed thanks to the discovery of analytical wave functions, which provide good approximations to some of the quantum states responsible for the FQH effect. An important family of such states is the Laughlin states [@Laughlin-1983] $$\label{eq:psiq}
\Psi_q (\{Z\})=\prod_{i<j}(Z_{i}-Z_{j})^{q}\exp \left(
-\sum_{l}|Z_{l}|^{2}/4\right),$$ where $Z_{i}$ is the position in the complex plane of the $i$th electron and $\nu =1/q$ is the filling fraction, i.e., the ratio between the number of electrons and the number of flux quanta. From a modern viewpoint, the Laughlin states belong to the so-called topological phases [@Wen-1990; @Wen-Niu-1990], an exotic class of gapped phases whose full classification is still an outstanding open problem.
In the FQH setups, the Laughlin states arise due to the strong interactions between the electrons in the fractionally filled lowest Landau level. In that case, the size of the electron wave packets is at least one order of magnitude larger than the lattice spacing and thus the lattice effects are usually negligible [@JainBook]. A natural question is whether Laughlin states (or their variants) can appear in lattice models without Landau levels. In the late eighties, Kalmeyer and Laughlin (KL) proposed a state [@KL1987; @KL1989; @Laughlin-1989] that is a lattice version of the bosonic Laughlin state with $q=2$. This state has been shown to share some of the most defining properties of its continuum counterpart, like the fractional statistics of quasiparticle excitations [@KLCSL] and the presence of chiral edge states [@XGWen-1991]. Thus, the continuum and lattice version of the bosonic Laughlin state with $q=2$ seem to be closely connected, although it is not clear what such a connection is. In [@Scaffidi], it has been shown that an interpolation Hamiltonian between a $q=2$ Laughlin-like lattice state and the continuum $q=2$ Laughlin state can be obtained by choosing bases that allow both states to be expressed in the same Hilbert space, although with different base kets. A more direct interpolation, in which the lattice spacing is continuously changed, has been considered in [@hafezi], but was found to be valid only for sufficiently small lattice filling factors. A similar situation is encountered in 1D, where the Calogero-Sutherland (CS) model [@Calogero-1969; @Sutherland-1971], which is defined in the continuum, seems to be closely related to the Haldane-Shastry lattice model [@Haldane-1988; @Shastry-1988], although it is not obvious how to transform one into the other.
A very useful description of FQH wave functions in the continuum has been introduced by Moore and Read in [@Moore-Read-1991], where they wrote selected FQH wave functions in terms of correlators of the corresponding edge conformal field theories (CFTs). Recently, for certain lattice systems in 1D and 2D, strongly correlated spin wave functions have also been written in terms of CFT correlators [@Ignacio-German-2010; @nsc-2011; @nsc-2012; @Tu-2013]. This, in particular, has made it possible to construct parent Hamiltonians and to build in a systematic form simple wave functions with topological properties. We note also that parent Hamiltonians of the KL state have been found in [@Schroeter-2007; @Thomale-2009; @Kapit-2010; @nsc-2012; @Bauer-2013; @nsc-2013].
In this Letter, we provide an explicit connection between the continuum Laughlin/CS states on the one side and a set of lattice Laughlin/CS states on the other for all filling factors $1/q$. We do this by introducing a family of *lattice* wave functions for hardcore bosons and fermions, which is defined on arbitrary lattices in 1D and 2D and allows us to continuously interpolate between the two limits. We also provide numerical evidence that the states remain within the same phase for all values of the interpolation parameter, so that the interpolation is meaningful. In 1D, we show that the states are critical and describe Tomonaga-Luttinger liquids (TLLs) with Luttinger parameter $K=1/q$, and in 2D we find that the states have topological entanglement entropy (TEE) $-\ln(q)/2$. The wave functions are constructed from conformal fields, and we use the CFT properties of the states to derive parent Hamiltonians for the wave functions in the lattice limit in both 1D and 2D and for general $q$. In 1D, the parent Hamiltonians are closely related to Haldane’s inverse-square model [@Haldane-1988], and we find that *part* of the spectrum is given by integer eigenvalues described by a simple formula.
*CFT wave functions.–* Let us consider a lattice with lattice sites at the positions $z_{j}$, $j=1,2,\ldots ,N$, in the complex plane. The local basis at site $j$ is labeled by $|n_{j}\rangle $, where $n_{j}\in
\{0,1\}$ is the number of particles at the site. The family of wave functions we propose (later on referred to as CFT states) take the form of the following chiral correlators of vertex operators: $$\Psi (n_{1},\ldots ,n_{N})\propto\langle V_{n_{1}}(z_{1})\ldots
V_{n_{N}}(z_{N})\rangle , \label{eq:iMPS}$$ where $$V_{n_{j}}(z_{j})=\chi_j^{n_j}e^{i\pi \sum_{k(<j)} \eta_kn_{j}}:e^{i(qn_{j}-\eta_j )\phi (z_{j})/\sqrt{q}}:. \label{eq:Vertex}$$ Here, $\phi (z)$ is a chiral bosonic field from the $c=1$ free-boson CFT, $:\ldots :$ denotes normal ordering, $\chi_j$ are phase factors that do not depend on $n_j$, $q$ is a positive integer, and $\eta_j$ are positive parameters with average $N^{-1}\sum_j\eta_j=\eta\in(0,1]$. The charge neutrality condition $\sum_i(qn_i-\eta_i)=0$ of the CFT correlators fixes the number of particles to $\sum_{i=1}^{N}n_{i}=\eta N/q\equiv M$, which must hence be an integer, and it follows that $\eta /q$ is the lattice filling fraction. $\eta$ is therefore the parameter that interpolates between the continuum limit ($\eta\to0^+$), with infinitely many lattice sites per particle, and the lattice limit ($\eta=1$), in which the lattice filling fraction $\eta/q$ equals the Laughlin filling fraction $1/q$. When varying $\eta$, we shall always take all $\eta_j$ to scale linearly with $\eta$, such that $\eta_j/\eta_l$ remain constant. Evaluating the vacuum expectation value of the product of vertex operators in (\[eq:iMPS\]) [@CFTbook] yields a Jastrow wave function $$\Psi (n_{1},\ldots ,n_{N})\propto \delta
_{n}\prod_{i<j}(z_{i}-z_{j})^{qn_{i}n_{j}}\prod_{l}f_N(z_{l})^{n_{l}},
\label{eq:Laughlin}$$ where $\delta _{n}=1$ if $\sum_{i=1}^{N}n_{i}=\eta N/q$ and zero otherwise and $f_N(z_{l})\equiv \chi_{l}\prod_{j(\neq l)}(z_{l}-z_{j})^{-\eta_j} =\chi_{l}\exp[-\sum_{j(\neq l)}\eta_j\ln(z_{l}-z_{j})]$.
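In the lattice limit $\eta_j=1$, the Jastrow form (\[eq:Laughlin\]) is simple enough to evaluate numerically. The following sketch is our illustration (with the gauge phases $\chi_l$ set to unity); it returns the unnormalized amplitude and enforces the charge-neutrality constraint $\sum_i n_i=N/q$.

```python
# Sketch: evaluate the (unnormalized) CFT wave function of Eq. (eq:Laughlin)
# in the lattice limit eta_j = 1, with all gauge phases chi_l = 1.
import numpy as np
from itertools import combinations

def psi(n, z, q):
    """n: 0/1 occupations; z: complex site positions; returns unnormalized amplitude."""
    n = np.asarray(n)
    z = np.asarray(z, dtype=complex)
    N = len(z)
    if n.sum() * q != N:                 # delta_n: M = eta*N/q with eta = 1
        return 0.0
    occ = np.flatnonzero(n)
    amp = 1.0 + 0.0j
    for i, j in combinations(occ, 2):    # prod_{i<j} (z_i - z_j)^{q n_i n_j}
        amp *= (z[i] - z[j])**q
    for l in occ:                        # f_N(z_l) = prod_{j != l} (z_l - z_j)^{-1}
        amp *= np.prod(1.0/(z[l] - np.delete(z, l)))
    return amp

# Example: 1D uniform lattice z_j = exp(2*pi*i*j/N) with N = 6 sites and q = 2,
# so the particle number is fixed to M = N/q = 3.
N, q = 6, 2
z = np.exp(2j*np.pi*np.arange(N)/N)
```

Configurations with the wrong particle number give zero amplitude, and on the uniform circular lattice $|\Psi|$ is invariant under rotating an occupation pattern by one site, as expected from the symmetry of the lattice.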
![(Color online) Illustration of the interpolation between the lattice limit ($\eta=1$) and the continuum limit ($\eta\to0^+$) for a uniform lattice in 1D and a square lattice in 2D. The interpolation is done, while keeping the area per particle $aN/M$ fixed, where $a$ is the average area per site. (a) In 1D, the lattice is defined by $z_j=e^{2\pi ij/N}$, which fixes the area of site $j$ to $a_j=2\pi/N$ $\forall j$, so that $a\equiv N^{-1}\sum_ja_j=2\pi/N$. The scaling parameter is therefore $\eta=qM/N=qMa/(2\pi)$. (b) In 2D, the lattice is defined on a disk with radius $R_\mathcal{D}\to\infty$, and we choose $a=2\pi qM/N$, since this fixes the area per particle to $2\pi q$ as in the Laughlin wave functions. The scaling parameter is therefore $\eta=qM/N=a/(2\pi)$. Transformations between different lattices, including the two displayed on the right, is obtained by transforming $z_j$.[]{data-label="fig:lattice"}](lattice){width="\columnwidth"}
*Relation to the CS and Laughlin wave functions.–* Let us demonstrate how the CFT states are related to several familiar wave functions in the continuum. We first consider the 1D periodic chain, where the lattice sites are uniformly distributed on a unit circle, i.e., $z_{j}=e^{2\pi ij/N}$, and we choose $\eta_j=\eta$ $\forall j$. In this case, we obtain analytically that $f_N(z_{l})\propto \chi_lz_{l}^\eta$, and we can therefore write the state (\[eq:Laughlin\]) as a product of the wave function $\Psi_{\textrm{CS}}\propto \delta_{n}\prod_{i<j}(z_{i}-z_{j})^{qn_{i}n_{j}} \prod_{l}z_{l}^{-q(M-1)n_{l}/2}$ and the gauge factor $\prod_{l}(\chi_l z_{l}^{\eta+q(M-1)/2})^{n_l}$. In the continuum limit, where $N\to\infty$, $\eta \to 0^{+}$, and $\eta N$ stays fixed to keep the number of particles $M$ and the area of the lattice constant (see Fig. \[fig:lattice\](a)), the lattice spacing goes to zero, and $\Psi_{\textrm{CS}}$ turns into the ground-state wave function of the CS model [@Calogero-1969; @Sutherland-1971] for bosons (even $q$) and fermions (odd $q$). The gauge factor can be set to unity by choosing $\chi_l=z_{l}^{-\eta-q(M-1)/2}$ if we like, but we note that its presence does not affect properties such as the particle-particle correlation function and the entanglement entropy. The CFT states thus allow us to define a lattice version of the CS wave functions and to interpolate between the lattice and the continuum limit of the model.
We next consider an arbitrary lattice in 2D, which is defined on a disk $\mathcal{D}$ of radius $R_\mathcal{D}\to\infty$. We define the area $a_j$ of site $j$ to be the area of the region consisting of all points in $\mathcal{D}$ that are closer to $z_j$ than to any of the other lattice sites. Let us note that $|f_N(z_{l})|=\exp[-\sum_{j(\neq l)}\eta_j\ln(|z_{l}-z_{j}|)]$. If we choose $\eta_j=a_j/(2\pi)$ and consider the continuum limit $\eta\to0^+$ (as illustrated for a square lattice in Fig. \[fig:lattice\](b)), we can replace the sum over $j$ by the integral $\int_\mathcal{D}d^2z \ln(|z_l-z|)/(2\pi)$. In the thermodynamic limit $R_\mathcal{D}\to\infty$ this integral evaluates to $|z_l|^2/4+\textrm{constant}$, where the constant does not depend on $z_l$. Note, however, that $\sum_{j(\neq l)}\eta_j\ln(|z_{l}-z_{j}|)$ and $\kappa^{-2}\sum_{j(\neq l)}\kappa^2\eta_j\ln(|\kappa z_{l}-\kappa z_{j}|)$, where $\kappa$ is a positive constant, only differ by a $z_l$-independent constant for $R_\mathcal{D}\to\infty$. If $\eta_j$ is not small, we can choose $\kappa$ very small, transform the resulting sum into an integral, and again conclude that $\sum_{j(\neq l)}\eta_j\ln(|z_{l}-z_{j}|) =|z_l|^2/4+\textrm{constant}$. For all 2D lattices in the thermodynamic limit, we therefore obtain $$f_N(z_l)\propto \chi_l e^{-ig_l}e^{-|z_l|^{2}/4} \quad (N\textrm{ large}), \label{eq:fz}$$ where $g_l\equiv\operatorname{Im}[\sum_{j(\neq l)}\eta_j\ln(z_{l}-z_{j})]$ is a real number. In Fig. \[fig:fz\], we find numerically for different lattices that (\[eq:fz\]) is an accurate approximation even if $N$ is only moderately large. Choosing $\chi_l=e^{ig_l}$ and inserting (\[eq:fz\]) into (\[eq:Laughlin\]), we observe that the CFT states coincide with the Laughlin states (\[eq:psiq\]), except that the possible particle positions are restricted to the coordinates of the lattice sites. By changing the number of lattice sites per particle, we can thus interpolate between the Laughlin states in the continuum and Laughlin-like states on lattices.
![(Color online) Numerical demonstration that is approximately valid even for a moderate number of lattice sites $N$ for the square (a), the triangular (b), and the hexagonal (c) lattice with a circular edge. $x=|z_j|^2/4$, $y=-\ln[|f_N(z_j)|]+\textrm{constant}$, and the black lines in the background are the curve $y=x$.[]{data-label="fig:fz"}](fz){width="\columnwidth"}
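The check reported in Fig. \[fig:fz\] is easy to reproduce. The NumPy sketch below is our illustration for a square lattice filling a disk (the spacing and radii are arbitrary choices): it computes $y_l=\sum_{j(\neq l)}\eta_j\ln(|z_l-z_j|)$ for interior sites and fits it against $x_l=|z_l|^2/4$, for which the slope should be close to one.

```python
# Sketch: on a square lattice restricted to a disk, verify that
# sum_{j != l} eta_j ln|z_l - z_j| ~ |z_l|^2/4 + const for interior points.
import numpy as np

s, R = 0.5, 12.0                       # lattice spacing and disk radius (arbitrary)
coords = np.arange(-R, R + s/2, s)
X, Y = np.meshgrid(coords, coords)
z = (X + 1j*Y).ravel()
z = z[np.abs(z) <= R]                  # square lattice restricted to a disk
eta = s**2/(2*np.pi)                   # eta_j = a_j/(2*pi), with a_j = s^2 in the bulk

sample = z[np.abs(z) <= 5.0]           # interior points, away from the edge
y = np.empty(len(sample))
for k, zl in enumerate(sample):
    d = np.abs(zl - z)
    y[k] = eta*np.sum(np.log(d[d > 1e-12]))   # omit the j = l self-term
x = np.abs(sample)**2/4
slope = np.polyfit(x, y, 1)[0]         # expect slope ~ 1, i.e. y ~ |z|^2/4 + const
```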
*Continuous interpolation.–* We next demonstrate that important properties of the states stay the same as a function of the interpolation parameter, which indicates that the states remain within the same phase when interpolated between the lattice limit and the continuum limit. We first consider the uniform lattice in 1D and show that is well-described by the TLL theory in this case. The Rényi entropy $S_{L}^{(\alpha )}=\ln (\operatorname{Tr}(\rho _{L}^{\alpha }))/(1-\alpha )$ of a TLL, where $\rho _{L}$ is the reduced density operator of $L$ successive sites in the chain, is expected to be [@Calabrese-2010] $$S_{L}^{(\alpha )}=S_{L,\text{CFT}}^{(\alpha )}+\frac{f_{\alpha }\cos (2Lk_\textrm{F})}{|2\sin (k_\textrm{F})\sin (\pi L/N)N/\pi |^{2K/\alpha }} \label{eq:SL}$$ for $\ln (|2\sin (k_\textrm{F})\sin (\pi L/N)N/\pi |)\gg \alpha $, where $K$ is the Luttinger parameter, $k_\textrm{F}=\eta\pi/q$ is the Fermi momentum, $$S_{L,\text{CFT}}^{(\alpha )}=(c/6)(1+1/\alpha )\ln (\sin (\pi L/N)N/\pi
)+c_{\alpha }^{\prime }, \label{eq:SLCFT}$$ $c$ is the central charge, and $f_{\alpha }$ and $c_{\alpha }^{\prime }$ are nonuniversal constants. Fixing $c=1$ and using $f_{\alpha }$, $K$, and $c_{\alpha }^{\prime }$ as fitting parameters, we find that the entanglement entropy of (\[eq:Laughlin\]), indeed, follows (\[eq:SL\]) as illustrated for $\eta=1$ in Fig. \[fig:entcor\](a). The expected TLL behavior of the particle-particle correlation function $C(k)=\langle n_{i}n_{i+k}\rangle-\langle n_{i}\rangle \langle n_{i+k}\rangle $ is [@Cabra-2004] $$C(k)=\frac{A\cos (2kk_\textrm{F})}{|\sin (\pi k/N)N/\pi |^{2K}}+\frac{K}{2\pi
^{2}|\sin (\pi k/N)N/\pi |^{2}} \label{eq:cor}$$ for large $k$, where $A$ is a nonuniversal constant, and we find that this expression provides a good fit as illustrated for $\eta=1$ in Fig. \[fig:entcor\](b). The values of $K$ extracted from the entropy and correlation function computations are shown as a function of the interpolation parameter in Fig. \[fig:entcor\](c), and these results suggest that $K=1/q$ independent of $\eta$. We note that the observed behavior coincides with the properties of the free boson CFT with radius $R=\sqrt{q}$, which is the low-energy effective theory for the Calogero-Sutherland model with rational coupling constant $q$ [@Kawakami-1991].
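The fit of Eq. (\[eq:cor\]) is straightforward to reproduce. The sketch below generates synthetic, noise-free data directly from the TLL form (with illustrative values $N=100$, $k_\textrm{F}=\pi/3$, $A=0.05$, $K=1/3$; these are not the paper's Monte Carlo data) and checks that a least-squares fit recovers the Luttinger parameter:

```python
import numpy as np

N, kF = 100, np.pi / 3                  # illustrative: q = 3, eta = 1
k = np.arange(6, 51).astype(float)      # drop short distances, as in the text
chord = np.abs(np.sin(np.pi * k / N) * N / np.pi)

def model(A, K):
    """TLL form of the density-density correlator, Eq. (cor)."""
    return A * np.cos(2 * k * kF) / chord**(2 * K) + K / (2 * np.pi**2 * chord**2)

A_true, K_true = 0.05, 1.0 / 3.0        # expect K = 1/q
data = model(A_true, K_true)            # synthetic, noise-free "measurements"

# for each trial K the model is linear in A, so A is fitted in closed form
# and only K is scanned for the smallest residual
best = (np.inf, None, None)
for K in np.linspace(0.2, 0.6, 801):
    f = np.cos(2 * k * kF) / chord**(2 * K)
    A = np.dot(data - K / (2 * np.pi**2 * chord**2), f) / np.dot(f, f)
    res = np.linalg.norm(model(A, K) - data)
    if res < best[0]:
        best = (res, A, K)

_, A_fit, K_fit = best
print(A_fit, K_fit)                     # close to 0.05 and 1/3
```

Because the scanned $K$ also fixes the coefficient of the second term, the residual has a unique minimum at the true Luttinger parameter.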
![(Color online) (a) Deviation of the Rényi entropy with index $\protect\alpha=2$ of a block of $L$ consecutive sites from the lowest order CFT expression and (b) particle-particle correlation function of the CFT state for a uniform 1D lattice in the lattice limit for $q=3$ (top) and $q=4$ (bottom) obtained from Monte Carlo simulations. The fits are based on Eqs. (\[eq:SL\]) and (\[eq:cor\]), respectively, and allow us to extract the Luttinger parameter $K$, which is shown for $M=50$ as a function of the interpolation parameter $\eta$ in inset (c) \[‘Ent’ (‘Cor’) means extracted from the entropy (correlator) fit\]. Since (\[eq:SL\]) and (\[eq:cor\]) are valid for large $L$ and $k$, respectively, we exclude the first $2q/\eta$ points when computing the fits.[]{data-label="fig:entcor"}](corent){width="\columnwidth"}
The Laughlin states in the continuum are topological states with TEE $-\ln(q)/2$, and in Fig. \[fig:tee\] we find that this value remains unchanged when interpolating the state to the lattice limit. The TEE $-\gamma $ is computed by mapping the state on an $R\times L$ square lattice to the cylinder, cutting the cylinder in two halves, computing the Rényi entropy of one of the halves as a function of the number of sites $L$ along the cut, and utilizing that the Rényi entropy follows the behavior $S_{L}^{(2)}=\xi L-\gamma $ for large $R$ and $L$, where $\xi$ is a nonuniversal constant [@Jiang-2012]. The mapping to the cylinder amounts to choosing $z_{j}=\exp (2\pi (r_{j}+il_{j})/L)$, where $r_{j}\in \{-R/2+1/2,-R/2+3/2,\ldots ,R/2-1/2\}$ and $l_{j}\in \{1,2,\ldots,L\}$. The CFT states in the lattice limit are therefore continuously connected to the Laughlin states in the continuum.
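The TEE extraction described above is, at the last step, a linear fit $S_{L}^{(2)}=\xi L-\gamma$. As a minimal sketch (with an invented slope $\xi$ and exact, noise-free synthetic entropies in place of the Monte Carlo data):

```python
import numpy as np

q = 3
xi_true = 0.4                      # invented nonuniversal slope
gamma_true = np.log(q) / 2         # TEE of the continuum Laughlin state is -ln(q)/2
L = np.arange(4, 21)
S = xi_true * L - gamma_true       # synthetic S_L^(2); real data carry error bars

slope, intercept = np.polyfit(L, S, 1)   # unweighted least-squares line
gamma_fit = -intercept
print(-gamma_fit)                  # estimated TEE, -ln(3)/2 here
```

In the actual computation each point is weighted by its Monte Carlo error bar, as stated in the caption of Fig. \[fig:tee\]; the unweighted fit above is the noise-free limit of that procedure.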
![(Color online) Rényi entropy of the CFT state with $q=3$ (left) and $q=4$ (right) obtained from Monte Carlo simulations. The state is defined on an $R\times L$ square lattice on the cylinder, and the cut divides it into two $R/2\times L$ lattices, where $L$ is the number of lattice sites in the periodic direction. The fits are of the form $S_{L}^{(2)}=\xi L-\protect\gamma$, where $\xi$ and $\protect\gamma$ are fitting parameters, and are weighted so that points with larger error bars count less. Starting from above, $\protect\eta$ and $R$ are, respectively, $\protect\eta=1,0.694,0.391,0.25,0.111$ and $R=10,12,16,20,30$ for the five data sets, and the number of particles is $M=\eta RL/q$. The TEE $-\protect\gamma $ is seen to be independent of $\protect\eta $. The insets are enlarged views, and the red arrows point at the value $-\ln (q)/2$, which is the TEE of the Laughlin states in the continuum.[]{data-label="fig:tee"}](TEE){width="\columnwidth"}
*Parent Hamiltonian.–* For $\eta_j=1$ $\forall j$, the vertex operators constructing the wave functions (\[eq:iMPS\]) can be identified as primary fields of a free-boson CFT compactified on a circle of radius $R=\sqrt{q}$. For $q=2$, the CFT is the SU(2)$_{1}$ Wess-Zumino-Witten (WZW) model. For $q=3$, the CFT has a hidden supersymmetry and can be identified as the $\mathcal{N}=2$ superconformal field theory [@Moore-Read-1991]. For integer $q$, the rationality of these CFTs ensures the existence of null fields. This is very useful, because null fields can be used for deriving parent Hamiltonians as demonstrated for the case of WZW models in [@nsc-2011]. Here, we identify a suitable set of null fields from which we derive decoupling equations. After some algebra [@SuppMat], this procedure gives us a set of operators, which annihilate the wave functions (\[eq:iMPS\]) at $\eta_j=1$. These operators include $\Upsilon =\sum_{i=1}^{N}\tilde{d}_{i}$, where $\tilde{d}_{i}=\chi_i^{-1}d_i$ and $d_i$ denotes the fermionic (hardcore bosonic) annihilation operator for odd (even) $q$, and $$\Lambda _{i}=(q-2)\tilde{d}_{i}+\sum_{j(\neq i)}w_{ij}[\tilde{d}_{j}-\tilde{d}_{i}(qn_{j}-1)],
\label{eq:annihilator}$$ where $w_{ij}\equiv (z_{i}+z_{j})/(z_{i}-z_{j})$. Since $\Upsilon|\Psi\rangle=\Lambda _{i}|\Psi\rangle=0$ $\forall i$, the positive semi-definite Hermitian operators $\Upsilon^\dagger\Upsilon$ and $\Lambda _{i}^{\dagger }\Lambda _{i}$ ($i=1,\ldots ,N$) have the wave functions (\[eq:iMPS\]) with $\eta_j=1$ and $z_j$ arbitrary as their zero-energy ground states. Thus, these operators can be used to construct both 1D and 2D parent Hamiltonians for which the wave functions (\[eq:iMPS\]) with $\eta_j=1$ are exact ground states. For the states with $\eta_j \neq 1$, we have not succeeded in constructing parent Hamiltonians; this remains an interesting open problem.
In the following, we focus on a 1D parent Hamiltonian obtained for $z_j=e^{2\pi ij/N}$, which turns out to have a particularly simple form. Specifically, we consider $H_{\mathrm{1D}}=\frac{1}{2}\sum_{i}(\Lambda_{i}^{\dagger }\Lambda _{i}-q\Gamma _{i}^{\dagger }\Gamma _{i})+\frac{q-2}{2}
\Upsilon ^{\dagger }\Upsilon +E_{0}$, where $\Gamma _{i}=\tilde{d}_{i}\Lambda _{i}=\sum_{j(\neq i)}w_{ij}\tilde{d}_{i}\tilde{d}_{j}$ and $E_{0}=-\frac{q-1}{6q}N[3N+(q-8)]$ is the eigenenergy of (\[eq:Laughlin\]). This choice yields a parent Hamiltonian with purely two-body interactions $$H_{\mathrm{1D}}=\sum_{i\neq j}
[(q-2)w_{ij}-w_{ij}^{2}]\tilde{d}_{i}^{\dagger }\tilde{d}_{j}-
\frac{q(q-1)}{2}\sum_{i\neq j}w_{ij}^{2}n_{i}n_{j}.
\label{eq:1DHamiltonian}$$ While the $q=2$ Hamiltonian recovers the spin-1/2 Haldane-Shastry model [@Haldane-1988; @Shastry-1988], the Hamiltonians with $q\geq 3$ differ from Haldane’s inverse-square Hamiltonians [@Haldane-1988] by an extra hopping term. By diagonalizing the Hamiltonian (\[eq:1DHamiltonian\]) numerically for small $N$, we confirm that the wave functions (\[eq:iMPS\]) are indeed their unique ground states. Additionally, we observe that $H_{\mathrm{1D}}$ always has integer eigenvalues besides non-integer ones, an interesting feature already arising in Haldane’s model [@Haldane-1988]. Motivated by Haldane’s results, we have found that, after subtracting a constant, *part of* the integer eigenvalues take the form $E=\sum_{\{m_{k}\}}2m_{k}(m_{k}+q-2-N)$, where $\{m_{k}\}$ is a set of $M$ pseudomomenta ($M$: number of particles) satisfying $m_{k}\in \lbrack 0,N-1]$ and $m_{k+1}\geq m_{k}+q$. This formula captures the essential low-lying part of the energy spectrum. As in Haldane’s model, one can prove analytically that the Jastrow wave functions $\Psi _{\mathrm{1D}}^{J}(n_{1},\ldots,n_{N}) =\delta_n\prod_{i<j}(z_{i}-z_{j})^{qn_{i}n_{j}}\prod_{l}(\chi_lz_{l}^J)^{n_{l}}$, where $\delta_n=1$ for $\sum_in_i=M$ and zero otherwise and $1\leq J\leq N-q(M-1)-1$, are exact eigenstates of (\[eq:1DHamiltonian\]) and are a subclass of those eigenstates with integer eigenvalues.
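The small-$N$ diagonalization check is easy to reproduce. The sketch below builds the two-body Hamiltonian (\[eq:1DHamiltonian\]) for $q=4$ (even $q$, hence hardcore bosons and no fermionic sign strings) on $N=8$ sites with $\chi_j=1$, restricts to the $\nu=1/q$ sector, and compares the lowest eigenvalue with $E_0$; the system size and the choice $q=4$ are illustrative.

```python
import numpy as np
from functools import reduce

N, q = 8, 4                      # even q: hardcore bosons, no Jordan-Wigner strings
z = np.exp(2j * np.pi * np.arange(1, N + 1) / N)

d1 = np.array([[0, 0], [1, 0]], complex)   # single-site annihilation operator
id2 = np.eye(2)

def site_op(op, i):
    """Embed a 2x2 operator at site i in the full 2^N-dimensional space."""
    ops = [id2] * N
    ops[i] = op
    return reduce(np.kron, ops)

d = [site_op(d1, i) for i in range(N)]
n = [di.conj().T @ di for di in d]

w = np.array([[(z[i] + z[j]) / (z[i] - z[j]) if i != j else 0
               for j in range(N)] for i in range(N)])

# two-body form of H_1D, Eq. (1DHamiltonian)
H = np.zeros((2**N, 2**N), complex)
for i in range(N):
    for j in range(N):
        if i != j:
            H += ((q - 2) * w[i, j] - w[i, j]**2) * (d[i].conj().T @ d[j])
            H -= 0.5 * q * (q - 1) * w[i, j]**2 * (n[i] @ n[j])

# restrict to the nu = 1/q sector with M = N/q particles
M = N // q
occ = np.isclose(np.diag(sum(n)).real, M)
Hs = H[np.ix_(occ, occ)]
eigs = np.linalg.eigvalsh(Hs)

E0 = -(q - 1) / (6 * q) * N * (3 * N + (q - 8))   # analytic eigenenergy (-20 here)
print(eigs[0], E0)
```

With these parameters the lowest eigenvalue in the sector reproduces $E_0=-20$, consistent with the statement that the lattice Laughlin state is the unique ground state; since $\tilde{d}_j=\chi_j^{-1}d_j$ is a gauge transformation for $|\chi_j|=1$, setting $\chi_j=1$ does not change the spectrum.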
*Conclusion.–* The present work combines several known models into a common framework with an underlying CFT structure and shows how the Laughlin states and the CS wave functions can be continuously transformed into lattice wave functions with similar properties. The CFT structure provides useful tools for deriving properties of the states analytically, and, in particular, enables us to derive parent Hamiltonians of the states in the lattice limit. Analytical wave functions play an important role in the investigation of the FQH effect in the continuum, and the model proposed here may similarly be used for analyzing FQH properties in lattice systems. Our present work also provides a method to discretize continuum FQH states in a way that is amenable to a projected entangled-pair state description [@PEPS], and thus offers an alternative to the recently introduced approach based on infinite matrix product states using discrete Landau level orbitals [@Zaletel-2012; @Estienne-2013a; @Estienne-2013b].
*Acknowledgment.–* We thank the Benasque Center of Sciences, where part of this work has been done, for their hospitality. This work has been supported by the EU project SIQS, the DFG cluster of excellence NIM, FIS2012-33642, QUITEMAD (CAM), and the Severo Ochoa Program.
[99]{}
R. B. Laughlin, Phys. Rev. Lett. **50**, 1395 (1983).
X.-G. Wen, Int. J. Mod. Phys. B **4**, 239 (1990).
X.-G. Wen and Q. Niu, Phys. Rev. B **41**, 9377 (1990).
J. K. Jain, *Composite Fermions* (Cambridge University Press, Cambridge, 2007).
V. Kalmeyer and R. B. Laughlin, Phys. Rev. Lett. **59**, 2095 (1987).
V. Kalmeyer and R. B. Laughlin, Phys. Rev. B **39**, 11879 (1989).
R. B. Laughlin, Ann. Phys. **191**, 163 (1989).
R. B. Laughlin and Z. Zou, Phys. Rev. B **41**, 664 (1990).
X.-G. Wen, Phys. Rev. B **43**, 11025 (1991).
T. Scaffidi and G. M[ö]{}ller, Phys. Rev. Lett. **109**, 246805 (2012).
M. Hafezi, A. S. S[ø]{}rensen, E. Demler, and M. D. Lukin, Phys. Rev. A **76**, 023613 (2007).
F. Calogero, J. Math. Phys. **10**, 2197 (1969).
B. Sutherland, J. Math. Phys. **12**, 246 (1971).
F. D. M. Haldane, Phys. Rev. Lett. **60**, 635 (1988).
B. S. Shastry, Phys. Rev. Lett. **60**, 639 (1988).
G. Moore and N. Read, Nucl. Phys. B **360**, 362 (1991).
J. I. Cirac and G. Sierra, Phys. Rev. B **81**, 104431 (2010).
A. E. B. Nielsen, J. I. Cirac, and G. Sierra, J. Stat. Mech. (2011) P11014.
A. E. B. Nielsen, J. I. Cirac, and G. Sierra, Phys. Rev. Lett. **108**, 257206 (2012).
H.-H. Tu, Phys. Rev. B **87**, 041103(R) (2013).
D. F. Schroeter, E. Kapit, R. Thomale, and M. Greiter, Phys. Rev. Lett. **99**, 097202 (2007).
R. Thomale, E. Kapit, D. F. Schroeter, and M. Greiter, Phys. Rev. B **80**, 104406 (2009).
E. Kapit and E. Mueller, Phys. Rev. Lett. **105**, 215303 (2010).
B. Bauer, B. P. Keller, M. Dolfi, S. Trebst, and A. W. W. Ludwig, arXiv:1303.6963.
A. E. B. Nielsen, G. Sierra, and J. I. Cirac, arXiv:1304.0717.
P. Di Francesco, P. Mathieu, and D. Sénéchal, *Conformal Field Theory* (Springer, New York, 1997).
P. Calabrese, M. Campostrini, F. Essler, and B. Nienhuis, Phys. Rev. Lett. **104**, 095701 (2010).
D. C. Cabra and P. Pujol, Lect. Notes Phys. **645**, 253 (2004).
N. Kawakami and S.-K. Yang, Phys. Rev. Lett. **67**, 2493 (1991).
H.-C. Jiang, Z. Wang, and L. Balents, Nature Phys. **8**, 902 (2012).
See Supplemental Material for the derivation of parent Hamiltonians.
F. Verstraete and J. I. Cirac, cond-mat/0407066; Phys. Rev. A **70**, 060302 (2004).
M. P. Zaletel and R. S. K. Mong, Phys. Rev. B **86**, 245305 (2012).
B. Estienne, Z. Papi[ć]{}, N. Regnault, and B. A. Bernevig, Phys. Rev. B **87**, 161112(R) (2013).
B. Estienne, N. Regnault, and B. A. Bernevig, arXiv:1311.2936.
**Supplemental material**
Operators annihilating the lattice Laughlin states
==================================================
In this section, we derive operators that annihilate the state (2) in the main text for $\eta=1$. We first assume $\chi_j=1$ and consider the CFT wave functions defined by$$\Psi _{n_{1},\ldots ,n_{N}}(z_{1},\ldots ,z_{N})=\langle
V_{n_{1}}(z_{1})V_{n_{2}}(z_{2})\cdots V_{n_{N}}(z_{N})\rangle ,
\label{eq:Laughlin}$$where$$V_{n_{j}=1}(z_{j})=e^{i\pi (j-1)}V_{+}(z_{j})\text{ \ \ \ \ \ }V_{n_{j}=0}(z_{j})=V_{-}(z_{j}).$$Here $V_{+}(z)=e^{i(q-1)\phi (z)/\sqrt{q}}$ and $V_{-}(z)=e^{-i\phi (z)/\sqrt{q}}$.
For the $c=1$ free-boson CFT with compactification radius $R=\sqrt{q}$, it is convenient to define two chiral currents,$$G^{\pm }(z)=e^{\pm i\sqrt{q}\phi (z)},$$besides the U(1) current $J(z)=\frac{i}{\sqrt{q}}\partial \phi (z)$. For $q=2 $, these currents form the SU(2)$_{1}$ Kac-Moody algebra. For $q=3$, together with the energy-momentum tensor, the currents form the $\mathcal{N}=2$ superconformal current algebra.
To construct the parent Hamiltonian of (\[eq:Laughlin\]), we need to derive decoupling equations satisfied by the CFT correlator (\[eq:Laughlin\]) using null fields. Let us first consider the null field$$\begin{aligned}
\chi _{1}(w) &=&\oint_{w}\frac{dz}{2\pi i}\frac{1}{z-w}[G^{+}(z)V_{-}(w)-qJ(z)V_{+}(w)] \notag \\
&=&\oint_{w}\frac{dz}{2\pi i}\frac{1}{z-w}[e^{i\sqrt{q}\phi (z)}e^{-i\phi
(w)/\sqrt{q}}-\sqrt{q}i\partial \phi (z)e^{i(q-1)\phi (w)/\sqrt{q}}] \notag
\\
&=&\oint_{w}\frac{dz}{2\pi i}\frac{1}{z-w}[\frac{1}{z-w}e^{i\sqrt{q}\phi
(z)-i\phi (w)/\sqrt{q}}-\sqrt{q}i\partial \phi (w)e^{i(q-1)\phi (w)/\sqrt{q}}] \notag \\
&=&\oint_{w}\frac{dz}{2\pi i}\frac{1}{z-w}[\sqrt{q}i\partial \phi
(w)e^{i(q-1)\phi (w)/\sqrt{q}}-\sqrt{q}i\partial \phi (w)e^{i(q-1)\phi (w)/\sqrt{q}}] \notag \\
&=&0.\end{aligned}$$By replacing the vertex operator at site $i$ by the null field $\chi
_{1}(z_{i})$, the chiral correlator vanishes$$\begin{aligned}
0 &=&\langle V_{n_{1}}(z_{1})\cdots \chi _{1}(z_{i})\cdots
V_{n_{N}}(z_{N})\rangle \\
&=&\oint_{z_{i}}\frac{dz}{2\pi i}\frac{1}{z-z_{i}}\langle
V_{n_{1}}(z_{1})\cdots \lbrack G^{+}(z)V_{-}(z_{i})-qJ(z)V_{+}(z_{i})]\cdots
V_{n_{N}}(z_{N})\rangle \\
&=&-\sum_{j=1(\neq i)}^{N}\oint_{z_{j}}\frac{dz}{2\pi i}\frac{1}{z-z_{i}}\langle V_{n_{1}}(z_{1})\cdots \lbrack
G^{+}(z)V_{-}(z_{i})-qJ(z)V_{+}(z_{i})]\cdots V_{n_{N}}(z_{N})\rangle ,\end{aligned}$$where we have deformed the integral contour in the last step. To proceed we use the operator product expansions (OPEs)$$\begin{aligned}
G^{+}(z)V_{n}(w) &\sim &\frac{\sum_{n^{\prime }}(d)_{nn^{\prime }}}{z-w}V_{n^{\prime }}(w), \\
J(z)V_{n}(w) &\sim &\frac{1}{q}\frac{\sum_{n^{\prime }}(qd^{\dagger
}d-1)_{nn^{\prime }}}{z-w}V_{n^{\prime }}(w),\end{aligned}$$where the particle annihilation and creation operators are defined as $d=\begin{pmatrix}
0 & 0 \\
1 & 0\end{pmatrix}$ and $d^{\dagger }=\begin{pmatrix}
0 & 1 \\
0 & 0\end{pmatrix}$, respectively. Applying the OPEs, the chiral correlator with null field $\chi _{1}(z_{i})$ yields the following decoupling equation:$$\begin{aligned}
0 &=&\langle V_{n_{1}}(z_{1})\cdots \chi _{1}(z_{i})\cdots
V_{n_{N}}(z_{N})\rangle \notag \\
&=&-\sum_{j=1(\neq i)}^{N}\oint_{z_{j}}\frac{dz}{2\pi i}\frac{1}{z-z_{i}}\langle V_{n_{1}}(z_{1})\cdots \lbrack
G^{+}(z)V_{-}(z_{i})-qJ(z)V_{+}(z_{i})]\cdots V_{n_{N}}(z_{N})\rangle
\notag \\
&=&-\sum_{j=1(\neq i)}^{N}\oint_{z_{j}}\frac{dz}{2\pi i}\frac{1}{z-z_{i}}\frac{\sum_{n_{j}^{\prime }}(d)_{n_{j}n_{j}^{\prime }}}{z-z_{j}}\langle
V_{n_{1}}(z_{1})\cdots V_{n_{j}^{\prime }}(z_{j})\cdots V_{-}(z_{i})\cdots
V_{n_{N}}(z_{N})\rangle \notag \\
&&+\sum_{j=1(\neq i)}^{N}\oint_{z_{j}}\frac{dz}{2\pi i}\frac{1}{z-z_{i}}\frac{\sum_{n_{j}^{\prime }}(qd^{\dagger }d-1)_{n_{j}n_{j}^{\prime }}}{z-z_{j}}\langle V_{n_{1}}(z_{1})\cdots V_{n_{j}^{\prime }}(z_{j})\cdots
V_{+}(z_{i})\cdots V_{n_{N}}(z_{N})\rangle \notag \\
&=&\sum_{j=1(\neq i)}^{N}\frac{1}{z_{i}-z_{j}}\sum_{n_{j}^{\prime
}}(d)_{n_{j}n_{j}^{\prime }}\langle V_{n_{1}}(z_{1})\cdots V_{n_{j}^{\prime
}}(z_{j})\cdots V_{-}(z_{i})\cdots V_{n_{N}}(z_{N})\rangle \notag \\
&&-\sum_{j=1(\neq i)}^{N}\frac{1}{z_{i}-z_{j}}\sum_{n_{j}^{\prime
}}(qd^{\dagger }d-1)_{n_{j}n_{j}^{\prime }}\langle V_{n_{1}}(z_{1})\cdots
V_{n_{j}^{\prime }}(z_{j})\cdots V_{+}(z_{i})\cdots V_{n_{N}}(z_{N})\rangle .\end{aligned}$$Based on the above decoupling equation, we obtain an operator $\Lambda
_{i}^{\prime }$$$\Lambda _{i}^{\prime }=\sum_{j=1(\neq i)}^{N}\frac{1}{z_{i}-z_{j}}[d_{i}^{\dagger }d_{j}-n_{i}(qn_{j}-1)],$$where $n_{j}=d_{j}^{\dagger }d_{j}$, and which annihilates the wave function (\[eq:Laughlin\]), i.e., $\Lambda _{i}^{\prime }|\Psi \rangle =0$ $\forall
i=1,\ldots ,N$.
Similarly, decoupling equations can be derived from another two null fields$$\begin{aligned}
\chi _{2}(w) &=&\oint_{w}\frac{dz}{2\pi i}\frac{1}{z-w}G^{+}(z)V_{+}(w)=0, \\
\chi _{3}(w) &=&\oint_{w}\frac{dz}{2\pi i}G^{+}(z)V_{+}(w)=0,\end{aligned}$$and we obtain two additional operators annihilating the wave function (\[eq:Laughlin\])$$\begin{aligned}
\Lambda _{i}^{\prime \prime } &=&\sum_{j=1(\neq i)}^{N}\frac{1}{z_{i}-z_{j}}n_{i}d_{j}, \\
\Upsilon &=&\sum_{i=1}^{N}d_{i}.\end{aligned}$$These operators can be combined into new operators annihilating (\[eq:Laughlin\]) $$\begin{aligned}
d_{i}\Lambda _{i}^{\prime }+\Lambda _{i}^{\prime \prime } &=&\sum_{j=1(\neq
i)}^{N}\frac{1}{z_{i}-z_{j}}[d_{j}-d_{i}(qn_{j}-1)], \\
d_{i}\Lambda _{i}^{\prime \prime } &=&\sum_{j=1(\neq i)}^{N}\frac{1}{z_{i}-z_{j}}d_{i}d_{j}.\end{aligned}$$
Defining $w_{ij}=\frac{z_{i}+z_{j}}{z_{i}-z_{j}}$, the operator $\Lambda
_{i}=(q-2)d_{i}+\sum_{j=1(\neq i)}^{N}w_{ij}[d_{j}-d_{i}(qn_{j}-1)]$ can be written as$$\begin{aligned}
\Lambda _{i} &=&(q-2)d_{i}+\sum_{j=1(\neq i)}^{N}\left( \frac{2z_{i}}{z_{i}-z_{j}}-1\right) [d_{j}-d_{i}(qn_{j}-1)] \\
&=&(q-2)d_{i}+2z_{i}(d_{i}\Lambda _{i}^{\prime }+\Lambda _{i}^{\prime \prime
})-\sum_{j=1(\neq i)}^{N}[d_{j}-d_{i}(qn_{j}-1)] \\
&=&(q-2)d_{i}+2z_{i}(d_{i}\Lambda _{i}^{\prime }+\Lambda _{i}^{\prime \prime
})-(\Upsilon -d_{i})+d_{i}\left[ \sum_{j=1}^{N}(qn_{j}-1)-(qn_{i}-1)\right]
\\
&=&2z_{i}(d_{i}\Lambda _{i}^{\prime }+\Lambda _{i}^{\prime \prime
})-\Upsilon +d_{i}\sum_{j=1}^{N}(qn_{j}-1).\end{aligned}$$Note that the wave function (\[eq:Laughlin\]) has filling fraction $\nu
=1/q$, i.e., $\sum_{j=1}^{N}(qn_{j}-1)|\Psi \rangle =0$. Thus, we have proven that $\Lambda _{i}|\Psi \rangle =0$ $\forall i=1,\ldots ,N$. Since $\Lambda _{i}|\Psi \rangle =0$, it is straightforward to prove that $\Gamma _{i}|\Psi \rangle =0$, where $\Gamma _{i}$ is given by $\Gamma
_{i}=d_{i}\Lambda _{i}=\sum_{j=1(\neq i)}^{N}w_{ij}d_{i}d_{j}$.
The wave function in the main text for $\eta=1$ differs from (\[eq:Laughlin\]) by the factor $\prod_j\chi_j^{n_j}$. This can, however, easily be taken into account by multiplying the above operators with $\prod_j\chi_j^{-n_j}$ from the right and $\prod_j\chi_j^{n_j}$ from the left, which amounts to replacing $d_j$ by $\tilde{d}_j=\chi_j^{-1}d_j$.
1D parent Hamiltonian
=====================
In this section, we use $\Lambda _{i}$ to construct a 1D uniform Hamiltonian, where the lattice sites form a unit circle, i.e., $z_{j}=e^{i2\pi j/N}$.
Since $\sum_{j(\neq i)}w_{ij}=0$ in the 1D uniform case, the form of $\Lambda
_{i}$ can be simplified as $$\Lambda _{i}=(q-2)d_{i}+\sum_{j(\neq i)}w_{ij}(d_{j}-qd_{i}n_{j}).$$Then, the positive-semidefinite operators annihilating the wave functions are given by$$\begin{aligned}
\Lambda _{i}^{\dagger }\Lambda _{i} &=&(q-2)^{2}d_{i}^{\dagger
}d_{i}+(q-2)\sum_{j(\neq i)}w_{ij}(d_{i}^{\dagger
}d_{j}-qn_{i}n_{j})-(q-2)\sum_{j(\neq i)}w_{ij}(d_{j}^{\dagger
}d_{i}-qn_{i}n_{j}) \\
&&-\sum_{j(\neq i)}w_{ij}^{2}(d_{j}^{\dagger }-qd_{i}^{\dagger
}n_{j})(d_{j}-qd_{i}n_{j})-\sum_{j\neq l(\neq i)}w_{ij}w_{il}(d_{l}^{\dagger
}-qd_{i}^{\dagger }n_{l})(d_{j}-qd_{i}n_{j}) \\
&=&(q-2)^{2}n_{i}+(q-2)\sum_{j(\neq i)}w_{ij}(d_{i}^{\dagger
}d_{j}-d_{j}^{\dagger }d_{i}) \\
&&-\sum_{j(\neq i)}w_{ij}^{2}(n_{j}+q^{2}n_{i}n_{j})-\sum_{j\neq l(\neq
i)}w_{ij}w_{il}[d_{l}^{\dagger }d_{j}-q(d_{j}^{\dagger }d_{i}+d_{i}^{\dagger
}d_{j})n_{l}+q^{2}n_{i}n_{j}n_{l}].\end{aligned}$$By using the useful identities$$\begin{aligned}
\sum_{i(\neq j)}w_{ij}^{2} &=&-\frac{(N-1)(N-2)}{3}, \\
\sum_{i(\neq j,l)}w_{ij}w_{il} &=&(N-2)+2w_{jl}^{2},\end{aligned}$$and fixing the filling fraction $\sum_{i}n_{i}=N/q$ in the system, we obtain$$\begin{aligned}
\sum_{i}\Lambda _{i}^{\dagger }\Lambda _{i} &=&(q-2)^{2}\frac{N}{q}+2(q-2)\sum_{i\neq j}w_{ij}d_{i}^{\dagger }d_{j}+\frac{(N-1)(N-2)}{3}\sum_{j}n_{j}-q^{2}\sum_{i\neq j}w_{ij}^{2}n_{i}n_{j} \\
&&-\sum_{j\neq l}[(N-2)+2w_{jl}^{2}]d_{l}^{\dagger }d_{j}+q\sum_{i\neq j\neq
l}w_{ij}w_{il}[(d_{j}^{\dagger }d_{i}+d_{i}^{\dagger
}d_{j})n_{l}-qn_{i}n_{j}n_{l}] \\
&=&(q-2)^{2}\frac{N}{q}+\frac{N(N-1)(N-2)}{3q}+2(q-2)\sum_{i\neq
j}w_{ij}d_{i}^{\dagger }d_{j}-q^{2}\sum_{i\neq j}w_{ij}^{2}n_{i}n_{j} \\
&&-\sum_{j\neq l}[(N-2)+2w_{jl}^{2}]d_{l}^{\dagger }d_{j}+q\sum_{i\neq j\neq
l}w_{ij}w_{il}[(d_{j}^{\dagger }d_{i}+d_{i}^{\dagger
}d_{j})n_{l}-qn_{i}n_{j}n_{l}].\end{aligned}$$
The above expression can be further simplified by using$$\sum_{j\neq l}d_{l}^{\dagger }d_{j}=\Upsilon ^{\dagger }\Upsilon -\frac{N}{q}$$and$$\begin{aligned}
\sum_{i\neq j\neq l}w_{ij}w_{il}n_{i}n_{j}n_{l} &=&\frac{1}{3}\sum_{i\neq
j\neq l}(w_{ij}w_{il}+w_{ji}w_{jl}+w_{li}w_{lj})n_{i}n_{j}n_{l} \\
&=&\frac{1}{3}\sum_{i\neq j\neq l}n_{i}n_{j}n_{l} \\
&=&\frac{N(N-q)(N-2q)}{3q^{3}},\end{aligned}$$where we have used the cyclic identity$$w_{ij}w_{il}+w_{ji}w_{jl}+w_{li}w_{lj}=1.$$Then, we obtain$$\begin{aligned}
\sum_{i}\Lambda _{i}^{\dagger }\Lambda _{i} &=&(q-2)^{2}\frac{N}{q}+\frac{N(N-1)(N-2)}{3q}+2(q-2)\sum_{i\neq j}w_{ij}d_{i}^{\dagger
}d_{j}-q^{2}\sum_{i\neq j}w_{ij}^{2}n_{i}n_{j} \notag \\
&&-(N-2)(\Upsilon ^{\dagger }\Upsilon -\frac{N}{q})-2\sum_{i\neq
j}w_{ij}^{2}d_{i}^{\dagger }d_{j}+q\sum_{i\neq j\neq
l}w_{ij}w_{il}(d_{j}^{\dagger }d_{i}+d_{i}^{\dagger }d_{j})n_{l}-q^{2}\frac{N(N-q)(N-2q)}{3q^{3}} \notag \\
&=&2\sum_{i\neq j}[(q-2)w_{ij}-w_{ij}^{2}]d_{i}^{\dagger
}d_{j}-q^{2}\sum_{i\neq j}w_{ij}^{2}n_{i}n_{j}+q\sum_{i\neq j\neq
l}w_{ij}w_{il}(d_{j}^{\dagger }d_{i}+d_{i}^{\dagger
}d_{j})n_{l}-(N-2)\Upsilon ^{\dagger }\Upsilon \notag \\
&&+\frac{N}{3q}[3qN+(q^{2}-12q+8)].\end{aligned}$$
Now we construct positive-semidefinite operators from the operator $\Gamma
_{i}=\sum_{j(\neq i)}w_{ij}d_{i}d_{j}$$$\begin{aligned}
\Gamma _{i}^{\dagger }\Gamma _{i} &=&-\sum_{j,l(\neq
i)}w_{ij}w_{il}d_{l}^{\dagger }d_{j}n_{i} \\
&=&-\sum_{j(\neq i)}w_{ij}^{2}n_{i}n_{j}-\sum_{j\neq l(\neq
i)}w_{ij}w_{il}d_{l}^{\dagger }d_{j}n_{i},\end{aligned}$$and$$\sum_{i}\Gamma _{i}^{\dagger }\Gamma _{i}=-\sum_{i\neq
j}w_{ij}^{2}n_{i}n_{j}-\sum_{i\neq j\neq l}w_{lj}w_{li}d_{i}^{\dagger
}d_{j}n_{l}.$$
Note that $\sum_{i}\Lambda _{i}^{\dagger }\Lambda _{i}$ and $\sum_{i}\Gamma
_{i}^{\dagger }\Gamma _{i}$ both contain three-body interaction terms. However, we observe that the following combination eliminates the three-body terms by using the cyclic identity:$$\begin{aligned}
&&\sum_{i}\Lambda _{i}^{\dagger }\Lambda _{i}-q\sum_{i}\Gamma _{i}^{\dagger
}\Gamma _{i} \\
&=&2\sum_{i\neq j}[(q-2)w_{ij}-w_{ij}^{2}]d_{i}^{\dagger
}d_{j}-(q^{2}-q)\sum_{i\neq j}w_{ij}^{2}n_{i}n_{j}-(N-2)\Upsilon ^{\dagger
}\Upsilon \\
&&+q\sum_{i\neq j\neq
l}(w_{ij}w_{il}+w_{ji}w_{jl}+w_{lj}w_{li})d_{i}^{\dagger }d_{j}n_{l}+\frac{N}{3q}[3qN+(q^{2}-12q+8)] \\
&=&2\sum_{i\neq j}[(q-2)w_{ij}-w_{ij}^{2}]d_{i}^{\dagger
}d_{j}-(q^{2}-q)\sum_{i\neq j}w_{ij}^{2}n_{i}n_{j}-(N-2)\Upsilon ^{\dagger
}\Upsilon \\
&&+q\sum_{i\neq j\neq l}d_{i}^{\dagger }d_{j}n_{l}+\frac{N}{3q}[3qN+(q^{2}-12q+8)] \\
&=&2\sum_{i\neq j}[(q-2)w_{ij}-w_{ij}^{2}]d_{i}^{\dagger
}d_{j}-(q^{2}-q)\sum_{i\neq j}w_{ij}^{2}n_{i}n_{j}-(q-2)\Upsilon ^{\dagger
}\Upsilon +\frac{q-1}{3q}N[3N+(q-8)],\end{aligned}$$where we have used$$\begin{aligned}
\sum_{i\neq j\neq l}d_{i}^{\dagger }d_{j}n_{l} &=&\sum_{i\neq
j}d_{i}^{\dagger }d_{j}(\frac{N}{q}-n_{i}-n_{j}) \\
&=&(\frac{N}{q}-1)\sum_{i\neq j}d_{i}^{\dagger }d_{j} \\
&=&(\frac{N}{q}-1)\Upsilon ^{\dagger }\Upsilon -\frac{N}{q}(\frac{N}{q}-1).\end{aligned}$$
Finally, we define the 1D parent Hamiltonian as$$\begin{aligned}
H_{\mathrm{1D}} &=&\frac{1}{2}\sum_{i}\Lambda _{i}^{\dagger }\Lambda _{i}-\frac{q}{2}\sum_{i}\Gamma _{i}^{\dagger }\Gamma _{i}+\frac{q-2}{2}\Upsilon
^{\dagger }\Upsilon +E_{0} \notag \\
&=&\sum_{i\neq j}[(q-2)w_{ij}-w_{ij}^{2}]d_{i}^{\dagger }d_{j}-\frac{1}{2}(q^{2}-q)\sum_{i\neq j}w_{ij}^{2}n_{i}n_{j},\end{aligned}$$where $E_{0}$ is the ground-state energy of $H_{\mathrm{1D}}$ $$E_{0}=-\frac{q-1}{6q}N[3N+(q-8)].$$ If $\chi_j\neq1$, $d_j$ should be replaced by $\tilde{d}_j=\chi_j^{-1}d_j$.
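The $w_{ij}$ identities invoked in this section are easy to check numerically. A short sketch for the uniform case $z_j=e^{2\pi ij/N}$ (the value $N=9$ is an arbitrary choice):

```python
import numpy as np
from itertools import permutations

N = 9
z = np.exp(2j * np.pi * np.arange(1, N + 1) / N)
w = np.array([[(z[i] + z[j]) / (z[i] - z[j]) if i != j else 0
               for j in range(N)] for i in range(N)])

# sum_{j(!=i)} w_ij = 0 on the uniform lattice
assert np.allclose(w.sum(axis=1), 0)

# sum_{i(!=j)} w_ij^2 = -(N-1)(N-2)/3
assert np.allclose((w**2).sum(axis=0), -(N - 1) * (N - 2) / 3)

# sum_{i(!=j,l)} w_ij w_il = (N-2) + 2 w_jl^2
for j in range(N):
    for l in range(N):
        if j != l:
            s = sum(w[i, j] * w[i, l] for i in range(N) if i not in (j, l))
            assert np.isclose(s, (N - 2) + 2 * w[j, l]**2)

# cyclic identity w_ij w_il + w_ji w_jl + w_li w_lj = 1 for all distinct triples
for i, j, l in permutations(range(N), 3):
    assert np.isclose(w[i, j] * w[i, l] + w[j, i] * w[j, l] + w[l, i] * w[l, j], 1)

print("all identities hold")
```

The cyclic identity is in fact an algebraic identity for arbitrary distinct $z_i$, $z_j$, $z_l$; only the first three checks rely on the uniform placement on the unit circle.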
---
abstract: 'On the basis of *ab-initio* total-energy electronic-structure calculations, we find that interface-localized electron states at the SiC/SiO$_2$ interface emerge in the energy region between 0.3 eV below and 1.2 eV above the bulk conduction-band minimum (CBM) of SiC, and that they are sensitive to the sequence of atomic bilayers in SiC near the interface. These new interface states, unrecognized in the past, are due to the peculiar characteristics of the CBM states, which are distributed along the crystallographic channels. We also find that electron doping modifies the energetics among the different stacking structures. Implications for the performance of electron devices fabricated on different SiC surfaces are discussed.'
author:
- 'Yu-ichiro Matsushita'
- Atsushi Oshiyama
title: 'A novel intrinsic interface state controlled by atomic stacking sequence at interfaces of SiC/SiO$_2$'
---
High-efficiency power electronic devices play an important role in the realization of an energy-saving society. To increase the efficiency of power devices, low-energy-loss semiconductor materials are necessary. SiC has attracted much attention as a possible next-generation power semiconductor due to its prominent material properties such as a high breakdown electric field (10 times larger than that of Si) and high thermal conductance (3 times larger than that of Si) [@Kimoto; @Baliga]. Another benefit of SiC as a power semiconductor is the utility of its native oxide films, SiO$_2$, for the fabrication of metal-oxide-semiconductor field-effect transistors (MOSFETs), ensuring good compatibility with Si technology [@Kimoto].
SiC-MOSFET devices are already available commercially. However, they still face a severe problem: the mobility of the devices falls far below the theoretical values due to the huge density of interface levels at the SiC/SiO$_2$ interface, with concentrations of 10$^{13}$ - 10$^{14}$ cm$^{-2}$ eV$^{-1}$ [@Kimoto; @Yoshioka; @Kobayashi]. The levels appearing in the gap within 0.3 eV below the CBM indeed cause a substantial reduction of the electron mobility [@Kimoto]. Many theoretical and experimental efforts have been made to identify those interface levels, and carbon-related defects are suspected of being the mobility killers [@Afanasev; @Kikuchi; @Gali1; @Gali2; @Gali3; @Kobayashi2]. However, no consensus has been reached yet. In this Letter, we show that not defects or impurities but imperfections in the stacking sequence of the atomic layers cause interface levels below the CBM.
SiC is a tetrahedrally bonded semiconductor in which atomic bilayers consisting of Si and C atoms are stacked along the bond direction. Different stacking sequences lead to different crystal structures called polytypes, and each structure is labeled by its stacking sequence: The most frequently obtained structure is 4H-SiC, whose stacking sequence is ABCB with 4-bilayer periodicity and hexagonal symmetry. Although the local atomic structures of the polytypes are identical to each other, their electronic properties, in particular the band gaps, are known to differ from one to another [@Kimoto]. As we have clarified in Ref. , this is due to the surprisingly interesting character of the conduction-band minimum (CBM): i.e., the wavefunction of the CBM is not distributed around the atomic sites but extends, or floats, in the interstitial channels generally existing in tetrahedrally bonded structures [@Matsushita4]. This *floating* nature makes the energy level of the CBM strongly affected by the length of the internal channel, which is peculiar to each polytype.
At any moment during the thermal oxidation of the 4H-SiC (0001) surface, only two types of stacking termination at the interface are possible in the case of layer-by-layer oxidation: One is a cubic interface (BCBA-stack/SiO$_2$), and the other is a hexagonal interface (ABCB-stack/SiO$_2$). In addition to these interfaces, we here consider a stacking fault at the interface. Indeed, it has been reported experimentally that the stacking sequence is transformed at the SiC surface into the ABCA stacking order [@Starke]. A similar stacking imperfection is likely to occur also at the interface, leading to the stacking-fault interface (ABCA-stack/SiO$_2$). As deduced from the *floating* nature explained above, a variation of the stacking sequence near the interface changes the channel length there and thereby shifts the energy level of the CBM at the interface. In this Letter, based on density-functional theory (DFT) [@DFT; @HK], we find that the stacking-fault interface induces a level in the gap at 0.3 eV below the CBM, thus being a strong and intrinsic candidate for the mobility killer. We also find that such a stacking-fault interface structure is energetically favorable at the negatively charged interface.
Full geometry optimization for all systems was performed using the Vienna *ab-initio* Simulation Package (VASP) with the projector augmented-wave (PAW) method [@VASP1; @VASP2] and the PBE exchange-correlation functional [@GGA-PBE] in the generalized-gradient approximation (GGA). In this study, we have considered the (0001) surface and adopted a SiC slab model consisting of 8 bilayers with the $\sqrt{3} \times \sqrt{3}$ periodicity in the lateral plane. A vacuum region of 20-Å thickness is sufficient to avoid fictitious interactions between adjacent slabs. The bottom surface atoms of the slab are fixed at the bulk crystallographic positions and terminated by H atoms to compensate for the missing bonds, whereas the rest of the system is allowed to relax freely. An energy cutoff of 400 eV and a $\Gamma$-centered Monkhorst-Pack $5\times 5\times 1$ $k$-point grid were used. These parameters were adopted after verifying an accuracy within 8 meV per atom in the total energy. The structural optimization has been done with a force tolerance of $10^{-1}$ eVÅ$^{-1}$.
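For orientation, the computational setup above corresponds roughly to INCAR settings of the following form (a hypothetical sketch; only tags mirroring parameters quoted in the text are shown, and the $5\times 5\times 1$ mesh would be specified in the KPOINTS file):

```text
# INCAR (sketch; values follow the parameters quoted above)
PREC   = Accurate
ENCUT  = 400        # plane-wave cutoff (eV)
GGA    = PE         # PBE exchange-correlation functional
IBRION = 2          # conjugate-gradient ionic relaxation
ISIF   = 2          # relax ions only, cell shape and volume fixed
EDIFFG = -0.1       # force-based convergence criterion (eV/Angstrom)
```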
![ (Color online). Schematic pictures of the local density of states (LDOS) along the direction perpendicular to the interface (z-direction) for the three possible interface structures of SiC/SiO$_2$: (a) The cubic interface (BCBA-stack/SiO$_2$), (b) the hexagonal interface (ABCB-stack/SiO$_2$), and (c) the stacking-fault interface (ABCA-stack/SiO$_2$). The stacking sequence near the interface is shown by the letters. The red letters denote the region in which the interstitial channel is connected. []{data-label="Fig1"}](Fig1.png){width="1.0\linewidth"}
\[Energy\_comparison\]
Surface Stacking non-doped electron-doped
------------------------ -------------- ----------------
cubic (BCBA/) 0 87
hexagonal (ABCB/ ) 33 196
stacking-fault (ABCA/) 48 0
: Calculated total energies of the non-doped and electron-doped SiC (0001) surfaces with the three different bilayer atomic sequences, relative to the most stable structure, in units of meV per 54-atom unit cell.
We have investigated the three interfaces with different stacking sequences near the interface: i.e., the cubic (BCBA-stack/SiO$_2$), the hexagonal (ABCB-stack/SiO$_2$) and the stacking-fault (ABCA-stack/SiO$_2$) interfaces, as shown in Fig. \[Fig1\]. It is noteworthy that the channel lengths near the interface in the cubic, the hexagonal and the stacking-fault interfaces are 3, 2, and 4, respectively, in units of the bilayer thickness. When the wavefunction is confined in a shorter channel space, the corresponding energy level of the CBM is expected to shift upward because of the quantum confinement. This leads to distinct energy diagrams for the three interface structures, schematically shown in Fig. \[Fig1\] and also quantitatively revealed below in Figs. \[Fig2\] and \[Fig4\]. In particular, an interface state appears below the CBM at the stacking-fault interface.
To validate our argument above, we start with the energetics among the structures with different bilayer stacking. For this purpose, we consider the cubic, the hexagonal and the stacking-fault SiC slabs in which the topmost atomic layer is terminated by H atoms to mimic the SiO$_2$ layers. The calculated total energies are shown in the second column of Table I. The cubic sequence is the lowest in energy, the hexagonal the second lowest, and the stacking-fault the highest. However, the total-energy differences are small, less than 1 meV per atom, indicating that the stacking imperfection is likely to occur in real situations.
![(Color online). Calculated LDOS for the cubic-stacking (BCBA-stack) (a), the hexagonal-stacking (ABCB-stack) (b), and the stacking-fault (ABCA-stack) (c) SiC surfaces as a function of the energy and the z-coordinate perpendicular to the surface. The magnitude of the LDOS is indicated by the color code shown in the legend. The left and right sides correspond to the bottom and top surfaces, respectively, of the SiC slab. The origin of the energy is set to the valence-band top in the top-surface region. The right panels in (b) and (c) show the enlarged LDOS near the top surface, and the dashed line is a guide for the eyes marking the position of the bulk CBM (see text). Below each LDOS, the corresponding atomic configuration is illustrated, where blue, brown and white balls depict Si, C and H atoms, respectively. []{data-label="Fig2"}](Fig2_improved.png){width="0.7\linewidth"}
Characteristics of electron states near surfaces or interfaces manifest themselves in the local density of states (LDOS) which is defined as $$\begin{aligned}
{\rm LDOS}(\epsilon,z)=\int d{\bf r}_\perp \sum_{n {\bf k}}\delta(\epsilon-\epsilon_{n{\bf k}})|\phi_{n{\bf k}}({\bf r})|^2,\end{aligned}$$ where the integral runs over the two-dimensional coordinate ${\bf r}_\perp$ in the plane parallel to the surface or the interface, $\epsilon$ is the energy, $\phi_{n{\bf k}}({\bf r})$ is a wavefunction (Kohn-Sham orbital in DFT), and $\epsilon_{n{\bf k}}$ is the corresponding eigenvalue. Figure \[Fig2\] shows the calculated LDOS for the three different surfaces, i.e., the cubic-stacking, the hexagonal-stacking and the stacking-fault surfaces of SiC(0001). In the valence-band region (negative energies in Fig. \[Fig2\]), we observe spiky spectra representing the eigenstates of the valence bands. Their positions in real space correspond to the positions of the atomic layers, meaning that the valence electrons are distributed around the atoms. The conduction bands, on the other hand, show no such spiky structures in the LDOS, reflecting the fact that the conduction electrons are distributed broadly in the internal channel space. The energy gap calculated within GGA, represented by the dark-blue region in the slab in Fig. \[Fig2\], is 2.3 eV, smaller than the experimental value of 3.3 eV for 4H-SiC owing to the well-known band-gap underestimation of GGA.
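For readers implementing this analysis, the LDOS definition above can be evaluated numerically by broadening the delta function into a narrow Gaussian and summing the orbital weights. The sketch below does this for a toy 1D tight-binding chain standing in for the slab; the function `ldos` and the model are illustrative only, not the actual DFT post-processing used here.

```python
import numpy as np

def ldos(eigvals, eigvecs, energies, sigma=0.05):
    """Gaussian-broadened LDOS(eps, z) = sum_n g(eps - eps_n) |phi_n(z)|^2.

    eigvals: (N,) eigenvalues; eigvecs: (Nz, N) orbitals in columns (the
    site basis stands in for the integral over the in-plane coordinate).
    """
    g = np.exp(-((energies[:, None] - eigvals[None, :]) / sigma) ** 2 / 2)
    g /= sigma * np.sqrt(2 * np.pi)      # normalized delta surrogate
    return g @ np.abs(eigvecs.T) ** 2    # shape (Ne, Nz)

# Toy 1D tight-binding chain as a stand-in for the slab along z
Nz = 60
H = -np.eye(Nz, k=1) - np.eye(Nz, k=-1)
vals, vecs = np.linalg.eigh(H)
eps = np.linspace(-2.5, 2.5, 201)
rho = ldos(vals, vecs, eps)

# Sanity check: integrating the LDOS over z and energy recovers the state count
total = rho.sum() * (eps[1] - eps[0])
print(f"integrated DOS ~ {total:.1f} (expected ~ {Nz})")
```

The same routine applied to Kohn-Sham orbitals (with a genuine in-plane integration and k-point sum) produces maps like those in Fig. \[Fig2\].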
First, we notice that the band lineup along the direction perpendicular to the surface is slanted. This is due to the internal dielectric polarization induced by the low symmetry of 4H-SiC. The dashed lines in the right panels of Fig. \[Fig2\] represent the slanted band lineup caused by this internal electric field. Second, comparing Figs. \[Fig2\](a) and (b), we find that the hexagonal-stacking (ABCB/) surface induces a surface state located about 1.2 eV above the bulk CBM. This surface state is caused by the quantum confinement of the bulk CBM state in the interstitial channel near the surface: the channel is shorter near the hexagonal surface, as stated above. More importantly, in the stacking-fault (ABCA/) surface, the surface state caused by the modulation of the length of the interstitial channel is located 0.3 eV below the bulk CBM \[Fig. \[Fig2\](c)\]. This unequivocally clarifies that the stacking difference near the surface changes the surface properties considerably.
Band bending near the surface or the interface, as well as the polarity of SiC, may cause electron doping into the conduction-band states in Fig. \[Fig2\]. This may change the energetics among the different stacking-sequence structures. We have therefore performed calculations for electron-doped surfaces with a concentration of $4\times 10^{14}$ cm$^{-2}$. In our calculations, the doped electrons occupy the surface conduction states and modify the energetics (see the third column of Table I). The surface structure with the stacking fault (ABCA/) is lower in total energy than the cubic (BCBA/) and the hexagonal (ABCB/) surfaces by 0.1 eV and 0.2 eV, respectively.
![(Color online). Calculated energy barrier of the stacking transformation from the hexagonal-stacking (ABCB/) to the stacking-fault (ABCA/) structures on SiC(0001) surface. []{data-label="Fig3"}](Fig3.png){width="0.6\linewidth"}
We have clarified above that new surface states appear near the CBM, in the region from CBM $-$ 0.3 eV (the stacking-fault surface) to CBM $+$ 1.0 eV (the hexagonal surface), depending on the atomic-bilayer stacking near the surface. This stacking-dependent appearance of the electron state is found also at the SiC/SiO$_2$ interface (see below). We have also clarified that the structures with different stacking sequences have comparable total energies. We have therefore calculated the energy barrier of the transformation between two different stacking sequences, i.e., from the hexagonal-stacking structure (ABCB/) to the stacking-fault structure (ABCA/) for the electron-doped system, using the nudged-elastic-band (NEB) method[@NEB]. During the transformation, each atom of the topmost layer moves only 1.9 [Å]{} within the top atomic plane. The calculated energy profile along the transformation path is shown in Fig. \[Fig3\]. As stated above, the final stacking-fault structure is more stable than the initial hexagonal-stacking structure, so the reaction pathway is exothermic. We have further found that the energy barrier for this stacking transformation is 0.8 eV per surface atom. We have also calculated the energy barriers for the stacking transformations without electron doping and found that they are about 1 eV per atom.
![(Color online). Calculated LDOS for the cubic-stacking (BCBA/SiO$_2$) (a), the hexagonal-stacking (ABCB/SiO$_2$) (b), and the stacking-fault (ABCA/SiO$_2$) (c) SiC/SiO$_2$ interfaces as a function of the energy and the z-coordinate perpendicular to the interface. The magnitude of the LDOS is indicated by the color code shown in the legend. The left and right sides correspond to the SiC and SiO$_2$ regions, respectively. The origin of the energy is set to the valence-band top in the top-surface region of SiC. The right panels in (b) and (c) show the enlarged LDOS near the interface, and the dashed line is a guide for the eyes marking the position of the bulk CBM of SiC (see text). Below each LDOS, the corresponding atomic configuration is illustrated, where blue, brown, red and white balls depict Si, C, O and H atoms, respectively. []{data-label="Fig4"}](Fig4_improved.png){width="0.7\linewidth"}
The substantial modification of the electron states near the CBM of SiC due to the stacking difference of atomic bilayers has also been found at the SiC/SiO$_2$ interface in our GGA calculations. We take the most stable crystalline form of SiO$_2$, $\alpha$-quartz, to model real amorphous SiO$_2$, and have optimized the interface structures with the three different stacking sequences: the cubic-stacking (BCBA/SiO$_2$), the hexagonal-stacking (ABCB/SiO$_2$), and the stacking-fault (ABCA/SiO$_2$). The Si dangling bonds that emerge at the interface have been terminated by H atoms. Figure \[Fig4\] shows the calculated LDOS for the three interface structures. Our calculations show that the valence-band offset is 1.5 eV, common to the three interface structures. In contrast, the electron states near the CBM are sensitive to the bilayer stacking sequence: a new electron state emerges that is distributed at the interface and located 0.3 eV below the bulk CBM (the stacking-fault interface) or 1.0 eV above it (the hexagonal-stacking interface). Correspondingly, the band offset of the CBM takes values from 0.6 to 1.6 eV. We emphasize that this variation of the electronic structure, and even the appearance of interface levels in the gap, is due to the *floating* nature of the CBM state of SiC.
Imperfection of atomic stacking is commonly observed in tetrahedrally bonded semiconductors. This planar imperfection has been thought to play a minor role in the electronic structure. However, we have found, for the SiC(0001) surface and the interface, that the stacking sequence determines the length of the internal channel and thus induces an interface state which is crucial to the performance of MOSFET devices. In SiC MOSFETs, non-polar surfaces such as the $(11\bar{2}0)$- or $(1\bar{1}00)$-face are occasionally used for device fabrication. On those non-polar surfaces, the lengths of the channels are infinite and thus independent of the bilayer stacking along the (0001) direction. Hence, an interface state near the CBM is not expected to emerge from stacking modulation on the non-polar surfaces. From this viewpoint, the $(11\bar{2}0)$ and $(1\bar{1}00)$ surfaces are expected to have advantages over the (0001) surface for fabricating high-performance SiC devices.
To summarize, on the basis of the density-functional calculations, we have elucidated that the imperfection of the atomic-stacking sequence near the SiC/SiO$_2$ interface induces interface levels at 0.3 eV below the conduction band bottom of SiC, thereby proposing the stacking fault as an intrinsic killer of the carrier mobility. We have also shown that the stacking-fault interface structure has comparable total energy with the perfect-stacking structures and even becomes the most stable upon electron doping. Underlying physics of all these findings is the *floating* nature of the conduction-band state of SiC.
We thank Professor Kenji Shiraishi for fruitful discussions. Computations were performed mainly at the Center for Computational Science, University of Tsukuba, and the Supercomputer Center at the Institute for Solid State Physics, The University of Tokyo. Y.M. acknowledges the support from JSPS Grant-in-Aid for Young Scientists (B) (Grant Number 16K18075).
---
abstract: 'A density perturbation produced in an underdense plasma was used to improve the quality of electron bunches produced in the laser-plasma wakefield acceleration scheme. Quasi-monoenergetic electrons were generated by controlled injection in the longitudinal density gradients of the density perturbation. By tuning the position of the density perturbation along the laser propagation axis, a fine control of the electron energy from a mean value of $60$ MeV to $120$ MeV has been demonstrated with a relative energy-spread of $15 \pm 3.6\%$, divergence of $4 \pm 0.8$ mrad and charge of $6 \pm 1.8$ pC.'
author:
- 'P. Brijesh'
- 'C. Thaury'
- 'K. Ta Phuoc'
- 'S. Corde'
- 'G. Lambert'
- 'V. Malka'
- 'S.P.D. Mangles'
- 'M. Bloom'
- 'S. Kneip'
title: Tuning the electron energy by controlling the density perturbation position in laser plasma accelerators
---
INTRODUCTION
============
#### {#section .unnumbered}
Higher energy gains, reduced energy spread, smaller emittance and better stability of laser-plasma accelerated electrons[@Tajima1979PRL] are critical issues to address for future development and applications of compact particle accelerators and radiation sources[@Malka2008Nature]. Plasma density perturbation as a means of controlling electron acceleration[@Takada1984AppPhyLett; @Bulanov1993LaserPhy] and electron injection[@Bulanov1998PRE] in laser-generated plasma wakefields is an active research topic. Radial density gradients can modify the structure of wakefields and influence the process of injection by wavebreaking through a dependence of the plasma wavelength on the transverse coordinate[@Bulanov1997PRL]. Injection of electrons into a narrow phase-space region of the wakefield is necessary for generating accelerated electron bunches with good quality in terms of energy spread and divergence. A decreasing density profile along the laser propagation direction can lead to controlled injection of electrons and is expected to generate electron bunches with better beam quality than self-injection in a homogeneous plasma, by reducing the threshold of injection within a narrow phase region of the wakefield[@Bulanov1998PRE; @Fubiani2006PRE].
#### {#section-1 .unnumbered}
Injection in a longitudinally inhomogeneous plasma can offer the flexibility of a simpler experimental configuration and less stringent spatio-temporal synchronisation requirements as compared to other controlled injection techniques based on secondary laser pulses in orthogonal[@Umstadter1996PRL] and counterpropagating[@Faure2006Nature] geometries or with external magnetic fields[@Vieira2011PRL]. Recently, electrons injected in the density gradient at the exit of a gas-jet[@Hemker2002PRSTAB; @Geddes2008PRL] have been post-accelerated in a capillary-discharge based secondary accelerating stage[@Gonsalves2011Nature]. The original proposal for density-gradient injection was based on density scale lengths greater than the plasma wavelength[@Bulanov1998PRE]. Steep density gradients, with scale lengths shorter than the plasma wavelength, can also lead to electron injection[@Suk2001PRL; @Suk2004JOSAB]. Such sharp density gradients have been experimentally generated by shock-fronts created with a knife-edge obstructing the flow from the gas-jet nozzle and used to improve the quality of accelerated electrons[@Koyama2009NIMA; @Schmid2010PRST].
#### {#section-2 .unnumbered}
Recent experimental studies[@Faure2010PoP] validated the use of a plasma perturbation in the form of a density-depleted channel for injecting electrons[@Hafz2003IEEE; @Kim2004PRE] into the wakefield of a pump beam propagating across the channel walls. In this article, we report on results extending that experiment by changing the position of the plasma channel along the laser wakefield axis. The density-depleted channel was created with a machining laser beam that propagates orthogonally to the pump beam. By changing the plasma channel position, the injection location was varied and thereby the subsequent accelerating distance. This method allowed for inducing controlled electron injection and generating quasi-monoenergetic electron beams with a fine control of their energy. As in previous experiments[@Hsieh2006PRL], the variation of electron energy with acceleration length in our modified configuration was used to estimate the accelerating field strength. It was observed that a threshold plasma length prior to the depletion region was necessary for density-gradient injection to be effective, whereas the final electron-beam parameters such as energy spread, divergence and charge were independent of the injection location.
EXPERIMENTAL SETUP
==================
#### {#section-3 .unnumbered}
The experiments were performed at the Laboratoire d’Optique Appliquée with the $30$ TW, $30$ fs, $10$ Hz, $0.82$ $\mu$m, Ti:Sapphire “Salle-Jaune" laser system[@Pittman2002AppPhyB]. The pump and machining beams, propagating orthogonal to each other, are focused onto a supersonic Helium gas-jet ejected from a $3$ mm diameter conic nozzle[@Semushin2001RSI]. The density profile, as characterised by Michelson interferometry, has a plateau of length $2.1$ mm with $700$ $\mu$m density gradients at the edges[@Faure2010PoP]. The pump beam, with an energy of approximately $0.9$ J, is focused by a $70$ cm focal length spherical mirror (f$\#\approx 12$), and the machining beam, with an energy of $100$ mJ, is focused by a cylindrical lens system, with a tunable time-delay between the two beams. The FWHM spot size of the pump beam was estimated to be $14$ $\mu$m $\times$ $18$ $\mu$m with a peak intensity of approximately $4.3$ $\times$ $10^{18}$ W/cm$^2$ (normalised vector potential $a_{0}$ $\approx$ $1.5$). The cylindrical focusing system for the machining beam consists of two cylindrical lenses (focal lengths of $500$ mm and $400$ mm), placed in series such that it generates a line focus of tunable length (FWHM $\sim$ $100$-$200$ $\mu$m) by varying the separation between the two lenses. The line focus of the machining beam is oriented in a direction orthogonal to the plane defined by the pump and machining beam axes, whereas the transverse width (FWHM spot size $\approx$ $30$ $\mu$m) of the line is aligned along the pump pulse propagation direction. The peak intensity in the line focus is estimated to be around $3.4$ $\times$ $10^{16}$ W/cm$^2$ ($a_{0}$ $\approx$ $0.1$). The schematic experimental setup consisting of the machining and pump beams along with a probe beam (picked off from the pump beam) for Nomarski plasma interferometry[@Benattar1979RSI] is shown in Fig. \[fig:schematic:setup\].
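The quoted normalized vector potentials follow from the standard relation $a_0 \approx 0.855\,\lambda[\mu\mathrm{m}]\,\sqrt{I/10^{18}\,\mathrm{W/cm^2}}$ for linear polarization, which can be checked against the quoted peak intensities:

```python
import math

def a0(intensity_W_cm2, wavelength_um):
    """Normalized vector potential for linear polarization:
    a0 ~ 0.855 * lambda[um] * sqrt(I / 1e18 W/cm^2)."""
    return 0.855 * wavelength_um * math.sqrt(intensity_W_cm2 / 1e18)

lam = 0.82  # um, Ti:Sapphire wavelength used in the experiment
print(f"pump:      a0 ~ {a0(4.3e18, lam):.2f}")   # ~1.5 quoted in the text
print(f"machining: a0 ~ {a0(3.4e16, lam):.2f}")   # ~0.1 quoted in the text
```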
![Schematic figure of the experimental setup: The machining beam, which temporally precedes the pump and propagates orthogonally to the pump beam axis, is focused by a pair of cylindrical lenses onto the gas-jet. A probe beam, whose axis is angled with respect to the machining-beam propagation direction, is used for Wollaston-prism based plasma interferometry. The spatial location of the machining focus with respect to the pump beam axis can be tuned by the motorised mirror (M$_{\tiny{\textrm{R}}})$. Accelerated electrons from the gas-jet are dispersed by the magnet and detected by a Lanex phosphor screen.[]{data-label="fig:schematic:setup"}](schematic_setup.pdf){width="8.5cm"}
#### {#section-4 .unnumbered}
The machining laser pulse ionizes the gas-jet and creates a hot plasma localized in the line focal volume that hydrodynamically expands into the surrounding neutral gas. This leads to the formation of a density-depleted channel with an inner lower-density region surrounded by an expanding higher-density channel wall[@Milchberg1993PRL; @Volfbeyn1999PoP]. The pump beam therefore sees a longitudinal density gradient at the edges of the channel as it propagates in a direction perpendicular to the machining beam. The density depletion at the focus of the machining beam creates the axial density gradient (for the pump beam) that induces injection of electrons into the wakefield generated by the pump pulse. A schematic picture of the experimental target configuration with the preformed density-depletion region is shown in Fig. \[fig:schematic:tgtconfig\]. The position of the density-depleted channel, and thereby the length ($L_{2}$) of the interaction distance following the injection position, was tuned by laterally scanning the machining focus (along the pump beam axis) with a motorised mirror (M$_{\tiny{\textrm{R}}}$) placed after the cylindrical lens system and before the final focus. The aspect ratio of the line focus, with an approximate length of $200$ $\mu$m (FWHM) in the vertical direction ($Y$-axis) and a spot size of $30$ $\mu$m (FWHM) along the pump pulse propagation direction, ensures that the strongest density gradient as seen by the plasma wakefield of the pump pulse is predominantly longitudinal ($Z$-axis).
![Schematic figure of the experimental target configuration : Line focus (MF) of the machining beam temporally delayed and propagating perpendicular to the pump beam, creates a preformed density-depleted channel in the gas-jet. Axial density gradients along the channel walls induce controlled injection of electrons into the wakefield of the pump beam. The position of density-depleted channel and thereby the length of the plasma interaction region before ($L_{1}$) and after ($L_{2}$) the density depleted zone was varied by laterally scanning the machining focus (MF-scan) along the pump beam axis.[]{data-label="fig:schematic:tgtconfig"}](schematicCivb.pdf){width="8.5cm"}
#### {#section-5 .unnumbered}
In our experiments, the time delay between the pump and the machining laser pulse was fixed at $2$ ns following earlier experiments[@Faure2010PoP] where the timing had been optimized to obtain the strongest density gradients. Nomarski interferometry allowed us to measure precisely ($\pm$ $50$ $\mu$m) the axial location of the density depletion from the position of the distortion in the interferogram fringes (Fig. \[fig:interferogram\]) arising due to the presence of the density channel in the path of the probe beam.
![Interferometric image of the gas-jet : Phase shift due to the plasma generated by the pump pulse (propagating right to left in the figure) leads to curved fringes. The density-depleted zone created by the machining beam propagating perpendicular (into the plane of figure) to the pump beam gives rise to the distortion in the fringes visible in the central region of the interferogram.[]{data-label="fig:interferogram"}](interf34_2.pdf){width="8.5cm"}
The axial width of the depletion zone estimated from the dimensions of the distorted region was approximately $100$-$200$ $\mu$m. However, it was not possible to retrieve the longitudinal density profile in the distorted region from the interferograms due to the large phase shift caused by the machined plasma. The plasma wavelength in our experiments is estimated to be approximately $12$-$15$ $\mu$m, corresponding to densities of $5$-$8\times$ $10^{18}$ cm$^{-3}$. Since our conditions are similar to those in Ref. 18, the density-gradient scale length is likewise expected to be similar ($\sim 30~\mu$m), and therefore the change in longitudinal density can be considered gradual compared to the plasma wavelength. As is evident from the integrity of the curved fringes throughout the gas-jet in the left side of the interferogram (Z $>$ $1.4$ mm), the density-depletion or injection zone in the path of the pump beam does not appear to disrupt its subsequent propagation and self-guiding. The pump pulse appears to be guided for lengths greater than $980$ $\mu$m (the Rayleigh range for a Gaussian focal spot size of $16$ $\mu$m) both before and after the depletion zone. Controlled injection of electrons into the plasma wakefield occurs in the density gradients of the density-depleted region, and acceleration proceeds in the subsequent homogeneous plasma.
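The quoted plasma wavelengths follow from the cold-plasma relation $\lambda_p = 2\pi c/\omega_p$ with $\omega_p = \sqrt{n_e e^2/\epsilon_0 m_e}$; a quick check for the quoted density range:

```python
import math

# CODATA constants: vacuum permittivity, elementary charge, electron mass, speed of light
EPS0, E, M_E, C = 8.8541878128e-12, 1.602176634e-19, 9.1093837015e-31, 2.99792458e8

def plasma_wavelength_um(n_e_cm3):
    """Cold-plasma wavelength lambda_p = 2*pi*c/omega_p for density n_e in cm^-3."""
    n_m3 = n_e_cm3 * 1e6
    omega_p = math.sqrt(n_m3 * E**2 / (EPS0 * M_E))
    return 2 * math.pi * C / omega_p * 1e6  # m -> um

for n in (5e18, 8e18):
    print(f"n_e = {n:.0e} cm^-3 -> lambda_p ~ {plasma_wavelength_um(n):.1f} um")
```

The result, roughly $12$-$15$ $\mu$m over $5$-$8\times10^{18}$ cm$^{-3}$, matches the estimate in the text; half of this range also reproduces the bubble radius of $6$-$10$ $\mu$m invoked later.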
#### {#section-6 .unnumbered}
The energy of the accelerated electron bunch exiting the gas-jet was measured with a magnetic spectrometer consisting of a $1.1$ T, $10$ cm magnet and a Lanex phosphor screen imaged onto a $16$-bit CCD camera. The spectrometer energy resolution was $2.3\%$ at $100$ MeV. The electron energy spectrum and the absolute charge are obtained by post-processing the recorded data, taking into account the calibration of the diagnostic[@Glinec2006RSI].
RESULTS & DISCUSSION
====================
#### {#section-7 .unnumbered}
In our experiments, electrons are trapped and accelerated by the strong electric fields of the nonlinear plasma wave (plasma bubble)[@Pukhov2002AppPhyB] excited by the intense and ultrashort pump laser pulse propagating in the gas-jet. Depending on the specific plasma conditions, injection into the bubble can occur either by self-trapping or by controlled injection due to the preformed density perturbation. The acceleration of electrons occurs in the matched plasma-bubble regime of resonant laser wakefield acceleration[@Malka2002Science; @Mangles2004Nature; @Faure2004Nature] (pulse length $\leq$ plasma wavelength/2), wherein the laser focal spot size is comparable to the bubble radius, which is approximately of the order of half a plasma wavelength[@Lu2007PRSTAB]. In order to differentiate between self-injection and density-gradient injection, the electron spectrum was first recorded with only the pump beam focused onto the gas-jet. The density was reduced to minimize self-injection as much as possible without complete loss of the detected charge. In these conditions (electron densities of about $5$ - $8$ $\times$ ${10}^{18}$ cm$^{-3}$), self-injection occurs occasionally, resulting in the production of a poor-quality electron beam with a broadband electron energy distribution. The Lanex image and the corresponding electron spectrum for one such shot are shown in Fig. \[fig:filename\_pumponlyspectra\]. The spectrum of the self-injected electrons is consistently characterized by a large energy spread, low-energy dark current and considerable fluctuations in the spectral profile with a high background level on consecutive shots. The mean value of the maximum electron energy (defined as the cut-off edge in the logarithm of the spectral profile) was around $170$-$175$ MeV.
![Raw image of the electron beam on the Lanex screen (top) and lineout of the corresponding electron spectrum (bottom) obtained from a homogeneous plasma with only the pump beam. Electrons are generated by self-injection with a large energy spread for a plasma density of approximately $8$ $\times$ ${10}^{18}$ cm$^{-3}$.[]{data-label="fig:filename_pumponlyspectra"}](filename_227tirimage.pdf "fig:"){width="9.5cm" height="1.6cm"} ![Raw image of the electron beam on the Lanex screen (top) and lineout of the corresponding electron spectrum (bottom) obtained from a homogeneous plasma with only the pump beam. Electrons are generated by self-injection with a large energy spread for a plasma density of approximately $8$ $\times$ ${10}^{18}$ cm$^{-3}$.[]{data-label="fig:filename_pumponlyspectra"}](bsd_pump.pdf "fig:"){width="8.5cm" height="5.5cm"}
![Raw images of the electron beam on the Lanex screen obtained by injection at different axial locations ($Z_{m}$) of the machining focus. $Z_{m}$ = (a) $1.6$ mm (b) $1.8$ mm (c) $1.9$ mm (d) $2$ mm (e) $2.2$ mm (f) $2.5$ mm from the entrance of the gas-jet.[]{data-label="fig:spectrastack"}](stackspectra2.pdf){width="8.5cm"}
The electron signal in the high-energy tail ($> 175$ MeV) of the spectrum (Fig. \[fig:filename\_pumponlyspectra\]) is due to the high background level. For plasma densities much below the self-injection threshold, there were no distinct electron peaks with significant charge and the detected electron distribution was very close to the background level. The presence or absence of the density-depleted region under these conditions did not have any significant effect on the electron spectrum because the densities are too low to excite a wakefield of sufficient amplitude to trap and accelerate electrons. Self-injection at low densities would require a laser system with greater power. At higher densities ($\sim$ $5$ - $8$ $\times$ $10^{18}$ cm$^{-3}$), there is an increased probability of intermittent self-trapping of electrons with a poor accelerated beam quality. However, for the same experimental conditions, firing the machining beam resulted in a significant improvement of the electron spectrum. The effect of the depletion region, in the form of localized electron injection at the density gradients, dominates over any occasional self-injection. In contrast to the case of self-injection in a homogeneous plasma, the presence of the preformed density perturbation leads to the acceleration of electrons with low energy spread, indicating the benefits of controlled injection for a fixed laser power.
#### {#section-8 .unnumbered}
The axial location of the depletion region was varied by translating the position of the machining focus from the entrance to the exit of the gas-jet along the direction of the pump laser axis. Electron spectrum data were recorded by scanning the location of the depletion region in steps of approximately $0.1$ mm while keeping all other experimental conditions unchanged. Electron beam images on the Lanex screen, obtained on selected shots for the scanned axial locations ($Z_{m}$) of the machining focus in the range of $1.6$ mm to $2.5$ mm, are shown in Fig. \[fig:spectrastack\]. By changing the location of the density depletion, and thereby the subsequent plasma interaction length, the final energy of the accelerated electrons is observed to be tunable. The spectra at different longitudinal locations ($Z_{m}$) of the machining focus are shown in Fig. \[fig:bsdspectrum\]. The relative energy spread ($\Delta{E_{\small{{{F}{W}{H}{M}}}}}/E_{peak}$) of the quasi-monoenergetic peaks is around $3\%$, limited by the electron spectrometer resolution in these few selected shots. Moreover, the peak signal level in the density-gradient injected spectrum is approximately ten times higher than in the case of the self-injected electron spectrum. The improvement in electron beam quality with the machining beam highlights the advantages of controlled injection over uncontrolled self-injection, besides offering the flexibility of tuning the electron energy with a single gas-jet in this particular experimental geometry.
![Experimental quasi-monoenergetic electron spectra with relative energy spread ($\Delta{E_{\small{{{F}{W}{H}{M}}}}}/E_{peak}$) of around $3\%$ obtained by density-gradient injection for three different axial locations (Z$_{m}$) of the machining focus. Z$_{m}=$ (a) $2.5$ mm (b) $2.2$ mm (c) $2.0$ mm from the entrance of the gas-jet. The injected charge is (a) $0.2$ pC (b) $1$ pC (c) $2.2$ pC and the plasma density is approximately $8$ $\times$ ${10}^{18}$ cm$^{-3}$.[]{data-label="fig:bsdspectrum"}](bsd_mach.pdf){width="8.5cm"}
#### {#section-9 .unnumbered}
Quasi-monoenergetic electrons were not detected when the depletion region was placed closer to the entrance, in the first half of the gas-jet, indicating that there is a threshold pump-pulse propagation distance after which the electrons begin to get injected in the longitudinal gradients of the depletion zone. The threshold length for our experimental conditions in the case of density-gradient injection was found to be approximately $1.4$ mm from the entrance of the gas-jet. Initially, the focused pump laser pulse has an intensity ($a_{0} \sim 1.5$) and parameters (spot size $\sim$ $16$ $\mu$m, pulse length $\sim$ $9$ $\mu$m) that are far from the matched plasma-bubble regime of resonant laser wakefield acceleration. For the matched regime at our plasma densities, the pump spot size has to be approximately equal to the bubble radius ($\simeq$ $6$-$10$ $\mu$m). Therefore, a long interaction distance is needed for the laser pulse to be sufficiently compressed transversally and longitudinally in order to drive a nonlinear plasma wave suitable for trapping electrons. Through an interplay of self-focusing, pulse-shortening and self-steepening, the spot size and the temporal duration of the pump laser pulse evolve to reach the matched regime after propagating a certain axial distance from the focal position[@Malka2002Science; @Faure2005PRL; @AGR2007PRL]. Quasi-static WAKE simulations[@Mora1997PoP] reveal that for a pump laser focus location in the range of \[$0-700$\] $\mu$m from the edge of the gas-jet, the initial normalized laser amplitude ($a_{0}$) of $1.5$ increases to a maximum value ($a_{l}$) of about $3.2-3.6$ after a propagation distance of approximately $1\pm0.1$ mm, close to the experimentally observed threshold length. The increased laser amplitude is due to the initial focal spot size and pulse duration compressing to approximately $8~\mu$m and $22$ fs, respectively.
Though the exact values of the final laser parameters can change with the initial focus location, they are approximately close to the matched regime and favour the generation of a plasma bubble that can trap electrons as it traverses the density-depleted region. When the depletion region was placed closer to the exit of the gas-jet, quasi-monoenergetic electrons were observed with lower peak energy than when it was placed at the center. For propagation lengths ($L_{1}$) in the range of $1.4$ mm to $1.8$ mm, the spectrum had greater instability than for lengths greater than $1.8$ mm, presumably due to conditions being close to the matched regime or to the thresholds of electron injection. For the data set obtained with the machining beam, the probability of injection for which a quasi-monoenergetic electron distribution was measured was approximately $50\%$. In the other $50\%$ of cases, no electrons were observed, or those measured exhibited a broad energy distribution with a lower total charge. The shot-to-shot stability could be improved in the future by tuning the delay between the pump and the machining pulse (or the machining laser energy). The probability of injection could also be improved by better control over laser conditions that were perhaps sub-optimal during this particular experiment.
#### {#section-10 .unnumbered}
In Fig. \[fig:energylength\], the final electron energy is plotted as a function of the density-depletion position $L_{1}$. The data points are the mean of accumulated data from multiple shots and the straight line is a fit over the data corresponding to $L_{1}$ in the range $1.9$ mm to $2.45$ mm. Since the axial location of the machining focus ($L_{1}$) determines the plasma interaction length ($L_{2}$) following the depletion region (see Fig. \[fig:schematic:tgtconfig\]), the maximum possible acceleration length ($L_{acc}$) in a $3$ mm gas-jet approximately equals $L_{2} \approx 3-L_{1}$ mm. Note that for our experimental conditions, the net acceleration length is less than the maximum possible value since the density-gradient injection is effective only for $L_{1} \geq $ $1.4$ mm. The linear region in the graph (Fig. \[fig:energylength\]) quantifies the tunability of the electron energy with acceleration length. The final electron energy on average varied from $120$ MeV to $60$ MeV for an acceleration length varying from $1.2$ mm to $0.6$ mm, equivalent to an acceleration gradient of $100$ GeV/m. This value is similar to that measured recently in colliding pulse injection[@Corde2011PRL] but lower than that expected from theory[@Lu2007PRSTAB]. The scaling law predicts an acceleration gradient ($\sim 48{a_{l}}^{1/2}{n_{e}}^{1/2}$) of approximately $192-256$ GeV/m for our parameters, which is greater than the experimental measurement by a factor of $2-2.5$. This is probably due to a decrease of the laser intensity in the second half of the gas jet or the deformation of the bubble resulting from laser pulse evolution, both of which reduce the electron energy gain. The acceleration length ($L_{acc}$) in our case is limited to about $1$ mm. For machining focus locations ($L_{1}$) prior to $1.9$ mm, there appears to be a trend towards saturation of the mean electron energy. In this region, the spectrum is unstable, with larger fluctuations in peak energy compared to the linear region. In some shots, multiple-peaked spectra were observed with energies as high as $140-170$ MeV in the highest-energy peak, whereas in other shots single peaks with much lower energies were observed. This could be due to $a_{0}$ evolution during laser propagation and the possibility of multiple bunch injection on the density-gradient. The energy spectrum data was also analysed by plotting the maximum cut-off energy, and a similar trend was observed with a slightly greater slope in the linear region.
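As a rough arithmetic cross-check, the quoted scaling can be evaluated numerically. The density values below, and the assumption that $n_{e}$ enters the formula in units of $10^{18}\,\mathrm{cm^{-3}}$, are illustrative choices made to reproduce the quoted $192-256$ GeV/m range; they are not measured inputs from the experiment.

```python
import math

# Hedged numerical check of the quoted gradient scaling ~ 48*sqrt(a_l*n_e)
# in GeV/m.  The density values and their assumed units (10^18 cm^-3) are
# illustrative guesses, not values taken from the measurement.
def gradient_gev_per_m(a_l, n_e18):
    return 48.0 * math.sqrt(a_l * n_e18)

low = gradient_gev_per_m(3.2, 5.0)     # lower end of the a_l range, ~192
high = gradient_gev_per_m(3.6, 8.0)    # upper end, ~258
```

Dividing these values by the measured $\sim 100$ GeV/m reproduces the factor of $\sim 2-2.5$ discrepancy quoted above.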
![Variation of the mean energy of the quasi-monoenergetic electrons with the axial location of the machining focus (density-depleted zone) that induces density-gradient injection. Energy was tunable from a maximum of $120$ MeV to $60$ MeV by varying the location of the machining focus in the range of $1.9$ mm to $2.45$ mm from the entrance of the gas-jet. Dots are the mean of accumulated data from multiple shots and error bars are the standard error of the mean.[]{data-label="fig:energylength"}](energyscaling.pdf){width="8.5cm" height="5.5cm"}
#### {#section-11 .unnumbered}
Finally, the variations of electron beam parameters such as relative energy spread, divergence and charge within the peaks of the spectrum were analysed as a function of the axial location of the machining focus (Fig. \[fig:parameters\]). The divergence, injected charge and relative energy spread are relatively constant across the acceleration length with approximate mean values ($\pm$ std. error) of $4 \pm 0.8$ mrad, $6 \pm 1.8$ pC and $15 \pm 3.6\%$ respectively.
![Variation of the divergence, charge and relative energy spread of the quasi-monoenergetic electrons with the axial location of the machining focus. The divergence, injected charge and relative energy spread are relatively constant with approximate mean values ($\pm$ std. error) of $4 \pm 0.8$ mrad, $6 \pm 1.8$ pC and $15 \pm 3.6\%$ respectively. Dots are the mean of accumulated data from multiple shots and error bars are the standard error of the mean.[]{data-label="fig:parameters"}](filename_parameters.pdf){width="8.5cm" height="6.0cm"}
CONCLUSION
==========
In summary, quasi-monoenergetic electrons were generated by using the longitudinal density gradients of a plasma density perturbation to induce controlled injection of electrons into the plasma wakefield. The density perturbation, in the form of a density-depleted plasma channel, was created by a secondary machining beam. The final energy of the accelerated electrons was tuned from a maximum of $120$ MeV to $60$ MeV by varying the axial position of the density perturbation and thereby the injection location and the subsequent plasma interaction length. A threshold plasma length prior to the depletion region was required for density-gradient injection to be effective, whereas the final electron beam parameters such as energy spread, divergence and charge were observed to be independent of the injection location. Controlled injection in a longitudinally inhomogeneous plasma appears to be better than self-injection in a homogeneous plasma in terms of final electron-beam quality for the same experimental conditions, and offers the flexibility of tuning the electron energy with a single gas-jet. In the future, accurate measurements of the density profile in the density-gradient injection scheme will allow for benchmarking numerical simulations to optimize the experimental parameters needed for generating high-quality electron beams.
ACKNOWLEDGMENTS {#acknowledgments .unnumbered}
===============
We thank J.P. Goddet and A. Tafzi for the operation of the laser system. We acknowledge the support of the European Research Council for funding the PARIS ERC project (Contract No. 226424), EC FP7 LASERLAB- EUROPE/LAPTECH (Contract No. 228334) and EU Access to Research Infrastructures Programme Project LASERLAB-EUROPE II.
[10]{}
T. Tajima and J.M. Dawson. Laser electron accelerator. , 43(4):267–270, 1979.
V. Malka, J. Faure, Y.A. Gauduel, E. Lefebvre, A. Rousse, and K.T. Phuoc. Principles and applications of compact laser–plasma accelerators. , 4(6):447–453, 2008.
Y. Takada, N. Nakano, and H. Kuroda. Electron acceleration by laser driven plasma waves in inhomogeneous plasmas. , 45(3):300–302, 1984.
S.V. Bulanov, V.I. Kirsanov, F. Pegoraro, and A.S. Sakharov. Charged particle and photon acceleration by wake field plasma waves in nonuniform plasmas. , 3(6):1078–1087, 1993.
S. Bulanov, N. Naumova, F. Pegoraro, and J. Sakai. Particle injection into the wave acceleration phase due to nonlinear wake wave breaking. , 58(5):5257–5260, 1998.
S.V. Bulanov, F. Pegoraro, A.M. Pukhov, and A.S. Sakharov. Transverse-wake wave breaking. , 78(22):4205–4208, 1997.
G. Fubiani, E. Esarey, C.B. Schroeder, and W.P. Leemans. Improvement of electron beam quality in optical injection schemes using negative plasma density gradients. , 73(2):026402, 2006.
D. Umstadter, J. K. Kim, and E. Dodd. Laser injection of ultrashort electron pulses into wakefield plasma waves. , 76:2073–2076, 1996.
J. Faure, C. Rechatin, A. Norlin, A. Lifschitz, Y. Glinec, and V. Malka. Controlled injection and acceleration of electrons in plasma wakefields by colliding laser pulses. , 444(7120):737–739, 2006.
J. Vieira, S.F. Martins, V.B. Pathak, R.A. Fonseca, W.B. Mori, and L.O. Silva. Magnetic control of particle injection in plasma based accelerators. , 106:225001, 2011.
R. G. Hemker, N. M. Hafz, and M. Uesaka. Computer simulations of a single-laser double-gas-jet wakefield accelerator concept. , 5(4):041301, 2002.
C.G.R. Geddes, K. Nakamura, G.R. Plateau, C. Toth, E. Cormier-Michel, E. Esarey, C.B. Schroeder, J.R. Cary, and W.P. Leemans. Plasma-density-gradient injection of low absolute-momentum-spread electron bunches. , 100(21):215004, 2008.
A.J. Gonsalves, K. Nakamura, C. Lin, D. Panasenko, S. Shiraishi, T. Sokollik, C. Benedetti, C.B. Schroeder, C.G.R Geddes, J. van Tilborg, et al. Tunable laser plasma accelerator based on longitudinal density tailoring. , 2011.
H. Suk, N. Barov, J.B. Rosenzweig, and E. Esarey. Plasma electron trapping and acceleration in a plasma wake field using a density transition. , 86(6):1011–1014, 2001.
H. Suk, H.J. Lee, and I.S. Ko. Generation of high-energy electrons by a femtosecond terawatt laser propagating through a sharp downward density transition. , 21(7):1391–1396, 2004.
K. Koyama, A. Yamazaki, A. Maekawa, M. Uesaka, T. Hosokai, M. Miyashita, S. Masuda, and E. Miura. Laser-plasma electron accelerator for all-optical inverse [C]{}ompton [X]{}-ray source. , 608(1):S51–S53, 2009.
K. Schmid, A. Buck, C.M.S. Sears, J.M Mikhailova, R. Tautz, D. Herrmann, M. Geissler, F. Krausz, and L. Veisz. Density-transition based electron injector for laser driven wakefield accelerators. , 13(9):091301, 2010.
J. Faure, C. Rechatin, O. Lundh, L. Ammoura, and V. Malka. Injection and acceleration of quasimonoenergetic relativistic electron beams using density gradients at the edges of a plasma channel. , 17:083107, 2010.
N. Hafz, H.J. Lee, J.U. Kim, G.H. Kim, H. Suk, and J. Lee. Femtosecond [X]{}-ray generation via the [T]{}homson scattering of a [T]{}erawatt laser from electron bunches produced from the [L]{}[W]{}[F]{}[A]{} utilizing a plasma density transition. , 31(6):1388–1394, 2003.
J.U. Kim, N. Hafz, and H. Suk. Electron trapping and acceleration across a parabolic plasma density profile. , 69(2):026409, 2004.
C.-T. Hsieh, C.-M. Huang, C.-L. Chang, Y.-C. Ho, Y.-S. Chen, J.-Y. Lin, J. Wang, and S.-Y. Chen. Tomography of injection and acceleration of monoenergetic electrons in a laser-wakefield accelerator. , 96:095001, 2006.
M. Pittman, S. Ferr[é]{}, J.P. Rousseau, L. Notebaert, J.P. Chambaret, and G. Ch[é]{}riaux. Design and characterization of a near-diffraction-limited femtosecond 100-[T]{}[W]{} 10-[H]{}z high-intensity laser system. , 74(6):529–535, 2002.
S. Semushin and V. Malka. High density gas jet nozzle design for laser target production. , 72:2961, 2001.
R. Benattar, C. Popovics, and R. Sigel. Polarized light interferometer for laser fusion studies. , 50(12):1583–1586, 1979.
C. G. Durfee and H. M. Milchberg. Light pipe for high intensity laser pulses. , 71:2409–2412, 1993.
P. Volfbeyn, E. Esarey, and W. P. Leemans. . , 6(5):2269–2277, 1999.
Y. Glinec, J. Faure, A. Guemnie-Tafo, V. Malka, H. Monard, J.P. Larbre, V. De Waele, J.L. Marignier, and M. Mostafavi. Absolute calibration for a broad range single shot electron spectrometer. , 77:103301, 2006.
A. Pukhov and J. Meyer-ter Vehn. Laser wake field acceleration: the highly non-linear broken-wave regime. , 74:355–361, 2002.
V. Malka, S. Fritzler, E. Lefebvre, M. M. Aleonard, F. Burgy, J. P. Chambaret, J. F. Chemin, K. Krushelnick, G. Malka, S. P. D. Mangles, Z. Najmudin, M. Pittman, J. P. Rousseau, J. N. Scheurer, B. Walton, and A. E. Dangor. Electron acceleration by a wake field forced by an intense ultrashort laser pulse. , 298(5598):1596–1600, 2002.
S.P.D. Mangles, C.D. Murphy, Z. Najmudin, A.G.R. Thomas, J.L. Collier, A.E. Dangor, E.J. Divall, P.S. Foster, J.G. Gallacher, C.J. Hooker, et al. Monoenergetic beams of relativistic electrons from intense laser–plasma interactions. , 431(7008):535–538, 2004.
J. Faure, Y. Glinec, A. Pukhov, S. Kiselev, S. Gordienko, E. Lefebvre, J.P. Rousseau, F. Burgy, and V. Malka. A laser–plasma accelerator producing monoenergetic electron beams. , 431(7008):541–544, 2004.
W. Lu, M. Tzoufras, C. Joshi, F. S. Tsung, W. B. Mori, J. Vieira, R. A. Fonseca, and L. O. Silva. Generating multi-gev electron bunches using single stage laser wakefield acceleration in a 3[D]{} nonlinear regime. , 10:061301, 2007.
J. Faure, Y. Glinec, J. J. Santos, F. Ewald, J.-P. Rousseau, S. Kiselev, A. Pukhov, T. Hosokai, and V. Malka. Observation of laser-pulse shortening in nonlinear plasma waves. , 95:205003, 2005.
A. G. R. Thomas, Z. Najmudin, S. P. D. Mangles, C. D. Murphy, A. E. Dangor, C. Kamperidis, K. L. Lancaster, W. B. Mori, P. A. Norreys, W. Rozmus, and K. Krushelnick. Effect of laser-focusing conditions on propagation and monoenergetic electron production in laser-wakefield accelerators. , 98:095004, 2007.
P. Mora and Jr. T. M. Antonsen. Kinetic modeling of intense, short laser pulses propagating in tenuous plasmas. , 4(1):217–229, 1997.
S. Corde, K. Ta Phuoc, R. Fitour, J. Faure, A. Tafzi, J. P. Goddet, V. Malka, and A. Rousse. Controlled betatron x-ray radiation from tunable optically injected electrons. , 107:255003, 2011.
---
abstract: 'We study in detail the distribution of mass in the galaxy cluster CL0024+1654 inferred using the method of strong gravitational lensing by Tyson [*et al.*]{} (1998). We show that a linear correlation exists between the total, visible and dark matter distributions on log-log scale, with consistent coefficients. The shape and parameters of the log-log-linear correlation are not affected significantly whether one uses projected or volume mass densities, and are consistent with a visible/dark ratio of $\kappa=2-5$. We also show and analyze in depth the so-called alignment properties of the above-mentioned profiles. We show that the log-log-linear correlation and alignments can all be understood in terms of thermodynamic/hydrodynamic equilibrium with a gravitational potential growing almost linearly in the region of interest. We then analyze the hypothesis of thermal equilibrium on the basis of the existing data about the CL0024 cluster. If the presence of the log-log-linear correlation and alignments were interpreted thermodynamically, this would indicate a dark matter particle mass 2-5 times smaller than that of atomic hydrogen, thus giving a range for the mass of the dark matter particle between 200 MeV and 1000 MeV.'
address: 'Department of Physics, North Carolina State University, Raleigh, North Carolina 27695-8202'
author:
- 'Yuriy Mishchenko and Chueng-Ryong Ji'
title: 'Distribution of mass in galaxy cluster CL0024 and the particle mass of dark matter.'
---
Introduction
============
In the past decade the field of astronomy and astrophysics has experienced a period of rapid development thanks to the operation of the Hubble Space Telescope and the widespread availability of new, faster computers. This development brought a series of brilliant insights into the puzzles of the structure and history of the universe. At this time the growing number of precision measurements of galactic rotation curves and observations of gravitational lensing in galaxies and galaxy clusters, combined with extensive computer simulations, shed new light on the problem of hidden, or dark, matter in the universe and its role in the universe's evolution. Still, little is known about the microscopic properties and composition of the dark matter. Many possibilities have been put forward by various extensions of the Standard Model, but the existence of any such particle is yet to be confirmed experimentally.
Recently it has been pointed out that the distributions of dark and visible matter in spiral galaxies of various luminosities and in galaxy clusters exhibit strikingly similar correlations with almost identical parameters [@ji]. Specifically, it was found that in all of these systems the mass densities of visible and dark matter are linearly correlated on log-log scale. The proportionality coefficient in this correlation appears to be [*universal*]{} and equal to $\kappa\sim 3-5$. This conclusion was drawn from the analysis of the dark and visible matter distributions in the galaxy cluster CL0024 [@tyson] and the synthetic mass model for spiral galaxy rotation curves of Persic and Salucci [@persic].
In this paper we further sharpen our focus on the properties of the mass distribution in the galaxy cluster CL0024. We investigate in detail the presence and properties of the log-log-linear correlations, the alignment properties, and their thermodynamic significance and interpretation.
We begin in the next section (Section \[secII\]) with discussion of the properties of the projected mass density profiles $\Sigma(R)$ derived by Tyson [*et al.*]{} in 1998 and focus on the log-log-linear correlation between them. We then continue with analysis of the volume density profiles $\rho(R)$ inferred from $\Sigma(R)$ via inverse Abel transformation and show that the log-log-linear correlation is present also between the volume density profiles. We comment on the significance of these correlations and their thermodynamic interpretation. Finally, we review the alignment properties of the mass profiles in the galaxy cluster CL0024, mentioned earlier in the literature, and show that they imply exponential behavior for the radial mass density $\Sigma(R)$ in the region of interest. In Section \[secIII\] we present analysis of the thermal state of the matter in the galaxy cluster based on the strong gravitational lensing study carried out by Tyson [*et al.*]{} Summary and conclusions follow in Section \[conclusions\].
Mass distribution in the galaxy cluster CL0024 {#secII}
==============================================
That the mass distribution in galaxies or galaxy clusters can be measured using gravitational lensing was pointed out long ago [@straumann]. Gravitational lensing is one of the consequences of Einstein’s theory of gravity (General Relativity), in which the light from a distant object is bent by a heavy galaxy or galaxy cluster to produce “fake” images of the original object. The amount of this distortion is very tiny, and for an observable effect to be seen the objects’ alignment must be nearly perfect. Still, with billions of galaxies in the sky, a number of gravitational lenses have been discovered in recent years, providing excellent tests of the Theory of General Relativity. One of the most famous examples of gravitational lensing is the object known as Einstein’s cross (or Q2237+030), in which the light from a distant quasar is lensed by a low-redshift galaxy to produce four images in the form of a cross surrounding the galaxy’s nucleus.
Galaxy clusters are among the most interesting objects for gravitational lensing studies because of their unsurpassed mass and large extent. In fact, a precision measurement of gravitational lensing in a galaxy cluster is capable of producing a detailed map of the gravitating mass distribution inside the cluster and thus provides valuable information about its structure. In the last decade many clusters exhibiting gravitational lensing have been found and their “gravitating” mass distributions have been analyzed [@tyson; @wu; @kneib; @abdelsalam; @bezecourt; @white]. It was generally shown that a large portion of the mass in galaxy clusters is not associated with the luminous galaxies and forms a smooth extended distribution. However, such mass maps obtained from lensing have practically never been compared with the distribution of the visible matter inside the corresponding clusters. The study of strong gravitational lensing in the galaxy cluster CL0024 by Tyson [*et al.*]{} in 1998 [@tyson] is distinguished in that detailed maps of both the total and the visible mass in the cluster were constructed and presented.
The galaxy cluster CL0024+1654 is a remarkable instance of strong gravitational lensing in which multiple images of a background galaxy with a distinctive spectrum are formed. In 1998, an analysis of Hubble telescope images of this cluster was carried out by Tyson [*et al.*]{} and the mass profile of the cluster was obtained from strong gravitational lensing. Tyson [*et al.*]{} found that the vast majority of the mass in CL0024 is not associated with the galaxies and forms a smooth elliptical distribution, slightly shallower than an isothermal sphere, with a soft core of $r_{core}=35\pm3 h^{-1} kpc$, where $h$ is the normalized Hubble constant. No evidence of in-falling massive clumps was found for the dark component. The projected dark matter density profile was fit well by a power-law model $$\Sigma(y)=\frac{K(1+\eta y^2)}{(1+y^2)^{2-\eta}},$$ where $y=r/r_{core}$, $K=7900\pm100 h M_\odot pc^{-2}$, $r_{core}=35\pm3 h^{-1}kpc$ and $\eta=0.57\pm0.02$. The primary conclusion was that the cusped mass profile for the dark component, suggested by many-body simulations within the Cold Dark Matter model [@navaro], was inconsistent with the observed results. Along with the total mass distribution, Tyson [*et al.*]{} also presented the radial distribution of the visible matter density and of the visible light density.
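The fitted profile is straightforward to evaluate; the following minimal sketch simply encodes the best-fit central values quoted above and checks that the central surface density equals $K$ while the soft-cored profile decreases outward.

```python
# Evaluation of the quoted power-law fit for the projected dark matter
# density (best-fit central values; y = r/r_core, Sigma in h*M_sun/pc^2).
K, eta = 7900.0, 0.57

def sigma_dark(y):
    return K * (1.0 + eta * y * y) / (1.0 + y * y) ** (2.0 - eta)

central = sigma_dark(0.0)   # central surface density equals K
```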
It was recently noticed that the mass profiles obtained in Ref.[@tyson] possess interesting correlation properties [@ji]. Specifically, if the mass profile of the dark matter and that of the visible matter are plotted one vs. the other on log-log scale, the linear correlation between the two becomes apparent, which implies $$\log \Sigma_v \approx \kappa_{vd} \log \Sigma_d.$$
Remarkably, such behavior of the mass profiles should be expected on thermodynamic grounds. For example, for an isothermal distribution of a self-gravitating gas it is known that the mass profiles satisfy $$\label{equi}
\begin{array}{c}
\rho_v \sim e^{-\frac{\mu_v \Phi(r)}{T}}, \\
\rho_d \sim e^{-\frac{\mu_d \Phi(r)}{T}},
\end{array}$$ where $\mu_i$ is the molar mass of the corresponding component and $\Phi(r)$ is the gravitational potential at position $r$ [@hatsopoulos]. Then, independently of the details of the gas radial distribution, $\log \rho_v \sim \log \rho_d$. With a somewhat more intricate calculation, the same conclusion can be drawn for a self-gravitating gas that is only in hydrodynamic, and not in full thermal, equilibrium. Consider a multi-component gas cloud in hydrodynamic equilibrium; then for the density of each component we can write $$\label{dynamicequi}
\frac{d p_i(r)}{d r}+\rho_i(r) \frac{d \Phi(r)}{dr}=0,$$ where $p_i(r)=\rho_i(r) T_i(r) / \mu_i$ is the partial pressure for component $i$. After a simple manipulation we obtain $$\label{diff}
\frac{T_i(r)}{\mu_i} \frac{d \log \rho_i(r)}{dr}+\frac 1\mu_i\frac{dT_i(r)}{dr}+\frac{d\Phi(r)}{dr}=0$$ and $$\log\rho_i(r)=C-\log T_i(r)-\mu_i \int \frac{d\Phi(r)} {T_i(r)}.$$ This tells us that, up to the slowly varying $\log T_i$ term, $\log \rho_v \sim \log \rho_d \sim \int \frac 1T d\Phi$ in case the components have locally similar or equal temperatures. Thus, the above-mentioned log-log-linear correlations may contain important information about the microscopic properties of the dark matter. In this paper we would like to investigate in more detail the properties of the mass profiles in the galaxy cluster CL0024. Since in cosmology huge distances and small heat transfer rates seem to interfere with the processes of thermal equilibration, thus reducing the likelihood of thermal equilibrium, such a study would most certainly be beneficial.
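A minimal numerical sketch of this argument (all units and parameter values hypothetical): two isothermal components sharing one gravitational potential have log-densities proportional to their particle masses, so a log-log plot of one density against the other is a straight line with slope $\mu_v/\mu_d$.

```python
import numpy as np

# Two isothermal components with particle-mass ratio mu_v/mu_d = 3.6
# sharing one gravitational potential (illustrative, dimensionless units).
mu_d, mu_v, T = 1.0, 3.6, 1.0
r = np.linspace(0.1, 3.0, 200)
phi = r                         # nearly linear potential, as inferred below

rho_d = np.exp(-mu_d * phi / T)
rho_v = np.exp(-mu_v * phi / T)

# Slope of log(rho_v) vs log(rho_d) recovers kappa_vd = mu_v/mu_d.
kappa_vd = np.polyfit(np.log(rho_d), np.log(rho_v), 1)[0]
```

Note that the slope depends only on the mass ratio, not on the shape of $\Phi(r)$, which is the point of the derivation above.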
In Ref.[@tyson] the projected radial density profiles for the total mass and the visible matter were originally presented on log-log scale, and the mass profile for the dark matter could be straightforwardly deduced from these. The total mass and the dark matter form smooth density distributions monotonically decreasing with distance. Similar behavior is observed for the visible matter except for a flat segment at a distance of about $100 h^{-1}kpc$ \[see Fig.(\[fig-1\])\]. As can be seen from Fig.(\[fig-1\]), at this distance $\Sigma_v(R)$ is approximately constant for about $35 h^{-1}kpc$ while all the other profiles continue to fall. While we do not know the origin of this peculiar anomaly, we note that outside of this flat region the mass profile for visible matter has essentially the same behavior. In fact, if we cut out this flat segment and shift the tail of the visible matter distribution to make a continuous curve, we observe that the obtained profile is almost a perfect straight line on the $\log \Sigma - R$ scale \[data shown with triangles in Fig.(\[fig-1\])\]. Upon inspection, one notices that on the $\log \Sigma-R$ scale all profiles in fact fall linearly with distance, which implies the behavior $\Sigma_i (R) \sim e^{-a_i R}$.
As was already mentioned, the mass distributions of dark and visible matter in CL0024 exhibit a linear correlation on log-log scale. A similar property can be observed for other pairs of mass profiles as well. For example, if the log of the total mass is plotted vs. the log of the dark mass, the linear correlation becomes very prominent. The same is true for the total and the visible matter mass densities. In this latter case the anomalous flat segment becomes more prominent; however, if the visible matter mass profile is corrected for this region, the points on log-log scale form almost a perfect line \[data shown with triangles in the diagram in Fig.(\[fig-2\])\]. The relations between different mass profiles can be further analyzed by looking at different segments of the diagrams in Fig.(\[fig-2\]) and Fig.(\[fig-3\]). We found that the correlation coefficients tend to decrease somewhat as one moves away from the center of the cluster. We obtained the following correlation coefficients, where the error estimate reflects their variation with distance: $\kappa_{vt} \sim 2.45^{ + 0.1 }_{- 0.45}$, $\kappa_{td} \sim 1.5^{+0.35}_{-0.35}$, $\kappa_{vd} \sim 3.6^{+0.8}_{-0.65}$, and $$\label{correlation}
\log \Sigma_i \approx a_{ij} + \kappa_{ij} \log \Sigma_j.$$ We also note that all three coefficients are consistent with each other and imply a dark matter concentration by mass of about 80%. For example, from the thermodynamic interpretation $\kappa_{ij}\approx ( \mu_i /\mu_j)(T_j / T_i) $ we would expect $\kappa_{vt}\kappa_{td}\approx \kappa_{vd}$, and this is indeed the case. Furthermore, we can write for the total mass density $$M(r) = m_d(r) + m_v(r) \approx a_d e^{-\mu_d \Phi (r)/ T} + a_v e^{-\mu_v \Phi(r)/T},$$ where $m_{d (v)}$ is the density of dark (visible) matter and $\mu_{d (v)}$ is the molar mass of dark (visible) matter. Then, for the variation $\delta M(r)$ we obtain $$\begin{array}{c}
\delta M(r)\approx m_d(r) (-\mu_d \delta\Phi / T) + m_v(r) (-\mu_v \delta\Phi / T) \\
\approx - M(r) \large( \frac{m_d(r)}{M(r)} \mu_d + \frac{m_v(r)}{M(r)} \mu_v \large) \delta\Phi / T \\
\approx M(r) (-\mu_{tot} \delta\Phi / T),
\end{array}$$ where $$\label{totmolar}
\mu_{tot}=\frac{m_d(r)}{M(r)} \mu_d + \frac{m_v(r)}{M(r)} \mu_v$$ is the effective total molar mass. By dividing Eq.(\[totmolar\]) by $\mu_v$ or $\mu_d$ and taking into account that, for example, $\mu_{tot}/\mu_d = \kappa_{td}$, one can immediately verify that all of the above-mentioned coefficients $\kappa$ satisfy Eq.(\[totmolar\]) with a dark matter concentration by mass of $m_d/M\approx 0.8$, consistent with direct observation from the data by Tyson [*et al.*]{} \[see Fig.(\[fig-X\])\].
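The consistency of the three coefficients can be checked directly from Eq.(\[totmolar\]); the sketch below takes the fitted $\kappa_{vd}$ and the $80\%$ dark matter mass fraction as inputs, and the other two coefficients follow to within the quoted error bars.

```python
# Consistency check of the fitted correlation coefficients with
# Eq. (totmolar), taking kappa_ij ~ mu_i/mu_j at equal temperatures.
kappa_vd = 3.6              # fitted visible-to-dark coefficient
f_d = 0.8                   # dark matter mass fraction m_d/M

kappa_td = f_d + (1.0 - f_d) * kappa_vd   # mu_tot/mu_d -> 1.52 (fit: 1.5)
kappa_vt = kappa_vd / kappa_td            # mu_v/mu_tot -> 2.37 (fit: 2.45)
```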
The existence of the log-log-linear correlation in CL0024 becomes even more striking once one realizes that a similar relation exists in spiral galaxies. In spiral galaxies the presence of dark matter reveals itself through the anomalous behavior of the Rotation Curves (RC), which at large distances do not fall as $1/\sqrt{R}$, according to Kepler’s law, but remain roughly flat. Such behavior has been analyzed in the literature and fit by the synthetic model of Persic and Salucci [@persic], in which a number of RC were categorized by galaxy luminosity and reproduced with superior accuracy by a simple analytic model (the Universal RC). In this model the distribution of visible matter in the galaxy is described by an exponential thin disk with surface density $$I_v(r) \sim e^{-3.2 r/R_{opt}},$$ which is known to fit the surface brightness of galaxy disks well, and the dark matter is represented by a spherical flat-core distribution. We noticed that the URC for all luminosities can also be successfully described by a spherical dark halo with density $$\rho_d (r) \sim e^{-a_d r/R_{opt}},$$ in which case $\log I_v(r)$ and $\log \rho_d(r)$ are linearly related with $\log I_v(R) \approx \kappa_{vd} \log\rho_d(R)$. Surprisingly, the coefficient of this relation varies very insignificantly with galaxy luminosity and corresponds to $\kappa_{vd}^{spiral} \sim 2.6-3.8$. This is consistent with $\kappa_{vd}$ obtained from the analysis of the log-log-linear correlation between the mass profiles in the galaxy cluster CL0024. Considering that such a log-log-linear correlation is, in fact, expected from thermo- or hydrodynamic equilibrium, the above observation is most indicative of the presence of a sort of equilibrium between the dark and the visible components in spiral galaxies and galaxy clusters.
Let us now concentrate on the volume densities of visible and dark matter in CL0024. Note that the log-log-linear correlation observed above was obtained for the projected mass densities $\Sigma_i(R)$, which are the quantities actually measured in gravitational lensing. At the same time $\rho_i(R)$, appearing in Eq.(\[equi\]), is the volume mass density, related to $\Sigma_i(R)$ via the relation $$\label{Abel}
\Sigma(r,\theta)=\int dz \rho (r,\theta,z).$$ The projected mass density in general does not uniquely specify the volume density. Only with additional constraints, e.g. spherical symmetry, can Eq.(\[Abel\]) be inverted; in the spherically symmetric case this procedure is known as the inverse Abel transformation [@Abel]. As is shown elsewhere, if the mass distribution $\rho(R)$ decreases fast enough, the projected mass density can be approximately represented as $$\label{projected}
\Sigma (r) \approx h(r) \rho(r),$$ where $h(r)$ is a slowly varying ($h(r) \sim r^\alpha$) effective depth and $\rho(r)$ is the density maximum along the line of integration in Eq.(\[projected\]) [@ji_new]. The correlation between the $\log \Sigma_i(R)$ then implies $$\begin{array}{rl}
\log \Sigma_i(r) &\approx \alpha_i \log r + \log \rho_i(r) \sim \\
&\kappa_{ij} \log \Sigma_j(r) \approx \kappa_{ij} (\alpha_j \log r + \log \rho_j(r))
\end{array}$$ and, if $ \log \rho(r) $ varies faster than $\log r$, we get the linear correspondence $$\log \rho_i(r) \approx \kappa_{ij} \log \rho_j(r).$$ Since we have observed before that the mass profiles $\Sigma_i(R)$ in the galaxy cluster CL0024 decrease exponentially with distance, we expect that a linear correlation will be present between the $\rho$’s as well. We can estimate the error introduced by using the projected mass density $\Sigma(R)$ in place of the volume density $\rho(R)$ by directly inverting the experimental profiles $\Sigma(R)$.
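A sketch of the spherically symmetric inverse Abel transformation used here, with the endpoint singularity removed by the substitution $R=r\cosh t$. The exponential test profile is an illustrative assumption; for $\Sigma(R)=e^{-R}$ the exact inverse is $\rho(r)=K_0(r)/\pi$, with $K_0$ the modified Bessel function.

```python
import numpy as np

def inverse_abel(sigma, r, t_max=20.0, n=4000):
    """Spherically symmetric inverse Abel transform,
       rho(r) = -(1/pi) * Int_r^inf Sigma'(R) dR / sqrt(R^2 - r^2),
       with R = r*cosh(t) absorbing the integrable singularity at R = r."""
    t = np.linspace(0.0, t_max, n)
    R = r * np.cosh(t)
    eps = 1e-6
    # centered finite-difference estimate of Sigma'(R)
    dsigma = (sigma(R + eps) - sigma(R - eps)) / (2.0 * eps)
    integrand = -dsigma / np.pi
    # trapezoid rule in t
    return float(np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(t)))

# Exponential test profile: exact answer at r = 1 is K0(1)/pi ~ 0.134016.
rho = inverse_abel(lambda R: np.exp(-R), 1.0)
```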
Unfortunately, the inverse Abel transformation requires knowledge of $\Sigma(R)$ for arbitrarily large $R$, while experimentally we only know a limited segment $R\leq R_{max}\approx 120 h^{-1}kpc$ for the visible and dark components and $R\leq R_{max}\approx 220 h^{-1}kpc$ for the total mass profile. A certain strategy must be used to extrapolate the experimental profiles to $R>R_{max}$. In this study we adopt three main strategies: cutting the integration in the inverse Abel transformation at $R<R_{max}$, thus taking $\Sigma(R>R_{max})=0$; fitting the profiles with a sum of exponentials $\Sigma(R)=\sum a_i e^{-R/R_i}$; or fitting the profiles with a sum of power-law functions $\Sigma(R)=\sum a_i (1+R/R_i)^{b_i}$. As should be expected, the radial profiles obtained in every approach are similar for small $R$; however, at larger distances a significant model-dependence becomes increasingly apparent. In fact, the obtained mass profiles are model-independent only for the central region of the galaxy cluster, out to about $R_{max}/2$ \[see Fig.(\[fig-4\])\]. At the same time the central part of these profiles is most sensitive to the noise in $\Sigma(R)$, so that we inevitably obtain large errors and noise in $\rho(R)$ for practically all values of $R$ \[see Fig.(\[fig-5\])\].
While the data points after the inverse Abel transformation are noisier, the linear correlation can still be seen on the log-log scale between any two mass profiles \[see Fig.(\[fig-6\])\]. The parameters of this correlation are somewhat dependent on the way the original $\Sigma$-distributions were extrapolated, but such variation is well within acceptable bounds. Taking this variation into account, for the parameters of the log-log-linear correlation between the volume density profiles we obtain $\kappa_{vt}\sim 1.2-2.1$, $\kappa_{td}\sim 1.0-2.0$ and $\kappa_{vd}\sim 2.1-3.4$. As we can see, the correlation coefficients are close to, and generally somewhat smaller than, those derived from the projected mass densities; the average error introduced by using the projected mass density $\Sigma(R)$ in place of the volume mass density $\rho(R)$ is of the order of 30%.
To understand the origin and magnitude of this error, let us consider the effective depth $h(R)$ defined by $\Sigma(R)=h(R) \rho(R)$. Then $$\log \Sigma(R) = \log h(R) + \log \rho(R),$$ and if $\rho(R)$ falls off rapidly with $R$, one expects $\log h(R)$ to vary relatively slowly with distance, so that a linear correlation between the $\log \Sigma(R)$ profiles implies a linear correlation between the $\log \rho(R)$ profiles. Nonetheless, the variation of $\log h(R)$ affects the estimate for $\kappa$ and introduces an error of the order of $\Delta\log h(R) / \Delta \log \rho(R)$. For example, for the dark matter profile in the region of interest $\Delta \log h\approx 0.5$ and $\Delta \log \rho\approx 1.5$. As expected, the bias this introduces in our estimate of $\kappa_{vd}$ is of the order of $\Delta \log h / \Delta \log \rho \approx 30\%$.
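The quoted 30% figure is simple arithmetic; as a quick check (the two $\Delta\log$ values are taken from the text above):

```python
# Fitting log Sigma_i vs log Sigma_j instead of log rho_i vs log rho_j shifts the
# slope kappa by roughly d(log h)/d(log rho), since log Sigma = log h + log rho.
d_log_h = 0.5     # variation of log h(R) over the region of interest (dark matter)
d_log_rho = 1.5   # variation of log rho(R) over the same region
bias = d_log_h / d_log_rho
assert abs(bias - 1 / 3) < 1e-12   # ~30%, matching the quoted error
```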
Let us now turn our attention to the alignment properties of the mass profiles in CL0024. In Ref.[@ji] it was noticed that all mass profiles can be aligned very well by either log-scaling \[$\log \Sigma_i(r) \sim \kappa_{ij} \log \Sigma_j(r)$\] or distance-scaling \[$\log \Sigma_i(r) \sim \log \Sigma_j (r e^{-z_{ij}})$\], as can be seen from Fig.(\[fig-7\]). For the parameters of such alignment we find that the dark matter profile can be aligned with the total mass profile by rescaling with $\kappa_{td} \sim 1.25-1.45$, the visible mass profile can be aligned with the total mass profile with $\kappa_{vt} \sim 2.0-2.5$, and the visible mass density profile can be aligned with that of the dark matter with $\kappa_{vd} \sim 2.5-4$. These values are similar to those obtained from the log-log-linear correlation. In fact, it becomes obvious that log-scaling alignment is directly related to the existence of the log-log-linear correlation and vice versa.
Similarly, all profiles can be aligned by distance-rescaling $\Sigma(r)\rightarrow \Sigma(r\cdot e^{-z})$. We found the best fit parameters to be $z_{vd}\approx -0.15$ to $-0.25$ for the visible-to-dark alignment, $z_{vt}\approx 0.45$ to $0.5$ for the visible-to-total alignment and $z_{dt}\approx 0.6$ to $0.9$ for the dark-to-total alignment. The implication of these alignment properties is less obvious; however, it can be shown that together with log-scaling they imply $$\log \Sigma_i (r) = -a 10^{\gamma ( \log(r) - z_i) } + b_i=-a \left(\frac r{r_i}\right)^\gamma + b_i.$$ From the parameters $\kappa$ and $z$ listed above we find $\gamma \approx 0.8-1.0$. This conclusion is consistent with our earlier observation that the projected mass densities in the galaxy cluster CL0024 fall off exponentially with distance: $$\Sigma_i(r) = B_i e^{-a (r/r_i)^\gamma}, \quad \gamma \approx 1.$$ If this result is to be interpreted thermodynamically, $\log \rho(R) \sim \Phi(R)$, it would imply that the gravitational potential rises almost linearly in the region of interest. Let us note that the same alignment properties are observed for the volume mass densities as well. All three volume density profiles can be aligned very well with the above-mentioned scaling transformations \[see Fig.(\[fig-8\])\]. The parameters of these alignments are $\kappa_{td}\approx 1.5-1.8$, $\kappa_{vt}\approx 1.7-1.9$, $\kappa_{vd}\approx 2.8-3.4$ and $z_{vd}\approx -0.35$ to $-0.5$, $z_{vt}\approx 0.4$ to $0.7$ and $z_{dt}\approx 0.6$ to $0.8$, similar to those obtained from $\Sigma(R)$.
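The step from the two scaling symmetries to this functional form can be made explicit in a few lines; the following sketch uses our own notation, and the sign of the exponent depends on the convention chosen for $z_{ij}$:

```latex
% Write u = \log r and g_j(u) = \log\Sigma_j(r) - b.  Log-scaling and
% distance-scaling of the same pair of profiles give
%   g_i(u) = \kappa_{ij}\, g_j(u)   and   g_i(u) = g_j(u + z_{ij}),
% hence the functional equation
\[
  g_j(u + z_{ij}) = \kappa_{ij}\, g_j(u),
\]
% whose solutions are pure exponentials in u:
\[
  g_j(u) = -a\,10^{\gamma u}, \qquad
  10^{\gamma z_{ij}} = \kappa_{ij}
  \;\Longrightarrow\;
  \gamma = \frac{\log_{10}\kappa_{ij}}{z_{ij}}.
\]
% With \kappa_{vt} \approx 2.0\text{--}2.5 and z_{vt} \approx 0.45\text{--}0.5
% this gives \gamma \approx 0.6\text{--}0.9, consistent with the quoted range.
```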
Mass profiles and the thermal state of matter {#secIII}
=============================================
As was mentioned above, the log-log-linear correlation between mass profiles of the galaxy cluster CL0024 is a very remarkable property, especially given that a similar observation holds for a variety of spiral galaxies and that such correlation is actually expected in case of thermodynamic or hydrodynamic equilibrium. The most tempting interpretation of such correlation is through the thermo- (hydro-) dynamic equilibrium, in which case the $\kappa_{vd}\approx 3.0$ should be related to the ratio of the molar masses for visible and dark matter [@ji]. This assertion, however, is difficult to understand theoretically from what is known or assumed today about the properties of the dark matter [@ji_new]. While there exist no direct data indicative of the thermal state of the dark matter, the commonly accepted notion is that the dark matter is extremely weakly interacting and, thus, may well be thermally isolated from the visible matter. Here we would like to see if one can obtain any information about the thermal state of the matter in galaxy cluster CL0024 from the actual mass profiles provided by Tyson [*et al.*]{}.
According to Eq.(\[equi\]) and Eq.(\[dynamicequi\]), the temperature of matter may be estimated from $\rho(R)$ using $$\label{bound}
\log\rho(R)-\log \rho(0)= \frac \mu T ( \Phi(0)-\Phi(R) )$$ or from solving the differential Eq.(\[diff\]) for $T(R)$ with $\rho(R)$ known. In the latter approach one needs to overcome the difficulty that the result of integration in Eq.(\[diff\]) depends on an unknown integration constant, for example $T(R_{max})$. This problem may be partially avoided by integrating Eq.(\[diff\]) from large distances toward smaller $R$, to reduce the dependence on the unknown boundary condition $T(R_{max})$, or by using a $T(R_{max})$ close to that obtained from Eq.(\[bound\]). In either case it can be observed that in the region of interest the dependence on the boundary condition $T(R_{max})$ is weak.
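The inward-integration strategy can be sketched schematically. Since Eq.(\[diff\]) itself is stated earlier in the text, the sketch below assumes, for illustration only, the hydrostatic form $\frac{d}{dR}\big(\rho T/\mu\big) = -\rho\,\Phi'(R)$; the function and variable names are ours:

```python
import numpy as np

def temperature_profile(R, rho, dPhi_dR, T_max):
    """T(R)/mu by integrating d(rho*T/mu)/dR = -rho*dPhi/dR inward from R_max,
    given the boundary value T(R_max)/mu = T_max."""
    P = np.zeros_like(R)                        # P = rho * T / mu
    P[-1] = rho[-1] * T_max
    for i in range(len(R) - 2, -1, -1):         # integrate from R_max toward the centre
        dR = R[i + 1] - R[i]
        P[i] = P[i + 1] + 0.5 * (rho[i] * dPhi_dR[i] + rho[i + 1] * dPhi_dR[i + 1]) * dR
    return P / rho

# toy check: rho ~ exp(-R) with a linearly rising potential gives T ~ const inward
R = np.linspace(1.0, 10.0, 500)
rho = np.exp(-R)
T1 = temperature_profile(R, rho, np.ones_like(R), T_max=0.5)
T2 = temperature_profile(R, rho, np.ones_like(R), T_max=2.0)
# dependence on the boundary value T(R_max) is strongly suppressed at small R
assert abs(T1[0] - T2[0]) < abs(T1[-1] - T2[-1]) / 100
```

The assertion illustrates the point made in the text: the unknown boundary value $T(R_{max})$ contributes a term $\propto \rho(R_{max})/\rho(R)$, which is negligible wherever the density has grown by a few e-folds.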
If the above idea is applied to the galaxy cluster CL0024 with the assumption $\rho(R) \sim \Sigma (R)$, as above, the temperature extracted in this way indicates that the dark matter has a smooth temperature profile that cools outward, while the visible matter is mostly isothermal, with the temperature somewhat decreasing in the center of the galaxy cluster. The estimate for the temperature of the visible matter, $T_v\approx 5\cdot 10^8 K$, and its decrease toward the center of the cluster are consistent with our understanding of X-ray clusters and the effects of radiative cooling in their centers [@sarazin]. Results obtained in this way, however, are only qualitatively reliable.
Since $\log \rho(R) \approx -\log h(R) + \log \Sigma(R)$, when the temperature profile $T(R)$ is almost isothermal even the small correction $\log h(R)$ may be significant. Generally, we expect the “effective” depth $\log h(R)$ to raise the temperature inferred from the distribution $\Sigma(R)$ in the center of the cluster relative to the temperature associated with $\rho(R)$. Obviously, this may have a dramatic effect on our results, for example making the temperature profile of the dark matter isothermal or breaking the isothermality of the visible matter.
Thus, we do need to apply the above approach directly to the volume density profiles $\rho(R)$. As we have noted before, the data points after this transformation are significantly noisier. This problem is even more serious here than in the case of the log-log-linear correlations, since the resulting $T(R)$-profile is particularly sensitive to the noise in $\rho(R)$. In fact, the volume density profile obtained with $\Sigma(R>R_{max})=0$ is so noisy that hardly any conclusion can be drawn from it at all. As can be seen in Fig.(\[fig-1a\]), the only conclusion we can draw is that the temperature of the visible matter is definitely decreasing toward the center of the cluster and that the “effective” temperature obtained from the total distribution is decreasing in the outer regions of the galaxy cluster.
Better results can be obtained when the mass profiles $\Sigma(R)$ are smoothed and extrapolated with a linear combination of exponential or power-law functions. For both fits we obtain the similar result that the visible matter cools significantly in the center of the cluster, consistent with the strong radiative cooling in X-ray clusters mentioned above. At the same time, for the dark matter we obtain a distribution close to “isothermal”, which typically cools very slightly in the center. It becomes obvious, therefore, that the local thermal equilibrium between dark and visible matter is significantly broken, at least in the center of the galaxy cluster. Unfortunately, while these qualitative conclusions are similar for both the exponential and power-law extrapolations, the quantitative details differ dramatically between the two, especially for the outer regions of the cluster. The integration of Eq.(\[diff\]) using the exponential extrapolation produces temperature distributions for the dark and visible components with an almost constant ratio $\kappa_{vd} \approx 1.8$. This ratio becomes larger toward the center of the galaxy cluster as the visible matter cools rapidly and the dark matter fails to follow suit. The result of the analysis in this case thus supports the hypothesis of overall thermal equilibrium. It implies that, although the ratios $T/\mu$ for visible and dark matter vary with distance, the ratio $\kappa_{vd}=(T_d/T_v)( \mu_v/\mu_d)$ remains constant over a wide range of values of $R$. The thermal equilibrium is broken only in the central part of the cluster, where the rapid radiative cooling of the visible component is essential.
However, the conclusion obtained using the power-law extrapolation is exactly the opposite. In this approach we find that at large distances the ratios $T/\mu$ for the visible and the dark matter are practically the same, with $\kappa_{vd} \approx 1$. This would be consistent with our understanding of cluster formation. According to our current understanding, the primary heat source in galaxy clusters is gravitational heating. As the originally cold gas collapses to form the galaxy cluster, the gas particles acquire kinetic energy $\Delta E_i \approx \mu_i \Delta \Phi$, where $\Delta \Phi$ is the change in gravitational potential during the collapse. Later this kinetic energy is transferred to the thermal energy of the matter inside the cluster, so that $T_i \approx \mu_i \Delta\Phi$ and thus $T_i /\mu_i \approx \Delta \Phi$ is the same for all kinds of particles in the gas cloud. We therefore expect the mass distributions of the different components to be similar and $\kappa_{vd}\approx 1$, consistent with the result obtained with the power-law extrapolation. $\kappa$ differs from 1 only in the center of the cluster, where the visible distribution cools rapidly while the dark matter distribution remains almost isothermal. The temperature of the visible matter in both approaches comes out to $T_v\approx 10^8 K$.
We must conclude, therefore, that the data on the distribution of mass in CL0024 are sufficient neither to support nor to reject the hypothesis of thermal equilibrium. Only general conclusions can be drawn from this analysis. In particular, the temperature extracted from the mass profile of the visible matter is of the order of $5\cdot 10^7-1.5\cdot 10^8 K$ and is consistent with X-ray data on galaxy clusters. The temperature of the visible component drops significantly in the center of the cluster and also falls slowly at large distances. The dark matter distribution appears to be close to isothermal, with indications of slight cooling in the center. We believe that such behavior may be indicative of a nonvanishing thermal connection between the dark and the visible components in the cluster. Still, the data are too uncertain to make this assertion any more definite.
The assumption of local thermal equilibrium is most certainly violated, at least in the central region, where the effect of radiative cooling of the visible component is the strongest. In the other parts of the galaxy cluster, however, we can neither confirm nor reject the hypothesis of thermal equilibrium, as the conclusion depends strongly on the way the available data are extrapolated to large values of $R$.
Conclusion {#conclusions}
==========
In this work we extend the analysis of the visible matter, dark matter and total projected mass profiles derived for the galaxy cluster CL0024 from strong gravitational lensing by Tyson [*et al.*]{}. We observe that a linear correlation exists between each pair of mass profiles on the log-log scale. This linear correlation is preserved whether one uses the projected mass profiles \[$\Sigma(R)$\] or the volume density profiles \[$\rho(R)$\]. The difference between the coefficients of the log-log-linear correlations obtained for $\Sigma(R)$ and $\rho(R)$ is small, of the order of 30%. We obtained the following values for the parameters of the log-log-linear correlation between the dark and visible mass distribution profiles: $$\begin{array}{ll}
\kappa_{vd} \sim 2.9-4.4 & \mbox{(from projected mass profiles $\Sigma$)}, \\
\kappa_{vd}\sim 2.1-3.4 & \mbox{(from volume density profiles $\rho$)}, \\
\kappa_{vd}\sim 2.2-4.0 & \mbox{(from alignment properties for $\Sigma$)}, \\
\kappa_{vd}\sim 2.8-3.3 & \mbox{(from alignment properties for $\rho$).}
\end{array}$$ The correlation coefficients for the other pairs of mass profiles are also listed in this paper. We also analyzed in detail the alignment properties of the mass profiles in the galaxy cluster. We found that the log-scaling and the distance-scaling, mentioned earlier in the literature, imply (and are consistent with) exponential behavior of the mass profiles in the region of interest: $$\rho \sim e^{-a (r/r_0)^\gamma}, \quad \gamma \approx 0.8-1.0.$$
These properties of the mass profiles are striking, especially since similar correlations are observed in spiral galaxies with $\kappa_{vd}\sim 2.6-3.8$. Given the huge differences between the systems in which such correlations are observed, it appears that they cannot be accidental but must be related to actual properties of dark and visible matter. Interestingly, such behavior of the mass profiles in a self-gravitating system is expected in the case of thermo- or hydrodynamic equilibrium. Whenever the thermal states of the components in such a system are similar, one expects $$\log \rho_v \sim \log \rho_d.$$ It is therefore tempting to interpret the observed correlations as a sign of equilibrium between the dark and the visible components in the galaxy cluster, in which case the ratio $\kappa_{vd}$ corresponds to the molar mass ratio of visible and dark matter $$\frac{\mu_v}{\mu_d} \approx 2.1 - 4.4.$$ This suggests a mass of the dark matter particle between $\mu_d \approx 200-1000\ MeV$. To our knowledge, there are no candidates within this mass range in the current extensions of the Standard Model. Massive neutrinos and axions have experimental mass limits of $<25$ MeV for the heaviest $\tau$-neutrino [@hu], which is well below our range. The most favorable candidates for non-baryonic dark matter, such as the Weakly Interacting Massive Particle (WIMP) with a mass anywhere between 10 GeV and 1 TeV and the lightest SUSY particle, [*e.g.*]{} the neutralino, with a mass above 30 GeV [@khalil], are far above the range obtained in our study. In fact, the closeness of $\mu_d$ to the QCD energy scale $\Lambda_{QCD}$ may indicate that dark matter is ultimately related to QCD phenomena, such as quark and gluon condensation. Whether this is true and what kind of connection may exist between the dark matter and QCD is an interesting topic for further discussion.
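The step from the molar-mass ratio to the quoted particle-mass range is elementary arithmetic. The sketch below assumes (our assumption, not stated in the text) a visible mean particle mass between one and two proton masses, roughly spanning hydrogen/helium mixtures:

```python
# Illustrative arithmetic behind the quoted dark-matter particle mass range.
m_p = 938.3                          # proton mass, MeV
kappa_lo, kappa_hi = 2.1, 4.4        # mu_v / mu_d from the correlations above
mu_v_lo, mu_v_hi = m_p, 2 * m_p      # assumed span of the visible mean particle mass
mu_d_min = mu_v_lo / kappa_hi        # smallest implied dark particle mass
mu_d_max = mu_v_hi / kappa_lo        # largest implied dark particle mass
assert 200 < mu_d_min < mu_d_max < 1000   # of order 200-1000 MeV, near Lambda_QCD
```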
The concept of thermo- (hydro-) dynamic equilibrium between dark and visible matter encounters significant theoretical difficulties in identifying a possible source of equilibrium between the, supposedly, extremely weakly interacting dark and visible components. For that reason we further analyzed what kind of information about the thermal properties of the matter in the galaxy cluster CL0024 can be extracted from the available measurements. In our analysis we found that for the visible matter the average temperature is of the order of $T_v \approx 10^8 K$, consistent with what we know about the temperature of the intergalactic gas in X-ray clusters. We also found that the temperature of the visible matter drops significantly in the center of the cluster, also consistent with what we know about radiative cooling in X-ray clusters. For the dark matter we obtained an almost isothermal temperature profile with signs of slight cooling in the center of the cluster. We believe that such behavior may indicate the existence of a thermal connection between dark and visible matter. At the same time we found that the available data are insufficient to either confirm or reject the hypothesis of thermal equilibrium, as the conclusion depends strongly on the way the data are smoothed and extrapolated at large distances. While in the central part of the cluster the equilibrium is most certainly broken by the rapid radiative cooling of the visible component, in the other parts of the galaxy cluster one can find signs consistent either with full thermal equilibrium or with full thermal isolation between the dark and the visible components, depending on the way the projected mass density profiles are extrapolated.
We must conclude therefore that no reliable quantitative conclusion about the thermal state of matter in the galaxy cluster CL0024 can be reached on the basis of known mass density profiles.
The observed similarities in the relative behavior of dark and visible matter in spiral galaxies and in the galaxy cluster CL0024 are very puzzling and deserve further investigation. While the precision of individual mass distribution measurements in galaxy clusters may be insufficient to reach definite conclusions about the origin of such behavior, a combined analysis of several such studies may be helpful. One needs to see whether the log-log-linear correlation can be found in other galaxy clusters and whether its coefficients have values similar to those we have observed here.
[99]{} Y. Mishchenko and C.-R. Ji, Phys. Rev. D [**68**]{}, 063503 (2003); astro-ph/0301503. J. A. Tyson [*et al.*]{}, astro-ph/9801193. M. Persic, P. Salucci, F. Stel, MNRAS [**281**]{}, 27 (1996). N. Straumann, astro-ph/0108255. X.-P. Wu, T. Chiueh, L.-Z. Fang and Y.-J. Xue, astro-ph/9808179. J.-P. Kneib, astro-ph/0009385. H. M. AbdelSalam, P. Saha, astro-ph/9806244. J. Bezecourt [*et al.*]{}, astro-ph/0001513. M. White, L. Hernquist, V. Springel, astro-ph/0107023. J. Navarro, C. Frenk, S. White, ApJ [**462**]{}, 563 (1996). G. N. Hatsopoulos, J. H. Keenan, [*Principles of General Thermodynamics*]{}, Wiley, pp. 513-515, 1965. J. Binney and S. Tremaine, [*Galactic Dynamics*]{}, Princeton, NJ: Princeton University Press, p. 651, 1987. Y. Mishchenko and C.-R. Ji, in preparation. C. Sarazin, [*X-ray Emission from Clusters of Galaxies*]{}, Cambridge: Cambridge University Press, p. 252, 1988. W. Hu, D. Eisenstein, M. Tegmark, Phys. Rev. Lett. [**80**]{}, 5255-5258 (1998); astro-ph/9712057. S. Khalil, C. Munoz, Contemp. Phys. [**43**]{}, 51-62 (2002); astro-ph/0110122.
---
abstract: 'We prove a model theoretic Baire category theorem for $\tilde\tau_{low}^f$-sets in a countable simple theory in which the extension property is first-order and show some of its applications.'
author:
- Ziv Shami
title: A model theoretic Baire category theorem for simple theories
---
Introduction
============
The goal of this paper is to generalize a result from \[S1\] and to give some applications. In \[S1\], the first step in proving supersimplicity of countable unidimensional simple theories that eliminate hyperimaginaries is to show the existence of an unbounded type-definable $\tau^f$-open set of bounded finite $SU_{se}$-rank (for the definition see section 4). In this paper we develop a general framework for this kind of result. The proof is similar to the proof in \[S1\]; however, it has some important consequences, e.g. in a countable wnfcp theory, if for every non-algebraic element $a$ (even in some fixed $\tilde\tau_{low}^f$-set) there is $a'\in acl(a)\backslash acl(\emptyset)$ of finite $SU$-rank, then there exists a weakly-minimal formula. As far as we know, this is new even in the stable case (i.e. when $T$ is a countable nfcp theory).
Preliminaries
=============
The forking topology was introduced in \[S0\] and is a variant of Hrushovski’s and Pillay’s topologies from \[H0\] and \[P\], respectively:
\[tau definition\] Let $A\subseteq \CC$ and let $x$ be a finite tuple of variables.\
1) An invariant set $\UU$ over $A$ is said to be a *basic $\tau^f$-open set over $A$* if there is $\phi(x,y)\in L(A)$ such that $$\UU=\{a \vert \phi(a,y)\ \mbox{forks\ over}\ A \}.$$ 2) An invariant set $\UU$ over $A$ is said to be a *basic $\tau^f_\infty$-open set over $A$* if $\UU$ is a type-definable $\tau^f$-open set over $A$.
Note that the family of basic $\tau^f$-open sets over $A$ is closed under finite intersections and thus forms a basis for a unique topology on $S_x(A)$. Likewise, we define the $\tau^f_\infty$-topologies.\
Recall the following definition from \[S0\], whose roots are in \[H0\].
\[projection closed\] We say that *the $\tau^f$-topologies over $A$ are closed under projections* ($T$ is PCFT over $A$) if for every $\tau^f$-open set $\UU(x,y)$ over $A$ the set $\exists y \UU(x,y)$ is a $\tau^f$-open set over $A$. We say that *the $\tau^f$-topologies are closed under projections* ($T$ is PCFT) if they are over every set $A$.
Recall that a formula $\phi(x,y)\in L$ is *low in $x$* if there exists $k<\omega$ such that for every $\emptyset$-indiscernible sequence $(b_i \vert i<\omega)$, the set $\{\phi(x,b_i) \vert
i<\omega\}$ is inconsistent iff every subset of it of size $k$ is inconsistent. $T$ is low if every $\phi(x,y)\in L$ is low in $x$.
\[low\_disjunction\] Assume $\phi(x,t)\in L$ is low in $t$ and $\psi(y,v)\in L$ is low in $v$ ($x\cap y$ or $t\cap v$ may not be $\emptyset$). Then $\theta(xy,tv)\equiv\phi(x,t)\vee \psi(y,v)$ is low in $tv$.
Let $k_1<\omega$ be a witness that $\phi(x,t)$ is low in $t$ and let $k_2<\omega$ be a witness that $\psi(y,v)$ is low in $v$. Let $k=k_1+k_2-1$. By adding dummy variables we may assume $x=y$ and $t=v$ (as tuples of variables). Let $(a_i \vert\ i<\omega)$ be indiscernible such that $\{ \phi(a_i,t)\vee \psi(a_i,t)\vert i<\omega\}$ is inconsistent. Then both $\{
\phi(a_i,t)\vert i<\omega\}$ and $\{\psi(a_i,t)\vert i<\omega\}$ are inconsistent, so every subset of $\{ \phi(a_i,t)\vert i<\omega\}$ of size $k_1$ is inconsistent, and every subset of $\{\psi(a_i,t)\vert
i<\omega\}$ of size $k_2$ is inconsistent. Thus every subset of size $k$ of $\{ \phi(a_i,t)\vee \psi(a_i,t)\vert i<\omega\}$ is inconsistent: any realization of $k$ of the disjunctions would realize at least $k_1$ of the $\phi(a_i,t)$ or at least $k_2$ of the $\psi(a_i,t)$, since $k=k_1+k_2-1$, a contradiction.\
In \[BPV, Proposition 4.5\] the authors proved the following equivalence, which, for convenience, we will use as a definition (their definition involves extension with respect to pairs of models of $T$).
\[foext\] The extension property is first-order in $T$ iff for all formulas $\phi(x,y),\psi(y,z)\in L$ the relation $Q_{\phi,\psi}$ defined by: $$Q_{\phi,\psi}(a)\mbox{\ iff}\ \phi(x,b)\mbox{ does\ not\ fork\
over}\ a\ \mbox{for\ every}\ b\models \psi(y,a)$$ is type-definable (here $a$ can be an infinite tuple from $\CC$ whose sorts are fixed). We say that $T$ has wnfcp if $T$ is low and the extension property is first-order in $T$.
$[S1]$\[ext pcft\] Suppose the extension property is first-order in $T$. Then $T$ is PCFT.
We say that an $A$-invariant set $\UU$ has finite $SU$-rank if $SU(a/A)<\omega$ for all $a\in\UU$, and has bounded finite $SU$-rank if there exists $n<\omega$ such that $SU(a/A)\leq n$ for all $a\in\UU$. The existence of a $\tau^f$-open set of bounded finite $SU$-rank implies the existence of a weakly-minimal formula:
\[tau bounded SU\]\[S0, Proposition 2.13\] Let $\UU$ be an unbounded $\tau^f$-open set over some set $A$. Assume $\UU$ has bounded finite $SU$-rank. Then there exists a set $B\supseteq A$ and $\theta(x)\in L(B)$ of $SU$-rank 1 such that $\theta^\CC\subseteq \UU\cup acl(B)$.
In \[S1\] the class of $\tilde\tau^f$-sets is introduced; this class is much wider than the class of basic $\tau^f$-open sets. Here, we look at the class of $\tilde \tau^f_{low}$-sets instead of the class of $\tilde \tau^f_{st}$-sets from \[S1\].
A relation $V(x,z_1,...z_l)$ is said to be a *pre-$\tilde\tau^f$-set relation* if there are $\theta(\tilde x,x,z_1,z_2,...,z_l)\in L$ and $\phi_i(\tilde x,y_i)\in L$ for $0\leq i\leq l$ such that for all $a,d_1,...,d_l\in \CC$ we have $$V(a,d_1,...,d_l)\ \mbox{iff}\ \exists \tilde a\
[\theta(\tilde a,a,d_1,d_2,...,d_l)\wedge\bigwedge^{l}_{i=0} (\phi_i(\tilde a,y_i)\ \mbox{forks\
over}\ d_1d_2...d_i)]$$ (for $i=0$ the sequence $d_1d_2...d_i$ is interpreted as $\emptyset$). If each $\phi_i(\tilde x,y_i)$ is assumed to be low in $y_i$, $V(x,z_1,...z_l)$ is said to be a *pre-$\tilde\tau_{low}^f$-set relation*.
1\) A *$\tilde\tau^f$-set (over $\emptyset$)* is a set of the form $$\UU=\{a \vert\ \exists d_1,d_2,...d_l\
V(a,d_1,...,d_l)\}$$ for some pre-$\tilde\tau^f$-set relation $V(x,z_1,...z_l)$.

2\) A *$\tilde\tau^f_{low}$-set (over $\emptyset$)* is a set of the form $$\UU=\{a \vert\ \exists d_1,d_2,...d_l\
V(a,d_1,...,d_l)\}$$ for some pre-$\tilde\tau^f_{low}$-set relation $V(x,z_1,...z_l)$.
The Theorem
===========
In this section $T$ is assumed to be a simple theory and we work in $\CC$ (so $T$ does not necessarily eliminate imaginaries).
Let $\Theta=\{\theta_i(x_i,x)\}_{i\in I}$ be a set of $L$-formulas such that $\exists^{<\infty}x_i
\theta_i(x_i,x)$ for all $i\in I$. Let $s$ be the sort of $x$. For $A\subseteq \CC^s$, let $acl_\Theta(A)=\{b\vert\ \theta_i(b,a)\ \mbox{for\ some}\ \theta_i\in\Theta\ \mbox{and}\ a\in A\}$.
An invariant set $\UU(x,y_1,...y_r)$ is said to be *a generalized uniform family of $\tilde\tau^f_{low}$-sets* if there is a formula $\rho(\tilde
x,x,y_1,...,y_r,z_1,z_2,...,z_k)\in L$ and there are formulas $\psi_i(\tilde x,v_i),\mu_j(\tilde
x,w_j)\in L$ for $0\leq i\leq r$ and $1\leq j\leq k$ that are low in $v_i$ and low in $w_j$ respectively, such that for all $a,d_1,...d_r$ we have $\UU(a,d_1,...d_r)$ iff $\exists \tilde
a\exists e_1...e_k$ $$\rho(\tilde a,a,d_1,...d_r,e_1,...e_k)\wedge [\bigwedge_{i=0}^r\psi_i(\tilde
a,v_i)\ \mbox{forks\ over}\ d_1...d_i]\wedge [\bigwedge_{j=1}^k\mu_j(\tilde a,w_j)\ \mbox{forks\
over}\ d_1...d_re_1...e_j].$$
An invariant set $\FF(x,y_1,...y_r)$ is said to be *a uniform family of $\tilde\tau^f_{low}$-closed sets* if $\FF(x,y_1,...y_r)=\bigcap_i \neg\UU_i(x,y_1,...y_r)$, where each $\UU_i(x,y_1,...y_r)$ is a generalized uniform family of $\tilde\tau^f_{low}$-sets.
\[tilde top thm\] Assume the extension property is first-order in $T$. Then
1\) Let $\UU$ be an unbounded $\tilde\tau^f$-set over $\emptyset$. Then there exists an unbounded $\tau^f$-open set $\UU^*$ over some finite set $A^*$ such that $\UU^*\subseteq \UU$. In fact, if $V(x,z_1,...,z_l)$ is a pre-$\tilde\tau^f$-set relation such that $\UU=\{a\vert \exists
d_1...d_l V(a,d_1,...,d_l)\}$, and $(d^*_1,...,d^*_m)$ is any maximal sequence (with respect to extension) such that $\exists d_{m+1}...d_l V(\CC,d^*_1,...,d^*_m,d_{m+1},...,d_l)$ is unbounded, then $$\UU^*=\exists d_{m+1}...d_l V(\CC,d^*_1,...,d^*_m,d_{m+1},...,d_l)$$ is a $\tau^f$-open set over $d^*_1...d^*_m$.
\[main\_thm\] Let $T$ be a countable simple theory in which the extension property is first-order. Assume:\
1) $\Theta=\{\theta_i(x'_i,x)\}_{i<\omega}$ is a set of $L$-formulas such that $\exists^{<\infty}x'_i
\theta_i(x'_i,x)$ for all $i<\omega$.\
2) $\UU_0(x)$ is a non-empty $\tilde\tau^f_{low}$-set over $\emptyset$.\
3) $\{F_n(x_n)\}_{n<\omega}$ is a family of $\emptyset$-invariant sets such that $F_n(\CC)\cap
acl(\emptyset)=\emptyset$ for all $n<\omega$.\
4) For every $n<\omega$ and every variables $\bar y=y_1,...y_r$, let $\FF_n^{\bar y}(x_n,\bar y)$ be a generalized uniform family of $\tilde\tau^f_{low}$-closed sets such that $F_n(\CC)\subseteq \FF_n^{\bar y}(\CC,\bar d)$ for all $\bar d$.\
Now, assume for all $a\in\UU_0$ there exists $b\in acl_{\Theta}(a)$ and $n<\omega$ such that $b\in
F_n(\CC)$. Then there is an unbounded $\tau^f_\infty$-open set $\UU^*$ over a finite tuple $\bar
d^*$ and variables $\bar y^*$ of the sort of $\bar d^*$, and $n^*<\omega$ such that $$\UU^*\subseteq \FF_{n^*}^{\bar y^*}(\CC,\bar d^*)\cap acl_{\Theta}(\UU_0).$$
First, we may assume $\Theta$ is closed downwards (i.e. if $\theta\in\Theta$ and $\theta'\vdash \theta$ then $\theta'\in \Theta$). Assume the conclusion of the theorem is false. It will be sufficient to show that for every non-empty $\tilde\tau^f_{low}$-set $\UU\subseteq \UU_0$, every $\theta\in \Theta$, and every $n<\omega$ there exists a non-empty $\tilde\tau^f_{low}$-set $\UU^*\subseteq \UU$ such that either $\neg\exists x'\theta(x',a)$ for all $a\in\UU^*$ or for all $a\in\UU^*$ there exists $b\models \theta(x',a)$ with $b\not\in F_n(\CC)$. Indeed, by iterating this for every pair $(\theta,n)\in \Theta\times\omega$ we get by compactness $a^*$ such that for all $\theta\in\Theta$ and all $n<\omega$ either $\neg\exists x'\theta(x',a^*)$ or there exists $b_{n,\theta}\models\theta(x',a^*)$ such that $b_{n,\theta}\not\in F_n(\CC)$. Since we assume $\Theta$ is closed downwards, we get a contradiction to the assumption that for all $a\in\UU_0$ there exists $b\in acl_{\Theta}(a)$ and $n<\omega$ such that $b\in F_n(\CC)$ (note that $F_n(\CC)$ is $\emptyset$-invariant). To show this, let $\UU$, $\theta$ and $n<\omega$ be given. Let $V(x,z_1,...z_l)$ be a pre-$\tilde\tau^f_{low}$-set relation such that $$\UU=\{a \vert\ \exists d_1,d_2,...d_l\ V(a,d_1,...,d_l)\},$$ where $V$ is defined by: $$V(a,d_1,...,d_l)\ \mbox{iff}\ \exists \tilde a\
[\sigma(\tilde a,a,d_1,d_2,...,d_l)\wedge\bigwedge^{l}_{i=0} (\phi_i(\tilde a,t_i)\ \mbox{forks\
over}\ d_1d_2...d_i)]$$ for some $\sigma(\tilde x,x,z_1,z_2,...,z_l)\in L$ and $\phi_i(\tilde
x,t_i)\in L$ which are low in $t_i$ for $0\leq i\leq l$. Let $V_\theta$ be defined by: for all $b,d_1,...,d_l\in \CC$, $$V_\theta(b,d_1,...,d_l)\ \mbox{iff}\ \exists a (\theta(b,a)\wedge
V(a,d_1,...,d_l)).$$ and let $$\UU_\theta=\{b \vert\ \exists d_1,d_2,...d_l\ V_\theta(b,d_1,...,d_l)\}.$$ Since by the assumption $F_n(\CC)\cap
acl(\emptyset)=\emptyset$, we may assume $\UU_\theta\cap acl(\emptyset)=\emptyset$ and $\UU_\theta$ is non-empty. Now, let $\bar d^*=(d_1^*,...,d_m^*)$ be a maximal sequence, with respect to extension ($0\leq m\leq l$) such that $$\tilde V_\theta(x')\equiv \exists d_{m+1},d_{m+2},...d_l
V_\theta(x',d^*_1,...d^*_m,d_{m+1},...d_l)$$ is non-algebraic. We may assume $m<l$ (by choosing $V$ appropriately). By Fact \[tilde top thm\], $\tilde V_\theta(\CC)$ is an unbounded basic $\tau^f_\infty$-open set over $\bar d^*$. Since we assume the conclusion of the theorem is false, $\tilde V_\theta(\CC)\not\subseteq \FF_{n}^{\bar y^*}(\CC,\bar d^*)$ where $\bar
y^*=y_1^*,...,y_m^*$ has the same sort as $\bar d^*$. Now, let $\UU_{s,n}(x_n,\bar y^*)$ for $s<\omega$ be each a uniform family of $\tilde \tau^f_{low}$-sets such that $\FF_n(x_n,\bar
y^*)=\bigcap_s \neg\UU_{s,n}(x_n,\bar y^*)$. Let $b^*\in\tilde V_\theta(\CC)\backslash\FF_{n}^{\bar
y^*}(\CC,\bar d^*)$. So, there exists $s^*<\omega$ such that $b^*\in\UU_{s^*,n}(\CC,\bar d^*)$. Let $\rho(\tilde x',x_n,y^*_1,...,y^*_m,z'_1,z'_2,...,z'_k)\in L$ and let $\psi_i(\tilde
x',v_i),\mu_j(\tilde x',w_j)\in L$ for $0\leq i\leq m$ and $1\leq j\leq k$ be low in $v_i$ and low in $w_j$ respectively, such that for all $b,d_1,...d_m$ we have $\UU_{s^*,n}(b,d_1,...d_m)$ iff $\exists \tilde b\exists e_1...e_k$ $$\rho(\tilde b,b,d_1,...d_m,e_1,...e_k)\wedge
[\bigwedge_{i=0}^m\psi_i(\tilde b,v_i)\ \mbox{forks\ over}\ d_1...d_i]\wedge
[\bigwedge_{j=1}^k\mu_j(\tilde b,w_j)\ \mbox{forks\ over}\ d_1...d_me_1...e_j].$$ Now, let $d^*_{m+1},...d^*_l$ and $a^*,\tilde a^*$ and $E^*=(e^*_1,...,e^*_k)$ and $\tilde b^*$ be such that $$\theta(b^*,a^*)\wedge\sigma(\tilde a^*,a^*,d^*_1,d^*_2,...,d^*_l)\wedge\bigwedge^{l}_{i=0}
(\phi_i(\tilde a^*,y_i)\ \mbox{forks\ over}\ d^*_1d^*_2...d^*_i)\ \ (*1)$$ and $$\rho(\tilde
b^*,b^*,d^*_1,..d^*_m,e^*_1,..e^*_k)\ (*2)$$ and $$[\bigwedge_{i=0}^m\psi_i(\tilde
b^*,v_i)\ \mbox{forks\ over}\ d^*_1...d^*_i]\wedge [\bigwedge_{j=1}^k\mu_j(\tilde b^*,w_j)\
\mbox{forks\ over}\ d^*_1...d^*_me^*_1...e^*_j]\ (*3).$$ By maximality of $\bar d^*$, we know $b^*\in acl(\bar d^*d^*_{m+1})$. Thus, by taking a non-forking extension of $tp(\tilde
b^*E^*/acl(\bar d^*d^*_{m+1}))$ over $acl(d^*_1...d^*_la^*\tilde a^*)$ we may assume $E^*$ is independent from $d^*_1...d^*_la^*\tilde a^*$ over $\bar d^*d^*_{m+1}$ and $(*1)$, $(*2)$ and $(*3)$ still hold. We conclude that $$\bigwedge^{l}_{i=m+1} (\phi_i(\tilde a^*,t_i)\ \mbox{forks\
over}\ d^*_1d^*_2...d^*_iE^*).$$ Now, we define the $\tilde\tau^f_{low}$-set $\UU^*$. First, define a relation $V^*$ by: $$V^*(a,d_1,...d_m,e_1,...e_k,d_{m+1},..d_l)\ \mbox{iff}\ \exists \tilde a, b,\tilde b(\theta^*\wedge
V^*_0\wedge V^*_1\wedge V^*_2),$$ where $\theta^*$ is defined by: $\theta^*(\tilde a,b,\tilde
b,a,d_1,..d_m,e_1,...e_k,d_{m+1},..d_l)$ iff $$\theta(b,a)\wedge\sigma(\tilde a,a,d_1,d_2,...,d_l)\wedge \rho(\tilde b,b,d_1,...d_m,e_1,...,e_k),$$ $V^*_0$ is defined by: $V^*_0(\tilde a,\tilde b,d_1,...d_m)$ iff $$\bigwedge^{m}_{i=0}(\phi_i(\tilde a,t_i)\vee \psi_i(\tilde b,v_i)\ \mbox{forks\ over}\ d_1d_2...d_i),$$ $V^*_1$ is defined by $V^*_1(\tilde b,d_1,..d_m,e_1,...e_k)$ iff $$\bigwedge_{j=1}^k(\mu_j(\tilde b,w_j)\ \mbox{forks\ over}\ d_1...d_me_1...e_j),\ \mbox{and}$$ $V^*_2$ is defined by $V^*_2(\tilde a,d_1,...d_m,e_1,...e_k,d_{m+1},..d_l)$ iff $$\bigwedge^{l}_{i=m+1} (\phi_i(\tilde a,t_i)\ \mbox{forks\ over}\ d_1d_2...d_ie_1...e_k).$$ Note that $V^*$ is a pre-$\tilde\tau^f_{low}$-set. Let $$\UU^*=\{a \vert \exists
d_1,..d_m,e_1,...e_k,d_{m+1},..d_l\ V^*(a,d_1,..d_m,e_1,...e_k,d_{m+1},...d_l)\}.$$ By the definition of $\UU^*$, $\UU^*\subseteq \UU$. $\UU^*$ is a $\tilde\tau^f_{low}$-set using Remark \[low\_disjunction\]. By the construction, $\UU^*\neq\emptyset$. Now, let $a\in\UU^*$. By the definition of $\UU^*$, there are $\tilde b,b,d_1,...d_m,e_1,...e_k$ such that $\theta(b,a)$, $\rho(\tilde b,b,d_1,...d_m,e_1,...,e_k)$, $$\bigwedge_{i=0}^m(\psi_i(\tilde b,v_i)\ \mbox{forks\
over}\ d_1,...d_i)\ \mbox{and}\ \bigwedge_{j=1}^k(\mu_j(\tilde b,w_j)\ \mbox{forks\ over}\
d_1,...d_me_1...e_j).$$ Thus $\UU_{s^*,n}(b,d_1...d_m)$ and therefore $\neg\FF_{n}^{\bar
y^*}(b,d_1...d_m)$. Hence $b\not\in F_n$ as required.
Applications
============
In this section we show some applications of Theorem \[main\_thm\]. In fact, we will show several instances of this theorem that are apparently new even for stable theories. Throughout this section, $T$ is assumed to be a simple theory and we work in $\CC$.\
We start by pointing out that Theorem \[main\_thm\] generalizes \[S1, Theorem 9.4\], which is one of the essential steps towards the proof of supersimplicity of countable simple unidimensional theories with elimination of hyperimaginaries. First, recall the following definitions from \[S1\].
For $a\in \CC$, $A\subseteq B\subseteq \CC$, ${\mbox{$\begin{array}{ccc} \mbox{$a$} & \!\mbox{$\!\!\not\!\:\usebox{\sindbin}$} & \mbox{$B$} \\
& \mbox{$A$} &
\end{array}$}}$ if for some stable $\phi(x,y)\in
L$, there is $b$ over $B$ and $a'\models \phi(x,b)$ for some $a'\in dcl(Aa)$ such that $\phi(x,b)$ forks over $A$.
The $SU_{se}$-rank of $tp(a/A)$ is defined by induction on $\alpha$: if $\alpha=\beta+1$, $SU_{se}(a/A)\geq \alpha$ if there exist $B_1\supseteq B_0\supseteq A$ such that ${\mbox{$\begin{array}{ccc} \mbox{$a$} & \!\mbox{$\!\!\not\!\:\usebox{\sindbin}$} & \mbox{$B_1$} \\
& \mbox{$B_0$} &
\end{array}$}}$ and $SU_{se}(a/B_1)\geq\beta$. For limit $\alpha$, $SU_{se}(a/A)\geq\alpha$ if $SU_{se}(a/A)\geq\beta$ for all $\beta<\alpha$.
\[symmetry\_s\]*First, recall that in a simple theory in which $Lstp=stp$ over sets ${\mbox{$\begin{array}{ccc} \mbox{$$} & \usebox{\sindbin} & \mbox{$$}
\end{array}$}}$ is symmetric \[Lemma 6,7, S1\]. Thus for any finite tuples of sorts $s_0$ and $s_1$ and $n<\omega$ the set $\FF^{s_0,s_1}_n$ defined by $$\FF^{s_0,s_1}_n=\{ (a,A)\in \CC^{s_0}\times \CC^{s_1} \vert\
SU_{se}(a/A)<n\}$$ is a uniform family of $\tilde\tau^f_{low}$-closed sets.*
For an $A$-invariant set $V$, let $acl_1(V)=\{a'\vert\ a'\in acl(a)\ \mbox{for\ some}\
a\in V\}$. The following corollary generalizes \[S1, Theorem 9.4\].
\[cor1\] Let $T$ be a countable simple theory in which the extension property is first-order and assume $Lstp=stp$ over sets. Let $\UU_0$ be a non-empty $\tilde\tau^f_{low}$-set. Assume for every $a\in\UU_0$ there exists $a'\in acl(a)\backslash acl(\emptyset)$ such that $SU_{se}(a')<\omega$. Then there exists an unbounded $\tau_{\infty}^f$-open set $\UU\subseteq acl_1(\UU_0)$ over a finite set such that $\UU$ has bounded finite $SU_{se}$-rank.
Let $x$ be the variable of $\UU_0$, so $\UU_0=\UU_0(x)$. Let $$\Theta=\{\theta(x',x)\vert\
\exists^{<\infty} x' \theta(x',x), x'\ \mbox{any variable}\}.$$ Let $\SS$ be the set of sorts. Let $I:\omega\rightarrow \SS\times \omega$ be a bijection, $I_1,I_2$ the projections of $I$ to the first and second coordinate respectively. Now, for each $n<\omega$ let $F_n=\{a\in
\CC^{I_1(n)}\backslash acl(\emptyset)\ \vert SU_{se}(a)<I_2(n)\}$. Now, for every finite tuple of variables $Y$ and $n<\omega$ let $s(Y)$ be the finite sequence of sorts of $Y$ and let $$\FF^Y_n=\{(a,A)\in \CC^{I_1(n)}\times \CC^{s(Y)}\vert\ SU_{se}(a/A)<I_2(n)\}.$$ Now, by the definition of the $SU_{se}$-rank, $\FF_n(\CC)\subseteq\FF^Y_n(\CC,A)$ for every $n<\omega$ and every $Y,A$. By Remark \[symmetry\_s\], $\FF^Y_n$ is a uniform family of $\tilde\tau^f_{low}$-closed sets for all $Y,n$. By our assumptions, we see that the assumptions of Theorem \[main\_thm\] hold for $\UU_0(x)$, $\Theta$ ,$\{F_n\}_n$ and $\{\FF^Y_n\}_{Y,n}$ and thus by its corollary we are done.
\[cor2\] Let $T$ be a countable theory with wnfcp. Let $\UU_0$ be a non-empty $\tilde\tau^f_{low}$-set over $\emptyset$ of finite $SU$-rank. Then there exists a finite set $A$ and a $SU$-rank 1 formula $\theta\in L(A)$ such that $\theta^\CC\subseteq \UU_0\cup acl(A)$.
Let $\Theta=\{x'=x\}$, $\UU_0(x)=\UU_0$ and let $\SS,I,I_1,I_2$ be as in the proof of Corollary \[cor1\]. Now, for each $n<\omega$ let $$F_n=\{a\in \CC^{I_1(n)}\backslash acl(\emptyset)\ \vert SU(a)<I_2(n)\}.$$ For every finite tuple of variables $Y$ and $n<\omega$ let $s(Y)$ be the finite sequence of sorts of $Y$ and let $$\FF^Y_n=\{(a,A)\in \CC^{I_1(n)}\times \CC^{s(Y)}\vert\ SU(a/A)<I_2(n)\}.$$ By symmetry of forking and the assumption that $T$ is low, each $\FF^Y_n$ is a uniform family of $\tilde\tau^f_{low}$-closed sets. Clearly, $F_n(\CC)\subseteq \FF_n^Y(\CC,A)$ for every $n<\omega$ and every $Y,A$. By our assumption, the assumptions of Theorem \[main\_thm\] are satisfied for $\UU_0$, $\Theta$, $\{F_n\}_n$ and $\{\FF^Y_n\}_{Y,n}$ and thus by its corollary there exists an unbounded $\tau^f$-open set $\UU^*\subseteq \UU_0$ over a finite set $A_0$ and $\UU^*$ has bounded finite $SU$-rank. By Fact \[tau bounded SU\], there exists a finite set $A\supseteq A_0$ and there exists a $SU$-rank 1 formula $\theta\in L(A)$ such that $\theta^\CC\subseteq \UU^*\cup
acl(A)$.
Let $T$ be a countable theory with wnfcp. Let $\UU_0$ be a non-empty $\tilde\tau^f_{low}$-set over $\emptyset$. Assume for every $a\in\UU_0$ there exists $a'\in acl(a)\backslash acl(\emptyset)$ such that $SU(a')<\omega$. Then there exists a finite set $A$ and a $SU$-rank 1 formula $\theta\in L(A)$ such that $\theta^\CC\subseteq acl_1(\UU_0)\cup acl(A)$.
The proof is analogous to that of Corollary \[cor2\].\
The above corollaries together with the dichotomy \[S1, Theorem 5.5\] imply a strong dichotomy between 1-basedness and supersimplicity in the case where $T$ is a countable wnfcp theory that eliminates hyperimaginaries. Before we state the above dichotomy for the special case of the $\tau^f$-topologies (simplified version), let us recall the basic definitions.
\[def ess-1-based\]*A type $p\in S(A)$ is said to be *essentially 1-based by means of the $\tau^f$-topologies* if for every finite tuple $\bar c$ from $p$ and for every type-definable $\tau^f$-open set $\UU$ over $A\bar c$, the set $\{a\in \UU \vert\ Cb(a/A\bar c)\not\in bdd(aA)\}$ is nowhere dense in the Stone-topology of $\UU$.*
\[dichotomy thm\] Let $T$ be a countable simple theory with PCFT that eliminates hyperimaginaries. Let $p_0$ be a partial type over $\emptyset$ of $SU$-rank 1. Then, either there exists an unbounded finite-$SU$-rank $\tau^f$-open set over some countable set, or every type $p\in S(A)$, with $A$ countable, that is internal in $p_0$ is essentially 1-based by means of the $\tau^f$-topologies.
By Corollary \[cor2\] we conclude:
\[dichotomy thm\_wnfcp\] Let $T$ be a countable theory with wnfcp that eliminates hyperimaginaries. Let $p_0$ be a partial type over $\emptyset$ of $SU$-rank 1. Then, either there exists a $SU$-rank 1 definable set, or every type $p\in S(A)$, with $A$ countable, that is internal in $p_0$ is essentially 1-based by means of the $\tau^f$-topologies.
[www]{} I. Ben-Yaacov, A. Pillay, E. Vassiliev, Lovely pairs of models, Annals of Pure and Applied Logic 122 (2003), no. 1-3.\
E. Hrushovski, Countable unidimensional stable theories are superstable, unpublished paper.\
A. Pillay, On countable simple unidimensional theories, Journal of Symbolic Logic 68 (2003), no. 4.\
Z. Shami, On analyzability in the forking topology for simple theories, Annals of Pure and Applied Logic 142 (2006), no. 1-3, 115–124.\
Z. Shami, Countable imaginary simple unidimensional theories, preprint.
---
abstract: 'In this paper, an $\mathbb{R}$-analytical function and the sequence of its Taylor polynomials (which are Lyapunov functions different from those of Vanelli & Vidyasagar (1985, Automatica, 21(1):69–80)) are presented, in order to determine and approximate the domain of attraction of the exponentially asymptotically stable zero steady state of an autonomous, $\mathbb{R}$-analytical system of differential equations. The analytical function and the sequence of its Taylor polynomials are constructed by recurrence formulae using the coefficients of the power series expansion of $f$ at $0$.'
address:
- |
Department of Mathematics, West University of Timişoara\
Bd. V. Parvan nr. 4, 300223, Timişoara, Romania\
phone, fax: +40-256-494002
- |
Department of Physics, West University of Timişoara\
Bd. V. Parvan nr. 4, 300223, Timişoara, Romania
- |
L.A.G.A, UMR 7539, Institut Galilée, Université Paris 13\
99 Avenue J.B. Clément, 93430, Villetaneuse, France
author:
- 'E. Kaslik'
- 'A.M. Balint'
- 'St. Balint'
title: Methods for determination and approximation of the domain of attraction
---
Domain of attraction, Lyapunov function
Introduction
============
Consider the following system of differential equations: $$\label{dyn.sys}
\dot{x}=f(x)$$ where $f:\mathbb{R}^{n}\rightarrow\mathbb{R}^{n}$ is a function of class $C^{1}$ on $\mathbb{R}^{n}$ with $f(0)=0$ (i.e. $x=0$ is a steady state of (\[dyn.sys\])). If the steady state $x=0$ is asymptotically stable [@Gruyitch], then the set $D_{a}(0)$ of all initial states $x^{0}$ for which the solution $x(t;0,x^{0})$ of the initial value problem: $$\dot{x}=f(x) \qquad x(0)=x^{0}$$ tends to $0$ as $t$ tends to $\infty$, is open and connected and it is called the domain of attraction (domain of asymptotic stability [@Gruyitch]) of $0$.
The results of Barbashin [@Barbashin], Barbashin-Krasovskii [@Barbashin-Krasovskii] and of Zubov ([@Zubov2], Theorem 19, pp. 52-53, [@Zubov]), have probably been the first results concerning the exact determination of $D_{a}(0)$. In our context, the theorem of Zubov is the following:
An invariant and open set $S$ containing the origin and included in the hypersphere $B(r)=\{x\in\mathbb{R}^{n}:\|x\|<r\}$, $r>0$, coincides with the domain of attraction $D_{a}(0)$ if and only if there exist two functions $V$ and $\psi$ with the following properties:
1. the function $V$ is defined and continuous on $S$, and the function $\psi$ is defined and continuous on $\mathbb{R}^{n}$
2. $-1<V(x)<0$ for any $x\in S\setminus\{0\}$ and $\psi
(x)>0$, for any $x\in\mathbb{R}^{n}\setminus\{0\}$
3. $\lim\limits_{x\rightarrow 0}V(x)=0$ and $\lim\limits_{x\rightarrow 0}\psi(x)=0$
4. for any $\gamma_{2}>0$ small enough, there exist $\gamma_{1}>0$ and $\alpha_{1}>0$ such that $V(x)<-\gamma_{1}$ and $\psi (x)>\alpha_{1}$, for $\|x\|\geq\gamma_{2}$
5. for any $y\in\partial S$, $\lim\limits_{x\rightarrow y}V(x)=-1$
6. $\frac{d}{dt}V(x(t;0;x^{0}))=\psi(x(t;0;x^{0}))[1+V(x(t;0;x^{0}))]$
\[thm.Zubov\]
At this level of generality, the effective determination of $D_{a}(0)$ using the functions $V$ and $\psi$ from Zubov’s theorem is not possible, because the function $V$ (if $\psi$ is chosen) is constructed by the method of characteristics, using the solutions of system (\[dyn.sys\]). This fact implicitly requires knowledge of the domain of attraction $D_{a}(0)$ itself. \[rem.Zubov.nefolosibila\]
Another interesting result concerning the exact determination of $D_{a}(0)$, under the hypothesis that the real parts of the eigenvalues of the matrix $\frac{\partial f}{\partial x}(0)$ are negative, is due to Knobloch and Kappel [@Knobloch-Kappel]. In our context, Knobloch-Kappel’s theorem is the following:
If the real parts of the eigenvalues of the matrix $\frac{\partial
f}{\partial x}(0)$ are negative, then for any function $\zeta:\mathbb{R}^{n}\rightarrow\mathbb{R}$, with the following properties:
1. $\zeta$ is of class $C^{2}$ on $\mathbb{R}^{n}$
2. $\zeta(0)=0$ and $\zeta(x)>0$, for any $x\neq 0$
3. the function $\zeta$ has a positive lower bound on every set $\{x:\|x\|\geq \varepsilon\}$, $\varepsilon >0$
there exists a unique function $V$ of class $C^{1}$ on $D_{a}(0)$ which satisfies
- $\langle\nabla V(x),f(x)\rangle=-\zeta(x)$
- $V(0)=0$
In addition, $V$ satisfies the following conditions:
- $V(x)>0$, for any $x\neq 0$
- $\lim\limits_{x\rightarrow y}V(x)=\infty$, for any $y\in\partial
D_{a}(0)$ or for $\|x\|\rightarrow\infty$
\[thm.Knobloch-Kappel\]
The effective determination of $D_{a}(0)$ using the functions $V$ and $\zeta$ from Knobloch-Kappel’s theorem (at this level of generality) is not possible, because the function $V$ (if $\zeta$ is chosen) is constructed by the method of characteristics using the solutions of system (\[dyn.sys\]). This fact implicitly requires knowledge of $D_{a}(0)$. \[rem.KK.nefolosibila\]
Vanelli and Vidyasagar have established in [@Vanelli-Vidyasagar] a result concerning the existence of a maximal Lyapunov function (which characterizes $D_{a}(0)$), and of a sequence of Lyapunov functions which can be used for approximating the domain of attraction $D_{a}(0)$. In the context of our paper, the theorem of Vanelli-Vidyasagar is the following:
An open set $S$ which contains the origin coincides with the domain of asymptotic stability of the asymptotically stable steady state $x=0$, if and only if there exists a continuous function $V:S\rightarrow\mathbb{R}_{+}$ and a positive definite function $\psi$ on $S$ with the following properties:
1. $V(0)=0$ and $V(x)>0$, for any $x\in S\setminus\{0\}$ ($V$ is positive definite on $S$)
2. $D_{r}V(x^{0})=\lim\limits_{t\rightarrow
0_{+}}\frac{V(x(t;0,x^{0}))-V(x^{0})}{t}=-\psi(x^{0})$, for any $x^{0}\in S$
3. $\lim\limits_{x\rightarrow y}V(x)=\infty$, for any $y\in\partial
S$ or for $\|x\|\rightarrow\infty$
\[thm.Vanelli-Vidyasagar\]
The determination of $D_{a}(0)$ using the functions $V$ and $\psi$ from Vanelli-Vidyasagar’s theorem is not possible, for the same reason as in the case of the theorems of Zubov and Knobloch-Kappel. \[rem.VV.nefolosibila\]
Restricting generality to the case of an $\mathbb{R}$-analytic function $f$ for which the real parts of the eigenvalues of the matrix $\frac{\partial f}{\partial x}(0)$ are negative, Vanelli and Vidyasagar [@Vanelli-Vidyasagar] establish a second theorem which provides a sequence of Lyapunov functions, not necessarily maximal, but which can be used in order to approximate $D_{a}(0)$. These Lyapunov functions are of the form: $$V_{m}(x)=\frac{r_{2}(x)+r_{3}(x)+...+r_{m}(x)}{1+q_{1}(x)+q_{2}(x)+...+q_{m}(x)}
\qquad m\in\mathbb{N}$$ where $r_{i}$ and $q_{i}$ are $i$-th degree homogeneous polynomials, constructed using the elements of the matrix $\frac{\partial f}{\partial x}(0)$, of a positive definite matrix $G$, and the nonlinear terms from the expansion of $f$. The algorithm for the construction of $V_{m}$ is relatively complex, but does not require knowledge of the solutions of system (\[dyn.sys\]). \[rem.VV.Lyap\]
Very interesting results concerning the exact determination of the domains of attraction (asymptotic stability domains) have been found by Gruyitch between 1985 and 1995. These results can be found in [@Gruyitch], chap. 5. In these results, the function $V$ which characterizes the domain of attraction is constructed by the method of characteristics, which uses the solutions of system (\[dyn.sys\]). Exceptions are some illustrative examples, for which $V$ is found in a finite form for some concrete functions $f$, but without a precise, generally applicable rule.
In the same year as Vanelli and Vidyasagar (1985), Balint [@Balint1] proved the following theorem:
(see [@Balint1] or [@KBB]) If the function $f$ is $\mathbb{R}$-analytic and the real parts of the eigenvalues of the matrix $\frac{\partial f}{\partial x}(0)$ are negative, then the domain of attraction $D_{a}(0)$ of the asymptotically stable steady state $x=0$ coincides with the natural domain of analyticity of the $\mathbb{R}$-analytical function $V$ defined by $$\label{eq.V}
\langle\nabla V(x),f(x)\rangle=-\|x\|^{2}\qquad V(0)=0$$ The function $V$ is strictly positive on $D_{a}(0)\setminus\{0\}$ and $\lim\limits_{x\rightarrow y}V(x)=\infty$ for any $y\in\partial D_{a}(0)$ or for $\|x\|\rightarrow\infty$. \[thm.Balint\]
In the case when the matrix $\frac{\partial f}{\partial x}(0)$ is diagonalizable, recurrence formulae have been established in [@Balint2] (see also [@KBB]) for the computation of the coefficients of the power series expansion at $0$ of the function $V$ defined by (\[eq.V\]) (called the optimal Lyapunov function in [@Balint2]):
Consider $S:\mathbb{C}^{n}\rightarrow\mathbb{C}^{n}$ an isomorphism which reduces $\frac{\partial f}{\partial x}(0)$ to the diagonal form $S^{-1}\frac{\partial f}{\partial
x}(0)S=diag(\lambda_{1},\lambda_{2},...,\lambda_{n})$. Let $g=S^{-1}\circ f\circ S$ and $W=V\circ S$. If the expansion of $W$ at $0$ is $$\label{serieW}
W(z_{1},z_{2},...,z_{n})=\sum\limits_{m=2}^{\infty}\sum\limits_{|j|=m}B_{j_{1}j_{2}...j_{n}}z_{1}^{j_{1}}z_{2}^{j_{2}}...z_{n}^{j_{n}}$$ and the expansions at $0$ of the scalar components $g_{i}$ of $g$ are $$\label{serieg}
g_{i}(z_{1},z_{2},...,z_{n})=\lambda_{i}z_{i}+\sum\limits_{m=2}^{\infty}\sum\limits_{|j|=m}b^{i}_{j_{1}j_{2}...j_{n}}z_{1}^{j_{1}}z_{2}^{j_{2}}...z_{n}^{j_{n}}$$ then the coefficients $B_{j_{1}j_{2}...j_{n}}$ of the expansion (\[serieW\]) are given by the following relations: $$\label{coefB} B_{j_{1}j_{2}...j_{n}}=
\begin{cases}
-\dfrac{1}{2\lambda_{i_{0}}}\sum\limits_{i=1}^{n}s^{2}_{ii_{0}} & \textrm{if } |j|=j_{i_{0}}=2\\[3mm]
-\dfrac{2}{\lambda_{p}+\lambda_{q}}\sum\limits_{i=1}^{n}s_{ip}s_{iq} & \textrm{if } |j|=2 \textrm{ and } j_{p}=j_{q}=1\\[3mm]
-\dfrac{1}{\sum\limits_{i=1}^{n}j_{i}\lambda_{i}}\sum\limits_{p=2}^{|j|-1}\sum\limits_{|k|=p,\,k_{i}\leq j_{i}}\sum\limits_{i=1}^{n}(j_{i}-k_{i}+1)\,b^{i}_{k_{1}k_{2}...k_{n}}B_{j_{1}-k_{1}...j_{i}-k_{i}+1...j_{n}-k_{n}} & \textrm{if } |j|\geq 3
\end{cases}$$ Using these recurrence formulae, the optimal Lyapunov functions $V$ and the domains of attraction $D_{a}(0)$ for some two-dimensional systems have been found in [@Balint2] in a finite form. \[rem.Balint.procedeu\]
*$$\label{ex1}
\left\{\begin{array}{l}
\dot{x}_{1}=-\lambda x_{1}+\rho_{1}x_{1}^{2}+\rho_{2}x_{1}x_{2}\\
\dot{x}_{2}=-\lambda x_{2}+\rho_{1}x_{1}x_{2}+\rho_{2}x_{2}^{2}
\end{array}\right.\qquad \lambda>0,\ \rho_{1},\rho_{2}\in\mathbb{R}$$ The Lyapunov function corresponding to the zero asymptotically stable steady state of this system is $$V(x_{1},x_{2})=\frac{x_{1}^{2}+x_{2}^{2}}{\lambda}\Big[\frac{\lambda^{2}}{(\rho_{1}x_{1}+\rho_{2}x_{2})^{2}}
\ln\frac{\lambda}{\lambda-(\rho_{1}x_{1}+\rho_{2}x_{2})}-\frac{\lambda}{\rho_{1}x_{1}+\rho_{2}x_{2}}\Big]$$ and the domain of attraction is $$D_{a}(0)=\{x\in\mathbb{R}^{2}:\rho_{1}x_{1}+\rho_{2}x_{2}<\lambda\}$$* \[exmp.Balint1\]
*$$\label{ex2}
\left\{\begin{array}{l}
\dot{x}_{1}=-\lambda x_{1}+\rho x_{1}^{3}+\rho x_{1}x_{2}^{2}\\
\dot{x}_{2}=-\lambda x_{2}+\rho x_{1}^{2}x_{2}+\rho x_{2}^{3}
\end{array}\right.\qquad \lambda>0,\ \rho\in\mathbb{R}$$ The Lyapunov function corresponding to the zero asymptotically stable steady state of this system is $$V(x_{1},x_{2})=\frac{1}{2\rho}\ln\frac{\lambda}{\lambda-\rho(x_{1}^{2}+x_{2}^{2})}$$ and the domain of attraction is $$D_{a}(0)=\{x\in\mathbb{R}^{2}:\lambda-\rho(x_{1}^{2}+x_{2}^{2})>0\}$$* \[exmp.Balint2\]
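As a quick check of this example (our addition, using Python's `sympy`; not part of the original argument), one can verify symbolically that the stated $V$ satisfies the defining identity $\langle\nabla V(x),f(x)\rangle=-\|x\|^{2}$ of (\[eq.V\]):

```python
import sympy as sp

x1, x2, lam, rho = sp.symbols("x1 x2 lam rho", positive=True)

u = x1**2 + x2**2
# Lyapunov function of Example [exmp.Balint2]
V = sp.log(lam / (lam - rho * u)) / (2 * rho)
# right-hand side of the system
f1 = -lam * x1 + rho * x1**3 + rho * x1 * x2**2
f2 = -lam * x2 + rho * x1**2 * x2 + rho * x2**3

# <grad V, f> must reduce to -(x1^2 + x2^2)
lhs = sp.diff(V, x1) * f1 + sp.diff(V, x2) * f2
assert sp.simplify(lhs + u) == 0
```

An analogous check can be attempted for Example \[exmp.Balint1\], at the cost of a heavier `simplify` call.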
Therefore, when the function $f$ is $\mathbb{R}$-analytic, the real parts of the eigenvalues of the matrix $\frac{\partial f}{\partial x}(0)$ are negative, and the matrix $\frac{\partial f}{\partial x}(0)$ is diagonalizable, then the optimal Lyapunov function $V$ can be found theoretically by computing the coefficients of its power series expansion at $0$, without knowing the solutions of system (\[dyn.sys\]). More precisely, in this way, the “embryo” $V_{0}$ (i.e. the sum of the series) of the function $V$ is found theoretically on the domain of convergence $D_{0}$ of the power series expansion. A formula for determining the region of convergence $D_{0}\subset D_{a}(0)$ of the series of $V$ can be found in [@Balint3] or [@KBB]. If $D_{0}$ is a strict part of $D_{a}(0)$, then the “embryo” $V_{0}$ can be extended using the standard procedure of analytic continuation:
If $D_{0}$ is strictly contained in $D_{a}(0)$, then there exists a point $x^{0}\in\partial D_{0}$ such that the function $V_{0}$ is bounded on a neighborhood of $x^{0}$. Consider a point $x^{0}_{1}\in D_{0}$ close to $x^{0}$, and the power series expansion of $V_{0}$ at $x^{0}_{1}$ (the coefficients of this expansion are determined by the derivatives of $V_{0}$ at $x_{1}^{0}$). Using the formula from [@Balint3] or [@KBB], the domain of convergence $D_{1}$ of the series centered at $x^{0}_{1}$ is obtained, which gives a new part $D_{1}\setminus(D_{0}\bigcap D_{1})$ of the domain of attraction $D_{a}(0)$. The sum $V_{1}$ of the series centered at $x^{0}_{1}$ is a continuation of the function $V_{0}$ to $D_{1}$ and coincides with $V$ on $D_{1}$. At this step, the part $D_{0}\bigcup D_{1}$ of $D_{a}(0)$ and the restriction of $V$ to $D_{0}\bigcup D_{1}$ are obtained.
If there exists a point $x^{1}\in\partial (D_{0}\bigcup D_{1})$ such that the function $V|_{D_{0}\bigcup D_{1}}$ is bounded on a neighborhood of $x^{1}$, then the domain $D_{0}\bigcup D_{1}$ is strictly included in the domain of attraction $D_{a}(0)$. In this case, the procedure described above is repeated, at a point $x_{1}^{1}$ close to $x^{1}$.
The procedure cannot be continued in the case when it is found that on the boundary of the domain $D_{0}\bigcup D_{1}\bigcup
...\bigcup D_{p}$ obtained at step $p$, there are no points having neighborhoods on which $V|_{D_{0}\bigcup D_{1}\bigcup ...\bigcup
D_{p}}$ is bounded. We illustrate this process in the following example:
*Consider the following differential equation: $$\dot{x}=x(x-1)(x+2)$$ $x=0$ is an asymptotically stable steady state for this equation. The coefficients of the power series expansion at $0$ of the optimal Lyapunov function are computed using (\[coefB\]): $A_{n}=\frac{2^{n-1}+(-1)^{n}}{3n\cdot 2^{n-1}}$, $n\geq 2$. The domain of convergence $D_{0}=(-1,1)$ of the series is found using the formula: $$x\in D_{0}\qquad \textrm{iff}\qquad \overline{\lim_{n}}\sqrt[n]{|A_{n}x^{n}|}<1$$ The embryo $V_{0}(x)$ is unbounded at $1$ and bounded at $-1$, as $V_{0}(-1)=\frac{\ln 2}{3}$. We expand $V_{0}(x)$ at $-0.9$, close to $-1$. The coefficients of the series centered at $-0.9$ are: $A'_{n}=\frac{1}{3n}[\frac{1}{(1.9)^{n}}+\frac{2(-1)^{n}}{(1.1)^{n}}]$. The domain of convergence $D_{1}$ of the series centered at $-0.9$ is given by: $$x\in D_{1}\qquad \textrm{iff}\qquad \overline{\lim_{n}}\sqrt[n]{|A'_{n}(x+0.9)^{n}|}<1$$ and it is $D_{1}=(-2,0.2)$. So far, we have obtained the part $D=D_{0}\bigcup D_{1}=(-2,1)$ of the domain of attraction $D_{a}(0)$. As the function $V$ is unbounded at both ends of the interval, we conclude that $D_{a}(0)=(-2,1)$.*
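The numbers in this example are easy to reproduce numerically. The following sketch (our illustration; the recurrence below is the one-dimensional specialization of (\[coefB\]) obtained by equating powers of $x$ in $V'(x)f(x)=-x^{2}$) recomputes the coefficients $A_{n}$, compares them with the closed form above, and approximates $V_{0}(-1)=\frac{\ln 2}{3}$ by a partial sum:

```python
import math

# f(x) = x(x-1)(x+2) = x^3 + x^2 - 2x: linear part and nonlinear coefficients
lam = -2.0
b = {2: 1.0, 3: 1.0}

# 1-D recurrence from V'(x) f(x) = -x^2 with V(x) = sum_{m>=2} A_m x^m:
#   A_2 = -1/(2*lam),   A_j = -(1/(j*lam)) sum_k (j-k+1) b_k A_{j-k+1}
N = 4000
A = {2: -1.0 / (2.0 * lam)}
for j in range(3, N + 1):
    s = sum((j - k + 1) * b[k] * A[j - k + 1]
            for k in (2, 3) if j - k + 1 >= 2)
    A[j] = -s / (j * lam)

# compare with the closed form A_n = (2^(n-1) + (-1)^n) / (3n 2^(n-1))
for n in range(2, 30):
    closed = (2 ** (n - 1) + (-1) ** n) / (3 * n * 2 ** (n - 1))
    assert abs(A[n] - closed) < 1e-12

# partial sum of the embryo at x = -1 approaches ln(2)/3
V0_minus1 = sum(A[n] * (-1) ** n for n in range(2, N + 1))
assert abs(V0_minus1 - math.log(2) / 3) < 1e-3
```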
We have illustrated how this approximation technique described in [@Balint1; @Balint2; @Balint3] works in some particular cases. In more complex cases (for example if the right hand side terms in (\[dyn.sys\]) are just polynomials of second degree), we can only compute effectively the coefficients $A_{j_{1}j_{2}...j_{n}}$ of the expansion of $V$ up to a finite degree $p$. With these coefficients, the Taylor polynomial of degree $p$ corresponding to $V$: $$V_{0}^{p}(x_{1},x_{2},...,x_{n})=\sum\limits_{m=2}^{p}\sum\limits_{|j|=m}A_{j_{1}j_{2}...j_{n}}
x_{1}^{j_{1}}x_{2}^{j_{2}}...x_{n}^{j_{n}}$$ can be constructed. In what follows, it will be shown how $V_{0}^{p}$ can be used in order to approximate $D_{a}(0)$.
Theoretical results
===================
For $r>0$, we denote by $B(r)=\{x\in\mathbb{R}^{n}:\|x\|<r\}$ the hypersphere of radius $r$.
For any $p\geq 2$, there exists $r_{p}>0$ such that for any $x\in
\overline{B(r_{p})}\setminus\{0\}$ one has:
1. $V_{p}(x)>0$
2. $\langle\nabla V_{p}(x), f(x)\rangle <0$
\[thm.Vp.lyap.bila\]
First, we will prove that for $p=2$, the function $V_{2}$ has the properties 1. and 2. For this, write the function $f$ as: $$f(x)=Ax+g(x)\qquad \textrm{with } A=\frac{\partial f}{\partial x}(0)$$ and the equation $$\langle\nabla V(x),f(x)\rangle=-\|x\|^{2}$$ as $$\langle\nabla V_{2}(x),Ax\rangle + \langle\nabla(V-V_{2})(x),Ax+g(x)\rangle + \langle\nabla V_{2}(x),g(x)\rangle =-\|x\|^{2}$$ Equating the terms of second degree, we obtain: $$\langle\nabla V_{2}(x),Ax\rangle =-\|x\|^{2}$$ As $V_{2}(0)=0$, it follows that: $$\label{V2.expresie}
V_{2}(x)=\int_{0}^{\infty}\|e^{At}x\|^{2}dt$$ This shows that $V_{2}(x)>0$ for any $x\in\mathbb{R}^{n}\setminus\{0\}$.
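In practice, this quadratic part need not be computed by quadrature: writing $V_{2}(x)=x^{T}Px$ with $P=\int_{0}^{\infty}e^{A^{T}t}e^{At}dt$, the matrix $P$ is the solution of the Lyapunov matrix equation $A^{T}P+PA=-I$. A small numerical sketch (our addition; the matrix $A$ below is an arbitrary stable example, and `scipy` is assumed to be available):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# an arbitrary stable matrix (eigenvalues -2 and -1)
A = np.array([[-2.0, 1.0],
              [0.0, -1.0]])

# V_2(x) = x^T P x, where A^T P + P A = -I
P = solve_continuous_lyapunov(A.T, -np.eye(2))

# P is symmetric positive definite, so V_2(x) > 0 for x != 0
assert np.all(np.linalg.eigvalsh((P + P.T) / 2) > 0)

# <grad V_2(x), Ax> = -||x||^2 for every x
rng = np.random.default_rng(0)
for _ in range(5):
    x = rng.standard_normal(2)
    grad = (P + P.T) @ x
    assert abs(grad @ (A @ x) + x @ x) < 1e-10
```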
On the other hand, one has: $$\begin{aligned}
% \nonumber to remove numbering (before each equation)
\nonumber \langle\nabla V_{2}(x),f(x)\rangle &=& \langle\nabla V_{2}(x),Ax\rangle +\langle\nabla V_{2}(x),g(x)\rangle = \\
\nonumber &=& -\|x\|^{2} + \langle\nabla V_{2}(x),g(x)\rangle = \\
&=& -\|x\|^{2}[1-\frac{\langle\nabla V_{2}(x),g(x)\rangle}{\|x\|^{2}}]\end{aligned}$$ As $\lim\limits_{\|x\|\rightarrow 0}\frac{\langle\nabla
V_{2}(x),g(x)\rangle}{\|x\|^{2}}=0$, there exists $r_{2}>0$ such that for any $x\in \overline{B(r_{2})}\setminus\{0\}$, we have $|\frac{\langle\nabla
V_{2}(x),g(x)\rangle}{\|x\|^{2}}|<\frac{1}{2}$. Therefore, for any $x\in \overline{B(r_{2})}\setminus\{0\}$, we get that: $$\langle\nabla V_{2}(x),f(x)\rangle\leq -\frac{1}{2}\|x\|^{2}$$ We will show that for any $p>2$, the function $V_{p}$ satisfies conditions 1. and 2. Write the function $V_{p}$ as $$V_{p}(x)=V_{2}(x)[1+\frac{V_{p}(x)-V_{2}(x)}{V_{2}(x)}]\qquad
x\neq 0$$ As $\lim\limits_{\|x\|\rightarrow
0}\frac{V_{p}(x)-V_{2}(x)}{V_{2}(x)}=0$, there exists $r_{p}^{1}$ such that for any $x\in \overline{B(r_{p}^{1})}\setminus\{0\}$, we have $|\frac{V_{p}(x)-V_{2}(x)}{V_{2}(x)}|<\frac{1}{2}$. Therefore, for any $x\in \overline{B(r_{p}^{1})}\setminus\{0\}$, we have: $$V_{p}(x)\geq\frac{1}{2}V_{2}(x)>0$$ thus, $V_{p}$ satisfies condition 1.
On the other hand, we have: $$\begin{aligned}
\nonumber \langle\nabla V_{p}(x),f(x)\rangle &=& \langle\nabla
V_{2}(x),Ax\rangle[1+\frac{\langle\nabla
(V_{p}-V_{2})(x),f(x)\rangle+\langle\nabla
V_{2}(x),g(x)\rangle}{\langle\nabla
V_{2}(x),Ax\rangle}]= \\
&=& -\|x\|^{2}[1-\frac{\langle\nabla
(V_{p}-V_{2})(x),f(x)\rangle+\langle\nabla
V_{2}(x),g(x)\rangle}{\|x\|^{2}}]\end{aligned}$$ As $\lim\limits_{\|x\|\rightarrow 0}\frac{\langle\nabla
(V_{p}-V_{2})(x),f(x)\rangle+\langle\nabla
V_{2}(x),g(x)\rangle}{\|x\|^{2}}=0$, there exists $r_{p}^{2}$ such that for any $x\in \overline{B(r_{p}^{2})}\setminus\{0\}$, we have $|\frac{\langle\nabla (V_{p}-V_{2})(x),f(x)\rangle+\langle\nabla
V_{2}(x),g(x)\rangle}{\|x\|^{2}}|<\frac{1}{2}$. Therefore, for any $x\in \overline{B(r_{p}^{2})}\setminus\{0\}$, we have: $$\langle\nabla V_{p}(x),f(x)\rangle\leq -\frac{1}{2}\|x\|^{2}$$ Therefore, for any $x\in \overline{B(r_{p})}\setminus\{0\}$, where $r_{p}=\min\{r_{p}^{1},r_{p}^{2}\}$, the function $V_{p}$ satisfies conditions 1. and 2.
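For the scalar example $\dot{x}=x(x-1)(x+2)$ treated in the Introduction, conditions 1. and 2. of Theorem \[thm.Vp.lyap.bila\] can be checked numerically on a concrete ball. The sketch below (our addition; the radius $0.5$ is simply an empirically convenient choice, not the optimal $r_{p}$) uses the coefficients $A_{n}$ of the worked example:

```python
import numpy as np

# Taylor coefficients of the optimal Lyapunov function for
# f(x) = x^3 + x^2 - 2x (see the worked 1-D example)
p = 6
A = [(2 ** (n - 1) + (-1) ** n) / (3 * n * 2 ** (n - 1)) for n in range(2, p + 1)]

def Vp(x):          # Taylor polynomial V_p of degree p
    return sum(a * x ** n for a, n in zip(A, range(2, p + 1)))

def dVp(x):         # its derivative
    return sum(n * a * x ** (n - 1) for a, n in zip(A, range(2, p + 1)))

def f(x):
    return x ** 3 + x ** 2 - 2 * x

# on the ball B(0.5) minus the origin: V_p > 0 and V_p' f < 0
xs = np.linspace(-0.5, 0.5, 2001)
xs = xs[np.abs(xs) > 1e-6]
assert np.all(Vp(xs) > 0)
assert np.all(dVp(xs) * f(xs) < 0)
```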
For any $p\geq 2$, there exists a maximal domain $G_{p}\subset\mathbb{R}^{n}$ such that $0\in G_{p}$ and for any $x\in G_{p}\setminus\{0\}$, the function $V_{p}$ satisfies 1. and 2. from Theorem \[thm.Vp.lyap.bila\]. In other words, for any $p\geq 2$ the function $V_{p}$ is a Lyapunov function for (\[dyn.sys\]) (in the sense of [@Gruyitch]).
Theorem \[thm.Vp.lyap.bila\] shows that the Taylor polynomials of degree $p\geq 2$ associated to $V$ at $0$ are Lyapunov functions. This sequence of Lyapunov functions is different from the one provided by Vanelli and Vidyasagar in [@Vanelli-Vidyasagar].
For any $p\geq 2$, there exist $c>0$ and a closed and connected set $S\subset\mathbb{R}^{n}$ with the following properties:
1. $0\in Int(S)$
2. $V_{p}(x)< c$ for any $x\in Int(S)$
3. $V_{p}(x)=c$ for any $x\in\partial S$
4. $S$ is compact and included in the set $G_{p}$.
\[thm.Ncp.exist\]
Let $p\geq 2$ and let $r_{p}>0$ be the radius determined in Theorem \[thm.Vp.lyap.bila\]. Let $c=\min\limits_{\|x\|=r_{p}}V_{p}(x)$ and $S'=\{x\in\overline{B(r_{p})}:V_{p}(x)< c\}$. It is obvious that $c>0$ and that there exists $x^{\star}$ with $\|x^{\star}\|=r_{p}$ such that $V_{p}(x^{\star})=c$. The set $S'$ is open, $0\in S'$ and $S'\subset \overline{B(r_{p})}\subset G_{p}$.
We will prove that $V_{p}(x)=c$ for any $x\in\partial S'$. Let $\bar{x}\in\partial S'$. Thus, $\|\bar{x}\|\leq r_{p}$ and there exists a sequence $x^{k}\in S'$ such that $x^{k}\rightarrow
\bar{x}$ as $k\rightarrow\infty$. As $V_{p}(x^{k})<c$, we have that $V_{p}(\bar{x})=\lim\limits_{k\rightarrow\infty}V_{p}(x^{k})\leq
c$. The case $\|\bar{x}\|= r_{p}$ and $V_{p}(\bar{x})<c$ is impossible, because $c=\min\limits_{\|x\|=r_{p}}V_{p}(x)$. The case $\|\bar{x}\|< r_{p}$ and $V_{p}(\bar{x})<c$ is also impossible, because this would mean that $\bar{x}$ belongs to the interior of the set $S'$, and not to its boundary. Therefore, for any $\bar{x}\in\partial S'$ we have $V_{p}(\bar{x})=c$.
If the set $S'$ is not connected (see Example \[ex.nu.rad.cresc\] in this paper), we denote by $S''$ its connected component which contains the origin, and let $S=\overline{S''}$. Then it is obvious that $S$ is connected (being the closure of the open connected set $S''$), $0\in Int(S)=S''$, and that for any $x\in Int(S)=S''$, we have $V_{p}(x)<c$. Moreover, as $\partial S=\partial S''$, we have $V_{p}(x)=c$ for any $x\in \partial S$. As $S''$ is bounded, we obtain that the closed set $S$ is also bounded, thus compact. As $S''\subset \overline{B(r_{p})}\subset G_{p}$, we have that $S=\overline{S''}\subset \overline{B(r_{p})}\subset G_{p}$. Therefore, $S$ satisfies properties 1-4.
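The construction of $c$ and $S$ in this proof can be made concrete on Example \[exmp.Balint2\] with $\lambda=\rho=1$, where $V(x)=-\frac{1}{2}\ln(1-\|x\|^{2})$, the degree-4 Taylor polynomial is $V_{4}(x)=\frac{u}{2}+\frac{u^{2}}{4}$ with $u=\|x\|^{2}$, and the true domain of attraction is the open unit disk. The sketch below (our addition; the radius $0.8$ is an ad-hoc admissible choice, not the $r_{p}$ of Theorem \[thm.Vp.lyap.bila\]) computes $c=\min_{\|x\|=r}V_{4}(x)$ and checks on a grid that the resulting sublevel set lies inside $D_{a}(0)$:

```python
import numpy as np

def V4(x1, x2):
    u = x1 ** 2 + x2 ** 2
    return u / 2 + u ** 2 / 4

r = 0.8

# c = min of V4 over the circle ||x|| = r (the theorem's construction)
theta = np.linspace(0.0, 2 * np.pi, 1001)
c = V4(r * np.cos(theta), r * np.sin(theta)).min()

# by radial symmetry, c equals V4 evaluated at u = r^2
u = r ** 2
assert abs(c - (u / 2 + u ** 2 / 4)) < 1e-12

# grid approximation of S' = {x in closed B(r) : V4(x) < c}
xs = np.linspace(-1.2, 1.2, 481)
X, Y = np.meshgrid(xs, xs)
S = (V4(X, Y) < c) & (X ** 2 + Y ** 2 <= r ** 2)

# every grid point of S' lies in the true domain of attraction (unit disk)
assert np.all(X[S] ** 2 + Y[S] ** 2 < 1.0)
```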
Let $p\geq 2$, $c>0$ and let $S$ be a closed and connected set satisfying 1-4 from Theorem \[thm.Ncp.exist\]. Then for any $x^{0}\in S$, the solution $x(t;0,x^{0})$ of system (\[dyn.sys\]) starting from $x^{0}$ is defined on $[0,\infty)$ and belongs to $Int(S)$ for any $t>0$. \[lemma.Npc.prelung\]
Let $x^{0}\in S$. We denote by $[0,\beta_{x^{0}})$ the right maximal interval of existence of the solution $x(t;0,x^{0})$ of system (\[dyn.sys\]) with starting state $x^{0}$.
First, if $x^{0}\in Int(S)\setminus\{0\}$, we show that $x(t;0,x^{0})\in Int(S)$, for all $t\in [0,\beta_{x^{0}})$. Suppose the contrary, i.e. there exists $T\in(0,\beta_{x^{0}})$ such that $x(t;0,x^{0})\in Int(S)$, for $t\in [0,T)$ and $x(T;0,x^{0})\in \partial S$ (i.e. $V_{p}(x(T;0,x^{0}))=c$). As $x(t;0,x^{0})\in G_{p}\setminus\{0\}$, for $t\in [0,T)$, $V_{p}(x(t;0,x^{0}))$ is strictly decreasing, and it follows that $V_{p}(x(t;0,x^{0}))<V_{p}(x^{0})<c$, for $t\in (0,T)$. Therefore $V_{p}(x(T;0,x^{0}))<c$, which contradicts the supposition $x(T;0,x^{0})\in \partial S$. Thus, $x(t;0,x^{0})\in Int(S)$, for all $t\in [0,\beta_{x^{0}})$. (It is clear that for $x^{0}=0$, the solution $x(t;0,0)=0\in Int(S)$, for all $t\geq 0$.)
If $x^{0}\in \partial S$, we show that $x(t;0,x^{0})\in Int(S)$, for all $t\in (0,\beta_{x^{0}})$. As the compact set $S$ is a subset of the domain $G_{p}$, the continuity of $x(t;0,x^{0})$ provides the existence of $T_{x^{0}}>0$ such that $x(t;0,x^{0})\in
G_{p}\setminus\{0\}$ for any $t\in[0,T_{x^{0}}]\subset[0,\beta_{x^{0}})$. Therefore $V_{p}(x(t;0,x^{0}))$ is strictly decreasing on $[0,T_{x^{0}}]$, and it follows that $V_{p}(x(t;0,x^{0}))<V_{p}(x^{0})= c$, for any $t\in (0,T_{x^{0}}]$. This means that $x(t;0,x^{0})\in
Int(S)$, for any $t\in (0,T_{x^{0}}]$. The first part of the proof guarantees that $x(t;0,x^{0})\in Int(S)$, for all $t\in
[T_{x^{0}},\beta_{x^{0}})$, therefore, for all $t\in
(0,\beta_{x^{0}})$.
In conclusion, for any $x^{0}\in S$, we have that $x(t;0,x^{0})\in
Int(S)$, for all $t\in (0,\beta_{x^{0}})$.
As for any $x^{0}\in S$, the solution $x(t;0,x^{0})$ defined on $[0,\beta_{x^{0}})$ belongs to the compact set $S$, we obtain that $\beta_{x^{0}}=\infty$ and the solution $x(t;0,x^{0})$ is defined on $[0,\infty)$, for each $x^{0}\in S$. Moreover, $x(t;0,x^{0})\in
Int(S)$, for all $t>0$.
Lemma \[lemma.Npc.prelung\] states that a closed and connected set $S$ satisfying 1-4 from Theorem \[thm.Ncp.exist\] is positively invariant to the flow of system (\[dyn.sys\]).
(LaSalle-type theorem) Let $p\geq 2$, $c>0$, and let $S$ be a closed and connected set satisfying properties 1-4 from Theorem \[thm.Ncp.exist\]. Then $S$ is a part of the domain of attraction $D_{a}(0)$. \[thm.Ncp.parte.DA\]
Let $x^{0}\in S\setminus\{0\}$. To prove that $\lim\limits_{t\rightarrow\infty}x(t;0,x^{0})=0$, it is sufficient to prove that $\lim\limits_{k\rightarrow\infty}x(t_{k};0,x^{0})=0$, for any sequence $t_{k}\rightarrow\infty$.
Consider $t_{k}\rightarrow\infty$. The terms of the sequence $x(t_{k};0,x^{0})$ belong to the compact set $S$. Thus, there exists a convergent subsequence $x(t_{k_{j}};0,x^{0})\rightarrow y^{0}\in
S$.
It can be shown that $$\label{ineg0}
V_{p}(x(t;0,x^{0}))\geq V_{p}(y^{0}) \textrm{ for all } t\geq 0.$$ Indeed, $x(t_{k_{j}};0,x^{0})\rightarrow y^{0}$, and $V_{p}$ is continuous and strictly decreasing along the trajectories, which implies that $V_{p}(x(t_{k_{j}};0,x^{0}))\geq V_{p}(y^{0})$ for any $k_{j}$. On the other hand, for any $t\geq 0$, there exists $k_{j}$ such that $t_{k_{j}}\geq t$, and therefore $V_{p}(x(t;0,x^{0}))\geq V_{p}(x(t_{k_{j}};0,x^{0}))\geq
V_{p}(y^{0})$.
We show now that $y^{0}=0$. Suppose the contrary, i.e. $y^{0}\neq
0$. Inequality (\[ineg0\]) becomes $$\label{contrad}
V_{p}(x(t;0,x^{0}))\geq V_{p}(y^{0})>0 \textrm{ for all } t\geq 0$$ As $V_{p}(x(s;0,y^{0}))$ is strictly decreasing on $[0,\infty)$, we find that $$V_{p}(x(s;0,y^{0}))<V_{p}(y^{0})\textrm{ for all } s>0$$ For $\bar{s}>0$, there exists a neighborhood $U_{x(\bar{s};0,y^{0})}\subset S$ of $x(\bar{s};0,y^{0})$ such that for any $x\in U_{x(\bar{s};0,y^{0})}$ we have $0<V_{p}(x)<V_{p}(y^{0})$. On the other hand, for the neighborhood $U_{x(\bar{s};0,y^{0})}$ there exists a neighborhood $U_{y^{0}}\subset S$ of $y^{0}$ such that $x(\bar{s};0,y)\in
U_{x(\bar{s};0,y^{0})}$ for any $y\in U_{y^{0}}$. Therefore: $$\label{ineg1}
V_{p}(x(\bar{s};0,y))<V_{p}(y^{0})\textrm{ for all } y\in U_{y^{0}}$$ As $x(t_{k_{j}};0,x^{0})\rightarrow y^{0}$, there exists $k_{\bar{j}}$ such that $x(t_{k_{j}};0,x^{0})\in U_{y^{0}}$, for any $k_{j}\geq k_{\bar{j}}$. Taking $y=x(t_{k_{j}};0,x^{0})$ in (\[ineg1\]), it results that $$\label{ineg2}
V_{p}(x(\bar{s}+t_{k_{j}};0,x^{0}))=V_{p}(x(\bar{s};0,x(t_{k_{j}};0,x^{0})))<V_{p}(y^{0})
\qquad \textrm{for }k_{j}\geq k_{\bar{j}}$$ which contradicts (\[contrad\]). This means that $y^{0}=0$; consequently, every convergent subsequence of $x(t_{k};0,x^{0})$ converges to $0$. This provides that the sequence $x(t_{k};0,x^{0})$ converges to $0$, for any $t_{k}\rightarrow
\infty$, thus $\lim\limits_{t\rightarrow\infty}x(t;0,x^{0})=0$, and $x^{0}\in D_{a}(0)$.
Therefore, the set $S$ is contained in the domain of attraction $D_{a}(0)$.
For any $p\geq 2$ and $c>0$ there exists at most one closed and connected set satisfying 1-4 from Theorem \[thm.Ncp.exist\].
Suppose the contrary, i.e. for a $p\geq 2$ and $c>0$ there exist two different closed and connected sets $S_{1}$ and $S_{2}$ satisfying 1-4 from Theorem \[thm.Ncp.exist\]. Assume for example that there exists $x^{0}\in S_{1}\setminus S_{2}$. Due to Theorem \[thm.Ncp.parte.DA\], $S_{1}\subset D_{a}(0)$ and therefore $\lim\limits_{t\rightarrow\infty}x(t;0,x^{0})=0$. As $x^{0}\notin S_{2}$, and $S_{2}$ is a closed and connected neighborhood of $0$, there exists $T>0$ such that $x(T;0,x^{0})\in\partial S_{2}$. Therefore, $V_{p}(x(T;0,x^{0}))=c$ which contradicts Lemma \[lemma.Npc.prelung\]. Consequently, we have $S_{1}\subseteq
S_{2}$. By the same reasoning, $S_{2}\subseteq S_{1}$. Finally, $S_{1}= S_{2}$.
If for $p\geq 2$ and $c>0$ there exists a closed and connected set satisfying 1-4 from Theorem \[thm.Ncp.exist\], then it is unique and it will be denoted by $N_{p}^{c}$.
Any set $N_{p}^{c}$ is included in the domain of attraction $D_{a}(0)$.
Let $p\geq 2$ and $c>0$ be such that the set $N_{p}^{c}$ exists. Then, for any $c'\in(0,c]$ the set $\{x\in
N_{p}^{c}:V_{p}(x)\leq c'\}$ coincides with the set $N_{p}^{c'}$. \[lemma.Npc.incluz.conex\]
Let $c'\in(0,c]$. It is obvious that $N_{p}^{c'}$ is included in the set $\{x\in N_{p}^{c}:V_{p}(x)\leq c'\}$. Conversely, let $x^{0}\in N_{p}^{c}$ be such that $V_{p}(x^{0})\leq c'$. We know that $V_{p}(x(t;0,x^{0}))<V_{p}(x^{0})\leq c'$, for any $t>0$. Theorem \[thm.Ncp.parte.DA\] provides that $x^{0}\in N_{p}^{c}\subset D_{a}(0)$; therefore, $x^{0}$ is connected to $0$ through the continuous trajectory $x(t;0,x^{0})$, along which $V_{p}$ takes values below $c'$. In conclusion, $x^{0}\in N_{p}^{c'}$.
If for $p\geq 2$ and $c>0$ there exists $N_{p}^{c}$, then for any $c'\in(0,c)$ there exists $N_{p}^{c'}$ and $N_{p}^{c'}\subset
N_{p}^{c}$. Moreover, for any $c_{1},c_{2}\in (0,c)$ we have $N_{p}^{c_{1}}\subset N_{p}^{c_{2}}$ if and only if $c_{1}<c_{2}$.\[thm.Ncp.incluziune\]
Lemma \[lemma.Npc.incluz.conex\] provides that for any $c'\in(0,c)$ there exists $N_{p}^{c'}=\{x\in
N_{p}^{c}:V_{p}(x)\leq c'\}$. It is obvious that $N_{p}^{c'}\subset N_{p}^{c}$.
We now show that for any $c_{1},c_{2}\in (0,c)$ we have $N_{p}^{c_{1}}\subset N_{p}^{c_{2}}$ if and only if $c_{1}<c_{2}$.
To show the *necessity*, suppose the contrary, i.e. $N_{p}^{c_{1}}\subset N_{p}^{c_{2}}$ and $c_{1}\geq c_{2}$. Let $x^{0}\in\partial N_{p}^{c_{2}}\subset G_{p}$. Then $V_{p}(x^{0})=c_{2}$ and as $x^{0}\in G_{p}$, we get that $$\label{c2.contrad}
V_{p}(x(t;0,x^{0}))\leq V_{p}(x^{0})=c_{2}\qquad \textrm {for any
}t\geq 0$$ Theorem \[thm.Ncp.parte.DA\] provides that $x^{0}\in\partial
N_{p}^{c_{2}}\subset D_{a}(0)$, therefore $\lim\limits_{t\rightarrow\infty}x(t;0,x^{0})=0$. As $N_{p}^{c_{1}}$ and $N_{p}^{c_{2}}$ are connected neighborhoods of $0$ and $N_{p}^{c_{1}}\subset N_{p}^{c_{2}}$, there exists $T\geq
0$ such that $x(T;0,x^{0})\in \partial N_{p}^{c_{1}}$. This means that $V_{p}(x(T;0,x^{0}))=c_{1}\geq c_{2}$, and (\[c2.contrad\]) provides that $c_{1}=c_{2}$. As $N_{p}^{c_{1}}$ is strictly included in $N_{p}^{c_{2}}$, there exists $\bar{x}\in\partial
N_{p}^{c_{1}}$ (i.e. $V_{p}(\bar{x})=c_{1}=c_{2}$) such that $\bar{x}\in Int(N_{p}^{c_{2}})$. This contradicts the property 2 from Theorem \[thm.Ncp.exist\] concerning $N_{p}^{c_{2}}$. In conclusion, $c_{1}< c_{2}$.
To prove the *sufficiency*, suppose that $c_{1}<c_{2}$ and let $x^{0}\in N_{p}^{c_{1}}\setminus\{0\}$. As $x^{0}\in
N_{p}^{c_{1}}\subset D_{a}(0)$, we have that $\lim\limits_{t\rightarrow\infty}x(t;0,x^{0})=0$, so $x^{0}$ is connected to $0$ through the continuous trajectory $x(t;0,x^{0})$. Moreover, as $x^{0}\in N_{p}^{c_{1}}\setminus\{0\}$, we have $V_{p}(x^{0})\leq c_{1}<c_{2}$. This means that $x^{0}\in
N_{p}^{c_{2}}$, therefore $N_{p}^{c_{1}}\subseteq N_{p}^{c_{2}}$. The inclusion is strict, because $N_{p}^{c_{1}}=N_{p}^{c_{2}}$ means $\partial N_{p}^{c_{1}}=\partial N_{p}^{c_{2}}$, i.e. $c_{1}=c_{2}$, which contradicts $c_{1}<c_{2}$.
For a given $p\geq 2$, the family of sets $N_{p}^{c}$ is totally ordered by inclusion, and $\bigcup\limits_{c}N_{p}^{c}$ is included in $D_{a}(0)$. Therefore, for a given $p\geq 2$, the largest part of $D_{a}(0)$ which can be found by this method is $\bigcup\limits_{c}N_{p}^{c}$.
For any $p\geq 2$ let $R_{p}=\{r>0:\overline{B(r)}\subset
G_{p}\}$. For $r\in R_{p}$ we denote $c_{p}^{r}=\inf\limits_{\|x\|=r}V_{p}(x)$.
For any $r\in R_{p}$, there exists the set $N_{p}^{c_{p}^{r}}$ and $N_{p}^{c_{p}^{r}}\subseteq \overline{B(r)}$. \[cor.Ncpcpr.exist\]
For any $p\geq 2$ and any $r',r''\in R_{p}$, $r'<r''$, such that $V_{p}$ is radially increasing on $\overline{B(r'')}$ we have $c_{p}^{r'}<c_{p}^{r''}$. \[cor.Ncpcpr.incluziune\]
In some cases, it can be shown that the function $V_{p}$ is radially increasing on $G_{p}$:
- $V_{2}$ is radially increasing on $\mathbb{R}^{n}$;
- If $n=1$, then for any $p\geq 2$, $V_{p}$ is radially increasing on $G_{p}$.
This result is not true in general, as the following example shows:
*Consider the following system of differential equations: $$\begin{array}{ll}
\left\{\begin{array}{l}
\dot{x_{1}}=-x_{1}-x_{1}x_{2}\\
\dot{x_{2}}=-x_{2}+x_{1}x_{2}
\end{array}\right.
\end{array}$$ for which $(0,0)$ is an asymptotically stable steady state. For $p=3$ the Lyapunov function $V_{3}(x_{1},x_{2})$ is given by: $$V_{3}(x_{1},x_{2})=\frac{1}{2}(x_{1}^{2}+x_{2}^{2})+\frac{1}{3}(x_{1}x_{2}^{2}-x_{2}x_{1}^{2})$$ Consider the point $(3\sqrt{5},\sqrt{5})\in \partial G_{3}$ and let $g:[0,1)\rightarrow \mathbb{R}$ be defined by $g(\lambda)=V_{3}(3\sqrt{5}\lambda,\sqrt{5}\lambda)$. The function $g$ is increasing on $[0,\frac{\sqrt{5}}{3}]$ and decreasing on $(\frac{\sqrt{5}}{3},1)$; therefore, the Lyapunov function $V_{3}$ is not radially increasing in the direction $(3\sqrt{5},\sqrt{5})$. In conclusion, $V_{3}$ is not radially increasing on $G_{3}$.*
*Moreover, for $c=0.32$ there exists $N_{3}^{c}$, but the set $\{x=(x_{1},x_{2})\in G_{3}:V_{3}(x_{1},x_{2})\leq c\}$ is not connected. The reason is that the point $(\bar{x_{1}},\bar{x_{2}})=(\frac{123}{8},\frac{41}{24})\in\partial
G_{3}$ with $V_{3}(\bar{x_{1}},\bar{x_{2}})=0$ has a nonempty neighborhood $U$ such that $V_{3}(x_{1},x_{2})\leq c$, for any $(x_{1},x_{2})\in G_{3}\bigcap U$ and $(G_{3}\bigcap U)\bigcap
N_{3}^{c}= \emptyset$.* \[ex.nu.rad.cresc\]
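The non-monotonicity claimed in the example above is easy to verify numerically. The sketch below (using only the explicit formula for $V_{3}$ given in the example) evaluates $V_{3}$ along the ray through $(3\sqrt{5},\sqrt{5})$; along this ray $g(\lambda)=25\lambda^{2}-10\sqrt{5}\,\lambda^{3}$, which has its maximum at $\lambda=\sqrt{5}/3$.

```python
import math

SQRT5 = math.sqrt(5.0)

def V3(x1, x2):
    # truncated Lyapunov function for p = 3 from the example
    return 0.5 * (x1 ** 2 + x2 ** 2) + (x1 * x2 ** 2 - x2 * x1 ** 2) / 3.0

def g(lam):
    # V3 restricted to the ray through the boundary point (3*sqrt(5), sqrt(5))
    return V3(3 * SQRT5 * lam, SQRT5 * lam)

# g rises on [0, sqrt(5)/3] and falls afterwards, so V3 is not
# radially increasing in this direction
lam_star = SQRT5 / 3
print(g(0.5) < g(lam_star), g(0.9) < g(lam_star))
```

Both comparisons evaluate to `True`, confirming that $g$ is not monotone on $[0,1)$.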
For any $p\geq 2$ there exists $\rho_{p}>0$ such that $V_{p}$ is radially increasing on $\overline{B(\rho_{p})}$.
It can be easily verified that $V_{2}$ is radially increasing on $\mathbb{R}^{n}$, using relation (\[V2.expresie\]). This provides that for any $x\in\mathbb{R}^{n}\setminus\{0\}$, the function $g_{2}^{x}:\mathbb{R}_{+}\rightarrow\mathbb{R}_{+}$ defined by $g_{2}^{x}(\lambda)=V_{2}(\lambda x)$ is strictly increasing on $\mathbb{R}_{+}$, therefore $\frac{d}{d\lambda}g_{2}^{x}(\lambda)>0$ on $\mathbb{R}_{+}^{\star}$, i.e. $$\label{V2.ec.deriv}
\langle\nabla V_{2}(\lambda x),x \rangle>0\qquad\textrm{for any }
\lambda>0\textrm{ and }x\in\mathbb{R}^{n}\setminus\{0\}$$ Let $p>2$, $x\in\mathbb{R}^{n}\setminus\{0\}$ and let $g_{p}^{x}:\mathbb{R}_{+}\rightarrow\mathbb{R}_{+}$ be defined by $g_{p}^{x}(\lambda)=V_{p}(\lambda x)$. One has: $$\begin{aligned}
\nonumber \frac{d}{d\lambda}g_{p}^{x}(\lambda) &=& \langle\nabla
V_{p}(\lambda x),x \rangle=\langle\nabla V_{2}(\lambda x),x \rangle+\langle\nabla (V_{p}-V_{2})(\lambda x),x \rangle =\\
&=&\langle\nabla V_{2}(\lambda x),x \rangle(1+\frac{\langle\nabla (V_{p}-V_{2})(\lambda x),x
\rangle}{\langle\nabla V_{2}(\lambda x),x \rangle})
\label{Vp.ec.deriv}\end{aligned}$$ As $\lim\limits_{x\rightarrow
0}\frac{\langle\nabla(V_{p}-V_{2})(\lambda
x),x\rangle}{\langle\nabla V_{2}(\lambda x),x \rangle}=0$, there exists $\rho_{p}>0$ such that $|\frac{\langle\nabla
(V_{p}-V_{2})(\lambda x),x\rangle}{\langle\nabla V_{2}(\lambda
x),x \rangle}|\leq \frac{1}{2}$, for any $x\in
\overline{B(\rho_{p})}\setminus\{0\}$. Relation (\[Vp.ec.deriv\]) provides that for any $x\in
\overline{B(\rho_{p})}\setminus\{0\}$, we have: $$\frac{d}{d\lambda}g_{p}^{x}(\lambda)\geq\frac{1}{2}\langle\nabla V_{2}(\lambda x),x
\rangle>0\qquad\textrm{ for any }\lambda>0$$ Therefore, for any $x\in \overline{B(\rho_{p})}\setminus\{0\}$, the function $g_{p}^{x}$ is strictly increasing on $\mathbb{R}_{+}$, i.e. $V_{p}$ is radially increasing on $\overline{B(\rho_{p})}$.
Let $p\geq 2$ and $c>0$ be such that the set $N_{p}^{c}$ exists. Suppose that for any $c'\leq c$, the sets $N_{p}^{c'}$ have the star-property, i.e. for any $x\in
N_{p}^{c'}$ and for any $\lambda\in [0,1)$ one has $\lambda x\in
Int(N_{p}^{c'})$. Then $V_{p}$ is radially increasing on $N_{p}^{c}$.
Let $x^{0}\in \partial N_{p}^{c}$ and $0<\lambda_{1}<\lambda_{2}\leq 1$. We have to show that $V_{p}(\lambda_{1}x^{0})<V_{p}(\lambda_{2}x^{0})$. Denote $c_{1}=V_{p}(\lambda_{1}x^{0})>0$, $c_{2}=V_{p}(\lambda_{2}x^{0})>0$ and suppose the contrary, i.e. $c_{1}\geq c_{2}$. Theorem \[thm.Ncp.incluziune\] provides that $N_{p}^{c_{1}}\supseteq N_{p}^{c_{2}}$. Lemma \[lemma.Npc.incluz.conex\] guarantees that $\lambda_{2}x^{0}\in\partial N_{p}^{c_{2}}$. As $N_{p}^{c_{2}}$ has the star-property, for $\lambda=\frac{\lambda_{1}}{\lambda_{2}}\in (0,1)$ we have that $\lambda (\lambda_{2}x^{0})=\lambda_{1}x^{0}\in Int(N_{p}^{c_{2}})$, so $c_{1}=V_{p}(\lambda_{1}x^{0})< c_{2}$, which contradicts the supposition $c_{1}\geq c_{2}$. Therefore, $V_{p}$ is radially increasing on $N_{p}^{c}$.
- For any $x\in D_{0}$, there exists $p_{x}\geq 2$ such that $x\in
G_{p}$ for any $p\geq p_{x}$;
- If $n=1$, there exists $p_{0}\geq 2$ such that $D_{0}\subset
G_{p}$, for any $p\geq p_{0}$.
- If there exists $r>0$ such that $\overline{B(r)}\subset
G_{p}$ for any $p\geq 2$, then there exists $p_{0}\geq 2$ such that $D_{0}\subset
G_{p_{0}}$.
For any $x\in D_{a}(0)$ there exists $p\geq 2$ and $c>0$ such that $x\in N_{p}^{c}$.
Numerical example: the Van der Pol system
=========================================
We consider the following system of differential equations:
$$\label{sys.Van.der.Pol}
\begin{array}{ll}
\left\{\begin{array}{l}
\dot{x_{1}}=-x_{2}\\
\dot{x_{2}}=x_{1}-x_{2}+x_{1}^{2}x_{2}
\end{array}\right.
\end{array}$$
The $(0,0)$ steady state of (\[sys.Van.der.Pol\]) is asymptotically stable. The boundary of the domain of attraction of $(0,0)$ is a limit cycle of (\[sys.Van.der.Pol\]).
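The asymptotic stability of $(0,0)$ and the role of the limit cycle as the boundary of the domain of attraction can be checked numerically. The following sketch (assuming nothing beyond the equations of system (\[sys.Van.der.Pol\]); the initial state and step size are illustrative choices) integrates the system with a classical fourth-order Runge-Kutta scheme and shows that a state started well inside the domain of attraction spirals into the origin.

```python
import numpy as np

def f(x):
    # right-hand side of the system; the origin is asymptotically stable
    x1, x2 = x
    return np.array([-x2, x1 - x2 + x1 ** 2 * x2])

def rk4(x0, dt=0.01, n_steps=2000):
    # classical fourth-order Runge-Kutta integration up to t = n_steps * dt
    x = np.array(x0, dtype=float)
    for _ in range(n_steps):
        k1 = f(x)
        k2 = f(x + 0.5 * dt * k1)
        k3 = f(x + 0.5 * dt * k2)
        k4 = f(x + dt * k3)
        x = x + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return x

# a state inside the domain of attraction is driven to the origin
final = rk4([0.5, 0.5])
print(np.linalg.norm(final))
```

Indeed, $\frac{d}{dt}\frac{1}{2}\|x\|^{2}=-x_{2}^{2}(1-x_{1}^{2})\leq 0$ while $|x_{1}|<1$, so trajectories starting in the unit ball decay monotonically toward the origin.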
For $p=20$ we have computed that the largest value $c>0$ for which there exists the set $N_{p}^{c}$ is $c_{20}=8.8466$. For $p=50$, the largest value $c>0$ for which there exists the set $N_{p}^{c}$ is $c_{50}=13.887$. In the figures below, the thick black curve represents the boundary of $D_{a}(0,0)$, the thin black curve represents the boundary of $G_{p}$ and the gray surface represents the set $N_{p}^{c_{p}}$. The set $N_{p}^{c_{50}}$ approximates very well the domain of attraction of $(0,0)$.
---
abstract: 'The Reynolds stress, or equivalently the average of the momentum flux, is key to understanding the statistical properties of turbulent flows. Both typical and rare fluctuations of the time averaged momentum flux are needed to fully characterize the slow flow evolution. The fluctuations are described by a large deviation rate function that may be calculated either from numerical simulation, or from theory. We show that, for parameter regimes in which a quasilinear approximation is accurate, the rate function can be found by solving a matrix Riccati equation. Using this tool we compute for the first time the large deviation rate function for the Reynolds stress of a turbulent flow. We study a barotropic flow on a rotating sphere, and show that the fluctuations are highly non-Gaussian. This work opens up new perspectives for the study of rare transitions between attractors in turbulent flows.'
author:
- 'F. Bouchet'
- 'J. B. Marston'
- 'T. Tangarife'
bibliography:
- 'biblio.bib'
title: Fluctuations and large deviations of Reynolds stresses in zonal jet dynamics
---
Tomás {#tomás .unnumbered}
=====
Regretfully, Tomás Tangarife suddenly and unexpectedly passed away a few months before completing the research reported in this paper. Most of the science discussed in this paper was developed in patient work by Tomás, and is part of his PhD thesis. F. Bouchet and J. B. Marston pay homage to the unique friendship and passion for science of Tomás, and would like to remember the intense and enriching collaboration that led to these scientific results. Tomás’ quiet and constant character, his generosity, and his deep thoughts, were always a source of happiness and joy to his friends and colleagues.
Introduction
============
For a wide range of applications in physics, engineering, and geophysics, the determination of the average or typical behavior of a turbulent flow is a key issue. Since the work of Reynolds more than one century ago, momentum fluxes and their divergence, or their averages called Reynolds stresses, have been recognized to play the key role. In order to be more specific, we now consider the very simple case of a two dimensional flow on a plane or in a channel, with an average flow that is parallel to the $\mathbf{e}_{x}$ direction, $U(y)\mathbf{e}_{x}$ (where $x$ and $y$ are Cartesian coordinates). We also assume that all averaged quantities do not depend on $x$. The spatially averaged equation of motion for the fluid reads $$\frac{\partial U}{\partial t}=-\frac{\partial}{\partial y}\mathbb{E}\left(<uv>\right)+D\left[U\right],\label{eq:Reynolds}$$ where $D[U]$ is the average dissipation operator, $\mathbb{E}\left(<uv>\right)$ is the Reynolds stress, and $\frac{\partial}{\partial y}\mathbb{E}\left(<uv>\right)$ is the momentum flux divergence along the $\mathbf{e}_{x}$ direction. The symbol $\mathbb{E}$ is either an ensemble or time average (for a time average $\partial U/\partial t=0$), while $<.>$ denotes a spatial average along the $\mathbf{e}_{x}$ direction; the spatial average can be avoided, but it is often useful to include for practical reasons. Because the Reynolds stress is the key quantity that determines the average flow behavior, it has been extensively studied experimentally, numerically and theoretically, for a wide range of turbulent flows (see for instance classical turbulence textbooks [@tennekes1972first; @pope2001turbulent]).
Beyond the average value, fluctuations of the momentum flux $<uv>$, or of its divergence $\frac{\partial}{\partial y}\left(<uv>\right)$, are very important quantities in a variety of dynamical circumstances. By contrast with the average value, as far as we know, no work has been devoted so far to the study of such fluctuations, and we undertake this task as the main aim of the paper. One important example where fluctuations play a key role is the case of time scale separation between the typical time $\tau_{U}$ for the evolution of the parallel flow (or jet) and the time $\tau_{e}$ for the evolution of the turbulent fluctuations (or eddies): $\tau_{e}\ll\tau_{U}$. Such time scale separation is common when the parallel flow has a very large amplitude; classical examples include some regimes of two dimensional, geostrophic, or plasma turbulence. Then, following the classical results of stochastic averaging for systems with two timescales, a natural generalization of the Reynolds averaged equation is $$\frac{\partial U}{\partial t}=-\frac{\partial}{\partial y}\mathbb{E}_{U}\left(<uv>\right)+\frac{\partial}{\partial y}\zeta_{U}+D\left[U\right],\label{eq:Slow_Stochastic_Dynamics}$$ where now $\mathbb{E}_{U}$ means an average over a time window short compared to the typical time of evolution of the parallel flow $U$, we still call $\mathbb{E}_{U}\left(<uv>\right)$ the Reynolds stress, which now depends on the state of $U$ at time $t$, and $\zeta_{U}(y,t)$ characterizes the Gaussian typical fluctuations of the momentum flux $<uv>$. $\mathbb{E}_{U}\left(<uv>\right)$ and $\zeta_{U}$ represent two aspects of the action of the unresolved eddies on the mean flow: the average and the typical fluctuations, respectively.
In such a situation of time scale separation, $\zeta_{U}$ is a white in time Gaussian field whose variance is related through a Kubo formula to the variance of the time average of the momentum flux $$r_v=\frac{1}{T}\int\text{dt}\,<uv>,\label{eq:Time_Averaged_Reynold_Stress}$$ where the time average is over a time window of duration $T$, which is assumed to be short compared to the time scale for the evolution of $U$, but large compared with the evolution time of the turbulent fluctuations: $\tau_{e}\ll T\ll\tau_{U}$. We call the fluctuations of (\[eq:Time\_Averaged\_Reynold\_Stress\]) the Reynolds stress fluctuations (the fluctuations of the time averaged momentum flux, over finite but long times $T$).
In many instances, rarer and non-Gaussian fluctuations are also important. Then the Gaussian noise $\zeta_{U}$ does not contain the relevant information, and one wants to go beyond the study of the second moment of $r_{v}$. In the asymptotic regime $\tau_{e}\ll T$, the probability distribution function of $r_v$ takes a very simple form $P(r_v,T)\underset{T\rightarrow\infty}{\asymp}\exp\left(-TI_v[r_v]\right)$, where $\asymp$ is a logarithmic equivalence (the logarithms of the right and left hand sides of the equation are equivalent in the limit $T \rightarrow \infty$). This relation is called the large deviation principle. (For a review, see Ref. .) The large deviation rate function $I_v[r_v]$ characterizes the fluctuations of the time averaged Reynolds stress, both typical (the second variation of $I_v[r_v]$ gives the statistics of $\zeta_{U}$) and very rare. In many examples of turbulent flows, it has been observed that the dynamics has several “attractors” (see for instance [@Bouchet_Simonnet_2008] and references therein; by “attractor” we mean here stationary solutions of the deterministic Reynolds equation $\frac{\partial U}{\partial t}=-\frac{\partial}{\partial y}\mathbb{E}_{U}\left(uv\right)$). Then rare fluctuations of the Reynolds stress, characterized by the large deviation rate function $I_v$, are responsible for rare transitions between attractors. For all these reasons, it is very important to be able to compute $I_v$ and to study its properties from a fluid mechanics point of view.\
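To make the large deviation principle for time averages concrete, one can look at a toy observable in place of a turbulent momentum flux time series. The sketch below (the Ornstein-Uhlenbeck stand-in process, the time step, and the window length are illustrative choices, not part of the flow model) computes time averages over windows of duration $T$ and checks the Kubo-formula scaling of their variance; for this process the Gaussian part of the rate function is $I(a)=a^{2}/4$.

```python
import numpy as np

rng = np.random.default_rng(0)

# Ornstein-Uhlenbeck process dx = -x dt + sqrt(2) dW as a stand-in for a
# momentum flux time series; its time averages over windows of length T
# satisfy a large deviation principle with rate function I(a) = a^2 / 4.
dt, T, n_blocks = 0.01, 25.0, 200
n = int(T / dt)                       # Euler-Maruyama steps per window
xi = rng.standard_normal(n * n_blocks)
x = np.empty(n * n_blocks)
x_cur = 0.0
for i in range(n * n_blocks):
    x_cur += -x_cur * dt + np.sqrt(2.0 * dt) * xi[i]
    x[i] = x_cur

# time averages r over consecutive windows of duration T
r = x.reshape(n_blocks, n).mean(axis=1)
# Kubo formula: Var(r) ~ (1/T) * (integrated autocorrelation) = 2 / T
print(r.var(), 2.0 / T)
```

Histogramming the block averages and plotting $-\frac{1}{T}\log P(r)$ against $r$ would give an empirical estimate of the rate function, which is the spirit of the empirical sampling used later in the paper.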
We develop theoretical and numerical tools to study Reynolds stress fluctuations, and compute the large deviation rate function $I_v$. First we sample empirically (from time series generated from numerical simulations) the large deviation rate function, using the method developed in reference . In addition to this empirical approach, we determine the Reynolds stress fluctuations and large deviation rate function directly for the case of the quasilinear approximation to the full nonlinear dynamics. The quasilinear approximation amounts to neglecting the eddy-eddy interactions (fluctuation + fluctuation $\rightarrow$ fluctuation triads) while retaining interactions between the mean flow and the eddies, and may thus be expected to be accurate when the magnitude of the average flow is much larger than the fluctuations. Such a quasilinear approximation, investigated at least as early as 1963 by Herring [@herring1963investigation], is believed to be accurate for the 2D Navier-Stokes equation, barotropic flows, or quasigeostrophic models, on either a plane, a torus, or a sphere, for a range of parameters (discussed below). Two dimensional flows are a particularly favorable setting for the quasilinear approximation because, as Kraichnan showed in his seminal 1967 paper [@Kraichnan:1967jk], an inverse cascade of energy to the largest scales is expected, leading to the formation of coherent structures with non-trivial mean flows [@Kraichnan:1980uy]. For unforced perfect flows, the large scale structures can be predicted through equilibrium statistical mechanics (see for instance [@BouchetVenaille-PhysicsReport]). For forced and dissipated flows, eddies both sustain, and interact with, the large-scale flows, and both processes are captured by the quasilinear approximation.
By contrast, the scale-by-scale cascade of energy that plays a central role in Kraichnan’s picture [@Kraichnan:1967jk] relies on eddy + eddy $\rightarrow$ eddy processes that are neglected in the quasi-linear approximation [@farrell2003structural; @marston2014direct].
The quasilinear approximation has been shown to be self-consistent [@Bouchet_Nardini_Tangarife_2013_Kinetic] in the limit when a time scale separation exists between a typical large scale flow inertial time scale $\tau_{i}$ and a flow spin up or spin down time scale $\tau_{s}$: $\tau_{i}\ll\tau_{s}$ (then $\tau_{U}\simeq\tau_{s}$ and $\tau_{e}\simeq\tau_{i}$). This time scale separation condition may however not be necessary. Other factors may favor the validity of the quasilinear approximation, for instance the forcing of the flow through a large number of independent modes, through either a broad band spectrum or a small scale forcing, keeping the total energy injection rate fixed. The energy transfer is then the same for all forcing spectra, but with a broad band spectrum each eddy has reduced amplitude, lessening the interaction between eddies. The range of validity of the quasilinear approximation has not yet been fully understood. When the quasilinear approximation is valid, and when one further assumes that the forcing acts on small scales only, one can predict explicitly the averaged Reynolds stress [@srinivasan2014reynolds; @laurie2014universal; @woillez2016computation] and sometimes the averaged velocity profile. The Gaussian fluctuations of the Reynolds stress may be parameterized phenomenologically [@farrell2003structural; @marston2014direct]. The spatial structure of the Gaussian fluctuations has also been studied theoretically; it has been proven to have a singular part with a white in space correlation function and a smooth part (see [@Bouchet_Nardini_Tangarife_2016_kinetic_Zonal_Jets], section 1.4.3, or [@tangarife-these]; see also [@BouchetNardiniTangarife2015]).
Within the context of the quasilinear approximation, we show that the Reynolds stress fluctuations and the associated large deviation rate function can be studied by solving a matrix Riccati equation. The equation can be easily implemented and solved by a generalization of the classical tools used to solve the Lyapunov equation for the two-point correlation functions. This mathematical result is the main reason why we study the Reynolds stress fluctuations for the quasilinear dynamics in this first study. Moreover, we show that the matrix Riccati equation is a much more computationally efficient way to study rare fluctuations than the traditional route of direct numerical simulation. The calculation is illustrated for the case of barotropic flow on the sphere [@marston2014direct], for which the relevance of the quasilinear approximation, over certain parameter ranges, has been recognized for some time now. For the case of a barotropic flow, it is technically more convenient to discuss the dynamics in terms of the equation of motion for the vorticity, so we study the corresponding Reynolds stress that drives the vorticity.
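As a generic illustration of the kind of computation involved (not the specific Riccati equation derived later in section \[sec:large\_deviations\]; the matrices below are arbitrary illustrative choices), an algebraic matrix Riccati equation $A^{T}X+XA-XBR^{-1}B^{T}X+Q=0$ can be solved with standard numerical tools:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# a small, arbitrary stable test problem (matrices are illustrative only)
A = np.array([[-1.0, 0.5],
              [0.0, -2.0]])
B = np.eye(2)
Q = np.eye(2)
R = np.eye(2)

# solve the continuous-time algebraic Riccati equation
# A^T X + X A - X B R^{-1} B^T X + Q = 0
X = solve_continuous_are(A, B, Q, R)
residual = A.T @ X + X @ A - X @ B @ np.linalg.inv(R) @ B.T @ X + Q
print(np.abs(residual).max())
```

Setting the quadratic term to zero recovers a Lyapunov equation, which is the sense in which the Riccati computation generalizes the classical tools for two-point correlation functions.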
Section \[sec:Barotropic-equation-and\] introduces the barotropic equation on the sphere and its quasilinear approximation. Section \[sec:Equal-time-statistics-of\] discusses the fluctuations of the Reynolds stresses, without time average. Section \[sec:large\_deviations\] is an introduction to averaging for stochastic processes. It explains pedagogically how an equation for the slow degrees of freedom, for instance the Reynolds equation (\[eq:Slow\_Stochastic\_Dynamics\]), can be obtained. The relation between the statistics of the noise term, $\zeta_{U}$, in equation (\[eq:Slow\_Stochastic\_Dynamics\]), and the large deviation of the Reynolds stress (\[eq:Time\_Averaged\_Reynold\_Stress\]) is explained. A short introduction to the large deviation rate function is also provided. Finally, the matrix Riccati equation that permits direct calculation of the large deviation rate function is derived both in a general framework, and in the case of the quasilinear approximation of the barotropic equation on the sphere. Section \[sub:Gaussian-approximation-of\] uses the solution of the matrix Riccati equation in order to study numerically the zonal energy balance and the time scale separation in the inertial limit. Section \[sec:Large\_Deviations\_Reynolds\_stresses\] discusses the computation of the large deviation rate function for the time averaged Reynolds stresses of the barotropic equation on the sphere. Section \[conclusions\_perspectives\] discusses the main conclusions and presents some perspectives.
Barotropic equation and quasi–linear approximation\[sec:Barotropic-equation-and\]
=================================================================================
Here we discuss the barotropic equation and its quasilinear approximation that is expected to be valid when a time scale separation exists between the typical time for the evolution of the zonal flow and that of the evolution of the eddies. We study the dynamics of zonal jets in the quasi-geostrophic one-layer barotropic model on a sphere of radius $a$, rotating at rate $\Omega$, $$\left\lbrace \begin{aligned} & \frac{\partial\omega}{\partial t}+J(\psi,\omega)+\frac{2\Omega}{a^{2}}\frac{\partial\psi}{\partial\lambda}=-\kappa\omega-\nu_{n}\left(-\Delta\right)^{n}\omega+\sqrt{\sigma}\eta,\\
& u=-\frac{1}{a}\frac{\partial\psi}{\partial\phi},\quad v=\frac{1}{a\cos\phi}\frac{\partial\psi}{\partial\lambda},\quad\omega=\Delta\psi
\end{aligned}
\right.\label{eq:barotropic-topography-d}$$ where $\omega$ is the relative vorticity, ${\bf v}=(u,v)$ is the horizontal velocity field, $\psi$ is the stream function and $J(\psi,\omega)=\frac{1}{a^{2}\cos\phi}\left(\partial_{\lambda}\psi\cdot\partial_{\phi}\omega-\partial_{\lambda}\omega\cdot\partial_{\phi}\psi\right)$ is the Jacobian operator. The coordinates are denoted $\left(\lambda,\phi\right)\in[0,2\pi]\times[-\pi/2,\pi/2]$, $\lambda$ is the longitude and $\phi$ is the latitude. All fields $\omega,u,v$ and $\psi$ can be decomposed onto the basis of spherical harmonics $Y_{\ell}^{m}(\phi, \lambda)$, for example $$\psi\left(\phi, \lambda \right)=\sum_{\ell=0}^{\infty}\sum_{m=-\ell}^{\ell}\psi_{m,\ell}~ Y_{\ell}^{m}(\phi, \lambda)
\label{eq:spherical-harmonics-decomposition}$$ All fields $\omega,u,v$ and $\psi$ are $2\pi$-periodic in the zonal ($\lambda$) direction, so we can also define the Fourier coefficients in the zonal direction, $$\psi_{m}(\phi)\equiv\frac{1}{2\pi}\int_{0}^{2\pi}\psi(\phi, \lambda)~ \mbox{e}^{-im\lambda}\,\mbox{d}\lambda=\sum_{\ell=|m|}^{\infty}\psi_{m,\ell}~ P_{\ell}^{m}(\sin \phi),\label{eq:Fourier-definition}$$ with the associated Legendre polynomials $P_{\ell}^{m}(\sin \phi)$.
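As a concrete illustration, the zonal Fourier coefficients (\[eq:Fourier-definition\]) can be approximated on a regular longitude grid with a discrete Fourier transform. The following minimal Python sketch (grid sizes and the test field are arbitrary choices, not taken from the simulations of this paper) checks the normalization against a field with a single known zonal wavenumber.

```python
import numpy as np

def zonal_fourier_coefficients(psi, m):
    """Approximate psi_m(phi) = (1/2pi) int_0^{2pi} psi(phi, lambda) e^{-i m lambda} dlambda
    on a regular longitude grid.  `psi` has shape (n_phi, n_lambda)."""
    n_lam = psi.shape[1]
    # np.fft.fft sums over grid points; dividing by n_lam gives the (1/2pi) integral
    return np.fft.fft(psi, axis=1)[:, m] / n_lam

# check on a field with a single known zonal wavenumber:
# psi = cos(phi) cos(3 lambda), so psi_3(phi) = cos(phi)/2
n_phi, n_lam = 64, 128
phi = np.linspace(-np.pi / 2, np.pi / 2, n_phi)
lam = np.linspace(0.0, 2 * np.pi, n_lam, endpoint=False)
psi = np.cos(phi)[:, None] * np.cos(3 * lam)[None, :]

psi_3 = zonal_fourier_coefficients(psi, 3)
assert np.allclose(psi_3, 0.5 * np.cos(phi))
```

The division by `n_lam` reproduces the $1/2\pi$ prefactor of (\[eq:Fourier-definition\]) because the grid spacing in $\lambda$ is $2\pi/n_{\lambda}$.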
In (\[eq:barotropic-topography-d\]), $\kappa$ is a linear friction coefficient, also known as Ekman drag or Rayleigh friction, that models the dissipation of energy at the large scales of the flow [@vallis_atmospheric_2006]. Hyper-viscosity $\nu_{n}\left(-\Delta\right)^{n}$ accounts for the dissipation of enstrophy at small scales and is used mainly for numerical reasons. Most of the dynamical quantities are independent of the value of $\nu_{n}$, for small enough $\nu_{n}$. $\eta$ is a Gaussian noise with zero mean and correlations $\mathbb{E}\left[\eta\left(\lambda_{1},\phi_{1},t_{1}\right)\eta\left(\lambda_{2},\phi_{2},t_{2}\right)\right]=C\left(\lambda_{1}-\lambda_{2},\phi_{1},\phi_{2}\right)\delta\left(t_{1}-t_{2}\right)$, where $C$ is a positive-definite function and $\mathbb{E}$ is the expectation over realizations of the noise $\eta$. $C$ is assumed to be normalized such that $\sigma$ is the average injection of energy per unit of time and per unit of mass by the stochastic force $\sqrt{\sigma}\eta$. There is no symmetry reason to enforce homogeneous forcing over a rotating sphere, which only has axial symmetry. Thus it is natural to consider forcing that varies with latitude. The barotropic equation is sometimes used to describe the vertically-averaged atmospheric dynamics. The stochastic forces model the driving influence of the baroclinic instability on the barotropic flow. Baroclinic instabilities are typically strongest at mid-latitude.
Time scale separation between large scale and small scale dynamics
------------------------------------------------------------------
### Energy balance and non–dimensional equations\[sec:LD-energy-balance-sphere\]
The inertial barotropic model (eq. (\[eq:barotropic-topography-d\]) with $\kappa=\nu_{n}=\sigma=0$) conserves the energy $\mathcal{E}\left[\omega\right]=-\frac{1}{2}\int\omega\psi\,\mbox{d}{\bf r}$ (where $\mbox{d}{\bf r}=a^{2}\cos\phi\,\mbox{d}\phi\,\mbox{d}\lambda$), the moments of potential vorticity $\mathcal{C}_{m}\left[\omega\right]=\int(\omega+f)^{m}\,\mbox{d}{\bf r}$ with the Coriolis parameter $f(\phi)=2\Omega\sin\phi$, and the angular momentum $L[\omega]=\int\omega\cos\phi\,\mbox{d}{\bf r}$. The average energy balance for the dissipated and stochastically forced barotropic equation is obtained by applying the Ito formula [@Gardiner_1994_Book_Stochastic] to (\[eq:barotropic-topography-d\]). It reads $$\frac{dE}{dt}=-2\kappa E-2\nu_{n}Z_{n}+\sigma,\label{eq:energy-balance-visc}$$ where $E=\mathbb{E}\left[\mathcal{E}\left[\omega\right]\right]$ is the total average energy and $Z_{n}=\mathbb{E}\left[-\frac{1}{2}\int\psi(-\Delta)^{n}\omega\,\mbox{d}{\bf r}\right]$. The term $-2\nu_{n}Z_{n}$ in (\[eq:energy-balance-visc\]) represents the dissipation of energy at the small scales of the flow. In the regime we are interested in, most of the energy is concentrated in the large-scale zonal jet, so the main mechanism of energy dissipation is the linear friction (first term in the right-hand side of (\[eq:energy-balance-visc\])). In this turbulent regime, energy dissipated by hyper-viscosity can be neglected. Then, in a statistically stationary state, $E_{stat}\simeq\frac{\sigma}{2\kappa}$, expressing the balance between stochastic forces and linear friction in (\[eq:energy-balance-visc\]).
The estimated total energy yields a typical jet velocity of $U\sim\sqrt{\frac{\sigma}{2\kappa}}$. The order of magnitude of the time scale of advection and stirring of turbulent eddies by this jet is $\tau_{eddy}\sim\frac{a}{U}$. We perform a non-dimensionalization of the stochastic barotropic equation using $\tau_{eddy}$ as unit time and $a$ as unit length. The non-dimensionalization may be carried out by setting $a=1$ and using the non-dimensionalized variables $t'=t/\tau_{eddy}$, $\omega'=\omega\tau_{eddy}$, $\psi'=\psi\tau_{eddy}$, $\Omega'=\Omega\tau_{eddy}$, $$\alpha=\kappa\tau_{eddy}=\sqrt{\frac{2\kappa^{3}}{\sigma}},\label{eq:alpha}$$ $\nu_{n}'=\nu_{n}\tau_{eddy}$, $\sigma'=\sigma\tau_{eddy}^{3}=2\alpha$, and a rescaled force $\eta'=\eta\sqrt{\tau_{eddy}}$ such that $\mathbb{E}\left[\eta'\left(\lambda_{1},\phi_{1},t'_{1}\right)\eta'\left(\lambda_{2},\phi_{2},t'_{2}\right)\right]=C\left(\lambda_{1}-\lambda_{2},\phi_{1},\phi_{2}\right)\delta\left(t'_{1}-t'_{2}\right)$. In these new units, and dropping the primes for simplicity, the stochastic barotropic equation reads $$\frac{\partial\omega}{\partial t}+J(\psi,\omega)+2\Omega\frac{\partial\psi}{\partial\lambda}=-\alpha\omega-\nu_{n}\left(-\Delta\right)^{n}\omega+\sqrt{2\alpha}\eta.\label{eq:barotropic}$$ In (\[eq:barotropic\]), $\alpha$ is an inverse Reynolds’ number based on the linear friction and $\nu_{n}$ is an inverse Reynolds’ number based on hyper-viscosity. The turbulent regime mentioned previously corresponds to $\nu_{n}\ll\alpha\ll1$. In such a regime and in the units of (\[eq:barotropic\]), the total average energy in a statistically stationary state is $E_{stat}=1$.
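The algebra of the non-dimensionalization can be checked in a few lines. The sketch below uses hypothetical values of $\kappa$ and $\sigma$ (chosen only for illustration) and verifies that $\alpha=\kappa\tau_{eddy}=\sqrt{2\kappa^{3}/\sigma}$ and that $\sigma'=\sigma\tau_{eddy}^{3}=2\alpha$, so that $E_{stat}=\sigma'/2\alpha=1$ in the new units.

```python
import numpy as np

# Hypothetical dimensional parameters, for illustration only.
kappa, sigma = 0.05, 2.0e-3   # linear friction, energy injection rate
a = 1.0                        # sphere radius taken as unit length

U = np.sqrt(sigma / (2 * kappa))   # typical jet velocity from E_stat ~ sigma/(2 kappa)
tau_eddy = a / U                   # eddy turnover time, the unit of time
alpha = kappa * tau_eddy           # non-dimensional friction

# alpha = kappa * tau_eddy coincides with sqrt(2 kappa^3 / sigma)
assert np.isclose(alpha, np.sqrt(2 * kappa ** 3 / sigma))

# in the new units sigma' = sigma * tau_eddy^3 = 2 alpha, so the
# stationary energy balance sigma' / (2 alpha) gives E_stat = 1
sigma_nd = sigma * tau_eddy ** 3
assert np.isclose(sigma_nd, 2 * alpha)
```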
We are interested in the dynamics of zonal jets in the regime of small forces and dissipation, defined as $\alpha\ll1$. In the next section we show that the dynamics corresponds to a regime in which the zonal jet evolves much more slowly than the surrounding turbulent eddies.
### Decomposition into zonal and non–zonal components
In order to decompose (\[eq:barotropic\]) into a zonally averaged flow and perturbations around it, we define the zonal projection of a field $$\left\langle \psi\right\rangle (\phi)\equiv\psi_{0}(\phi)=\frac{1}{2\pi}\int_{0}^{2\pi}\psi(\lambda,\phi)\,\mbox{d}\lambda.$$ The zonal jet velocity profile is defined by $U(\phi)\equiv\left\langle u\right\rangle (\phi)$. In most situations of interest, the stochastic force in (\[eq:barotropic\]) does not act directly on the zonal flow: $\left\langle \eta\right\rangle =0$. Then the perturbation of the zonal jet is proportional to the amplitude of the stochastic force in (\[eq:barotropic\]). We thus decompose the velocity field as ${\bf v}=U{\bf e}_{x}+\sqrt{\alpha}\delta{\bf v}$ and the relative vorticity field as $\omega=\omega_{z}+\sqrt{\alpha}\delta\omega$ with $\omega_{z}\equiv\left\langle \omega\right\rangle $, where $\alpha$ is the non-dimensional parameter defined in (\[eq:alpha\]). We call the perturbation velocity $\delta{\bf v}$ and vorticity $\delta\omega$ the eddy velocity and eddy vorticity, respectively.\
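In discrete form, the decomposition amounts to subtracting the zonal mean and rescaling by $\sqrt{\alpha}$. A minimal sketch follows, in which a random field stands in for an actual vorticity snapshot:

```python
import numpy as np

def zonal_decomposition(omega, alpha):
    """Split omega(phi, lambda) into the zonal mean omega_z and the rescaled
    eddy field delta_omega, so that omega = omega_z + sqrt(alpha) * delta_omega."""
    omega_z = omega.mean(axis=1)                         # zonal projection <omega>(phi)
    delta_omega = (omega - omega_z[:, None]) / np.sqrt(alpha)
    return omega_z, delta_omega

rng = np.random.default_rng(0)
omega = rng.standard_normal((32, 64))                    # 32 latitudes, 64 longitudes
alpha = 0.073
omega_z, delta_omega = zonal_decomposition(omega, alpha)

# the eddy field has zero zonal mean, and the decomposition is exact
assert np.allclose(delta_omega.mean(axis=1), 0.0)
assert np.allclose(omega_z[:, None] + np.sqrt(alpha) * delta_omega, omega)
```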
With the decomposition of the vorticity field, the barotropic equation reads $$\left\lbrace \begin{aligned} & \frac{\partial\omega_{z}}{\partial t}=\alpha R-\alpha\omega_{z}-\nu_{n}\left(-\Delta\right)^{n}\omega_{z}\\
& \frac{\partial\delta\omega}{\partial t}=-L_{U}\left[\delta\omega\right]-\sqrt{\alpha}NL\left[\delta\omega\right]+\sqrt{2}\eta,
\end{aligned}
\right.\label{eq:barotropic-decomposed}$$ with $$R(\phi)\equiv-\left\langle J\left(\delta\psi,\delta\omega\right)\right\rangle
\label{eq:R}$$ the zonally averaged advection term, where the linear operator $L_{U}$ reads $$L_{U}\left[\delta\omega\right]=\frac{1}{\cos\phi}\left(U(\phi)\partial_{\lambda}\delta\omega+\gamma(\phi)\partial_{\lambda}\delta\psi\right)+\alpha\delta\omega+\nu_{n}\left(-\Delta\right)^{n}\delta\omega,\label{eq:LD-linear-operator}$$ with $\gamma\left(\phi\right)=\partial_{\phi}\omega_{z}(\phi)+2\Omega\cos\phi$, and where $$NL\left[\delta\omega\right]=J(\delta\psi,\delta\omega)-\left\langle J(\delta\psi,\delta\omega)\right\rangle$$ is the non-linear eddy-eddy interaction term.\
Using $\omega_{z}\left(\phi\right)=-\frac{1}{\cos\phi}\partial_{\phi}\left(U\left(\phi\right)\cos\phi\right)$ and the first equation of (\[eq:barotropic-decomposed\]), we get the evolution equation for the zonal flow velocity $U\left(\phi\right)$ $$\frac{\partial U}{\partial t}=\alpha f-\alpha U-\nu_{n}\left(-\Delta\right)^{n}U\,,\label{eq:zonal-vorticity-ell}$$ where $f\left(\phi\right)$ is such that $R\left(\phi\right)=-\frac{1}{\cos\phi}\partial_{\phi}\left(f\left(\phi\right)\cos\phi\right)$. $f$ is minus the divergence of the Reynolds’ stress.
### Quasi-linear and linear dynamics
In this section we discuss the quasilinear approximation to the barotropic equation and the associated linear dynamics.
In the limit of small forces and dissipation $\alpha\ll1$, the perturbation flow is expected to be of small amplitude. Then the non-linear term $NL[\delta\omega]$ in (\[eq:barotropic-decomposed\]) is negligible compared to the linear term $L_{U}\left[\delta\omega\right]$. Neglecting these non-linear eddy-eddy interaction terms, we obtain the so-called quasi-linear approximation of the barotropic equation [@Srinivasan-Young-2011-JAS], $$\left\lbrace \begin{aligned} & \frac{\partial\omega_{z}}{\partial t}=\alpha R-\alpha\omega_{z}-\nu_{n}\left(-\Delta\right)^{n}\omega_{z}\\
& \frac{\partial\delta\omega}{\partial t}=-L_{U}\left[\delta\omega\right]+\sqrt{2}\eta.
\end{aligned}
\right.\label{eq:barotropic-quasi-linear}$$ The approximation leading to the quasi-linear dynamics (\[eq:barotropic-quasi-linear\]) amounts to suppressing some of the triad interactions. Nonetheless, the inertial quasi-linear dynamics has the same quadratic invariants as the initial barotropic equations. The average energy balance for the quasi-linear barotropic dynamics (\[eq:barotropic-quasi-linear\]) is thus the same as the one for the full barotropic dynamics (\[eq:barotropic\]).\
For many flows of interest, for example Jovian jets, the turbulent eddies $\delta\omega$ evolve much faster than the zonal jet velocity profile $U$ [@Porco2003]. In (\[eq:barotropic-decomposed\]) and (\[eq:barotropic-quasi-linear\]), the natural time scale of evolution of the zonal jet is of order $1/\alpha$, while the typical time scale of evolution of the perturbation vorticity $\delta\omega$ is of order $1$. In the regime $\alpha\ll1$, we thus expect to observe a separation of time scales between the evolution of $\omega_{z}$ and $\delta\omega$, consistent with the definition of $\alpha$ as the ratio of the inertial time scale $\tau_{eddy}$ and of the dissipative time scale $1/\kappa$, see (\[eq:alpha\]).
In the regime $\alpha\ll1$, it is natural to consider the linear dynamics of $\delta\omega$ with $U$ held fixed, $$\frac{\partial\delta\omega}{\partial t}=-L_{U}\left[\delta\omega\right]+\sqrt{2}\eta\,.\label{eq:frozen-QL}$$ The relevance of (\[eq:frozen-QL\]) as an effective description of turbulent eddy dynamics is further discussed later. In particular, we show in section \[sub:Empirical-validation-of\] that the correlation time of Reynolds’ stresses resulting from the linear dynamics (the most relevant time scale related to the dynamics of eddies and their action on the evolution of the zonal jet) is of the order of, or smaller than, $\tau_{eddy}$, even as $\alpha$ decreases. This means that the time scale separation hypothesis that leads us to consider the linear dynamics is self-consistent in the limit of weak forces and dissipation $\alpha\ll1$.
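In the linear dynamics each eddy mode behaves as an Ornstein-Uhlenbeck process. The sketch below integrates a single scalar mode with an Euler-Maruyama scheme, replacing the operator $L_{U}$ by a single hypothetical damping rate $\gamma$ (a caricature, not the actual operator), and checks the stationary variance $1/\gamma$.

```python
import numpy as np

# Euler-Maruyama integration of d(domega) = -gamma * domega * dt + sqrt(2) dW:
# a scalar caricature of one eddy mode, with L_U replaced by a damping rate gamma.
rng = np.random.default_rng(1)
gamma, dt, n_steps = 1.0, 1e-2, 400_000
x = 0.0
samples = np.empty(n_steps)
for i in range(n_steps):
    x += -gamma * x * dt + np.sqrt(2 * dt) * rng.standard_normal()
    samples[i] = x

# the Ornstein-Uhlenbeck stationary variance is 1/gamma, and the correlation
# time is 1/gamma, of order one in eddy turnover units
var = samples[n_steps // 10 :].var()
assert abs(var - 1.0 / gamma) < 0.2
```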
### Reynolds averaging for the vorticity equation
In the introduction we discussed Reynolds averaging and Reynolds stresses for the simplest possible case: a two dimensional flow that does not break the symmetry along the direction $\mathbf{e}_{x}$. We now adapt the discussion to two dimensional flows on a sphere. As it is much more convenient to work directly with the vorticity equation, we discuss Reynolds averaging for the vorticity equation only.
Our aim is to write the counterparts of Eqs. (\[eq:Slow\_Stochastic\_Dynamics\]) and (\[eq:Time\_Averaged\_Reynold\_Stress\]) for the vorticity equation. When there is a time scale separation between the evolution of the slow zonal and the fast non-zonal part of the flow, averaging either (\[eq:barotropic-decomposed\]) or (\[eq:barotropic-quasi-linear\]) leads to an effective equation for the low-frequency evolution of the zonal vorticity $$\frac{\partial\omega_{z}}{\partial t}=\alpha \mathbb{E}\left(R\right)-\alpha\omega_{z}-\nu_{n}\left(-\Delta\right)^{n}\omega_{z} + \xi_{\omega_z},
\label{R_gaussian_evolution}$$ where $\mathbb{E}\left(R\right)$ is the average of the vorticity flux $R$ defined in (\[eq:R\]), and the white in time Gaussian noise $\xi_{\omega_z}$ describes the typical fluctuations. We consider time averages of the vorticity flux $$r=\frac{1}{T}\int_{0}^{T}R\,\mbox{d}t.\label{eq:Reynold_Stress_fluctuations_vorticity}$$ The average of $r$ is the term $\mathbb{E}\left(R\right)$ appearing in the Reynolds averaged equation (\[R\_gaussian\_evolution\]). We call this term the vorticity Reynolds stress; note, however, that it does not have the same physical dimension as the usual stress. The time average is over a window of duration $T$ assumed to be short compared to the time scale $\tau_{U}$ for the evolution of $U$, but large compared with the correlation time $\tau_{e}$ of the turbulent fluctuations: $\tau_{e}\ll T\ll\tau_{U}$. We call the fluctuations of (\[eq:Reynold\_Stress\_fluctuations\_vorticity\]) the vorticity Reynolds stress fluctuations (the fluctuations of the time averaged vorticity fluxes, over finite but long times $T$). In the asymptotic regime $\tau_{e}\ll T$, the probability distribution function of $r$ takes the simple large deviation form $P(r,T)\underset{T\rightarrow\infty}{\asymp}\exp\left(-TI[r]\right)$. The variance of $\xi_{\omega_z}$ is given by a Kubo formula, and is simply related to the second variations of $I$.
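The $1/T$ decay of the variance of time averages, which underlies this large deviation scaling near the minimum of $I$, can be illustrated with any stationary process of short correlation time. In the sketch below an Ornstein-Uhlenbeck process stands in for the fluctuating flux $R$; all parameters are illustrative.

```python
import numpy as np

# For T much larger than the correlation time, the variance of
# r = (1/T) int_0^T R dt decays as 1/T, consistent with the large
# deviation form P(r, T) ~ exp(-T I(r)) near the minimum of I.
rng = np.random.default_rng(2)
dt, n_steps = 1e-2, 1_000_000
noise = np.sqrt(2 * dt) * rng.standard_normal(n_steps)
x = np.empty(n_steps)
x[0] = 0.0
for i in range(1, n_steps):
    x[i] = (1.0 - dt) * x[i - 1] + noise[i]   # OU process, correlation time 1

def time_average_variance(x, window):
    n = (len(x) // window) * window
    return x[:n].reshape(-1, window).mean(axis=1).var()

v_short = time_average_variance(x, 2_000)   # T = 20 correlation times
v_long = time_average_variance(x, 8_000)    # T = 80 correlation times
assert 2.0 < v_short / v_long < 8.0         # ratio close to 4, i.e. 1/T scaling
```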
We note that there exists a simple relation between the Reynolds stress large deviations rate function $I_v$, that describes the averages of the actual momentum fluxes that appear in the velocity equation, and the vorticity Reynolds stress large deviation rate function $I$. In the following we study the vorticity Reynolds stress only. For simplicity, as there is no ambiguity, we call these quantities Reynolds stresses and Reynolds stress large deviation rate functions, omitting the word vorticity.
Numerical implementation\[sub:Numerical-implementation\]
--------------------------------------------------------
Direct numerical simulations (DNS) of the barotropic equation (\[eq:barotropic-decomposed\]), the quasi-linear barotropic equation (\[eq:barotropic-quasi-linear\]) and the linear equation (\[eq:frozen-QL\]) are performed using a purely spectral code with a fourth-order-accurate Runge-Kutta algorithm and an adaptive time step[^1]. The spectral cutoffs defined by $\ell\leq L$, $\left|m\right|\leq\min\left\{ \ell,M\right\} $ in the spherical harmonics decomposition of the fields are taken to be $L=80$ and $M=20$. In all the simulations, the rotation rate of the sphere is $\Omega=3.7$ in the units defined previously.
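For reference, the number of modes retained under this truncation can be counted directly; the short sketch below is only bookkeeping for the cutoff $\ell\leq L$, $\left|m\right|\leq\min\left\{ \ell,M\right\} $.

```python
# Number of spherical harmonic modes retained with the truncation
# l <= L, |m| <= min(l, M), as used in the simulations (L = 80, M = 20).
def n_modes(L, M):
    return sum(2 * min(l, M) + 1 for l in range(L + 1))

# for l <= M the truncation keeps all 2l+1 orders (triangular truncation);
# beyond that, only 2M+1 orders per degree l
assert n_modes(20, 20) == 21 ** 2
assert n_modes(80, 20) == 21 ** 2 + 60 * (2 * 20 + 1)   # 2901 modes in total
```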
The stochastic noise is implemented using the method proposed in Ref. , with a non-zero renewal time scale $\tau_{r}$ larger than the time step of integration. For $\tau_{r}$ much smaller than the typical eddy turnover time scale, the noise can be considered as white in time.
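One simple way to realize a force with a finite renewal time (not necessarily the scheme of the cited reference) is a piecewise-constant noise redrawn every $\tau_{r}$, with variance $1/\tau_{r}$ so that it converges to a white-in-time noise as $\tau_{r}\to0$:

```python
import numpy as np

def renewal_noise(n_steps, dt, tau_r, rng):
    """Piecewise-constant random force redrawn every tau_r.  With variance
    1/tau_r per block, its time correlation integrates to one, so it
    approximates a unit white noise when tau_r is much smaller than the
    dynamical time scales."""
    k = max(1, int(round(tau_r / dt)))        # integration steps per renewal
    n_blocks = -(-n_steps // k)               # ceil(n_steps / k)
    values = rng.standard_normal(n_blocks) / np.sqrt(tau_r)
    return np.repeat(values, k)[:n_steps]

rng = np.random.default_rng(3)
dt, tau_r = 1e-3, 1e-2
eta = renewal_noise(100_000, dt, tau_r, rng)
# E[eta(t) eta(t')] ~ delta(t - t') requires variance 1/tau_r
assert abs(np.mean(eta ** 2) * tau_r - 1.0) < 0.1
```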
Whenever one considers the linear dynamics (\[eq:frozen-QL\]), modes with different values of $m$ decouple, thanks to the zonal symmetry. Then the contributions to the Reynolds stress coming from different values of $m$ are statistically independent, and the statistics of the total Reynolds stress can be computed from the statistics of the contribution of each value of $m$. It is natural and simpler to study the contribution from each value of $m$ independently. For this reason, we consider in this study a force that acts on one mode only. However, as explained in the previous section, the validity of the quasilinear approximation is favored by the use of a broad-band forcing spectrum, or a forcing acting on a large number of small scale modes, or both. Forcing only one mode is the most unfavorable case from the point of view of the accuracy of the quasilinear approximation, and a larger time scale separation may be required in this case to ensure its accuracy. However, whenever the quasilinear approximation is accurate, the statistics of the Reynolds stress arising from the forced mode are accurately described by the methods reported here.
The forcing only acts on the mode $\left|m\right|=10$, $\ell=10$, which is concentrated around the equator (see figure \[fig:U\]). With such a forcing spectrum and setting $\alpha=0.073$, the integration of the quasi-linear barotropic equation (\[eq:barotropic-quasi-linear\]) leads to a stationary state characterized by a strong zonal jet with velocity $U\left(\phi\right)$, represented in Figure \[fig:U\]. We spectrally truncate the jet to its first 25 spherical harmonics to fix the mean flow in the simulation of the linear barotropic equation (\[eq:frozen-QL\]). We use hyper-viscosity of order 4, with coefficient $\nu_{4}$ chosen such that the damping rate of the smallest resolved scale is 4. To verify that hyper-viscosity is negligible in the large scale statistics, simulations of the linear equation with smallest-scale damping rates of $4$ and $2$ are compared in sections \[sec:Equal-time-statistics-of\], \[sub:Gaussian-approximation-of\] and \[sub:Application-of-the\].
![\[fig:U\]Top panel: the zonal flow velocity profile $U\left(\phi\right)$ used in numerical simulations of the linearized barotropic equation (\[eq:frozen-QL\]). Bottom panel: zonally averaged energy injection rate by the stochastic force $\eta$ in (\[eq:barotropic\]), (\[eq:barotropic-quasi-linear\]) and (\[eq:frozen-QL\]).](U)
Equal-time statistics of vorticity fluxes\[sec:Equal-time-statistics-of\]
=========================================================================
![\[fig:histograms\]Probability density functions of $R_{3}$, the third component in the spherical harmonics decomposition of the zonally averaged advection term (vorticity flux) $R(\phi)$, from direct numerical simulations of the linear barotropic equation (blue), the quasi-linear barotropic equation (orange), and the non-linear barotropic equation (yellow). Exponential tails are observed in all of the different cases. The common parameters are $\alpha=0.073$, $\Omega=3.7$, total integration time $5,450$, and the forcing is concentrated in wavenumbers $\left|m\right|=10$, $\ell=10$.](equal-time-PDF)
![\[fig:histogram-viscosity\]Probability density functions of $R_{3}$, the third component in the spherical harmonics decomposition of the zonally averaged advection term (vorticity flux) $R$, from direct numerical simulations of the quasi-linear barotropic equation with hyper-viscosity such that the smallest scale has a hyperviscous damping rate of $4$ (red curve) and $2$ (black curve). The two probability density functions are nearly identical, showing that hyper-viscosity can be considered to be negligible as far as the zonal jet statistics are concerned.](VorticityFluxHistogram.pdf)
The aim of this section is to illustrate that fluctuations of the equal-time vorticity flux $R$ may be strongly non-Gaussian. We show that vorticity flux fluctuations have exponential tails, with a distribution close to that of Gaussian product statistics [@Grooms:2016kw]. While equal-time fluctuations of the vorticity flux are important for high-frequency jet variability, Reynolds stresses (time averages of the vorticity fluxes) are more important for the long-term evolution of the jet. Beginning in section \[sec:large\_deviations\], we study Reynolds stresses and their large deviations.
The evolution of the mean flow $\omega_{z}(\phi,t)$ is given by the advection term $R(\phi,t)=-\left\langle J\left(\delta\psi,\delta\omega\right)\right\rangle $, through (\[eq:barotropic-decomposed\]) or (\[eq:barotropic-quasi-linear\]). In most previous statistical approaches to zonal jet dynamics, only the averaged advection term, the Reynolds stress, was considered. This is for instance the case in the S3T [@bakas2015s3t] and CE2 [@marston65conover; @Srinivasan-Young-2011-JAS; @tobias2013direct] approaches. Such a restriction gives a good approximation of the relaxation of zonal jets towards the attractors of the dynamics, which is expected to be quantitatively accurate in the inertial limit $\alpha\to0$ [@Bouchet_Nardini_Tangarife_2013_Kinetic]. However, replacing the advection term $R$ by its average does not describe fluctuations of the vorticity fluxes, which may lead to fluctuations of zonal jets. Understanding the statistics of vorticity fluxes beyond their average value is thus a very interesting perspective. In this section, we study the whole distribution function of vorticity fluxes, as computed from direct numerical simulations.\
The zonally averaged advection term is a function of latitude $\phi$ and can be decomposed with spherical harmonics according to (\[eq:spherical-harmonics-decomposition\]). We denote by $R_{\ell}(t)\equiv R_{0,\ell}(t)$ the $\ell$-th component in the spherical harmonics decomposition of $R(\phi,t)$. All $R_{\ell}$ for odd values of $\ell$ larger than one have non-zero amplitudes (the amplitude of the $\ell=1$ mode is zero because total angular momentum about the polar axis remains zero). In the following, for simplicity, we focus our analysis on $R_{3}$ only, which has the largest contribution. The probability density functions of $R_{3}$, computed either from direct numerical simulations of the barotropic equation (\[eq:barotropic\]), the quasi-linear barotropic equation (\[eq:barotropic-quasi-linear\]), or the linear equation (\[eq:frozen-QL\]), with the forcing spectrum specified in section \[sub:Numerical-implementation\] and with $\alpha=0.073$, are shown in Figure \[fig:histograms\]. Figure \[fig:histogram-viscosity\] shows that the probability distribution of $R_3$ is not affected by the choice of small scale dissipation.
In the linear dynamics (\[eq:frozen-QL\]), the eddy vorticity evolves according to the linearized barotropic equation close to the fixed base flow $U(\phi)$ shown in Figure \[fig:U\]. In the quasi-linear dynamics (\[eq:barotropic-quasi-linear\]), the zonal mean flow has the same average velocity profile $U(\phi)$, but this zonal flow is allowed to fluctuate. This difference in the dynamics of the zonal flow between linear and quasi-linear equations explains the slight difference observed in the corresponding advection term histograms (respectively blue curve and orange curve in Figure \[fig:histograms\]): the probability density function is more spread out (the vorticity fluxes fluctuate more) in the quasi-linear dynamics than in the linear dynamics.
In contrast, the probability density function of $R_{3}$ computed from the non-linear integration (yellow curve in Figure \[fig:histograms\]) is very different from the other ones for two reasons: the average zonal flow is different from the fixed zonal flow used in the linear dynamics, and the dynamics of $\delta\omega$ is also different from the quasi-linear dynamics because of the non-linear eddy-eddy interaction terms in (\[eq:barotropic-decomposed\]). This is expected, as forcing a single mode is the most unfavorable case from the point of view of the validity of the quasilinear approximation, as explained in section \[sec:Barotropic-equation-and\].\
In all three cases, the probability distribution functions in Figure \[fig:histograms\] show large fluctuations and heavy tails. For instance, it is clear that typical fluctuations of the vorticity flux have much larger amplitude than their average value (the variance is much larger than the average). While equal-time fluctuations are essential for understanding the high-frequency, small-amplitude variability of the jets, on the slow time scale the jet evolution is described by the time averaged vorticity fluxes (the Reynolds stress).
In all of the simulations, the distribution of the vorticity flux shows exponential tails. This can be easily understood for the case of the linear equation (\[eq:frozen-QL\]). Indeed, in this case the statistics of the eddy vorticity are exactly Gaussian ($\delta\omega$ is an Ornstein-Uhlenbeck process [@Gardiner_1994_Book_Stochastic]). Then, the statistics of $R(\phi)$ can be calculated explicitly, as we explain now.
Using the zonal Fourier decomposition (\[eq:Fourier-definition\]), we can write the vorticity flux as $$R(\phi)=-\frac{1}{\cos\phi}\sum_{m}im\left(\psi_{m}\cdot\partial_{\phi}\omega_{-m}+\partial_{\phi}\psi_{m}\cdot\omega_{-m}\right),\label{eq:Reynolds-stress-m}$$ where $\omega_{m}(\phi)$ is the $m$-th Fourier coefficient of $\delta\omega$, and $\psi_{m}(\phi)$ is the associated stream function. The Ornstein-Uhlenbeck process $\omega_{m}\left(\phi\right)$ is a Gaussian random variable at each latitude $\phi$. Linear combinations of Gaussian random variables are Gaussian, so $\psi_{m}(\phi)$, $\partial_{\phi}\psi_{m}(\phi)$ and $\partial_{\phi}\omega_{m}(\phi)$ are also Gaussian random variables at each latitude $\phi$. All these Gaussian random variables have zero mean, and in general they are correlated in a non-trivial way.
The vorticity flux is thus of the form $R=\xi_{1}\xi_{2}+\ldots+\xi_{M-1}\xi_{M}$ where $\xi_{1},\ldots,\xi_{M}$ are $M$ real-valued[^2] correlated Gaussian variables with zero mean. We denote by $\xi$ the column vector with components $\xi_{1},\ldots,\xi_{M}$. By definition, the probability distribution function of $\xi$ is $$P_{\xi}\left(\xi\right)=\frac{1}{Z}\exp\left(-\frac{1}{2}\xi^{T}G^{-1}\xi\right),$$ where $\xi^{T}$ denotes the transpose vector of $\xi$, $G$ is the covariance matrix of $\xi$, and $Z$ is a normalisation constant. The probability density function of $R$, denoted $P_{R}$, is given by $$\begin{aligned}
P_{R}(R) & =\int\mbox{d}\xi\,P_{\xi}\left(\xi\right)\delta\left(R-\xi_{1}\xi_{2}-\ldots-\xi_{M-1}\xi_{M}\right)\\
& =\int\mbox{d}\xi_{2}\ldots\mbox{d}\xi_{M}\,\frac{1}{\left|\xi_{2}\right|}\,P_{\xi}\left(\frac{R-\xi_{3}\xi_{4}-\ldots-\xi_{M-1}\xi_{M}}{\xi_{2}},\xi_{2},\ldots,\xi_{M}\right).\end{aligned}$$ Using the change of variable $\zeta_{m}=\xi_{m}/\sqrt{\left|R\right|}$ for $m=2,\ldots,M$, the first argument of $P_{\xi}$ becomes $\sqrt{\left|R\right|}\frac{\frac{R}{\left|R\right|}-\zeta_{3}\zeta_{4}-\ldots-\zeta_{M-1}\zeta_{M}}{\zeta_{2}}$, so we obtain: $$P_{R}(R)=\frac{1}{Z}\int\mbox{d}\zeta_{2}\ldots\mbox{d}\zeta_{M}\,\frac{\left|R\right|^{\frac{M-2}{2}}}{\left|\zeta_{2}\right|}\,\exp\left(-\left|R\right|Q_{\pm}\left(\zeta_{2},\ldots,\zeta_{M}\right)\right),$$ where $Q_{\pm}$ is a function of $\left(\zeta_{2},\ldots,\zeta_{M}\right)$ which depends only on the sign of $R$, according to $R=\pm\left|R\right|$. The tails of the distribution $P_{R}$ correspond to the limits $R\to\pm\infty$. In both limits, $\left|R\right|\to\infty$, so we can perform a saddle-point approximation in the above integral, and get $$\ln\left(P_{R}(R)\right)\underset{R\to\pm\infty}{\sim}-\left|R\right|\mu_{\pm},\label{eq:PDF-exponential-tails}$$ where the rates of decay are defined by $$\mu_{\pm}=\min_{\zeta_{2},\ldots,\zeta_{M}}\left\{ Q_{\pm}\left(\zeta_{2},\ldots,\zeta_{M}\right)\right\} .\label{eq:exponential-rate}$$ The exponential tails of the distribution $P_{R}$ are direct consequences of the fact that the eddy vorticity $\delta\omega$ evolving according to the linear equation (\[eq:frozen-QL\]) is a Gaussian process and of the fact that $R$ is quadratic in $\delta\omega$. This simple argument explains the exponential tails observed in probability density functions of the zonally averaged advection term in simulations of the linear dynamics (blue curve in Figure \[fig:histograms\]), where the vorticity field is exactly an Ornstein-Uhlenbeck process.\
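The exponential-tail mechanism can be checked by direct Monte Carlo sampling of the simplest quadratic form, $R=\xi_{1}\xi_{2}$ with two correlated Gaussians; the covariance matrix below is an arbitrary example.

```python
import numpy as np

# Monte Carlo check of the exponential tails for R = xi1 * xi2 with two
# correlated zero-mean Gaussians (the simplest instance of the quadratic
# form discussed above; the covariance below is an arbitrary example).
rng = np.random.default_rng(4)
cov = np.array([[1.0, 0.3], [0.3, 1.0]])
xi = rng.multivariate_normal(np.zeros(2), cov, size=1_000_000)
R = xi[:, 0] * xi[:, 1]

# a Gaussian has excess kurtosis 0; exponential tails give a large value
excess_kurtosis = np.mean((R - R.mean()) ** 4) / R.var() ** 2 - 3.0
assert excess_kurtosis > 3.0

# with an exponential tail, P(R > 4) is comparable to or larger than
# P(R > 2)^2, whereas a Gaussian tail would make it far smaller
p2, p4 = np.mean(R > 2.0), np.mean(R > 4.0)
assert p4 > p2 ** 2
```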
In the quasi-linear and non-linear dynamics, the zonal flow and eddies evolve at the same time scale. As a consequence, the dynamics of the eddy vorticity is not linear, and its statistics are not Gaussian. However, we observe that the probability density functions of eddy vorticity are nearly Gaussian (skewness -0.0147 and kurtosis 3.8079 in the quasi-linear case, skewness -0.0037 and kurtosis 3.3964 in the non-linear case, compared to skewness 0.0172 and kurtosis 3.0028 in the linear case). The previous argument can thus also be applied empirically to explain the exponential tails observed in the curves corresponding to quasi-linear and non-linear simulations in Figure \[fig:histograms\].\
The same analysis has been performed on direct numerical simulations of the deterministic 2-layer quasi-geostrophic baroclinic model [@vallis_atmospheric_2006]; see Figure \[fig:baroclinic\]. In this case, the eddy vorticity statistics are highly non-Gaussian, while the statistics of the vorticity flux have exponential tails similar to those found in the one-layer case. This observation indicates that the previous explicit calculation might not be the most general explanation of the exponential distribution of vorticity fluxes.
![\[fig:baroclinic\]Probability density functions of the vorticity component $\omega_{3,3}$ (top panel) and zonally averaged advection term (vorticity flux) $R_{3}$ (bottom panel) from a direct numerical simulation of the deterministic 2-layer quasi-geostrophic baroclinic equation. The eddy vorticity is clearly non-Gaussian, and yet the advection term distribution has exponential tails as in the one-layer cases (Figure \[fig:histograms\]). This observation calls for a more general study of vorticity flux statistics close to a zonal jet.](2layer-Eddy)
![\[fig:baroclinic\]Probability density functions of the vorticity component $\omega_{3,3}$ (top panel) and zonally averaged advection term (vorticity flux) $R_{3}$ (bottom panel) from a direct numerical simulation of the deterministic 2-layer quasi-geostrophic baroclinic equation. The eddy vorticity is clearly non-Gaussian, and yet the advection term distribution has exponential tails as in the one-layer cases (Figure \[fig:histograms\]). This observation calls for a more general study of vorticity flux statistics close to a zonal jet.](2layer-Stress)
Averaging and large deviations in systems with time scale separation\[sec:large\_deviations\]
=============================================================================================
As explained in section \[sec:Barotropic-equation-and\], we are interested in the regime where zonal jets evolve much slower than the surrounding turbulent eddies. In this section, we present some theoretical tools (stochastic averaging, large deviation principle) that can be applied to study the effective dynamics and statistics of slow dynamical variables coupled to fast stochastic processes. Most of these tools are classical ones [@freidlin2012random; @Gardiner_1994_Book_Stochastic; @pavliotis2008multiscale], except for the explicit results presented in section \[sub:Quasi-linear-systems-with\] [@BouchetTangarifeVandenEijnden2015]. Application of these general tools to the quasi-linear barotropic model is considered in sections \[sub:Gaussian-approximation-of\] and \[sub:Application-of-the\].
Consider the stochastic dynamical system $$\left\lbrace \begin{aligned} & \frac{dx}{dt}=\alpha f\left(x,y\right)\\
& \frac{dy}{dt}=b\left(x,y\right)+\eta
\end{aligned}
\right.\label{eq:slow-general}$$ where $0<\alpha\ll1$, and where $\eta$ is a Gaussian random column vector with zero mean and correlations $\mathbb{E}\left[\eta\left(t_{1}\right)\eta^{T}\left(t_{2}\right)\right]=C\delta\left(t_{1}-t_{2}\right)$ with the correlation matrix $C$. In the case we are interested in, the random vector $y$ is actually the eddy vorticity field, and $x$ is the zonal jet vorticity or velocity. For simplicity we use vector notation $x=\left(x_{\ell}\right)_{1\leq\ell\leq L}$ in this section; the formal generalization to the field case is straightforward, see sections \[sub:Gaussian-approximation-of\] and \[sub:Application-of-the\].
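A toy example makes this scaling concrete. In the sketch below (with hypothetical choices of $f$ and $b$, not taken from the barotropic model), the fast variable is an Ornstein-Uhlenbeck process and the slow variable relaxes, on the time scale $1/\alpha$, to a fixed point determined by the average over the fast variable.

```python
import numpy as np

# Euler-Maruyama integration of a toy slow-fast system of the form
# dx/dt = alpha * f(x, y), dy/dt = b(x, y) + eta, with the hypothetical
# choices f(x, y) = y^2 - x and b(x, y) = -y (a fast OU process).
rng = np.random.default_rng(5)
alpha, dt, n_steps = 0.01, 1e-2, 500_000
x, y = 0.0, 0.0
xs = np.empty(n_steps)
for i in range(n_steps):
    x += alpha * (y * y - x) * dt
    y += -y * dt + np.sqrt(2 * dt) * rng.standard_normal()
    xs[i] = x

# averaging over the fast variable predicts dx/dt = alpha * (E[y^2] - x),
# so x relaxes to E[y^2] = 1 on the slow time scale 1/alpha
assert abs(xs[-n_steps // 4 :].mean() - 1.0) < 0.2
```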
In (\[eq:slow-general\]), the variable $x$ typically evolves on a time scale of order $1/\alpha$, while $y$ evolves on a time scale of order 1. When there is a time scale separation between zonal jets and eddies, defined by $\alpha\ll1$, the quasi-linear barotropic equation (\[eq:barotropic-quasi-linear\]) is a particular case of the system (\[eq:slow-general\]). Note however that in that case, dissipation terms of order $\alpha$ are present in $b(x,y)$. The general results presented in this section usually do not take into account such terms [@freidlin2012random; @Gardiner_1994_Book_Stochastic; @pavliotis2008multiscale]. As a consequence, in sections \[sub:Gaussian-approximation-of\] and \[sub:Application-of-the\] we make sure that our results do not depend on the dissipative terms in the limit $\alpha\to0$.\
The goal of stochastic averaging is to give an effective description of the dynamics of $x$ over time scales of order $1/\alpha$, where the effect of the fast process $y$ is averaged out. The effective dynamics describes the attractors of $x$, the relaxation dynamics towards these attractors and the small fluctuations around these attractors, in the regime $\alpha\ll1$. For quasi-geostrophic zonal jets dynamics, stochastic averaging leads to a kinetic description of zonal jets [@Bouchet_Nardini_Tangarife_2013_Kinetic], related to statistical closures of the dynamics (S3T [@bakas2015s3t] and CE2 [@Srinivasan-Young-2011-JAS; @tobias2011astrophysical; @tobias2013direct]).
The effective dynamics obtained through stochastic averaging or statistical closures is not able to describe arbitrarily large fluctuations of the slow process $x$. Such rare events are of major importance in the long-term dynamics of $x$. For instance in the case where the system (\[eq:slow-general\]) has several attractors, transitions between the attractors are governed by large fluctuations of the system. The description of such transitions (transition probability, typical transition path) cannot be done through a stochastic averaging procedure.
Large deviation theory is a natural framework to describe large fluctuations of $x$ in the regime $\alpha\to0$. The large deviation principle [@freidlin2012random] gives the asymptotic form of the probability density of paths $\left\{ x(t)\right\} _{0\leq t\leq T}$ when $\alpha\ll1$, with the effect of the fast process $y$ averaged out. Information about the typical effective dynamics of $x$ as obtained through stochastic averaging is captured, but the principle allows us to go further to describe arbitrarily rare events. In cases of multistability of $x$, the Large Deviation Principle yields the asymptotic expression of the transition probability from one attractor to another, the average relative residence time in each attractor, and the typical transition path $\left\{ x(t)\right\} _{0\leq t\leq T}$ that links two attractors in a given time $T\gtrsim1/\alpha$, among other relevant statistical quantities. Implementing the large deviation principle in practice for systems like (\[eq:slow-general\]) and for the quasilinear dynamics is one of the goals of this work.\
In the effective descriptions of $x$ provided by stochastic averaging and the Large Deviation Principle, the dynamics of $y$ is approximated by its stationary dynamics with $x$ held fixed, the so-called virtual fast process. The mathematics is described in section \[sub:The-virtual-fast\]. The effective dynamics of $x$ over time scales $t\gg1$ provided by stochastic averaging is presented in section \[sub:Average-evolution-and\]. The Large Deviation Principle for (\[eq:slow-general\]) is stated in section \[sub:Large-time-large-deviations\], and in section \[sub:Sampling-the-SCGF\] we give a method to estimate the quantities involved in the Large Deviation Principle from simulations of the virtual fast process.
The virtual fast process\[sub:The-virtual-fast\]
------------------------------------------------
In slow-fast systems like (\[eq:slow-general\]), the time scale separation implies that at leading order, the statistics of $y$ are very close to the stationary statistics of the virtual fast process $\tilde{y}(u)$ $$\frac{d\tilde{y}}{du}=b\left(x,\tilde{y}(u)\right)+\eta(u),\label{eq:virtual-fast-process}$$ where $x$ is held fixed [@freidlin2012random; @Gardiner_1994_Book_Stochastic]. The time scale separation hypothesis is relevant only when the fast process described by (\[eq:virtual-fast-process\]) is stable (for instance, it has an invariant measure and is ergodic). The stationary process depends parametrically on $x$, and the expectation over the invariant measure of (\[eq:virtual-fast-process\]) is thus denoted $\mathbb{E}_x$. The statistics of $\tilde{y}$ change when $x$ evolves adiabatically on longer timescales.
For the quasi-linear barotropic dynamics (\[eq:barotropic-quasi-linear\]), the virtual fast process is the linearized barotropic equation close to the fixed stable zonal flow $U$ (the necessity for $U$ to be stable for the quasi-linear hypothesis to be correct was emphasized in reference [@Bouchet_Nardini_Tangarife_2013_Kinetic]).\
The process (\[eq:virtual-fast-process\]) is relevant only if a time scale separation effectively exists between the evolutions of $x$ and $y$. In practice, the time scale separation hypothesis in (\[eq:slow-general\]) can be considered to be self-consistent if the typical time scale of evolution of the virtual fast process (\[eq:virtual-fast-process\]) is of order one, while the slow variable evolves on a time scale of order $1/\alpha$. From the point of view of the interaction with the dynamics of $x$, the most relevant time scales related to the evolution of $\tilde{y}(u)$ are the correlation times of processes $f_{\ell}\left(x,\tilde{y}(u)\right)$ and $f_{\ell'}\left(x,\tilde{y}(u)\right)$, defined as [@newman1999monte; @papanicolaou1977introduction] $$\begin{aligned}
\tau_{\ell,\ell'} & =\lim_{t\to\infty}\frac{1}{t}\int_{0}^{t}\int_{0}^{t}\frac{\mathbb{E}_{x}\left[\left[f_{\ell}\left(x,\tilde{y}\left(u_{1}\right)\right)f_{\ell'}\left(x,\tilde{y}\left(u_{2}\right)\right)\right]\right]}{2\mathbb{E}_{x}\left[\left[f_{\ell}\left(x,\tilde{y}\right)f_{\ell'}\left(x,\tilde{y}\right)\right]\right]}\,\mbox{d}u_{1}\mbox{d}u_{2}\label{eq:def-autocorrelation-time}\end{aligned}$$ where $\mathbb{E}_{x}\left[\left[X_{1}\left(u_{1}\right)X_{2}\left(u_{2}\right)\right]\right]\equiv\mathbb{E}_{x}\left[X_{1}\left(u_{1}\right)X_{2}\left(u_{2}\right)\right]-\mathbb{E}_{x}\left[X_{1}\left(u_{1}\right)\right]\mathbb{E}_{x}\left[X_{2}\left(u_{2}\right)\right]$ is the covariance of $X_{1}$ at time $u_{1}$ and $X_{2}$ at time $u_{2}$. If $\ell=\ell'$, $\tau_{\ell,\ell}$ is called the auto-correlation time of the process $f_{\ell}\left(x,\tilde{y}(u)\right)$. In all these expressions, $x$ is fixed and $\mathbb{E}_{x}$ is the average over realizations of the fast process (\[eq:virtual-fast-process\]) in its statistically stationary state. The correlation times $\left\{ \tau_{\ell,\ell'}\right\} $ give an estimate of the time scales of evolution of the terms that force the slow process $x$ in (\[eq:slow-general\]).\
In the regime $\alpha\ll1$, we can consider a time $\Delta t$ much larger than the auto-correlation times $\tau_{\ell,\ell'}$ but much smaller than the typical time for the evolution of $x$ itself: $\tau_{\ell,\ell'}\ll \Delta t\ll1/\alpha$. Over such a time scale, (\[eq:slow-general\]) can be integrated to give $$x(t+\Delta t)=x(t)+\alpha\int_{t}^{t+\Delta t}f\left(x(u),y(u)\right)\mbox{d}u\simeq x(t)+\alpha\int_{t}^{t+\Delta t}f\left(x(t),\tilde{y}(u)\right)\mbox{d}u,\label{eq:slow-process-integrated}$$ where in obtaining the last approximation we have used the fact that the process $x$ barely evolves over the time $\Delta t$. The relation (\[eq:slow-process-integrated\]) is used in the following to derive equations for the average behaviour, typical fluctuations and large fluctuations of $x$, in the time scale separation limit $\alpha\ll1$.
Average evolution and energy balance for the slow process\[sub:Average-evolution-and\]
--------------------------------------------------------------------------------------
We now describe the typical dynamics of $x$ over time scales $\Delta t$ such that $\tau_{\ell,\ell'}\ll \Delta t\ll1/\alpha$, recovering classical results from stochastic averaging [@Gardiner_1994_Book_Stochastic]. Because the time $\Delta t$ in (\[eq:slow-process-integrated\]) is much larger than the typical correlation time of the components of $f\left(x,\tilde{y}(u)\right)$, by the Law of Large Numbers we can replace the time average by a statistical average: $\frac{1}{\Delta t}\int_{t}^{t+\Delta t}f\left(x,\tilde{y}(u)\right)\mbox{d}u\simeq F(x)$ where $F(x)\equiv\mathbb{E}_{x}\left[f\left(x,\tilde{y}(u)\right)\right]$ is the average force acting on $x$, computed in the statistically stationary state of the virtual fast process (\[eq:virtual-fast-process\]). Then, the average evolution of $x$ at leading order in $\alpha \Delta t\ll1$ is $$\frac{\Delta x}{\Delta t}\equiv\frac{x(t+\Delta t)-x(t)}{\Delta t}\simeq\alpha F(x(t)).\label{eq:average-omegaz}$$ In the case of zonal jet dynamics in barotropic models, $x$ is the zonally averaged vorticity (or velocity) and $F(x)$ is the average advection term $R$. The effective dynamics is very close to S3T-CE2 types of closures [@marston65conover; @bakas2015s3t; @Srinivasan-Young-2011-JAS; @tobias2013direct; @marston2014direct] or to kinetic theory [@Bouchet_Nardini_Tangarife_2013_Kinetic]. This point is further discussed in section \[sub:Gaussian-approximation-of\].\
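A sketch of how $F(x)$ can be estimated in practice: freeze $x$, integrate the virtual fast process (\[eq:virtual-fast-process\]) for a long time, and time-average $f$, invoking ergodicity. The choices $f(x,y)=y^{2}-x$ and $b(x,y)=-y$ below are illustrative assumptions (not the barotropic model), for which $F(x)=1/2-x$ exactly.

```python
import numpy as np

def average_force(x, dt=0.01, T=2000.0, seed=1):
    """Estimate F(x) = E_x[f(x, y~)] by time-averaging f along one long
    trajectory of the virtual fast process with x held fixed
    (illustrative choices: f(x, y) = y**2 - x, b(x, y) = -y)."""
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    y, acc = 0.0, 0.0
    for _ in range(n):
        y += -y * dt + np.sqrt(dt) * rng.standard_normal()
        acc += y**2 - x
    # ergodicity: time average ~ statistical average E_x
    return acc / n

F_hat = average_force(0.3)   # exact value of F(0.3) is 1/2 - 0.3 = 0.2
```

The accuracy of the estimate is controlled by the ratio of the integration time $T$ to the correlation time of $f\left(x,\tilde{y}(u)\right)$, exactly the quantity discussed around (\[eq:def-autocorrelation-time\]).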
The effective dynamics (\[eq:average-omegaz\]) is not enough to describe the effective energy balance related to the slow process $x$. Indeed, replacing the time-averaged force in (\[eq:slow-process-integrated\]) by its statistical average amounts to neglecting fluctuations in the process $f(x,\tilde{y}(u))$. The fluctuations are however relevant in the evolution of quadratic forms of $x$. In particular, if we define the energy of the slow degrees of freedom as $E=\frac{1}{2}x\cdot x=\sum_{\ell}E_{\ell}$ with $E_{\ell}=\frac{1}{2}x_{\ell}^{2}$, an equation for $E_{\ell}$ can be derived using (\[eq:slow-process-integrated\]), $$\begin{aligned}
E_{\ell}(t+\Delta t)\simeq &\,E_{\ell}(t)+\alpha x_{\ell}(t)\int_{t}^{t+\Delta t}f_{\ell}\left(x(t),\tilde{y}(u)\right)\mbox{d}u\\
&+\frac{\alpha^{2}}{2}\int_{t}^{t+\Delta t}\int_{t}^{t+\Delta t}f_{\ell}\left(x(t),\tilde{y}\left(u_{1}\right)\right)f_{\ell}\left(x(t),\tilde{y}\left(u_{2}\right)\right)\mbox{d}u_{1}\mbox{d}u_{2}.\\
\end{aligned}$$ Define $$Z_{\ell,\ell'}(x) \equiv\lim_{\Delta t\to\infty}\frac{1}{\Delta t}\int_{0}^{\Delta t}\int_{0}^{\Delta t}\mathbb{E}_{x}\left[\left[f_{\ell}\left(x,\tilde{y}\left(u_{1}\right)\right)f_{\ell'}\left(x,\tilde{y}\left(u_{2}\right)\right)\right]\right]\mbox{d}u_{1}\mbox{d}u_{2}\,,\label{eq:def-Xi-ell-ell}$$ then using again that $\Delta t$ is much larger than the correlation time of $f\left(x,\tilde{y}(u)\right)$ we get $$\frac{\Delta E_{\ell}}{\Delta t}\simeq\alpha x_{\ell}F_{\ell}(x)+\frac{\alpha^{2}}{2}Z_{\ell,\ell}(x).\label{eq:slow-energy-balance}$$ This relation is the energy balance for the slow evolution of $x$: $p_{mean,\ell}=\alpha x_{\ell}F_{\ell}(x)$ is the average energy injection rate by the mean force $F(x)$, and $p_{fluct,\ell}=\frac{\alpha^{2}}{2}Z_{\ell,\ell}(x)$ is the average energy injection rate by the typical fluctuations of the force $f$, as quantified by $Z(x)$. Neglecting the term $p_{fluct,\ell}$ in (\[eq:slow-energy-balance\]), we recover the energy balance we would have obtained by computing the evolution of $E_{\ell}$ from (\[eq:average-omegaz\]). This observation confirms the fact that fluctuations of $f$, which are not taken into account in (\[eq:average-omegaz\]), are relevant in the effective dynamics of $x$.
Large Deviation Principle for the slow process\[sub:Large-time-large-deviations\]
---------------------------------------------------------------------------------
### Large deviation rate function for the action of the fast variable on the slow variable
Equations (\[eq:average-omegaz\]) and (\[eq:slow-energy-balance\]) give the evolution of $x$ and $x\cdot x$ at leading order in $\alpha\ll1$. Such effective evolution equations can also be found in a more formal way using stochastic averaging [@freidlin2012random; @Gardiner_1994_Book_Stochastic]. The effective equations only describe the low-order statistics of the slow process: the average evolution and typical fluctuations (variance or energy). In contrast, the Large Deviation Principle gives access to the statistics of both typical and rare events, also in the limit $\alpha\ll1$. For the system (\[eq:slow-general\]), the Large Deviation Principle was first proved by Freidlin (see Ref. [@freidlin2012random] and references therein). It states that the probability density of a path of the slow process $x$, denoted $\mathcal{P}[x]$, satisfies [@freidlin2012random] $$\ln\mathcal{P}\left[x\right]\underset{\alpha\to0}{\sim}-\frac{1}{\alpha}\int\mathcal{L}\left(x(t),\dot{x}(t)\right)\mbox{d}t\label{eq:probability-path}$$ with $\mathcal{L}\left(x,\dot{x}\right)\equiv\sup_{\theta}\left\{ \dot{x}\cdot\theta-H\left(x,\theta\right)\right\} $ and where $H\left(x,\theta\right)$ is the scaled cumulant generating function $$H\left(x,\theta\right)\equiv\lim_{\Delta t\to\infty}\frac{1}{\Delta t}\ln\mathbb{E}_{x}\left[\exp\left(\theta\cdot\int_{0}^{\Delta t}f\left(x,\tilde{y}(u)\right)\mbox{d}u\right)\right],\label{eq:SCGF-general}$$ where we recall that $\mathbb{E}_{x}$ is an average over realisations of the virtual fast process (\[eq:virtual-fast-process\]) in its statistically stationary state. Quantities $H$ and $\mathcal{L}$ are classical definitions from Large Deviation Theory [@freidlin2012random]. The knowledge of the function $H(x,\theta)$ is equivalent to the knowledge of $\mathcal{L}\left(x,\dot{x}\right)$, which gives the probability of any path of the slow process $x$ through (\[eq:probability-path\]).
Computing $H\left(x,\theta\right)$ is thus a very efficient way to study the effective statistics of $x(t)$, including extremely rare events that are not described by the effective equations (\[eq:average-omegaz\], \[eq:slow-energy-balance\]) but play an important role.
Because the Large Deviation Principle describes both rare events and typical events, information about the effective dynamics (\[eq:average-omegaz\], \[eq:slow-energy-balance\]) is encoded in the definition of the scaled cumulant generating function. Indeed, a Taylor expansion in powers of $\theta$ in (\[eq:SCGF-general\]) gives $$H\left(x,\theta\right)=\sum_{\ell}\theta_{\ell}F_{\ell}(x)+\frac{1}{2}\sum_{\ell,\ell'}\theta_{\ell}\theta_{\ell'}Z_{\ell,\ell'}(x)+O\left(\theta^{3}\right),\label{eq:SCGF-expansion-theta}$$ with $F(x)\equiv\mathbb{E}_{x}\left[f\left(x,\tilde{y}(u)\right)\right]$ and $Z$ given by (\[eq:def-Xi-ell-ell\]). The terms appearing in the leading order evolution of $x$ and of the energy are thus contained in the scaled cumulant generating function, through (\[eq:SCGF-expansion-theta\]).
Higher-order terms in (\[eq:SCGF-expansion-theta\]) involve cubic and higher-order cumulants of large time averages of the process $f\left(x,\tilde{y}(u)\right)$. If this process is a Gaussian process, its statistics are fully determined by its first and second order cumulants [@Gardiner_1994_Book_Stochastic]. As a consequence, for such a process $H\left(x,\theta\right)$ is quadratic in $\theta$ and (\[eq:SCGF-expansion-theta\]) is exact (corrections of order $\theta^{3}$ are exactly zero).\
In practice, the scaled cumulant generating function (\[eq:SCGF-general\]) involves the virtual fast process (\[eq:virtual-fast-process\]). This stochastic process depends only parametrically on $x$, which means that we do not have to study the coupled system (\[eq:slow-general\]) in order to compute $H(x,\theta)$. This result is consistent with the time scale separation property of (\[eq:slow-general\]). In quasi-linear systems such as the quasi-linear barotropic dynamics, the virtual fast process is an Ornstein-Uhlenbeck process, which is particularly simple to study. This specific class of systems is considered next in section \[sub:Quasi-linear-systems-with\].
### Quasi-linear systems with action of the fast process on the slow one through a quadratic force: the matrix Riccati equation\[sub:Quasi-linear-systems-with\]
We are particularly interested in the more specific class of systems defined by $$\left\lbrace \begin{aligned} & \frac{dx}{dt}=\alpha y^{T}\mathcal{M}y+\alpha g\left(x\right)\\
& \frac{dy}{dt}=-L_{x}\left[y\right]+\eta
\end{aligned}
\right.\label{eq:slow-quasilinear}$$ where $\mathcal{M}$ is a symmetric matrix, and $L_{x}$ is a linear operator acting on $y$ that depends parametrically on $x$. The system (\[eq:slow-quasilinear\]) is a particular case of (\[eq:slow-general\]) with $f(x,y)=y^{T}\mathcal{M}y+g\left(x\right)$ and $b\left(x,y\right)=-L_{x}\left[y\right]$.
When $x$ is the zonal flow vorticity profile and $y$ is the eddy vorticity, the quasi-linear barotropic dynamics is an example of such a system, where the quadratic form $y^{T}\mathcal{M}y$ defines the zonally averaged advection term $R$ and $g\left(x\right)$ contains the dissipative terms acting on the large-scale zonal flow $x$, and where $L_{x}$ is the linearized barotropic operator close to the zonal flow $x$ (see also section \[sub:Application-of-the\]).\
We now describe the effective dynamics and large deviations of $x$ in the system (\[eq:slow-quasilinear\]), in the limit $\alpha\to0$. In this limit, the statistics of $y$ are very close to the statistics of the virtual fast process (\[eq:virtual-fast-process\]), which in this case reads $$\frac{d\tilde{y}}{dt}=-L_{x}\left[\tilde{y}\right]+\eta,\label{eq:Ornstein-Uhlenbeck}$$ where $x$ is frozen. Equation (\[eq:Ornstein-Uhlenbeck\]) describes an Ornstein-Uhlenbeck process, whose stationary distribution is Gaussian [@Gardiner_1994_Book_Stochastic]. Then, the stationary statistics of (\[eq:Ornstein-Uhlenbeck\]) are fully determined by the mean and covariance of $\tilde{y}$. The mean is zero, and the covariance $G_{ij}=\mathbb{E}\left[\tilde{y}_{i}\tilde{y}_{j}\right]$ is given by the Lyapunov equation $$\frac{d G}{d t}+L_{x}G+GL_{x}^{T}=C.\label{eq:Lyapunov}$$ The Lyapunov equation (\[eq:Lyapunov\]) converges to a unique stationary solution whenever (\[eq:Ornstein-Uhlenbeck\]) has an invariant measure. We recall that such an invariant measure is required for the time scale separation hypothesis to be relevant. The effective dynamics of $x$ over times $\Delta t\ll1/\alpha$ is given by (\[eq:average-omegaz\]). In the case of (\[eq:slow-quasilinear\]), it reads $$\frac{\Delta x}{\Delta t}\simeq\alpha\left[\mathcal{M}\cdot G_{\infty}(x)+g(x)\right]\label{eq:CE2}$$ with $\mathcal{M}\cdot G_{\infty}(x)=\sum_{i,j}\mathcal{M}_{ij}\left(G_{\infty}\right)_{ij}(x)$ where $G_{\infty}$ is the stationary solution of the Lyapunov equation (\[eq:Lyapunov\]). Simulating the effective slow dynamics (\[eq:CE2\]) can be done by integrating the Lyapunov equation (\[eq:Lyapunov\]), using standard methods[^3]. It provides an effective description of the attractors of $x$, and of the relaxation dynamics towards the attractors. Examples of such numerical simulations of (\[eq:CE2\]) in the case of zonal jet dynamics in the barotropic model can be found for instance in Refs. .\
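For a small finite-dimensional example, the stationary Lyapunov equation $L_{x}G_{\infty}+G_{\infty}L_{x}^{T}=C$ can be solved directly with SciPy. The matrices below are illustrative placeholders (hypothetical values, not the linearized barotropic operator):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Illustrative placeholders for L_x, the noise covariance C and the
# symmetric matrix M of the quadratic force (hypothetical values).
L = np.array([[1.0, 0.3],
              [0.0, 2.0]])   # -L is stable, so the fast process is ergodic
C = np.array([[1.0, 0.2],
              [0.2, 0.5]])
M = np.array([[0.5, 0.1],
              [0.1, 0.2]])

# solve_continuous_lyapunov(a, q) solves a X + X a^T = q, so with
# a = -L and q = -C we obtain the stationary solution of the Lyapunov
# equation L G + G L^T = C.
G_inf = solve_continuous_lyapunov(-L, -C)

# Effective drift M . G_inf = sum_ij M_ij (G_inf)_ij entering the
# averaged dynamics Delta x / Delta t ~ alpha * (M . G_inf + g(x)).
drift = float(np.sum(M * G_inf))
```

In the scalar case $L=C=1$ this gives $G_{\infty}=1/2$, the stationary variance of a unit Ornstein-Uhlenbeck process.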
In order to describe large fluctuations of $x$ in (\[eq:slow-quasilinear\]), we need to use the Large Deviation Principle (\[eq:probability-path\]). In practice, we compute the scaled cumulant generating function (\[eq:SCGF-general\]). As proven in Ref. [@BouchetTangarifeVandenEijnden2015], for the system (\[eq:slow-quasilinear\]), the scaled cumulant generating function is given by $$H\left(x,\theta\right)=\theta\cdot g(x)+\mbox{tr}\left(CN_{\infty}\left(x,\theta\right)\right)\label{eq:SCGF-quasilinear}$$ where $C$ is the covariance matrix of the noise $\eta$ in (\[eq:slow-quasilinear\]) and $N_{\infty}\left(x,\theta\right)$ is a symmetric matrix, the stationary solution of $$\frac{d N}{d t}+NL_{x}+L_{x}^{T}N=2NCN+\theta\mathcal{M}.\label{eq:NL-Lya}$$ Equation (\[eq:NL-Lya\]) is a particular case of a matrix Riccati equation, and in the following we refer to (\[eq:NL-Lya\]) as the Riccati equation. Here $\theta$ is the parameter of the cumulant generating function that defines $H$. Whenever $\theta$ is in the parameter range for which the limit in (\[eq:SCGF-general\]) exists, called the admissible $\theta$ range, Eq. (\[eq:NL-Lya\]) has a stationary solution. For the case considered in this section, with linear dynamics and a quadratic observable, the admissible $\theta$ range is easily studied through the analysis of the positivity of a quadratic form. One can conclude that the admissible $\theta$ range is an interval containing $0$. All the information regarding the large deviation rate function is contained in the values of $H$ for $\theta$ in this range.
The Riccati equation (\[eq:NL-Lya\]) is similar to the Lyapunov equation (\[eq:Lyapunov\]), and it can be solved using similar methods[^4]. Moreover, the numerical implementation of (\[eq:SCGF-quasilinear\], \[eq:NL-Lya\]) can easily be checked using the relation with the Lyapunov equation (\[eq:Lyapunov\]). Namely, (\[eq:SCGF-expansion-theta\]) implies that $$\left.\frac{dH}{d\theta}\right|_{\theta=0}=\mathcal{M}\cdot G_{\infty}(x)+g(x).$$ The first term on the right-hand side is computed from the Lyapunov equation (\[eq:Lyapunov\]), while the left-hand side is computed from the Riccati equation (\[eq:NL-Lya\]) together with (\[eq:SCGF-quasilinear\]).
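A minimal way to obtain $N_{\infty}(x,\theta)$, and hence $H(x,\theta)$, is to integrate the Riccati equation forward in time with an explicit Euler scheme until it becomes stationary. The sketch below (an illustration, not the authors' solver) does this for arbitrary small matrices and checks the scalar case $L_{x}=C=\mathcal{M}=1$, $g=0$, where the stationary Riccati equation can be solved by hand: $N_{\infty}=(1-\sqrt{1-2\theta})/2$, so $H(\theta)=\frac{1}{2}-\frac{1}{2}\sqrt{1-2\theta}$ for $\theta<1/2$.

```python
import numpy as np

def scgf_quasilinear(L, C, M, theta, g=0.0, dt=1e-3, T=40.0):
    """H(x, theta) = theta*g(x) + tr(C N_inf), where N_inf is obtained by
    integrating dN/dt = -N L - L^T N + 2 N C N + theta*M from N = 0
    until stationarity (explicit Euler; assumes theta is admissible)."""
    L, C, M = (np.atleast_2d(np.asarray(A, dtype=float)) for A in (L, C, M))
    N = np.zeros_like(L)
    for _ in range(int(T / dt)):
        N = N + dt * (-N @ L - L.T @ N + 2.0 * N @ C @ N + theta * M)
    return theta * g + float(np.trace(C @ N))

# Scalar check against the hand-computed stationary solution.
theta = 0.3
H_num = scgf_quasilinear(1.0, 1.0, 1.0, theta)
H_exact = 0.5 - 0.5 * np.sqrt(1.0 - 2.0 * theta)
```

The consistency check $\left.dH/d\theta\right|_{\theta=0}=\mathcal{M}\cdot G_{\infty}(x)+g(x)$ can then be performed with a finite difference in $\theta$ (for the scalar case above, both sides equal $1/2$).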
In section \[sub:Application-of-the\], we present a numerical resolution of (\[eq:NL-Lya\]) for the case of the quasi-linear barotropic equation on the sphere, and compute directly the scaled cumulant generating function using (\[eq:SCGF-quasilinear\]). We show that (\[eq:NL-Lya\]) can be very easily solved for a given value of $\theta$. This means that the result (\[eq:SCGF-quasilinear\]) permits the study of arbitrarily rare events in zonal jet dynamics extremely easily, through the Large Deviation Principle (\[eq:probability-path\]). Such a result is in clear contrast with approaches through direct numerical simulations, which require that the total time of integration increases as the probability of the event of interest decreases. This limitation of direct numerical simulations in the study of rare events statistics is made more precise in the next section.
Estimation of the large deviation function from time series analysis\[sec:LD-estimation-SCGF\]
----------------------------------------------------------------------------------------------
In this section we present a way to compute the scaled cumulant generating function (\[eq:SCGF-general\]) from a time series of the virtual fast process (\[eq:virtual-fast-process\]), for instance one obtained from a direct numerical simulation. Many of the technical aspects of this empirical approach follow Ref. [@rohwer2014convergence].
Consider a time series $\left\{ \tilde{y}(u)\right\} _{0\leq u\leq T}$ of the virtual fast process (\[eq:virtual-fast-process\]), with a given total time window $u\in[0,T]$. Because the quantities of interest like $H(x,\theta)$ involve expectations in the stationary state of the virtual fast process, we assume that the time series $\left\{ \tilde{y}(u)\right\} _{0\leq u\leq T}$ corresponds to this stationary state. We use the continuous time series notation for simplicity. The generalization of the following formulas to the case of discrete time series is straightforward. For simplicity, we also denote by $R(u)\equiv f\left(\tilde{y}(u)\right)$ the quantity for which the scaled cumulant generating function $H\left(\theta\right)=\lim_{t\rightarrow \infty}\frac{1}{t}\log\mathbb{E}\exp\left(\theta\int_{0}^{t}R(u)\,\mbox{d}u\right)$ should be estimated.
The basic method to estimate the scaled cumulant generating function (\[eq:SCGF-general\]) is to divide the full time series $\left\{ \tilde{y}(u)\right\} _{0\leq u\leq T}$ into blocks of length $\Delta t$, to compute the integrals $\int_{t_{0}}^{t_{0}+\Delta t}R(u)\,\mbox{d}u$ over those blocks, and to average the quantity $\exp\left(\theta\cdot\int_{t_{0}}^{t_{0}+\Delta t}R(u)\,\mbox{d}u\right)$. For a small block length $\Delta t$, the large-time regime defined by the limit $\Delta t\to\infty$ in the theoretical expression of $H$ (\[eq:SCGF-general\]) is not attained. On the other hand, too large values of $\Delta t$ —typically of the order of the total time $T$— lead to a low number of blocks, and thus to a very poor estimation of the empirical mean. Estimating $H$ thus requires finding an intermediate regime for $\Delta t$. More precisely, we expect this regime to be attained for $\Delta t$ equal to a few times the correlation time of the process $R(u)$, defined by [@newman1999monte; @papanicolaou1977introduction] $$\tau \equiv \lim_{\Delta t\to\infty} \frac{\int_0^{\Delta t}\int_0^{\Delta t} \mathbb{E}_z\left[\left[\,R(u_1) R(u_2)\,\right] \right] \,\mathrm{d} u_1\mathrm{d} u_2}{2\Delta t\,\mathbb{E}_z\left[\left[\,R^2\,\right] \right]} = \frac{\int_0^\infty \mathbb{E}_z\left[\left[\,R(u) R(0)\,\right] \right] \,\mathrm{d} u}{\mathbb{E}_z\left[\left[\,R^2\,\right] \right]}\,,
\label{eq:LD-tau_corr-def}$$ where $\mathbb{E}_z[[R(u_1) R(u_2)]]$ is the covariance of $R$ at time $u_1$ and at time $u_2$. The second equality is easily obtained assuming that the process $R(u)$ is stationary. Because of the infinite-time limit in (\[eq:LD-tau\_corr-def\]), the estimation of $\tau$ suffers from the same finite sampling problem as the estimation of $H$.
Finding a block length $\Delta t$ such that the estimation of $H$ and $\tau$ is accurate is thus a tricky problem. In the following, we propose a method to find the optimal $\Delta t$ and estimate the quantities we are interested in. The proposed method is close to the “data bunching” method used to estimate errors in Monte Carlo simulations [@krauth2006statistical].
### Estimation of the correlation time\[sub:Estimation-of-the\]
We first consider the problem of the estimation of $\tau$ in a simple solvable case, so that the numerical results can be compared directly to explicit formulas. Consider the stochastic process $R=w^{2}$ where $w$ is the one-dimensional Ornstein-Uhlenbeck process $$\frac{dw}{dt}=-w+\eta,\label{eq:OU-1D}$$ where $\eta$ is a Gaussian white noise with correlation $\mathbb{E}\left(\eta(t)\eta(t')\right)=\delta(t-t')$. A direct calculation gives the correlation time of $R$, $\tau=1/2$. Using (\[eq:SCGF-quasilinear\]) and (\[eq:NL-Lya\]), the scaled cumulant generating function can also be computed explicitly (see for instance Ref. ). We obtain $$H(\theta) = \frac12 - \frac12 \sqrt{1-2\theta},
\label{eq:exact-SCGF}$$ defined for $\theta\leq 1/2$.
For a time series $\left\{ R(u)\right\} _{0\leq u\leq T}$, we denote by $\bar{R}_{T}=\frac{1}{T}\int_{0}^{T}R(u)\,\mbox{d}u$ and by $\mbox{var}_{T}(R)=\frac{1}{T}\int_{0}^{T}\left(R(u)-\bar{R}_{T}\right)^{2}\mbox{d}u$ respectively the empirical mean and variance of $R$ over the full time series. We estimate the correlation time $\tau$ defined in (\[eq:LD-tau\_corr-def\]) using an average over blocks of length $\Delta t$, $$\tau_{\Delta t}=\frac{1}{2\Delta t\,\mbox{var}_{T}(R)}\mathbb{E}_{\frac{T}{\Delta t}}\left[\left(\int_{t_{0}}^{t_{0}+\Delta t}\left(R(u)-\bar{R}_{T}\right)\,\mbox{d}u\right)^{2}\right],\label{eq:tau-estimation}$$ where $\mathbb{E}_{\frac{T}{\Delta t}}\left[h_{t_{0}}\right]$ is the empirical average over realisations of the quantity $h_{t_{0}}$ inside the brackets[^5].
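The estimator (\[eq:tau-estimation\]) takes only a few lines to implement. The sketch below applies it to synthetic data for the solvable case $R=w^{2}$ of (\[eq:OU-1D\]), generated with the exact AR(1) discretization of the Ornstein-Uhlenbeck process; the total time and block length are illustrative choices.

```python
import numpy as np
from scipy.signal import lfilter

def estimate_tau(R, dt, block):
    """Block estimator of the correlation time of a stationary series R
    sampled every dt: tau ~ E[(block integral of centered R)^2]
    / (2 * block * var(R)), with blocks of length `block` (time units)."""
    m = int(block / dt)                          # samples per block
    nblocks = len(R) // m
    Rc = R[: nblocks * m] - R.mean()             # center with full-series mean
    I = Rc.reshape(nblocks, m).sum(axis=1) * dt  # block integrals
    return np.mean(I**2) / (2.0 * block * R.var())

# Synthetic data: R = w^2 with dw/dt = -w + eta (exact value: tau = 1/2).
rng = np.random.default_rng(2)
dt, T = 0.01, 2.0e4
a = np.exp(-dt)                                  # AR(1) coefficient
s = np.sqrt((1.0 - a**2) / 2.0)                  # so that Var(w) = 1/2
w = lfilter([s], [1.0, -a], rng.standard_normal(int(T / dt)))
tau_hat = estimate_tau(w**2, dt, block=10.0)
```

With $\Delta t=10\gg\tau=1/2$ and $T/\Delta t=2000$ blocks, the estimate falls close to the exact value, up to a small $O(\tau/\Delta t)$ bias and the statistical error discussed in the text.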
To find the optimal value of $\Delta t$, we plot $\tau_{\Delta t}$ as a function of $\Delta t$ in figure \[fig:estimation-tau\]. For small values of $\Delta t$, the large-time limit in (\[eq:LD-tau\_corr-def\]) is not achieved, which explains the low values of $\tau_{\Delta t}$. For too large values of $\Delta t$, the empirical average $\mathbb{E}_{\frac{T}{\Delta t}}$ in (\[eq:tau-estimation\]) is not accurate due to the small value of $\frac{T}{\Delta t}$ (small number of blocks), which explains the increasing fluctuations in $\tau_{\Delta t}$ as $\Delta t$ increases. The optimal value of $\Delta t$ —denoted $\Delta t^{\star}$ in the following— is between the values giving these artificial behaviours. It should satisfy $T\gg \Delta t^{\star}\gg\tau_{\Delta t^{\star}}$. Here, one can read $\Delta t^{\star}\simeq10$ and $\tau_{\Delta t^{\star}}\simeq0.5$, so this optimal $\Delta t^{\star}$ satisfies the aforementioned condition. The estimated value $\tau_{\Delta t^{\star}}$ is in agreement with the theoretical value $\tau=1/2$.
The error bars for $\tau_{\Delta t}$ are given by $\Delta\tau_{\Delta t}=\sqrt{\mbox{var}\left(\tau_{\Delta t}\right)/N_{terms}}$, where $\mbox{var}\left(\tau_{\Delta t}\right)$ is the empirical variance associated with the average $\mathbb{E}_{\frac{T}{\Delta t}}$ defined in (\[eq:empirical-average-welch\]), and $N_{terms}$ is the number of terms in this sum (roughly $N_{terms}\simeq2T/\Delta t$).
![\[fig:estimation-tau\]Plot of the estimated correlation time $\tau_{\Delta t}$ (black line) and error bars (grey shading) as functions of $\Delta t$. For small values of $\Delta t$, the large-time limit in (\[eq:LD-tau\_corr-def\]) is not achieved, which explains the low values of $\tau_{\Delta t}$. For too large values of $\Delta t$, the empirical average $\mathbb{E}_{\frac{T}{\Delta t}}$ in (\[eq:tau-estimation\]) is not accurate due to the small value of $\frac{T}{\Delta t}$, which explains the increasing fluctuations in $\tau_{\Delta t}$ as $\Delta t$ increases. The optimal value $\Delta t^{\star}$ lies between these artificial behaviours. Here, one can read $\Delta t^{\star}\simeq20$ and $\tau_{\Delta t^{\star}}\simeq0.5$, in agreement with the exact value $\tau=1/2$ (dashed line). The Ornstein-Uhlenbeck process (\[eq:OU-1D\]) has been integrated over $T=5\times10^{4}$ using the method proposed in Ref. , with time step $10^{-3}$.](OU_tau)
### Estimation of the scaled cumulant generating function\[sub:Sampling-the-SCGF\]
The self-consistent estimation of the correlation time $\tau$ presented in the previous section gives the optimal value $\Delta t^{\star}$ of the block length. Then, the scaled cumulant generating function is computed for a given value of $\theta$ as
$$H_{T}\left(\theta\right)\equiv\frac{1}{\Delta t^{\star}}\ln\mathbb{E}_{\frac{T}{\Delta t^{\star}}}\left[\exp\left(\theta\int_{t_{0}}^{t_{0}+\Delta t^\star}R(u)\,\mbox{d}u\right)\right],\label{eq:SCGF-empirical}$$
where $\mathbb{E}_{\frac{T}{\Delta t}}$ is the empirical average over the blocks, as defined in (\[eq:empirical-average-welch\]). However, the knowledge of $H\left(x,\theta\right)$ for an arbitrarily large value of $\left|\theta\right|$ leads to the probability of an arbitrarily rare event for the slow process $x$ through the Large Deviation Principle (\[eq:probability-path\]). This is in contradiction with the fact that the available time series $\left\{ R(u)\right\} _{0\leq u\leq T}$ is finite. In other words, the range of values of $\theta$ for which the scaled cumulant generating function $H_{T}(\theta)$ can be computed with accuracy depends on $T$.
Indeed, for large positive values of $\theta$, the sum $\mathbb{E}_{\frac{T}{\Delta t^{\star}}}$ in (\[eq:SCGF-empirical\]) is dominated by the largest term $\exp\left(\theta I_{max}\right)$ where $I_{max}=\max_{t_{0}}\left\{ \int_{t_{0}}^{t_{0}+\Delta t}R(u)\,\mbox{d}u\right\} $ is the largest value of $\int_{t_{0}}^{t_{0}+\Delta t}R(u)\,\mbox{d}u$ over the finite sample $\left\{ R(u)\right\} _{0\leq u\leq T}$. Then $H_{T}(\theta)\sim\frac{1}{\Delta t^{\star}}I_{max}\theta$ for $\theta\gg1$. This phenomenon is known as linearization [@rohwer2014convergence], and is clearly an artifact of the finite sample size. We denote by $\theta_{max}$ the value of $\theta$ such that linearization occurs for $\theta>\theta_{max}$. Typically, we expect $\theta_{max}$ to be a positive increasing function of $T$. In the same way, $H_{T}(\theta)\sim\frac{1}{\Delta t^{\star}}I_{min}\theta$ for $\theta<0$ and $\left|\theta\right|\gg1$, with $I_{min}=\min_{t_{0}}\left\{ \int_{t_{0}}^{t_{0}+\Delta t}R(u)\,\mbox{d}u\right\} $. In a similar way, we define $\theta_{min}$ as the minimum value of $\theta$ for which linearization occurs. Typically, we expect $\theta_{min}$ to be a negative decreasing function of $T$.
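The estimator (\[eq:SCGF-empirical\]) and the linearization effect can be reproduced on the same solvable example $R=w^{2}$ (synthetic data, illustrative parameters). For small $\left|\theta\right|$ the estimate should agree with the exact result (\[eq:exact-SCGF\]), while beyond $\theta_{max}$ or $\theta_{min}$ it degenerates into the linear tails discussed above.

```python
import numpy as np
from scipy.signal import lfilter

def estimate_scgf(R, dt, block, theta):
    """Empirical scaled cumulant generating function: log of the average of
    exp(theta * block integral of R) over blocks of length `block`."""
    m = int(block / dt)
    nblocks = len(R) // m
    I = R[: nblocks * m].reshape(nblocks, m).sum(axis=1) * dt
    return float(np.log(np.mean(np.exp(theta * I))) / block)

# Synthetic data: R = w^2, dw/dt = -w + eta (exact AR(1) discretization),
# for which H(theta) = 1/2 - sqrt(1 - 2*theta)/2.
rng = np.random.default_rng(3)
dt, T, block = 0.01, 2.0e4, 10.0
a = np.exp(-dt)
s = np.sqrt((1.0 - a**2) / 2.0)
w = lfilter([s], [1.0, -a], rng.standard_normal(int(T / dt)))
R = w**2

H_hat = estimate_scgf(R, dt, block, theta=0.05)
# For theta far outside the range sampled by the data, the block average is
# dominated by the single largest (or smallest) block integral, and H_hat
# becomes linear in theta: the finite-sample linearization artifact.
```

Scanning $\theta$ with this function and locating where the estimate becomes a straight line gives a practical way to read off $\theta_{min}$ and $\theta_{max}$.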
The convergence of estimators like (\[eq:SCGF-empirical\]) is studied in Ref. [@rohwer2014convergence]; in particular, it is shown that error bars can be computed in the range $\left[\theta_{min}/2,\theta_{max}/2\right]$ for a given time series $\left\{ R(u)\right\} _{0\leq u\leq T}$. An example of a computation of $H_{T}(\theta)$ is shown in Figure \[fig:SCGF-estimation\] for the one-dimensional Ornstein-Uhlenbeck process, and compared to the explicit solution. The full error bars in Figure \[fig:SCGF-estimation\] are given by the error from the estimation of $\tau$ and the statistical error described in Ref. [@rohwer2014convergence]. The method shows excellent agreement with theory, and exposes non-Gaussian behaviour.\
In sections \[sub:Gaussian-approximation-of\] and \[sub:Application-of-the\], we apply the tools (estimation of the correlation time and of the scaled cumulant generating function) to study the statistics of Reynolds’ stresses in zonal jet dynamics.
![\[fig:SCGF-estimation\]Computation of the scaled cumulant generating function from (\[eq:SCGF-empirical\]) for the one-dimensional Ornstein-Uhlenbeck process (\[eq:OU-1D\]). Upper panel: illustration of the linearization effect for large values of $\left|\theta\right|$. The solid curve is the estimated scaled cumulant generating function $H_T$, and the dashed lines are the expected linear tails, which are artifacts of the finite sample size [@rohwer2014convergence]. The thin vertical lines show the range $\theta\in\left[\theta_{min},\theta_{max}\right]$ for which we consider that linearization does not take place. Bottom panel: the converged scaled cumulant generating function estimator $H_{T}$ on $\theta\in\left[\theta_{min}/2,\theta_{max}/2\right]$ (thick black curve, with error bars in grey shading). The yellow curve is the exact scaled cumulant generating function (\[eq:exact-SCGF\]); it fits the estimated one within statistical errors. The purple curve is the quadratic approximation, which corresponds to a Gaussian process $R(u)$ (see equation (\[eq:SCGF-expansion-theta\])). This quadratic approximation is computed using the exact mean, variance and correlation time of $R$. The Ornstein-Uhlenbeck process (\[eq:OU-1D\]) has been integrated over $T=5 \times10^{4}$ using the method proposed in Ref. , with time step $10^{-3}$.](OU_choix_theta "fig:"){height="7cm"}\
![\[fig:SCGF-estimation\]Computation of the scaled cumulant generating function from (\[eq:SCGF-empirical\]) for the one-dimensional Ornstein-Uhlenbeck process (\[eq:OU-1D\]). Upper panel: illustration of the linearization effect for large values of $\left|\theta\right|$. The solid curve is the estimated scaled cumulant generating function $H_T$, and the dashed lines are the expected linear tails, which are artifacts of the finite sample size [@rohwer2014convergence]. The thin vertical lines show the range $\theta\in\left[\theta_{min},\theta_{max}\right]$ for which we consider that linearization does not take place. Bottom panel: the converged scaled cumulant generating function estimator $H_{T}$ on $\theta\in\left[\theta_{min}/2,\theta_{max}/2\right]$ (thick black curve, with error bars in grey shading). The yellow curve is the exact scaled cumulant generating function ; it fits the estimated one within statistical errors. The purple curve is the quadratic approximation, which corresponds to a Gaussian process $R(u)$ (see equation ). This quadratic approximation is computed using the exact mean, variance and correlation time of $R$. The Ornstein-Uhlenbeck process (\[eq:OU-1D\]) has been integrated over $T=5 \times10^{4}$ using the method proposed in Ref. , with time step $10^{-3}$.](OU_H "fig:"){height="7cm"}
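To make the estimation procedure concrete, here is a minimal sketch of the block-averaging estimator (\[eq:SCGF-empirical\]) for a one-dimensional Ornstein-Uhlenbeck process, written here as $\mathrm{d}X=-\gamma X\,\mathrm{d}t+\sqrt{2D}\,\mathrm{d}W$ with observable $R=X$; the parameter values and the plain Python implementation are illustrative and are not those used to produce Figure \[fig:SCGF-estimation\]. For this Gaussian example the exact scaled cumulant generating function is $H(\theta)=D\theta^{2}/\gamma^{2}$.

```python
import numpy as np

rng = np.random.default_rng(0)

# Ornstein-Uhlenbeck process dX = -gamma*X dt + sqrt(2*D) dW, sampled with
# its exact transition density to avoid any discretization bias.
gamma, D, dt, n_steps = 1.0, 1.0, 0.02, 200_000
a = np.exp(-gamma * dt)                        # one-step decay factor
noise = np.sqrt(D / gamma * (1.0 - a * a)) * rng.normal(size=n_steps)
x = np.empty(n_steps)
x[0] = rng.normal(0.0, np.sqrt(D / gamma))     # start in the stationary state
for k in range(1, n_steps):
    x[k] = a * x[k - 1] + noise[k]

def scgf_estimate(r, dt, block, thetas):
    """Block estimator of H(theta) = lim (1/Dt) ln E[exp(theta int_0^Dt R du)]:
    the series is cut into blocks of duration Dt = block*dt (chosen much longer
    than the correlation time), and E[.] is replaced by an average over blocks."""
    n_blocks = len(r) // block
    integrals = dt * r[:n_blocks * block].reshape(n_blocks, block).sum(axis=1)
    return np.array([np.log(np.mean(np.exp(th * integrals))) / (block * dt)
                     for th in thetas])

thetas = np.array([-0.2, 0.0, 0.2])
H_est = scgf_estimate(x, dt, block=1000, thetas=thetas)  # Dt = 20 >> tau = 1
H_exact = D * thetas ** 2 / gamma ** 2                   # exact Gaussian result
```

As discussed in section \[sub:Sampling-the-SCGF\], the estimate is only reliable on a restricted range of $\theta$; the values of $\theta$ above are kept small enough that the finite-sample linearization artifact does not set in.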
Zonal energy balance and time scale separation in the inertial limit\[sub:Gaussian-approximation-of\]
=====================================================================================================
In this section we discuss the effective evolution and effective energy balance for zonal flows in the inertial regime $\nu_n\ll\alpha\ll1$, using the general results of section \[sub:Average-evolution-and\] and numerical simulations.
Effective dynamics and energy balance for the zonal flow\[sub:energy-balance\]
------------------------------------------------------------------------------
Using and , the effective evolution of the zonal jet velocity profile $U(\phi,t)$ in the regime $\nu_n\ll\alpha\ll1$ reads $$\frac{\partial U}{\partial t} \simeq\alpha F[U] - \alpha U - \nu_n (-\Delta)^n U,
\label{eq:LD-effective-U}$$ with $F[U]\equiv \mathbb{E}_U[f]$ where $f$ is minus the Reynolds’ stress divergence and $\mathbb{E}_U$ is the average in the statistically stationary state of the linear barotropic dynamics , with $U$ held fixed.
Equation describes the effective slow dynamics of zonal jets in the regime $\nu_n\ll\alpha\ll1$; it is analogous to the kinetic equation proposed in Ref. . In particular, the attractors of are the same as the attractors of a second order closure of the barotropic dynamics [@marston65conover; @ait2015cumulant].\
As explained in a general setting in section \[sub:Average-evolution-and\], equation only takes into account the average Reynolds’ stresses (through the term $F[U]$). As a consequence it does not describe accurately the effective zonal energy balance. Quantifying the influence of fluctuations of Reynolds’ stresses on the zonal energy balance is one of the goals of this study. We now derive the effective zonal energy balance, and describe the relative influence of average and fluctuations of Reynolds’ stresses using numerical simulations.
First note that the hyperviscous terms in essentially dissipate energy at the smallest scales of the flow. In the turbulent regime we are interested in, such small-scale dissipation is negligible in the global energy balance. For this reason, the viscous terms can be neglected in and in the zonal energy balance. Note however that some hyper-viscosity is still present in the numerical simulations of the linear barotropic equation (\[eq:frozen-QL\]), in order to ensure numerical stability. For consistency, we make sure that the hyper-viscous terms do not influence the numerical results, see Figure \[fig:energy-balance-total\].
The kinetic energy contained in zonal degrees of freedom reads $E_{z}=\int\mbox{d}\phi\,E\left(\phi\right)$ with $E\left(\phi\right)=\pi\cos\phi\,U^{2}\left(\phi\right)$. Using we get the equation for the effective evolution of $E\left(\phi\right)$: $$\frac{1}{\alpha}\frac{dE}{dt}=p_{mean}(\phi)-2E+\alpha p_{fluct}(\phi)\,.\label{eq:energy-balance-kinetic}$$ The left-hand side is the instantaneous rate of change of the zonal energy. It is equal to the sum of the energy injection rate by the average Reynolds’ stresses $p_{mean}\left(\phi\right)\equiv2\pi\cos\phi\,F[U]\left(\phi\right)U\left(\phi\right)$, the dissipation term $-2E$, and the energy injection rate by the fluctuations of Reynolds’ stresses $\alpha p_{fluct}\left(\phi\right)\equiv\alpha\pi\cos\phi\,Z[U]\left(\phi\right)$, where $$Z[U]\left(\phi\right)\equiv \lim_{\Delta t\to\infty}\frac{1}{\Delta t}\int_{0}^{\Delta t}\int_{0}^{\Delta t}\mathbb{E}_{U}\left[\left[f\left(\phi,u_{1}\right)f\left(\phi,u_{2}\right)\right]\right]\mbox{d}u_{1}\mbox{d}u_{2}\,.$$ Integrating over latitudes, we obtain the total zonal energy balance $$\frac{1}{\alpha}\frac{dE_{z}}{dt}=P_{mean}-2E_{z}+\alpha P_{fluct},\label{eq:zonal-energy-balance-total}$$ with $P_{mean}\equiv\int\mbox{d}\phi\,p_{mean}(\phi)$ and $\alpha P_{fluct}\equiv\int\mbox{d}\phi\,\alpha p_{fluct}(\phi)$.
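To make the origin of the first two terms of the latitudinal energy balance explicit: multiplying the effective evolution equation for $U$ by $2\pi\cos\phi\,U$ and using $E(\phi)=\pi\cos\phi\,U^{2}(\phi)$ gives, at leading order and neglecting hyper-viscosity as discussed above, $$\frac{dE}{dt}=2\pi\cos\phi\,U\,\frac{\partial U}{\partial t}\simeq2\pi\cos\phi\,U\left(\alpha F[U]-\alpha U\right)=\alpha\left(p_{mean}(\phi)-2E\right)\,,$$ which is the average part of the balance. The fluctuation term $\alpha p_{fluct}$ is the next-order contribution of the fluctuations of the Reynolds’ stress divergence around its average $F[U]$, as described in the general setting of section \[sub:Average-evolution-and\].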
All the terms appearing in and can be easily estimated using data from a direct numerical simulation of the linearized barotropic equation . Indeed, $F[U](\phi)$ can be computed as the empirical average of $f(\phi)$ in the stationary state of , and $Z[U](\phi)$ can be computed using the method described in section \[sub:Estimation-of-the\] to estimate correlation times[^6].
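As an illustration of this estimation step, the sketch below computes the empirical mean and the integrated autocovariance $Z=2\int_{0}^{\infty}C(u)\,\mathrm{d}u$ of a stationary time series; since we do not assume access to barotropic simulation data here, it is tested on an AR(1) surrogate signal with known statistics.

```python
import numpy as np

def mean_and_integrated_autocovariance(f, dt, max_lag):
    """Estimate F = E[f] and Z = 2 * int_0^inf C(u) du from a stationary time
    series f sampled every dt, where C is the autocovariance of f.  The lag
    integral is truncated at max_lag*dt, which should be several correlation
    times but much shorter than the total series length."""
    f = np.asarray(f, dtype=float)
    df = f - f.mean()
    n = len(f)
    C = np.array([np.dot(df[:n - k], df[k:]) / (n - k) for k in range(max_lag)])
    Z = 2.0 * dt * (0.5 * C[0] + C[1:].sum())   # trapezoidal rule on u >= 0
    return f.mean(), Z

# Surrogate stationary signal with known statistics: AR(1) process, mean 3.
rng = np.random.default_rng(1)
a, n = 0.95, 200_000
noise = rng.normal(size=n)
f = np.empty(n)
f[0] = 0.0
for k in range(1, n):
    f[k] = a * f[k - 1] + noise[k]

F_est, Z_est = mean_and_integrated_autocovariance(f + 3.0, dt=1.0, max_lag=300)
# Exact values: Var = 1/(1-a^2), C_k = Var * a^k, hence Z = Var*(1+a)/(1-a).
Z_exact = (1.0 + a) / ((1.0 - a) * (1.0 - a ** 2))
```

The correlation time used in section \[sub:Estimation-of-the\] then follows as $\tau=Z/(2\,\mathrm{Var}(f))$.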
The functions $F[U]$ and $Z[U](\phi)$ may be computed directly from the scaled cumulant generating function $H$, using . Computing $H$ from the Riccati equation (\[eq:SCGF-quasilinear\], \[eq:NL-Lya\]) and using , we thus have a very efficient way to compute the terms appearing in the effective slow dynamics or in the zonal energy balance equations and , without having to simulate the fast process directly.\
We now describe the results obtained by solving numerically the linearized barotropic equation , where the mean flow velocity, $U$, is obtained from a quasilinear simulation as described at the end of section \[sub:Numerical-implementation\], and represented in Figure \[fig:U\]. The energy injection rates $P_{mean}$ and $\alpha P_{fluct}$, computed using both of the methods explained above and for different values of the non-dimensional damping rate $\alpha$, are represented in Figure \[fig:energy-balance-total\]. The first term $P_{mean}$ (solid curve) is roughly of the order of magnitude of the dissipation term in (recall we use units such that $E_{z}\simeq1$). The second term $\alpha P_{fluct}$ is about an order of magnitude smaller than $P_{mean}$. In this case, the energy balance implies that the zonal velocity is actually slowly decelerating.
Here, neglecting $\alpha P_{fluct}$ in leads to an error in the zonal energy budget of about 5–10%. This confirms the fact that fluctuations of Reynolds’ stresses are only negligible in a first approximation, and that they should be taken into account in order to obtain a quantitative description of zonal jet evolution. However, we emphasize that only one mode is stochastically forced in this case (see section \[sub:Numerical-implementation\] for details). When several modes are forced independently, the Reynolds’ stress divergence $f(\phi)$ is computed as the sum of independent contributions from each mode. If the number $K$ of forced modes becomes large, then the Central Limit Theorem implies that the typical fluctuations of $f(\phi)$ (and thus $\alpha P_{fluct}$) roughly scale as $1/K$. In Figure \[fig:energy-balance-total\], $K=1$ so we are basically considering the case where fluctuations of Reynolds’ stresses are the most important in the zonal energy balance. In other words, this is the worst case test for CE2 types of closures. In most previous studies of second order closures like CE2, a large number of modes is forced [@marston2010statistics; @tobias2013direct], so in these cases $p_{fluct}(\phi)$ and $\alpha P_{fluct}$ are most likely to be negligible in the zonal energy balance.
We also observe that $P_{mean}$ increases up to a finite value as $\alpha$ decreases, while $\alpha P_{fluct}$ is nearly constant over the range of values of $\alpha$ considered. We comment further on this behavior in the following.\
The spatial distributions of the energy injection rates $p_{mean}(\phi)$ and $p_{fluct}(\phi)$ are represented in Figures \[fig:comparison-U-p\] and \[fig:energy-balance-mean\], \[fig:energy-balance-fluct\]. Both $p_{mean}(\phi)$ and $p_{fluct}(\phi)$ are concentrated in the jet region $\phi\in[-\pi/4,\pi/4]$, which is also the region where the stochastic forces act (see Figure \[fig:U\]).
In Figure \[fig:energy-balance-mean\], we observe that $p_{mean}$ is always positive. This means that the turbulent perturbations are everywhere injecting energy into the zonal degrees of freedom, i.e. the average Reynolds’ stresses are intensifying the zonal flow $U(\phi)$ at each latitude. This effect is predominant at the jet maximum and around the jet minima (around $\phi=\pm\pi/8$). We also observe that $p_{mean}$ (and thus $F[U]$) converges to a finite value as $\alpha$ decreases. A similar result has been obtained, using theoretical arguments, for the two-dimensional Navier–Stokes equation under the assumption that the linearized equation close to the base flow has no normal mode [@Bouchet_Nardini_Tangarife_2013_Kinetic]. Those assumptions are not satisfied here, thus indicating that the finite limit of $F[U]$ as $\alpha$ vanishes is a more general result. This result is extremely important; indeed, it implies that the effective dynamics is actually well-posed in the limit $\alpha\to0$.
By definition, $p_{fluct}(\phi)$ is necessarily positive. In Figure \[fig:energy-balance-fluct\], we see that $p_{fluct}(\phi)$ keeps increasing as $\alpha$ decreases in the region away from the jet maximum (roughly for $|\phi|\in[\pi/16,\pi/4]$). This is in contrast with the behaviour of $p_{mean}(\phi)$ (fig. \[fig:energy-balance-mean\]). We note that such a behaviour for $p_{fluct}(\phi)$ has been obtained recently for the two-dimensional Navier-Stokes equation under the assumption that the base flow has no normal mode [@BouchetNardiniTangarife2015]. However, the range of values of $\alpha$ considered here is not wide enough to check precisely those theoretical results.
We also observe in Figure \[fig:energy-balance-fluct\] that $p_{fluct}(\phi)$ is relatively small in the region of jet maximum $\phi\simeq 0$. This means that Reynolds’ stresses tend to fluctuate less in this area. In the context of the deterministic two-dimensional Euler equation linearized around a background shear flow, it is known that extrema of the background flow lead to a decay of the perturbation vorticity (depletion of the vorticity at the stationary streamline [@Bouchet_Morita_2010PhyD]). In a stochastic context, this implies that the perturbation vorticity $\delta\omega$ is expected to fluctuate less in the region of jet extrema, in qualitative agreement with what is observed in Figure \[fig:energy-balance-fluct\].
![\[fig:energy-balance-total\]Total energy injection rate into the zonal flow by the mean Reynolds’ stresses $P_{mean}$ (first term in the r.h.s of (\[eq:zonal-energy-balance-total\]), in solid line) and by the fluctuations of Reynolds’ stresses $\alpha P_{fluct}$ (last term in the r.h.s of (\[eq:zonal-energy-balance-total\]), in dashed line with statistical error bars in grey shading) as a function of $1/\alpha$. The quantities are estimated from direct numerical simulations (DNS) of the linearized barotropic equation with parameters given in section \[sub:Numerical-implementation\], and $P_{mean}$ is also computed directly using the Riccati equation (yellow curve). ](r_tot){height="8cm"}
![\[fig:comparison-U-p\]From top to bottom: zonal velocity profile $U(\phi)$, energy injection rate by the average Reynolds’ stresses $p_{mean}(\phi)$ and energy injection rate by the fluctuations of Reynolds’ stresses $\alpha p_{fluct}(\phi)$, as functions of latitude $\phi$ restricted to the northern hemisphere. The values in the southern hemisphere are symmetric with respect to northern hemisphere, see Figures \[fig:U\], \[fig:energy-balance-mean\] and \[fig:energy-balance-fluct\]. $p_{mean}$ and $p_{fluct}$ are estimated from numerical simulations of with parameters given in section \[sub:Numerical-implementation\], and $\alpha=0.073$. $p_{mean}$ is always positive, meaning that the average Reynolds’ stresses are intensifying the zonal flow $U(\phi)$ at each latitude. We see that fluctuations of Reynolds’ stresses are lower at the jet extrema ($p_{fluct}$ is relatively small), in particular close to the equator $\phi=0$. This can be understood as a consequence of the depletion of vorticity at the stationary streamline [@Bouchet_Morita_2010PhyD]. Error bars are not shown here, see Figures \[fig:energy-balance-mean\] and \[fig:energy-balance-fluct\].](r_U)
Empirical validation of the time scale separation hypothesis\[sub:Empirical-validation-of\]
-------------------------------------------------------------------------------------------
In this paper we assumed a large separation in time scales: the eddies $\delta\omega$ evolve much faster than the zonal flow $U$, permitting the quasilinear approximation. It has been shown in Ref. that for the linearized dynamics close to a zonal jet $U$, the autocorrelation functions of both the eddy velocity and the Reynolds stresses remain finite in the limit $\alpha\to0$, even if the dissipation vanishes in this limit. An effective dissipation takes place, thanks to the Orr mechanism (see Refs. ). This result ensures that the time scale separation assumption is valid for small enough $\alpha$ (the eddies $\delta\omega$ evolve on a time scale of order one, and the zonal flow $U$ evolves on a time scale of order $1/\alpha$).
The consistency of this assumption for any value of $\alpha$ can also be tested numerically. For this purpose, we compute the maximum correlation time of the Reynolds’ stress divergence $f(\phi)$, defined as[^7] $$\tau_{max}^\alpha\equiv\max_{\phi}\lim_{t\to\infty}\frac{1}{t}\int_{0}^{t}\int_{0}^{t}\frac{\mathbb{E}_{U}^\alpha\left[\left[f\left(\phi,s_{1}\right)f\left(\phi,s_{2}\right)\right]\right]}{2\mathbb{E}_{U}^\alpha\left[\left[f^{2}\left(\phi\right)\right]\right]}\,\mbox{d}s_{1}\mbox{d}s_{2}.\label{eq:autocorrelation-time-phi}$$ We check whether or not $\tau_{max}^\alpha \ll 1/\alpha$, where $1/\alpha$ is the dissipative time scale. The results are summarized in Figure \[fig:autocorrelation-time-latitude\]. We observe that $\tau_{max}^\alpha$ converges to a finite value as $\alpha$ decreases, as expected from the theoretical analysis [@Bouchet_Nardini_Tangarife_2013_Kinetic; @tangarife-these], and this value is smaller than the inertial time scale (equal to one by definition of the time units). This means that the typical time scale of evolution of the Reynolds’ stress divergence is much smaller than the dissipative time scale $1/\alpha$ as soon as $1/\alpha$ is much larger than one, justifying the time scale separation hypothesis.
![\[fig:autocorrelation-time-latitude\]Solid line: maximum correlation time of the Reynolds’ stress divergence as a function of the damping rate $\alpha$. We clearly see the convergence of $\tau_{max}^\alpha$ to a finite value as $\alpha\to0$. The correlation time is of the order of the inertial time scale (equal to one by definition of the units, here represented by the dashed line), and much smaller than the dissipative time $1/\alpha$ (not represented here), showing the time scale separation between dissipative and inertial processes in the quasi-linear barotropic dynamics.](tau_corr_max)
Large deviations of Reynolds stresses\[sec:Large\_Deviations\_Reynolds\_stresses\] {#sub:Application-of-the}
==================================================================================
In section \[sub:Gaussian-approximation-of\], we studied the effective energy balance for the zonal flow $U(\phi)$ using numerical simulations of the linearized barotropic dynamics . This effective description of zonal jet dynamics takes into account the low-order statistics of Reynolds’ stresses: average and covariance. In order to study rare events in zonal jet dynamics, we must employ the large deviation principle. The goal of this section is to apply the theoretical tools presented in sections \[sub:Large-time-large-deviations\] and \[sec:LD-estimation-SCGF\] to the study of rare events statistics in zonal jet dynamics.
Large Deviation Principle for the time-averaged Reynolds’ stresses\[sub:LD-LDP-time-average-stress\]
----------------------------------------------------------------------------------------------------
We first formulate the Large Deviation Principle for the quasi-linear barotropic equations in the regime $\alpha\ll1$, and present some properties of the large deviations functions. The numerical results are presented in section \[sec:LD-numerical-results\]. The Large Deviation Principle presented here is equivalent to the one presented in a more general setting in section \[sub:Large-time-large-deviations\].
Consider the evolution of $\omega_z$ from the first equation of (\[eq:barotropic-quasi-linear\]). Over a time scale $\Delta t$ much smaller than $1/\alpha$ but much larger than the correlation time $\tau$, we can write $$\frac{\Delta \omega_z}{\Delta t} \equiv \frac{1}{\alpha}\frac{\omega_z(t+\Delta t) - \omega_z(t)}{\Delta t} \simeq \frac{1}{\Delta t}\int_t^{t+\Delta t} R(u)\,\mathrm{d} u - \omega_z(t)\,,
\label{eq:LD-omega_z-integral}$$ where we have used the fact that $\omega_z$ has not evolved much between $t$ and $t+\Delta t$ (because $\Delta t\ll1/\alpha$), while $R(u)$ has evolved according to with a fixed $\omega_z$ (or equivalently a fixed $U$). We also neglect hyper-viscosity in the evolution of $\omega_z$, which is natural in the turbulent regime we are interested in. Note however that some hyper-viscosity is still present in the numerical simulations of (\[eq:frozen-QL\]), in order to ensure numerical stability. For consistency, we make sure that the hyper-viscous terms have no influence on the numerical results (see Figure \[fig:H-section-5\]).\
We denote by $P_{\Delta t}\left[\frac{\Delta \omega_z}{\Delta t}\right]$ the probability distribution function of $\frac{\Delta \omega_z}{\Delta t}$, with a fixed $t$ (and thus a fixed $\omega_z(t)$), but with an increasing $\Delta t$. This regime is consistent with the limit of time scale separation $\alpha\to0$, where $\omega_z$ is nearly frozen while $\delta\omega$ keeps evolving. From , $P_{\Delta t}\left[\frac{\Delta \omega_z}{\Delta t}\right]$ is also the probability density function of the time-averaged advection term $\frac{1}{\Delta t}\int_t^{t+\Delta t} R(u)\,\mathrm{d} u$. The Large Deviation Principle gives the asymptotic expression of $P_{\Delta t}\left[\frac{\Delta \omega_z}{\Delta t}\right]$ in the regime $\Delta t \gg \tau$, namely $$\ln P_{\Delta t}\left[\frac{\Delta \omega_z}{\Delta t}\right]\underset{\Delta t\to\infty}{\sim} -\Delta t\, \mathcal{L}\left[\frac{\Delta \omega_z}{\Delta t}\right]\,.
\label{eq:LD-LDP-deltaomega_z}$$ The function $\mathcal{L}$ is called the large deviation rate function. It characterizes the whole distribution of $\frac{\Delta \omega_z}{\Delta t}$ in the regime $\Delta t\gg\tau$, including the most probable value and the typical fluctuations.\
Our goal in the following is to compute numerically $\mathcal{L}\left[\frac{\Delta \omega_z}{\Delta t}\right]$. This can be done through the scaled cumulant generating function . Using , the definition can be reformulated as $$H[\theta] = \lim_{\Delta t\to\infty}\frac{1}{\Delta t}\ln \int \mathrm{d} \dot{\omega}_z \, P_{\Delta t}\left[\dot{\omega}_z\right] \exp\left(\theta\cdot \Delta t\, \dot{\omega}_z\right)
\label{eq:LD-SCGF-Gartner-Ellis}$$ Because $\omega_z$ is a field, here $\theta$ is also a field depending on the latitude $\phi$, and $H$ is a functional. For simplicity, we drop the explicit dependence of $H$ on $\omega_z$ from the notation. In , we have also used the notation $\theta_{1}\cdot\theta_{2}\equiv\int\mbox{d}\phi\,\cos\phi\,\theta_{1}(\phi)\theta_{2}(\phi)$ for the canonical scalar product on the basis of spherical harmonics.
Using in and using a saddle-point approximation to evaluate the integral in the limit $\Delta t\to\infty$, we get $H[\theta] = \sup_{\dot{\omega}_z}\left\lbrace \theta\cdot\dot{\omega}_z - \mathcal{L}\left[\dot{\omega}_z\right] \right\rbrace$, i.e. $H$ is the Legendre-Fenchel transform of $\mathcal{L}$. Assuming that $H$ is everywhere differentiable, we can invert this relation as $$\mathcal{L}\left[\frac{\Delta \omega_z}{\Delta t}\right] = \sup_{\theta}\left\lbrace \theta\cdot\frac{\Delta \omega_z}{\Delta t} - H[\theta] \right\rbrace\,.
\label{eq:LD-Legendre}$$
The scaled cumulant generating function $H[\theta]$ can be computed either from a time series of $\delta\omega$ (see section \[sec:LD-estimation-SCGF\]) or by solving the Riccati equation (see section \[sub:Quasi-linear-systems-with\]). Then the large deviation rate function $\mathcal{L}$ can be computed using , and this gives the whole probability distribution of $\frac{\Delta \omega_z}{\Delta t}$ (or equivalently of the time-averaged Reynolds’ stresses) through the Large Deviation Principle .
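Once $H$ is tabulated on a grid of $\theta$ values (for instance from the Riccati computation), the Legendre-Fenchel transform is a simple maximization over that grid. The following minimal sketch checks the procedure on a quadratic $H(\theta)=\mathcal{Z}\theta^{2}/2$, for which $\mathcal{L}(x)=x^{2}/(2\mathcal{Z})$ exactly; it is accurate as long as the maximizing $\theta$ lies inside the grid.

```python
import numpy as np

def legendre_fenchel(thetas, H, xs):
    """Rate function L(x) = sup_theta [theta*x - H(theta)], computed over a
    grid of theta values where H has been evaluated."""
    # thetas: (m,), H: (m,), xs: (k,) -> L: (k,), by broadcasting then max.
    return np.max(xs[:, None] * thetas[None, :] - H[None, :], axis=1)

# Check on a Gaussian case: H(theta) = Z*theta^2/2  =>  L(x) = x^2/(2*Z).
Z = 2.0
thetas = np.linspace(-5.0, 5.0, 2001)
H = 0.5 * Z * thetas ** 2
xs = np.linspace(-3.0, 3.0, 7)
L = legendre_fenchel(thetas, H, xs)
```

Note that the grid maximum always underestimates the true supremum, with an error quadratic in the grid spacing.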
In the following, we implement this program and discuss the physical consequences for zonal jet statistics. We first give a simpler expression of $H[\theta]$, that makes its numerical computation easier.
Decomposition of the scaled cumulant generating function
--------------------------------------------------------
Using the Fourier decomposition , we can decompose the perturbation vorticity as $\delta\omega(\lambda,\phi) = \sum_m \omega_m(\phi)\mathrm{e}^{im\lambda}$, where $\omega_m$ satisfies $$\frac{\partial\omega_{m}}{\partial u}=-L_{U,m}\left[\omega_{m}\right]+\sqrt{2}\eta_{m},\label{eq:virtual-fast-barotropic-QL}$$ where the Fourier transform of the linear operator (\[eq:LD-linear-operator\]) reads $$L_{U,m}\left[\omega_{m}\right](\phi)=-\frac{im}{\cos\phi}\left(U(\phi)\omega_{m}(\phi)+\gamma(\phi)\psi_{m}(\phi)\right)-\alpha\omega_{m}(\phi)-\nu_{n}\left(-\Delta_{m}\right)^{n}\omega_{m}(\phi).\label{eq:LD-linear-operator-m}$$ In (\[eq:virtual-fast-barotropic-QL\]), $\eta_{m}\left(\phi,t\right)$ is a Gaussian white noise such that $\eta_{-m}=\eta_{m}^{*}$, with zero mean and with correlations $$\mathbb{E}\left[\eta_{m}\left(\phi_{1},t_{1}\right)\eta_{m}^{*}\left(\phi_{2},t_{2}\right)\right]=c_{m}\left(\phi_{1},\phi_{2}\right)\delta(t_{1}-t_{2}),$$ $$\mathbb{E}\left[\eta_{m}\left(\phi_{1},t_{1}\right)\eta_{m}\left(\phi_{2},t_{2}\right)\right]=0,$$ where $c_{m}$ is the $m$-th coefficient in the Fourier decomposition of $C$ in the zonal direction.
Using the Fourier decomposition, the zonally averaged advection term can be written $R(\phi)=\sum_{m}R_{m}(\phi)$ with $R_{m}(\phi)=-\frac{im}{\cos\phi}\partial_{\phi}\left(\psi_{m}\cdot\omega_{-m}\right)$. Using this expression and the fact that $\omega_{m_{1}}$ and $\omega_{m_{2}}^{*}$ are statistically independent for $m_{1}\neq m_{2}$, the scaled cumulant generating function can be decomposed as[^8] $$\begin{aligned}
H[\theta] &\equiv \lim_{\Delta t\to\infty}\frac{1}{\Delta t}\ln \mathbb{E}_U\left[\exp\left(\theta\cdot \int_{0}^{\Delta t}\left(R(u)-\omega_{z}\right)\,\mbox{d}u \right)\right]\\
&=-\theta\cdot\omega_{z}+\sum_{m}H_{m}\left[\theta\right],\\
\end{aligned}\label{eq:SCGF-omegaz-H_m}$$ with $$H_{m}\left[\theta\right]=\lim_{\Delta t\to\infty}\frac{1}{\Delta t}\log\mathbb{E}_{U}\exp\left[\int\mbox{d}\phi\,\cos\phi\,\theta\left(\phi\right)\int_{0}^{\Delta t}R_{m}\left(\phi,u\right)\,\mbox{d}u\right].
\label{eq:SCGF-QG-m}$$ We recall that $\mathbb{E}_U$ is the average in the statistically stationary state of .
In the following, we consider the case where only one Fourier mode $m$ is forced, for simplicity and to highlight deviations from Gaussian statistics. If several modes are forced, their contributions to the scaled cumulant generating function add up, according to (\[eq:SCGF-omegaz-H\_m\]).\
Finally, consider the decomposition of the zonally averaged advection term into spherical harmonics , $R_m(\phi)=\sum_{\ell}R_{m,\ell}~P_{\ell}^{0}(\sin \phi)$. Using $\theta(\phi)=\theta_{\ell} P_{\ell}^{0}(\sin \phi)$ in , we investigate the statistics of the $\ell$-th coefficient $R_{m,\ell}$. The associated scaled cumulant generating function is denoted $H_{m,\ell}\left(\theta\right)\equiv H_{m}\left[\theta P_{\ell}^{0}(\sin \phi)\right]$, and the large deviation rate function is denoted $$\mathcal{L}_{m,\ell}\left(\dot{ \omega}_{\ell}\right) = \sup_{\theta_\ell}\left\lbrace \theta_\ell \,\dot{ \omega}_{\ell} - H_{m,\ell}(\theta_\ell) \right\rbrace\,.
\label{eq:LD-Legendre-mell}$$
Numerical results\[sec:LD-numerical-results\]
---------------------------------------------
The function $H_{m,\ell}$ defined in the previous section can be computed either from a time series of $\omega_m(\phi,u)$ using the method described in section \[sec:LD-estimation-SCGF\], or by solving the Riccati equation as described in section \[sub:Quasi-linear-systems-with\]. Then, the large deviation rate function is computed using . We now show the results of these computations and discuss the physical consequences. We describe the results obtained by solving numerically the linearized barotropic equation , where the mean flow $U$ is the one obtained from a quasilinear simulation as described at the end of section \[sub:Numerical-implementation\], and represented in Figure \[fig:U\].
### Scaled cumulant generating function
An example of computation of $H_{m,\ell}\left(\theta\right)$ is shown in Figure \[fig:H-section-5\], with $m=10$, $\ell=3$ and $\alpha=0.073$. The linearized barotropic equation (\[eq:virtual-fast-barotropic-QL\]) is integrated over a time $T_{max}=54,500$, with fixed mean flow given in Figure \[fig:U\], and the value of $R_{m,\ell}$ is recorded every $0.03$ time units (the units are defined in section \[sec:LD-energy-balance-sphere\]).
The scaled cumulant generating function (\[eq:SCGF-QG-m\]) is estimated following the procedure described in section \[sec:LD-estimation-SCGF\] (thick black curve in Figure \[fig:H-section-5\]). Because the time series of $R_{m,\ell}$ is finite, $H_{m,\ell}(\theta)$ can only be computed with accuracy on a restricted range of values of $\theta$ (see section \[sub:Sampling-the-SCGF\] for details), here $\theta\in[\theta_{min}/2,\theta_{max}/2] = [-0.6,1.1]$.\
The scaled cumulant generating function (\[eq:SCGF-QG-m\]) is also computed by solving numerically the Riccati equation (\[eq:NL-Lya\]) and using (\[eq:SCGF-quasilinear\]) (yellow curve in Figure \[fig:H-section-5\]). We observe almost perfect agreement between the direct estimation of $H_{m,\ell}$ (black curve in Figure \[fig:H-section-5\]) and the computation of $H_{m,\ell}$ using the Riccati equation (yellow curve). The integration of the Riccati equation was done with a finer resolution and a lower hyper-viscosity than in the simulation of the linearized barotropic equation (\[eq:virtual-fast-barotropic-QL\]); the agreement between the two results in Figure \[fig:H-section-5\] thus shows that the resolution used in the simulation of (\[eq:virtual-fast-barotropic-QL\]) is high enough, and that the effect of hyper-viscosity is negligible.\
We stress that the computation of $H_{m,\ell}(\theta)$ using the Riccati equation (\[eq:NL-Lya\]) does not require the numerical integration of the linear dynamics (\[eq:virtual-fast-barotropic-QL\]). Typically, the integration of (\[eq:virtual-fast-barotropic-QL\]) over a time $T_{max}=54,500$ takes about one week, while solving the Riccati equation (\[eq:NL-Lya\]) for a given value of $\theta$ takes only a few seconds. This enables the investigation of the statistics of rare events (large values of $\left|\theta\right|$ in Figure \[fig:H-section-5\]) extremely easily, as we now explain in more detail.
![\[fig:H-section-5\]Thick black line: scaled cumulant generating function $H_{10,3}\left(\theta\right)$ estimated from the numerical simulation of the linearized barotropic dynamics (\[eq:virtual-fast-barotropic-QL\]), with parameters defined in section \[sub:Numerical-implementation\] and $\alpha=0.073$. Statistical error bars are smaller than the width of this curve. Yellow curve: scaled cumulant generating function $H_{10,3}\left(\theta\right)$ computed from numerical integration of the Riccati equation (\[eq:NL-Lya\]), using (\[eq:SCGF-quasilinear\]). The spectral cutoff in the Riccati calculation is $L=120$ (compared to $L=80$ for the simulation of (\[eq:virtual-fast-barotropic-QL\])), and the hyper-viscosity coefficient is such that the smallest scale has a damping rate of 4 (i.e. it is half of the hyperviscosity coefficient in the case $L=80$). The estimated scaled cumulant generating function is in agreement with the one computed from the Riccati equation, showing that the finite spectral cutoff and hyperviscosity are negligible in the calculation of $H_{10,3}\left(\theta\right)$. The numerical integration of the Riccati equation enables access to larger values of $\left|\theta\right|$ (rarer events) extremely easily, see also Figure \[fig:L-section-5\].](Ricatti)
### Rate function and departure from Gaussian statistics
The main goal of this study is to investigate the statistics of rare events in zonal jet dynamics, that cannot be described by the effective dynamics studied in section \[sub:Gaussian-approximation-of\]. Using the previous numerical results, we now show how to quantify the departure from the effective description.
The large deviation rate function $\mathcal{L}_{m,\ell}$ entering the Large Deviation Principle can be computed from $H_{m,\ell}$ using . The result of this calculation[^9] is shown in Figure \[fig:L-section-5\] (yellow curve).\
Because of the relation , $\mathcal{L}_{m,\ell}$ can also be interpreted as the large deviation rate function for the time-averaged advection term, denoted $\bar{R}_{m,\ell,\Delta t}\equiv \frac{1}{\Delta t}\int_0^{\Delta t}R_{m,\ell}(u)\,\mathrm{d} u$. In other words, the probability distribution function of $\bar{R}_{m,\ell,\Delta t}$ in the regime $\Delta t\gg\tau$ satisfies $$\ln P_{m,\ell,\Delta t}\left(\bar{R}\right) \underset{\Delta t\gg\tau}{\sim} -\Delta t \,\mathcal{L}_{m,\ell}\left(\bar{R}\right).
\label{eq:LD-LDP-Lmell}$$
The Central Limit Theorem states that for large $\Delta t\gg\tau$, the statistics of $\bar{R}_{m,\ell,\Delta t}$ around its mean $\mathcal{R}_{m,\ell}\equiv\mathbb{E}_U\left[\bar{R}_{m,\ell,\Delta t}\right]=\mathbb{E}_U\left[R_{m,\ell}\right]$ are nearly Gaussian. A classical result in Large Deviation Theory is that the Central Limit Theorem can be recovered from the Large Deviation Principle [@freidlin2012random]. Indeed, using the Taylor expansion of $H_{m,\ell}$ in powers of $\theta$ (\[eq:SCGF-expansion-theta\]) and computing the Legendre-Fenchel transform , we get $$\mathcal{L}_{m,\ell}\left(\bar{R}\right) = \frac{1}{2\mathcal{Z}_{m,\ell}}\left(\bar{R} - \mathcal{R}_{m,\ell}\right)^2 + O\left(\left(\bar{R} - \mathcal{R}_{m,\ell}\right)^3\right)
\label{eq:LD-expansion-Lmell}$$ with $\mathcal{Z}_{m,\ell}\equiv \lim_{\Delta t\to\infty}\Delta t\,\mathbb{E}_{U}\left[\left[\bar{R}_{m,\ell,\Delta t}^2\right]\right]$. Using the Large Deviation Principle , this means that the statistics of $\bar{R}_{m,\ell,\Delta t}$ for small fluctuations around $\mathcal{R}_{m,\ell}$ are Gaussian with variance $\mathcal{Z}_{m,\ell}/\Delta t$, which is exactly the result of the Central Limit Theorem. Then, the difference between the actual rate function $\mathcal{L}_{m,\ell}\left(\bar{R}\right)$ and its quadratic approximation (right-hand side of ) gives the departure from the Gaussian behaviour of $\bar{R}_{m,\ell,\Delta t}$.\
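Explicitly, writing the expansion (\[eq:SCGF-expansion-theta\]) as $H_{m,\ell}(\theta)=\mathcal{R}_{m,\ell}\,\theta+\frac{1}{2}\mathcal{Z}_{m,\ell}\,\theta^{2}+O(\theta^{3})$, the supremum in the Legendre-Fenchel transform is attained at $\theta^{*}\simeq\left(\bar{R}-\mathcal{R}_{m,\ell}\right)/\mathcal{Z}_{m,\ell}$, which gives $$\mathcal{L}_{m,\ell}\left(\bar{R}\right)=\theta^{*}\bar{R}-H_{m,\ell}(\theta^{*})=\frac{\left(\bar{R}-\mathcal{R}_{m,\ell}\right)^{2}}{2\,\mathcal{Z}_{m,\ell}}+O\left(\left(\bar{R}-\mathcal{R}_{m,\ell}\right)^{3}\right)\,,$$ recovering the quadratic approximation (\[eq:LD-expansion-Lmell\]).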
From , the Gaussian behaviour is expected to apply roughly for $\left|\bar{R} - \mathcal{R}_{m,\ell}\right|\leq \sigma_{m,\ell,\Delta t}$ with $\sigma_{m,\ell,\Delta t} \equiv \sqrt{\mathcal{Z}_{m,\ell}/\Delta t}$. The values of $\mathcal{R}_{m,\ell}\pm\sigma_{m,\ell,\Delta t}$ are represented by the black vertical lines in Figure \[fig:L-section-5\][^10]. The quadratic approximation of the rate function is also shown in Figure \[fig:L-section-5\] (purple curve). As expected, the curves are indistinguishable from each other between the vertical lines (typical fluctuations), and departures from the Gaussian behaviour are observed away from the vertical lines (rare fluctuations). Namely, the probability of a large negative fluctuation is much larger than the probability of an equally large fluctuation for a Gaussian process with same mean and variance as $\bar{R}_{m,\ell,\Delta t}$. On the contrary, the probability of a large positive fluctuation is much smaller than the probability of the same fluctuation for a Gaussian process with same mean and variance as $\bar{R}_{m,\ell,\Delta t}$.\
The kinetic description basically amounts to replacing $\bar{R}_{m,\ell,\Delta t}$ by a Gaussian process with the same mean and variance. From the results summarized in Figure \[fig:L-section-5\], we see that such an approximation leads to a very inaccurate description of rare events statistics. Understanding the influence of the non-Gaussian behaviour of $\bar{R}_{m,\ell,\Delta t}$ on zonal jet dynamics is naturally a very interesting perspective of this work.
![\[fig:L-section-5\]Yellow curve: large deviation rate function $\mathcal{L}_{10,3}(\bar{R})$ computed from numerical integration of the Riccati equation (\[eq:NL-Lya\]), using (\[eq:SCGF-quasilinear\]) and , with parameters defined in section \[sub:Numerical-implementation\] and $\alpha=0.073$. Purple curve: quadratic fit that corresponds to a Gaussian process with the same mean and variance as $\bar{R}_{10,3,\Delta t}$, the time-averaged advection term. Black vertical lines: standard deviation of $\bar{R}_{10,3,\Delta t}$ around its mean. Outside the vertical lines, we observe non-Gaussian behaviour of $\bar{R}_{10,3,\Delta t}$, in particular negative fluctuations are much more probable than positive ones.](L_a0073_gauss)
Conclusions and perspectives\[conclusions\_perspectives\]
=========================================================
In this work we carried out a first study of the typical and large fluctuations of the Reynolds stress in fluid mechanics. The Reynolds stress is certainly a key quantity in studying the largest scales of turbulent flows. This is especially true whenever a time scale separation is present, in which case it can be expected that an effective slow equation governs the large scale flow evolution (see equation ). Not only are the averaged momentum flux (the Reynolds stress) and the averaged advection terms essential, but so are their fluctuations (which we call the Reynolds stress fluctuations).
We studied the case of a zonal jet for the barotropic equation on a sphere, in a regime for which time scale separation is relevant. For this case, we showed that the equal-time (without time average) advection term has a probability distribution whose typical fluctuations are very large compared to the average, and which has heavy tails. These probability distribution functions have exponential tails, both for the quasilinear and fully non-linear dynamics cases. For quasilinear dynamics we gave a simple explanation for these exponential tails.
When one is interested in the low frequency evolution of the jet, these high frequency fluctuations of the advection term and momentum fluxes are not relevant. We discussed that the natural quantity to study is the large deviation rate function for the time averaged advection term (which we call the Reynolds stress large deviation rate function). We have proposed two methods to compute this rate function. The first is an empirical method, directly from the time series of the advection term, that could be applied to any dynamics. Second, we showed that for the quasilinear dynamics, the Reynolds stress large deviation rate function can be computed as the contraction of a solution of a matrix Riccati equation. We demonstrated that such a computation can be performed by generalizing classical algorithms used to solve Lyapunov equations. Solving the matrix Riccati equation is much more computationally efficient, by several orders of magnitude, than accumulating statistics by numerical simulation, and gives direct and easy access to the probability of rare events. The approach is, however, limited to the quasilinear dynamics so far.
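To give a flavour of the solver machinery involved, here is a generic SciPy call for a continuous algebraic Riccati equation on a toy $2\times 2$ instance. This is our illustration only; it is not the specific equation (\[eq:NL-Lya\]) of this paper, which has a different structure:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Toy instance of the standard CARE: A^T X + X A - X B R^{-1} B^T X + Q = 0
A = np.array([[0.0, 1.0], [-1.0, -0.5]])
B = np.eye(2)
Q = np.eye(2)
R = np.eye(2)
X = solve_continuous_are(A, B, Q, R)

# check the residual of the quadratic matrix equation
residual = A.T @ X + X @ A - X @ B @ np.linalg.inv(R) @ B.T @ X + Q
```

Direct algebraic solvers of this kind are the reason why the Riccati route is several orders of magnitude cheaper than accumulating statistics by simulation.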
We discussed the Reynolds stress large deviation rate again for the specific case of a zonal jet that arises in turbulent barotropic flow on the rotating sphere. We illustrated the computation of the Reynolds stress large deviation rate, both using the empirical method and the Riccati equation. These two approaches give a very good agreement. This large deviation rate function clearly illustrates the existence of non-Gaussian fluctuations. The non-Gaussian fluctuations are much rarer than Gaussian ones for positive values of the Reynolds stress component and much less rare than Gaussian for negative values.
Our work illustrates the possibility to compute Reynolds stress large deviation rate functions. It opens up a number of perspectives. A next step would be to study the spatial structure of the Reynolds stress fluctuations, and describe it from a fluid mechanics perspective. It would help to answer the following questions: What are the dominant spatial patterns for the fluctuations of the Reynolds stresses? What causes them? What is their effect on the low frequency variability of the large scale flow? The most interesting application of the Reynolds stress large deviation rate functions may be the study of rare long term evolutions of the large scale flow. For instance, in many examples, rare transitions between turbulent attractors have been observed, leading to a bistability phenomenology. In order to study such a bistability phenomenology quantitatively, for instance in order to compute transition rates and transition paths between attractors, one could consider equation (\[eq:Slow\_Stochastic\_Dynamics\]) in the framework of Freidlin–Wentzell theory. The large deviation rate function we studied in this work would then be the basic building block, which would allow one to define an action to be minimized in order to compute transition paths and transition rates. In order to compute the action, the large deviation rate function should then be computed for any flow $U$ along a possible transition path, as described in section \[sec:LD-numerical-results\] for a single example of a flow $U$.
An essential question, at a more mathematical level, is the validity of the quasilinear approximation as far as rare events are concerned. The self consistency of the quasilinear approach has been discussed theoretically by focusing on the average Reynolds stress [@Bouchet_Nardini_Tangarife_2013_Kinetic]. This point has also been verified numerically in this work, through the study of properties of the energy balance (see section \[sub:energy-balance\]) and through the verification of the fact that the linear equation correlation time has a limit when $\alpha \rightarrow 0$ (see section \[sub:Empirical-validation-of\]). However, this does not necessarily imply that the quasilinear approximation is self-consistent as far as fluctuations, and more specifically rare fluctuations, are concerned. This could be addressed by studying the properties of solutions to the Riccati equation in the limit $\alpha\to0$ to assess whether or not the small scale dissipative mechanism (either viscosity or hyperviscosity) affects the statistics of the rare fluctuations. This problem is left as a prospect for future work.
The research leading to these results has received funding from the European Research Council under the European Union’s Seventh Framework Programme (FP7/2007-2013 Grant Agreement No. 616811) (F. Bouchet and T. Tangarife) and from the US NSF under Grant No. DMR-1306806 (J. B. Marston). J. B. Marston would also like to thank the Laboratoire de Physique de l’ENS de Lyon and CNRS for hosting a visit where some of this work was carried out. We thank the reviewers for their extremely careful reading of our paper and for their useful suggestions.
[^1]: A program that implements spectral DNS for the non-linear and quasi-linear equations, solves the non-linear Riccati equation, and includes graphical tools to visualize statistics, is freely available. The application “GCM” is available for OS X 10.9 and higher on the Apple Mac App Store at URL http://appstore.com/mac/gcm
[^2]: We can restrict ourselves to real $\xi_{m}$ by decomposing $\omega_{m}$ and $\psi_{m}$ into real and imaginary parts.
[^3]: The application “GCM” integrates the equation \[eq:Lyapunov\] and the effective dynamics \[eq:CE2\].
[^4]: Note that the ordering of products with $L_{x}$ and $L_{x}^{T}$ differs between and .
[^5]: Explicitly, $$\mathbb{E}_{\frac{T}{\Delta t}}\left[\left(\int_{t_{0}}^{t_{0}+\Delta t}\left(R(s)-\bar{R}_{T}\right)\,\mbox{d}s\right)^{2}\right]=\frac{\Delta t}{2T}\sum_{k=0}^{\frac{2T}{\Delta t}-2}\left(\int_{k\Delta t/2}^{k\Delta t/2+\Delta t}\left(R(u)-\bar{R}_{T}\right)\,\mbox{d}u\right)^{2}\,,\label{eq:empirical-average-welch}$$ assuming for simplicity that $T/\Delta t$ is an integer. Generalisations to any $T,\Delta t$ are straightforward, replacing $2T/\Delta t$ by its floor value. The 50% overlap is suggested by Welch’s estimator of the power spectrum of a random process [@welch1967use].
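A discrete-time sketch of this overlapping-window estimator (our illustration, with a hypothetical helper name, assuming a unit sampling step so that a window of $\Delta t$ contains `dt_samples` points):

```python
import numpy as np

def overlapping_block_second_moment(r, dt_samples):
    """Estimate E[(int over Dt of (R - Rbar) ds)^2] from a discretely
    sampled series `r` (unit time step), using 50%-overlapping windows
    of length `dt_samples`, in the spirit of Welch's estimator."""
    r = np.asarray(r, dtype=float)
    rbar = r.mean()
    half = dt_samples // 2
    n_windows = (len(r) - dt_samples) // half + 1
    sums = [np.sum(r[k * half : k * half + dt_samples] - rbar)
            for k in range(n_windows)]
    return float(np.mean(np.square(sums)))

# For white noise the estimate should be close to the window length:
rng = np.random.default_rng(0)
est = overlapping_block_second_moment(rng.standard_normal(200_000), 100)
```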
[^6]: The statistical error bars for $p_{fluct}$ are computed from the error in the estimation of $Z[U](\phi)$, which is similar to the estimation of the correlation time $\tau$ described in section \[sub:Estimation-of-the\]. The statistical error bars for $p_{mean}$ are computed from the error in the estimation of the average $F$, given by $(\delta F)^{2}=\frac{1+2\tau/\Delta t}{N}\mbox{var}(F)$ where $\tau$ is the autocorrelation time of $F$, $\Delta t$ the time step between measurements of the Reynolds’ stress and $N$ the total number of data points [@newman1999monte].
[^7]: In this spherical geometry the maximum is taken over the inner jet region $\phi\in[-\pi/7,\pi/7]$.
[^8]: The time $t$ in the upper and lower bounds of the integral in are not relevant here, as we are considering the statistically stationary state of .
[^9]: Here the Legendre-Fenchel transform is estimated as $\mathcal{L}_{m,\ell}(\dot{\omega}_z)=\theta^\star \cdot \dot{\omega}_z - H_{m,\ell}\left(\theta^\star\right) $ where $\theta^\star$ is the solution of $\dot{\omega}_z = \partial_\theta H_{m,\ell}\left(\theta^\star\right) $. Other estimators could be considered [@rohwer2014convergence].
[^10]: The value of $\Delta t$ used in this estimation is the optimal one $\Delta t^\star$, defined in section \[sec:LD-estimation-SCGF\].
---
abstract: 'We obtain a [*complete set*]{} of one-loop RGE’s for a set of combinations of neutrino parameters for the case of two-fold degenerate hierarchical three-neutrino models. The requirement of consistency of exact solutions to these RGE’s with the two-fold degeneracy yields conditions which have previously been obtained perturbatively/numerically. These conditions, in the limit $|U_{e\nu_{3}}|=0$, are shown to lead to a strong cancellation in the matrix element of neutrinoless double beta decay.'
author:
- 'Mu-Chun Chen'
- 'K.T. Mahanthappa'
bibliography:
- 'ref2.bib'
title: 'Implications of the Renormalization Group Equations in Three-Neutrino Models with Two-fold Degeneracy'
---
It has been a puzzle that the mixing in the leptonic sector is so large while the mixing in the quark sector is so small. Many attempts have been made to explain this fact. One possible scenario is to utilize the flavour symmetry combined with GUT symmetry at some high energy scale $\Lambda$ [@Albright:2000sz]. Viable symmetries are those giving rise to large mixing in the lepton mass matrices. Most models of this kind suffer from fine-tuning and the difficulty of constructing a viable superpotential in the flavour symmetry sector that gives rise to the required vacua. An alternative to this scenario is the idea of infrared fixed point (IRFP) [@Pendleton:1981as]. Contrary to the idea of flavour combined with GUT symmetry, in the IRFP scenario, the low energy physics is governed by the low energy dynamics, namely, the renormalization group equations below the scale $\Lambda$. Physics above the scale $\Lambda$ plays no role in the predictions at low energies. Therefore, if there exists any IRFP which leads to viable phenomenology, one does not have to deal with the fine-tuning problem and the difficulty of finding the correct vacua. The focus of our attention in this note is the hierarchical three neutrino models with two-fold degeneracy, and the implications of the exact solutions to one-loop renormalization group equations (RGE’s) for these models.
In the flavour basis where the charged leptons are diagonal, the neutrino flavour eigenstates and mass eigenstates are related by $|\nu_{\alpha}> =
U_{\alpha i} |\nu_{i}>$, where $\alpha$ and $i$ are the flavour and mass eigenstate indices respectively. The mass matrix $m_{\nu}$ can be diagonalized as follows $$U^{T}m_{\nu}U = diag(m_{1},m_{2},m_{3})$$ We adopt the usual parametrization for the leptonic mixing matrix $U$ $$U=diag(e^{i\delta_{e}},e^{i\delta_{\mu}},e^{i\delta_{\tau}}) \cdot V \cdot
diag(e^{-i\phi/2},e^{-i\phi^{'}/2},1)$$ $$V=\left(
\begin{array}{ccc}
c_{2}c_{3} & c_{2}s_{3} & s_{2}e^{-i\delta}\\
-c_{1}s_{3}-s_{1}s_{2}c_{3}e^{i\delta} &
c_{1}c_{3}-s_{1}s_{2}s_{3}e^{i\delta} & s_{1}c_{2}\\
s_{1}s_{3}-c_{1}s_{2}c_{3}e^{i\delta} &
-s_{1}c_{3}-c_{1}s_{2}s_{3}e^{i\delta} & c_{1}c_{2}
\end{array}
\right)$$ where $c_{i} \equiv \cos \theta_{i}$ and $s_{i} \equiv
\sin \theta_{i}$ and $0 \leq \theta_{i} \leq \pi/2$. The $\delta_{e,\mu,\tau}$ are three unphysical phases which can be absorbed by phase redefinition of the neutrino flavour eigenstates. There are three physical phases: $\delta$ is the universal phase (analog of the phase in the CKM matrix), and $\phi$ and $\phi^{'}$ are the Majorana phases. By properly choosing the phases $\phi$ and $\phi^{'}$ all three mass eigenvalues $m_{i}$ can be made positive. We therefore assume, without loss of generality, that $(m_{1}, m_{2}, m_{3})$ are positive. If any of these three phases is not zero or not $\pi$, CP violation in the lepton sector is implied. Note that in the limit $\theta_{2} = 0$, $\theta_{1}$ is identified as the atmospheric mixing angle, $\theta_{atm}$, and $\theta_{3}$ is identified as the solar mixing angle, $\theta_{\odot}$. In general, the mixing matrix elements are related to the physical observables, the atmospheric and solar mixing angles, by $\sin^{2}2\theta_{atm} \equiv 4|U_{\mu 3}|^{2}(1-|U_{\mu3}|^{2})$, and $\sin^{2}2\theta_{\odot} \equiv 4|U_{e2}|^{2}(1-|U_{e2}|^{2})$. Recent results indicate [@Gonzalez-Garcia:2000sq] that for atmospheric neutrino oscillations, $\Delta m_{atm}^{2} = 3.1 \times 10^{-3} eV^{2}$, $\sin^{2}2\theta_{atm}= 0.972$ [@Fukuda:1998mi]; for the solar neutrino problem, there exist four solutions: (i) VO: $\Delta m_{\odot}^{2} =8.0 \times 10^{-11} eV^{2}$, $\sin^{2}2\theta_{\odot}=0.75$, (ii) LOW: $\Delta m_{\odot}^{2} = 7.9 \times 10^{-8} eV^{2}$, $\sin^{2}2\theta_{\odot}=0.96$, (iii) LAMSW: $\Delta m_{\odot}^{2} = 1.8 \times 10^{-5} eV^{2}$, $\sin^{2}2\theta_{\odot}=0.76$, (iv) SAMSW: $\Delta m_{\odot}^{2} = 5.4 \times 10^{-6} eV^{2}$, $\sin^{2}2\theta_{\odot}=6.0 \times 10^{-3}$ [@solar:2000]; and the matrix element $|U_{e3}|=\sin \theta_{2}$ is constrained by the CHOOZ experiment to be $|U_{e3}| < 0.16$ [@Apollonio:1999ae].
The observed relation $\Delta m_{atm}^{2} \equiv |m_{3}^{2}-m_{2}^{2}| \gg \Delta m_{\odot}^{2}
\equiv |m_{2}^{2}-m_{1}^{2}|$ in the two-fold degenerate, hierarchical model implies $m_{3} \gg m_{2} \simeq m_{1}$, and this in turn implies $\nabla_{21} \gg \nabla_{32} \simeq \nabla_{31} \simeq 1$ with $\nabla_{ij} \equiv (m_{i}+m_{j})(m_{i}-m_{j})^{-1}$.
We assume that the neutrino masses are generated by a dimension-5 effective Majorana mass operator in the MSSM $$\label{lagrangian}
\mathcal{L} \supset - k_{ij} (H_{u} L_{i}) (H_{u} L_{j}) + h.c.$$ The neutrino mass matrix $(m_{\nu})_{ij}$ is related to $k_{ij}$ by $(m_{\nu})_{ij} = k_{ij} v^{2} / 2$, where $v^{2} \equiv
v_{u}^{2} + v_{d}^2 = (246 \, GeV)^{2}$ is the squared vacuum expectation value of the SM Higgs. The effective dimension-5 operator is generated by some mechanism at the high energy scale $\Lambda$. The seesaw mechanism is the most common way to generate this operator. Since we are only interested in physics below the scale $\Lambda$, we will start with the effective Lagrangian Eq. (\[lagrangian\]) without specifying the origin of this effective operator.
The general one-loop RGE of the effective left-handed Majorana neutrino mass operator is given by [@Chankowski:1993tx] $$\label{rgem}
\frac{d m_{\nu}}{dt}=-\{\kappa_{u}m_{\nu}+m_{\nu}P+P^{T}m_{\nu}\}$$ where $t \equiv \ln \mu$. In the MSSM, $P$ and $\kappa_{u}$ are given by, $$\begin{aligned}
P & = & -\frac{1}{32\pi^{2}} \frac{Y_{e}^{\dagger}Y_{e}}{\cos^{2} \beta}
\simeq -\frac{1}{32\pi^{2}} \frac{h_{\tau}^{2}}{\cos^{2}\beta} diag(0,0,1)\\
\kappa_{u} & = &
\frac{1}{16\pi^{2}}[\frac{6}{5}g_{1}^{2} + 6g_{2}^{2}
- 6 \frac{Tr(Y_{u}^{\dagger}Y_{u})}{\sin^{2}\beta}] \nonumber\\
& \simeq &
\frac{1}{16\pi^{2}}[\frac{6}{5}g_{1}^{2} + 6g_{2}^{2}
- 6 \frac{h_{t}^{2}}{\sin^{2}\beta}] \end{aligned}$$ where $g_{1}^{2}=\frac{5}{3}g_{Y}^{2}$ is the $U(1)$ gauge coupling constant, $Y_{u}$ and $Y_{e}$ are the $3 \times 3$ Yukawa coupling matrices for the up-quarks and charged leptons respectively, and $h_{t}$ and $h_{\tau}$ are the SM $t$- and $\tau$-Yukawa couplings. Since $\kappa_{u}$ gives rise to an overall rescaling of the mass matrix, it has no effect on the running of the mixing matrix $U$. Eq. (\[rgem\]) can be solved analytically by integrating out its right-hand side [@Ellis:1999my]. Note that at one-loop level, since the evolutions of the gauge coupling constants $g_{1,2}(t)$ and of the diagonal Yukawa couplings $h_{t,\tau}(t)$ are known, it is indeed possible to carry out the integrations on the right-hand side without making any further assumptions. However, the diagonalization procedure of the resulting $3 \times 3$ complex symmetric matrix, $m_{\nu}(t)$, is very complicated. It is thus hard to infer analytically the behaviours of the physical observables, the mixing angles and phases. An alternative to this “run-and-diagonalize” procedure is the “diagonalize-and-run” procedure. It is convenient to work with the RGE’s of mass eigenvalues and the diagonalization matrix, given by [@Casas:1999tg] $$\label{rge:m} \frac{d m_{i}}{dt}=-2m_{i}\hat{P_{ii}}-m_{i}Re\{\kappa_{u}\}$$ $$\label{rge:u} \frac{dU}{dt}=UT$$ where $$T_{ii} \equiv i\hat{Q_{ii}}$$ $$\begin{aligned}
T_{ij} & \equiv &
( \frac{1}{ m_{i}^{2} - m_{j}^{2} } ) \{ ( m_{i}^{2} + m_{j}^{2} )
\hat{P_{ij}} + 2m_{i}m_{j} \hat{P_{ij}^{\ast}} \} + i \hat{Q_{ij}} \nonumber\\
& = & \nabla_{ij} Re\{\hat{P_{ij}}\} + i \nabla_{ij}^{-1} Im\{\hat{P_{ij}}\}
+ i\hat{Q_{ij}}\end{aligned}$$ Here $\hat{P}$ and $\hat{Q}$ are defined as $$\hat{P} \equiv \frac{1}{2} U^{\dagger} (P+P^{\dagger})U, \quad
\hat{Q} \equiv \frac{-i}{2} U^{\dagger} (P-P^{\dagger})U$$ Eq. (\[rgem\]), (\[rge:m\]), and (\[rge:u\]) have been studied before , but the analyses have been done either numerically or perturbatively. (Exact solutions to the RGE’s in a two flavour case have been investigated recently in [@Balaji:2000ma]). Due to the large interfamily hierarchy in the charged lepton sector, we keep only the $\tau$-Yukawa coupling. We will further assume that $h_{\tau}$ does not evolve throughout the entire range of the RG running. This is a valid assumption for the hierarchical case with two-fold degeneracy, as $\nabla_{21}$ is very large. Under these assumptions, the above quantities are given in terms of the masses and the diagonalization matrix elements as $$\hat{P_{ij}}=-\frac{h_{\tau}^{2}}{32\pi^{2}}U_{3i}^{\ast}U_{3j}, \quad
\hat{Q}=0$$
The evolutions of $\nabla_{ij}$, $(\hat{P_{ii}}-\hat{P_{jj}})$ and $Re(\hat{P_{ij}})$ can be derived from Eq. (\[rge:m\]) and (\[rge:u\]). For $(i,j)=(2,1)$, with $\nabla_{21} \gg \nabla_{31} \simeq \nabla_{32} \simeq 1$, the RGE’s for these three functions form a [*complete set*]{} of coupled differential equations as follows:
\[coupled\] $$\begin{aligned}
\frac{d\nabla_{21}}{dt} & = & \nabla_{21}^{2} (\hat{P_{22}}-\hat{P_{11}})
\label{equationa}
\\
\frac{d(\hat{P_{22}}-\hat{P_{11}})}{dt} &
= & -4\nabla_{21}[Re(\hat{P_{21}})]^{2}
\label{equationb}
\\
\frac{d Re( \hat{P_{21}})}{dt} & =& \nabla_{21}
( \hat{P_{22}} - \hat{P_{11}}) Re( \hat{P_{21}} )
\label{equationc}
\end{aligned}$$
The exact solutions to these coupled differential equations are given by
\[sol\] $$\begin{aligned}
\nabla_{21}(t) & = & a_{0}Z(t)^{-1/2}
\label{equationa}
\\
(\hat{P_{22}}(t)-\hat{P_{11}}(t)) & =
& (b_{0}^{2}+4c_{0}^{2}(1-Z(t)^{-1}))^{1/2}
\label{equationb}
\\
Re(\hat{P_{21}}(t)) & = & c_{0}Z(t)^{-1/2}
\label{equationc}
\end{aligned}$$
where $$Z(t)\equiv 1-2a_{0}b_{0}t+a_{0}^{2}(b_{0}^{2}+4c_{0}^{2})t^{2}$$ and $a_{0}$, $b_{0}$ and $c_{0}$ are the initial values at the high energy scale $\Lambda$: $$\begin{aligned}
a_{0} & \equiv & \nabla_{21}(0); \quad
b_{0} \equiv (\hat{P_{22}}(0)-\hat{P_{11}}(0))\nonumber\\
c_{0} & \equiv & Re(\hat{P_{21}}(0)).\end{aligned}$$ The behaviours of these three functions are shown in Fig. (\[sol.1\])-(\[sol.3\]). Note that $\nabla_{21}(t)$ and $Re(\hat{P_{21}}(t))$ flow to zero, while $(\hat{P_{22}}(t)-\hat{P_{11}}(t))$ flows to a constant value of $(b_{0}^2 + 4c_{0}^2)^{1/2}$ in the infrared. This set of parameters, $(\nabla_{21}(t^{\ast}), Re(\hat{P_{21}}(t^{\ast})),
(\hat{P_{22}}(t^{\ast})-\hat{P_{11}}(t^{\ast})))
= (0,0,(b_{0}^2 + 4c_{0}^2)^{1/2})$ is an infrared stable fixed point; however, it is unrealistic. The function $\nabla_{21}(t)$ decreases to $O(1)$ very fast as the energy scale goes down, for any non-vanishing $b_{0}$ and $c_{0}$, however small they are. This is phenomenologically unacceptable. In addition, it contradicts the assumption $\nabla_{21} \gg \nabla_{32},
\nabla_{31}$ we made in order to arrive at Eq. (\[coupled\]). For the consistency of the calculations, we thus require the following two conditions at the initial high energy scale $\Lambda$: $$\begin{aligned}
b_{0} &\equiv&
(\hat{P_{22}}(0)-\hat{P_{11}}(0)) = 0 \label{relation1}\\
c_{0} & \equiv &
Re(\hat{P_{21}}(0)) = 0
\label{relation2} \end{aligned}$$ We emphasize that these conditions have been obtained by demanding that the exact solutions to the above RGE’s Eq. (\[coupled\]) be consistent with $\nabla_{21} \gg 1$. It is to be noted that these conditions have been obtained before numerically and perturbatively [@Casas:1999tg; @Casas:1999ac; @Chankowski:1999xc]. When these conditions are satisfied, all three equations in Eq. (\[coupled\]) do not evolve. The first relation, Eq. (\[relation1\]), gives rise to $|V_{32}|^{2} = |V_{31}|^{2}$ which translates into $$\frac{c_{1}^{2}s_{2}^{2}-s_{1}^{2}}{\sin
2\theta_{1} \cdot s_{2}} = \tan 2\theta_{3} \cdot \cos \delta$$ The second relation, Eq. (\[relation2\]), gives rise to $$\begin{aligned}
\frac{Re(V_{32}^{\ast}V_{31})}{Im(V_{32}^{\ast}V_{31})}
& = & \tan(\frac{\phi^{'}-\phi}{2})\\
& = & \frac{\sin 2\theta_{3}
(c_{1}^{2} s_{2}^{2} - s_{1}^{2})}
{\sin \delta \cdot \sin 2\theta_{1} \cdot s_{2}}
+ \frac{\cos 2\theta_{3}}{\tan \delta} \nonumber \end{aligned}$$ Combining these two relations, we obtain a very simple relation among $\theta_{3}$ and three CP violating phases $\delta, \phi, \phi^{'}$: $$\cos 2\theta_{3} = -\frac{1}{\tan \delta} \cdot
\frac{1}{\tan(\frac{\phi-\phi^{'}}{2})}$$ We have studied the RGE’s involving various functions, $\nabla_{ij}$, $(\hat{P_{ii}}-\hat{P_{jj}})$ and $Re(\hat{P_{ij}})$, for the case $(i,j)=(3,1)$ and $(3,2)$. Upon imposing the above consistency conditions Eq. (\[relation1\]) and (\[relation2\]), we deduce that the functions $\hat{P_{11}}$, $\hat{P_{22}}$, $\hat{P_{33}}$, $Re(\hat{P_{31}})$ and $Re(\hat{P_{32}})$ do not run. These results cannot be tested experimentally at present.
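As a quick numerical sanity check (ours, not part of the original analysis), one can integrate the coupled RGE's Eq. (\[coupled\]) towards the infrared and compare with the closed-form solutions Eq. (\[sol\]); the closed form solves the system as long as $b_{0}-a_{0}(b_{0}^{2}+4c_{0}^{2})t$ stays positive, which holds for $t\leq 0$ with positive $b_{0}$ and $c_{0}$. The initial values below are illustrative:

```python
import numpy as np
from scipy.integrate import solve_ivp

def rge_rhs(t, y):
    # y = (nabla21, diff, rep) with diff = P22_hat - P11_hat and
    # rep = Re(P21_hat); right-hand sides of the coupled one-loop RGE's
    nabla, diff, rep = y
    return [nabla ** 2 * diff, -4.0 * nabla * rep ** 2, nabla * diff * rep]

a0, b0, c0 = 10.0, 1.0, 1.0  # illustrative initial values at Lambda

def closed_form(t):
    # exact solutions in terms of Z(t) (b0 > 0 assumed)
    Z = 1.0 - 2 * a0 * b0 * t + a0 ** 2 * (b0 ** 2 + 4 * c0 ** 2) * t ** 2
    return np.array([a0 / np.sqrt(Z),
                     np.sqrt(b0 ** 2 + 4 * c0 ** 2 * (1 - 1 / Z)),
                     c0 / np.sqrt(Z)])

# run towards the infrared (decreasing t = ln mu)
sol = solve_ivp(rge_rhs, (0.0, -1.0), [a0, b0, c0], rtol=1e-10, atol=1e-12)
err = np.max(np.abs(sol.y[:, -1] - closed_form(-1.0)))
```

The discrepancy `err` is at the level of the integration tolerances, confirming the exactness of the solutions in this regime.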
Now we discuss the implications of Eq. (\[relation1\]) and (\[relation2\]) in the limit $\theta_{2} = 0$ (Recently, it has been pointed out that this could be a consequence of the so-called 2-3 symmetry [@Lam:2001fb]). They imply $$\label{condition}
\cos (\frac{\phi-\phi^{'}}{2}) s_{1}^2 c_{3} s_{3} = 0, \qquad
s_{1}^{2} (c_{3}^{2} - s_{3}^{2}) = 0$$ Since the atmospheric angle $\theta_{1} = \pi / 4$ is non-vanishing, these two relations can be satisfied simultaneously only if (i) the solar mixing angle is maximal, i.e. $\theta_{3} = \pi / 4$, and (ii) the Majorana phase difference $(\phi-\phi^{'})=\pi$. The phases $\phi$ and $\phi^{'}$ occur in the matrix element $\left< M_{ee} \right>$ for the neutrinoless double beta decay: $$\begin{aligned}
\left< M_{ee} \right> &\equiv&| \sum_{i=1,2,3} U_{ei}^{2} m_{i} |
\\
&=&|m_{1} e^{-i\phi} c_{2}^{2} c_{3}^{2} +
m_{2} e^{-i\phi^{'}} c_{2}^{2} s_{3}^{2} +
m_{3} s_{2}^{2} e^{-2i\delta} | < B \nonumber\end{aligned}$$ where index $i$ denotes the mass eigenstates. Currently, the most stringent bound is given by $B=0.2 eV$ [@Baudis:1999xd]. In the limit $\theta_{2}=0$ with nearly degenerate $m_{1} \simeq m_{2}$, $\left< M_{ee} \right>$ becomes $$\label{0nub}
\left< M_{ee} \right> \simeq | m_{1} c_{2}^{2} ( e^{-i\phi} c_{3}^{2} +
e^{-i\phi^{'}} s_{3}^{2}) |$$ It is obvious that when $(\phi-\phi^{'})=\pi$ and $\theta_{3}=\pi/4$ the r.h.s. of Eq. (\[0nub\]) is exactly zero. Thus we conclude that neutrinoless double beta decay is very highly suppressed.
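A short numerical check of this cancellation (our illustration; the helper name and the mass values are ours):

```python
import numpy as np

def M_ee(m1, m2, m3, theta2, theta3, phi, phip, delta):
    # |m1 e^{-i phi} c2^2 c3^2 + m2 e^{-i phi'} c2^2 s3^2
    #  + m3 s2^2 e^{-2i delta}|
    c2, s2 = np.cos(theta2), np.sin(theta2)
    c3, s3 = np.cos(theta3), np.sin(theta3)
    return abs(m1 * np.exp(-1j * phi) * c2 ** 2 * c3 ** 2
               + m2 * np.exp(-1j * phip) * c2 ** 2 * s3 ** 2
               + m3 * s2 ** 2 * np.exp(-2j * delta))

# theta2 = 0, theta3 = pi/4 and phi - phi' = pi: the two nearly
# degenerate terms cancel; with generic phases they do not
suppressed = M_ee(0.1, 0.1, 0.05, 0.0, np.pi / 4, np.pi, 0.0, 0.3)
generic = M_ee(0.1, 0.1, 0.05, 0.0, np.pi / 4, 0.0, 0.0, 0.3)
```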
It is interesting to speculate on the reason(s) for the consistency conditions of Eq. (\[relation1\]) and (\[relation2\]). It could be due to the existence of a symmetry at a high energy scale $\Lambda$. The other possibility is that these two relations are the fixed point relations of the RGE’s for new physics above the scale $\Lambda$.
\[\]\[\][$\nabla_{21}(t)$]{} ![The function $\nabla_{21}(t)$. Initial values at $\Lambda$ are $(a_{0},b_{0},c_{0})=(1000,1,1)$.[]{data-label="sol.1"}](sol.1.eps "fig:")
\[\]\[\][$(\hat{P_{22}}(t)-\hat{P_{11}}(t))$]{} ![The function $(\hat{P_{22}}(t)-\hat{P_{11}}(t))$. Initial values at $\Lambda$ are $(a_{0},b_{0},c_{0})=(1000,1,1)$.[]{data-label="sol.2"}](sol.2.eps "fig:")
\[\]\[\][$Re(\hat{P_{21}}(t))$]{} ![The function $Re(\hat{P_{21}}(t))$. Initial values at $\Lambda$ are $(a_{0},b_{0},c_{0})=(1000,1,1)$[]{data-label="sol.3"}](sol.3.eps "fig:")
This work was supported, in part, by the U.S. Department of Energy under grant number DE FG03-05ER40894.
---
abstract: 'We present a new algorithm based on a gradient ascent for a general Active Exploration bandit problem in the fixed confidence setting. This problem encompasses several well studied problems such as Best Arm Identification or Thresholding Bandits. The algorithm relies on a new sampling rule based on an online lazy mirror ascent. We prove that this algorithm is asymptotically optimal and, most importantly, computationally efficient.'
author:
- |
Pierre Ménard\
Institut de Mathématiques de Toulouse, Université de Toulouse, Toulouse\
IRT Saint Exupéry, Toulouse\
`pierre.menard@univ-toulouse.fr`\
bibliography:
- 'biblio-BLB.bib'
title: Gradient Ascent for Active Exploration in Bandit Problems
---
Introduction
============
Several recent and less recent analyses of bandit problems share the remarkable feature that an instance-dependent lower-bound analysis permits one to show the existence of an *optimal proportion of draws*, which every efficient strategy needs to match, and which is used as a basis for the design of optimal algorithms. This is the case in Active Exploration bandit problems, see @chernoff1959sequential, @soare2014best, @russo2016simple and @garivier2016optimal, but also for the Regret Minimization bandit problems, from the simplest multi-armed bandit setting @garivier2018explore to more complex settings @lattimore2017end, @combes2017minimal. To reach the asymptotic lower bounds one needs to sample asymptotically according to this optimal proportion of draws. A natural strategy is to sample according to the optimal proportion of draws associated with the current estimate of the true parameter, with some extra exploration. See for example @antos2008active, @garivier2016optimal, @lattimore2017end and @combes2017minimal. This strategy has a major drawback: computing the optimal proportion of draws requires solving an often involved *concave optimization problem*. Thus, this can lead to a rather computationally inefficient strategy, since one must exactly solve a new concave optimization problem at each step.
In this paper we propose to use instead a gradient ascent to solve the optimization problem in an online fashion, thus merging the Active Exploration problem and the computation of the optimal proportion of draws. Precisely, we perform an online lazy mirror ascent, see @shalev2012online, @bubeck2011introduction, adding a new link between stochastic bandits and online convex optimization. Hence, it is sufficient to compute only a (sub-)gradient at each step, which greatly improves the computational complexity. As a byproduct, the obtained algorithm is quite generic and can be applied in various Active Exploration bandit problems, see Appendix \[app:examples\].
The paper is organized as follows. In Section \[sec:problem\_description\] we define the framework. A general asymptotic lower bound is presented in Section \[sec:lower\_bound\]. In Section \[sec:intuition\] we motivate the introduction of the gradient ascent. The main result, namely the asymptotic optimality of Algorithm \[alg:gradient\_ascent\], and its proof compose Section \[sec:gradient\_ascent\]. Section \[app:examples\] gathers various examples that are described by the general setting introduced in Section \[sec:problem\_description\]. Section \[sec:experiments\] reports results of some numerical experiments comparing Algorithm \[alg:gradient\_ascent\] to its competitors. \[sec:intro\]
#### Notation.
For $K\in\operatorname{\mathbb{N}}^*$, let $[1,K]=\{1,\ldots,K\}$ be the set of integers between $1$ and $K$. We denote by $\Sigma_K$ the simplex of dimension $K-1$ and by $\{e_a\}_{a\in [1,K]}$ the canonical basis of $\operatorname{\mathbb{R}}^K$. A distribution on $[1,K]$ is assimilated to an element of $\Sigma_K$. The Kullback-Leibler divergence between two probability distributions $w,w'$ on $[1,K]$ is (with the usual conventions) $$\operatorname{kl}(w,w')=\sum_{a=1}^K w_a \log\!\!\left(\frac{w_a}{w'_a}\right)\,.$$
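A direct transcription of this divergence (our helper, implementing the usual convention $0\log(0/x)=0$):

```python
import numpy as np

def kl(w, wp):
    # Kullback-Leibler divergence between two distributions on [1, K],
    # with the convention 0 * log(0 / x) = 0.
    w, wp = np.asarray(w, float), np.asarray(wp, float)
    mask = w > 0
    return float(np.sum(w[mask] * np.log(w[mask] / wp[mask])))
```

For instance, `kl([1.0, 0.0], [0.5, 0.5])` equals $\log 2$.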
Problem description {#sec:problem_description}
-------------------
For $K\geq 2$, we consider a Gaussian bandit problem $\big( \operatorname{\mathcal{N}}(\mu_1,\sigma^2),\ldots,\operatorname{\mathcal{N}}(\mu_K,\sigma^2)\big)$, which we unambiguously refer to by the vector of means $ \mu=\big(\mu_1,\ldots,\mu_K\big)$. Without loss of generality, we set in the following $\sigma^2=1$. We denote by ${\mathcal{M}}$ the set of Gaussian bandit problems. Let $\operatorname{\mathbb{P}}_{\mu}$ and $\operatorname{\mathbb{E}}_{\mu}$ be respectively the probability and the expectation under the bandit problem $ \mu$.
We fix a finite number of subsets of bandit problems ${\mathcal{S}}_i \subset {\mathcal{M}}$ for $i \in {\mathcal{I}}$ with $|{\mathcal{I}}|<\infty$ and we assume that the subsets ${\mathcal{S}}_i$ are pairwise disjoint, open and convex. We will explain later why we need these assumptions on the sets ${\mathcal{S}}_i$. For a certain bandit problem $\mu$ in ${\mathcal{S}}:=\cup_{i\in{\mathcal{I}}}{\mathcal{S}}_i$ our objective is to identify to which set it belongs, i.e. to *find $i(\mu)$ such that $\mu \in S_{i(\mu)}$*. Namely, we consider algorithms that output a subset index ${\widehat{i}}\in {\mathcal{I}}$ after $\tau>0$ pulls. This setting is quite general and encompasses several Active Exploration bandit problems, see Section \[app:examples\].
Two approaches for this problem have been proposed: first, one may consider a given budget $\tau$ and try to minimize the probability of predicting a wrong subset index; this is the *Fixed Budget setting*, see @bubeck2012regret, @audibert2010best and @LocatelliGC16. The second approach is the *Fixed Confidence setting*, where we fix a confidence level $\delta$ and try to minimize the expected number of samples $\operatorname{\mathbb{E}}_\mu[\tau_\delta]$ under the constraint that the predicted subset index is the right one with probability at least $1-\delta$, see @chernoff1959sequential, @even2002pac, @mannor2004sample and @KaCaGa16. In this paper we will consider the second approach.
The game goes as follows: at each round $t\in\operatorname{\mathbb{N}}^*$ the agent chooses an arm $A_t \in \{1,\ldots,K\}$ and observes a sample $Y_t\sim\operatorname{\mathcal{N}}(\mu_{A_t},1)$ conditionally independent from the past. Let ${\mathcal{F}}_{t}=\sigma (A_1,Y_1,\ldots, A_t,Y_t)$ be the information available to the agent at time $t$. In order to respect the confidence constraint the agent must follow a *$\delta$-correct* algorithm comprised of:
- a *sampling rule* $(A_t)_{t\geq 1}$, where $A_t$ is ${\mathcal{F}}_{t-1}$-measurable,
- a *stopping rule* $\tau_\delta$, a stopping time for the filtration $({\mathcal{F}}_t)_{t\geq 1}$,
- a *decision rule* ${\widehat{i}}$ that is ${\mathcal{F}}_{\tau_\delta}$-measurable,
such that for all $\mu\in{\mathcal{S}}$ the fixed confidence condition $\operatorname{\mathbb{P}}_{\mu}\big({\widehat{i}}\neq i(\mu)\big)\leq \delta$ is satisfied and the algorithm stops almost surely, $\operatorname{\mathbb{P}}_{\mu}\big(\tau_\delta < \infty)= 1$. In this paper we focus our attention on the *sampling rule*, since stopping rules are now well understood and decision rules are straightforward to find.
Lower Bound {#sec:lower_bound}
-----------
The Kullback-Leibler divergence between two Gaussian distributions $\operatorname{\mathcal{N}}(\mu_1,1)$ and $\operatorname{\mathcal{N}}(\mu_2,1)$ is defined by $${\mathrm{d}}(\mu_1,\mu_2):=\frac{(\mu_1-\mu_2)^2}{2}\,.$$ The set of alternatives of the problem $\mu\in{\mathcal{S}}$ is denoted by ${\mathcal{A}\textit{lt}}(\mu):=\cup_{i\neq i(\mu)}{\mathcal{S}}_i$. One can prove the following generic asymptotic lower bound on the expected number of samples when the confidence level $\delta$ tends to zero, see @garivier2016optimal and @garivier2017thresholding.
\[th:lb\_asympt\_threshold\] For all $\mu\in{\mathcal{S}}$, for all $0<\delta<1/2$, $$\operatorname{\mathbb{E}}_{\mu}[\tau_\delta]\geq T^\star (\mu) \operatorname{kl}(\delta,1-\delta)\,,
\label{eq:LB_asymp}$$ where the characteristic time $T^\star (\mu)$ is defined by $$T^\star (\mu)^{-1}=\max_{w\in\Sigma_K}\inf_{\lambda \in {\mathcal{A}\textit{lt}}(\mu)} \sum_{a=1}^{K} w_a {\mathrm{d}}(\mu_a, \lambda_a)\,.
\label{eq:charateristic_time}$$ In particular, this lower bound implies that $$\liminf\limits_{\delta\rightarrow 0} \frac{\operatorname{\mathbb{E}}_{\mu}[\tau_\delta]}{\log(1/\delta)}\geq T^\star(\mu)\,.
\label{eq:LB_asymp_lim}$$
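The quantity $\operatorname{kl}(\delta,1-\delta)$ in the bound is the Kullback-Leibler divergence between Bernoulli distributions of parameters $\delta$ and $1-\delta$; it behaves like $\log(1/\delta)$ when $\delta$ goes to zero, which is why the asymptotic version of the bound features $\log(1/\delta)$. As a quick numeric sanity check (a sketch; the helper name is ours):

```python
import math

def kl_bernoulli(p, q):
    """Kullback-Leibler divergence between Bernoulli(p) and Bernoulli(q)."""
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

# kl(delta, 1 - delta) simplifies to (1 - 2*delta) * log((1 - delta) / delta),
# which approaches log(1/delta) as delta goes to zero.
for delta in (0.1, 0.01, 0.001):
    print(delta, kl_bernoulli(delta, 1 - delta), math.log(1 / delta))
```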
As already explained by @chernoff1959sequential, it is interesting to note that asymptotically we end up with a zero-sum game: the agent first plays a proportion of draws $w$, trying to maximize the sum appearing in the definition of the characteristic time, and then “nature” plays an alternative $\lambda$, trying to do the opposite. The value of this game is exactly $T^\star(\mu)^{-1}$. In the sequel we denote by $$\label{eq:def_F}
F(w,\mu):= \inf_{\lambda \in {\mathcal{A}\textit{lt}}(\mu)} \sum_{a=1}^{K} w_a {\mathrm{d}}(\mu_a, \lambda_a)\,,$$ the function that the agent needs to maximize against a “nature” that plays optimally. An algorithm is thus asymptotically optimal if the reverse of the lower bound holds with a limsup instead of a liminf.
Intuition: what is the idea behind the algorithm? {#sec:intuition}
-------------------------------------------------
To get an asymptotically optimal algorithm, the agent wants to play according to an optimal proportion of draws ${w^\star}(\mu)$, defined by $${w^\star}(\mu)\in \operatorname*{arg\,max}_{w\in\Sigma_K}\inf_{\lambda \in {\mathcal{A}\textit{lt}}(\mu)} \sum_{a=1}^{K} w_a {\mathrm{d}}(\mu_a, \lambda_a)\,,
\label{eq:def_w_star}$$ in order to minimize the characteristic time. But, of course, the agent does not have access to the true vector of means. One way to settle this problem is to track the optimal proportions computed at the current empirical means. Let ${\widehat{\mu}}(t)$ be the vector of empirical means at time $t$: $${\widehat{\mu}}_a(t)=\frac{1}{N_a(t)}\sum_{s=1}^{t} Y_s\, {\mathds{1}}_{\{A_s=a\}}\,,$$ where $N_a(t) = \sum_{s=1}^t {\mathds{1}}_{\{ A_s = a \}}$ denotes the number of draws of arm $a$ up to and including time $t$. We denote by $w(t) = N(t)/t$ the empirical proportion of draws at time $t$. Following this idea, the sampling rule could be $$A_{t+1}\in\operatorname*{arg\,max}_{a\in [1,K]} {w^\star}_a\big({\widehat{\mu}}(t)\big) -w_a(t)\,.$$ This rule is equivalent to the direct tracking rule (without forced exploration, see below) of @garivier2016optimal. But this approach has a major drawback: at each time $t$ we need to solve exactly the concave optimization problem defining ${w^\star}$. And it appears that in some cases we cannot solve it analytically, see for example @garivier2017thresholding. Even when there exists an efficient way to solve the optimization problem numerically, as for example in the Best Arm Identification problem, simpler and efficient algorithms give experimentally comparable results. We can cite for example Best Challenger type algorithms, see @garivier2016optimal and @russo2016simple.
The idea of our algorithm is best explained on the simple example of the Thresholding Bandit problem (see Section \[sec:thresholding\_bandit\]), where the set of all arms larger than the threshold ${\mathfrak{T}}$ is to be identified. There exists a natural and efficient sampling rule (see @LocatelliGC16): $$A_{t+1}\in\operatorname*{arg\,min}_{a\in[1,K]} N_a(t) {\mathrm{d}}\big({\widehat{\mu}}_a(t),{\mathfrak{T}}\big)\,.
\label{eq:sampling_rule_threshold}$$ It turns out that this sampling rule leads to an asymptotically optimal algorithm (we are not aware of a reference for this fact). In order to give an interpretation of this sampling rule, let us take one step back. In this problem we want to maximize, with respect to the first variable, the following concave function (see Section \[sec:thresholding\_bandit\]) $$\label{eq:def_F_thresholding}
F(w,\mu) = \min_{a\in[1,K]} w_a {\mathrm{d}}(\mu_a, {\mathfrak{T}})\,.$$ The sub-gradient of $F(\cdot,\mu)$ at $w$, denoted by $\partial F(w,\mu)$, is a convex combination of the vectors $$\nabla F(w,\mu)=\begin{bmatrix}
(0)\\
{\mathrm{d}}(\mu_b, {\mathfrak{T}})\\
(0)
\end{bmatrix}\!,$$ for the active coordinates $b$ that attain the minimum above. With this notation, the sampling rule can be rewritten in the following form $$e_{A_{t+1}}\in\operatorname*{arg\,max}_{w\in\Sigma_K} w\cdot \nabla F\big(w(t),{\widehat{\mu}}(t)\big)\,,$$ where $\nabla F\big(w(t),{\widehat{\mu}}(t)\big)$ is some element of the sub-gradient $\partial F\big(w(t),{\widehat{\mu}}(t)\big)$. Then the update of the empirical proportion of draws follows the simple rule $$\label{eq:update_franck_wolf}
w(t+1)= \frac{t}{t+1} w(t) + \frac{1}{t+1} e_{A_{t+1}}\,.$$ Surprisingly, we recognize here one step of the Frank-Wolfe algorithm [@frank1956algorithm] for maximizing the concave function $F\big(\cdot,{\widehat{\mu}}(t)\big)$ on the simplex. The exact same analysis can be done with a variant of the Best Challenger sampling rule for the Best Arm Identification problem; this is described in Section \[sec:BAI\]. It is not the first time that the Frank-Wolfe algorithm appears in the stochastic bandits field, see for example @berthet2017fast. Precisely, the aforementioned reference interprets the classical UCB algorithm as an instance of this algorithm with an “optimistic” gradient. The main difficulty here, which does not appear in the Regret Minimization problem, is that the function $F(\cdot,\mu)$ *is not smooth* in general (as an infimum of linear functions). Thus we cannot directly leverage the analysis of the Frank-Wolfe algorithm in our setting as @berthet2017fast do. In particular, it is not obvious that the sampling rule driven by the Frank-Wolfe algorithm converges to the maximum of $F(\cdot,\mu)$ for the general problem presented in Section \[sec:intro\], even in the absence of noise (i.e. $\sigma=0$).
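To make this rewriting concrete, the following minimal sketch (function names and numbers are ours) checks on an example that the thresholding sampling rule and the Frank-Wolfe vertex step select the same arm when the active coordinate is unique:

```python
import numpy as np

def kl_gauss(x, y):
    # Kullback-Leibler divergence between N(x, 1) and N(y, 1)
    return (x - y) ** 2 / 2

def thresholding_rule(N, mu_hat, thr):
    # A_{t+1} in argmin_a N_a(t) d(mu_hat_a(t), thr)
    return int(np.argmin(N * kl_gauss(mu_hat, thr)))

def frank_wolfe_vertex(w, mu_hat, thr):
    # The sub-gradient of F(w, mu) = min_a w_a d(mu_a, thr) is one-hot at the
    # active coordinate b; maximizing w' . grad over the simplex is attained
    # at the vertex e_b, i.e. the Frank-Wolfe step pulls arm b.
    b = int(np.argmin(w * kl_gauss(mu_hat, thr)))
    grad = np.zeros_like(w)
    grad[b] = kl_gauss(mu_hat[b], thr)
    return int(np.argmax(grad))

N = np.array([40.0, 25.0, 35.0])      # counts N_a(t), so t = 100
mu_hat = np.array([0.9, 0.4, -0.2])   # empirical means
assert thresholding_rule(N, mu_hat, 0.5) == frank_wolfe_vertex(N / N.sum(), mu_hat, 0.5)
```

Here both rules pick the arm whose empirical mean is closest to the threshold relative to its count, which is the least-informative arm so far.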
But we can keep the idea of using a concave optimizer in an online fashion instead of computing the optimal proportion of draws at each step. Indeed, there is a candidate of choice for optimizing non-smooth concave functions, namely *sub-gradient ascent*. The strategy is now clear: at each step we perform one step of sub-gradient ascent for the function $F\big(\cdot,{\widehat{\mu}}(t)\big)$ on the simplex. Nevertheless, the update of the proportion of draws will be more intricate than the simple update above: we will need to track the average of the weights proposed by the sub-gradient ascent and to force some exploration, see the next section for details. Note that this greatly improves the computational complexity of the algorithm, since one just needs to compute an element of the sub-gradient of $F$ at each time step. In various settings this computation is straightforward, see Appendix \[app:examples\]; in general, thanks to the particular form of the function $F$, it boils down to computing the projection of the vector of empirical means on the closure of the alternative sets. Since the sets ${\mathcal{S}}_i$ are convex, if the weights $w(t)$ are strictly positive (which will be the case in Algorithm \[alg:gradient\_ascent\]) the projection always exists.
Gradient Ascent {#sec:gradient_ascent}
===============
Before presenting the algorithm we need to fix some notation. Since ${\widehat{\mu}}(t)$ does not necessarily lie in the set ${\mathcal{S}}$, we first extend $F(w,\cdot)$ to the entire set ${\mathcal{M}}$ by setting $${\mathcal{A}\textit{lt}}(\mu) = \begin{cases}
{\mathcal{S}}\text{ if }\mu \notin {\mathcal{S}}\\
\bigcup_{i\neq i(\mu)} {\mathcal{S}}_{i} \text{ else}
\end{cases}
\,.$$ Then, $\nabla F(w,\mu)$ will denote some element of the sub-gradient $\partial F(w,\mu)$ of $F(\cdot,\mu)$ at $w$.
As motivated in Section \[sec:intuition\], we will perform a gradient ascent on the concave function $F\big(\cdot,{\widehat{\mu}}(t)\big)$ to drive the sampling rule. More precisely we use an online lazy mirror ascent (see @bubeck2015convex) on the simplex, using the Kullback-Leibler divergence to the uniform distribution $\pi$ as mirror map: $$\begin{aligned}
{\widetilde{w}}(t+1) = \operatorname*{arg\,max}_{w\in\Sigma_K} \eta_{t+1} \sum_{s=K}^{t} w\cdot \operatorname{\mathrm{Clip}}_s\!\Big(\nabla F\big({\widetilde{w}}(s),{\widehat{\mu}}(s)\big)\!\Big)-\operatorname{kl}(w,\pi) \,,\end{aligned}$$ where, for an arbitrary constant $M>0$, we clip the gradient: $\operatorname{\mathrm{Clip}}_t(x)=[\min(x_a,M\sqrt{t})]_{a\in[1,K]}$. This is just a technical trick to handle the fact that the gradient may not be uniformly bounded in the very first steps. In practice, however, this trick seems useless and we recommend ignoring it (that is, taking $M=+\infty$). There is a closed-form formula for the weights ${\widetilde{w}}(t+1)$, see Appendix \[app:proof\_online\_regret\]. Note that it is crucial here to use an anytime optimizer, since we do not know in advance when the algorithm will stop. Then we skew the weights ${\widetilde{w}}(t)$ toward the uniform distribution $\pi$ to force exploration: $$w'(t+1)=(1-\gamma_t) {\widetilde{w}}(t+1)+ \gamma_t \pi\,.$$ This trick is quite usual, appearing for example in the EXP3.P algorithm, see @bubeck2012regret. In some particular settings this extra exploration is not necessary, for example in the Thresholding Bandits problem. We believe that there is a more intrinsic way to perform exploration, but this is out of the scope of this paper. Since we use step sizes of order $\eta_t\sim 1/\sqrt{t}$, we cannot use the same simple update rule for the empirical proportion of draws as in , where the step size is of order $1/t$. But we can track the cumulative sum of the weights $w'$ as follows: $$A_{t+1}\in \operatorname*{arg\,max}_{a\in[1,K]} \sum_{s=1}^{t+1} w'_a(s)- N_a(t)\,.$$ It is important to track the cumulative sum of weights here, because the analysis of the online mirror ascent provides guarantees only on the *cumulative regret*.
For the stopping rule we use the classical Chernoff stopping rule , see @chernoff1959sequential, @garivier2016optimal and @garivier2017thresholding. That is, we stop when the vector of empirical means is far enough from any alternative with respect to the empirical Kullback-Leibler divergence. Note that, here, the threshold $\beta(N(t),\delta)$ does not depend directly on $t$, but on the vector of counts $N(t)$. This allows us to use the maximal inequality of Proposition \[prop:max\_ineq\], which yields a very short and direct proof of $\delta$-correctness: see Section \[sec:delta\_correctness\].
The decision rule simply chooses the set ${\mathcal{S}}_i$ closest to the vector of empirical means with respect to the empirical Kullback-Leibler divergence. Putting everything together, we end up with Algorithm \[alg:gradient\_ascent\].
**Initialization** Pull each arm once and set ${\widetilde{w}}(t)=w'(t)=\pi$ for all $1\leq t \leq K$\
**Sampling rule**, for $t\geq K$\
Update the weights (sub-gradient ascent) $${\widetilde{w}}(t+1) = \operatorname*{arg\,max}_{w\in\Sigma_K} \eta_{t+1} \sum_{s=K}^{t} w\cdot \operatorname{\mathrm{Clip}}_s\!\left(\nabla F\big({\widetilde{w}}(s),{\widehat{\mu}}(s)\big)\right)-\operatorname{kl}(w,\pi) \,,
\label{eq:sampling_rule_gradient_ascent}$$ $$w'(t+1)=(1-\gamma_t) {\widetilde{w}}(t+1)+ \gamma_t \pi\,.
\label{eq:sampling_rule_forced_exploration}$$ Pull the arm (track the cumulative sum of weights) $$A_{t+1}\in \operatorname*{arg\,max}_{a\in[1,K]} \sum_{s=1}^{t+1} w'_a(s)- N_a(t)\,.
\label{eq:sampling_rule_tracking}$$ **Stopping rule**\
$$\displaystyle\tau_\delta=\inf\Big\{ t\geq K:\, \inf_{\lambda \in {\mathcal{A}\textit{lt}}({\widehat{\mu}}(t))} \sum_{a=1}^{K} N_a(t) {\mathrm{d}}\big({\widehat{\mu}}_a(t), \lambda_a\big)\geq \beta(N(t),\delta)
\Big\}\,.
\label{eq:stopping_rule}$$ **Decision rule**\
$$\label{eq:decision_rule}
{\widehat{i}}\in\operatorname*{arg\,min}_{i\in{\mathcal{I}}}\inf_{\lambda\in {\mathcal{S}}_i}\sum_{a=1}^{K} N_a(\tau_\delta) {\mathrm{d}}({\widehat{\mu}}_a(\tau_\delta), \lambda_a)\,.$$
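To illustrate, here is a minimal end-to-end sketch of Algorithm \[alg:gradient\_ascent\] for the Thresholding Bandits case of Section \[sec:thresholding\_bandit\], where the infimum over alternatives in the stopping rule reduces to $\min_a N_a(t){\mathrm{d}}\big({\widehat{\mu}}_a(t),{\mathfrak{T}}\big)$. We skip the clipping (i.e. we take $M=+\infty$, as recommended above) and use the simpler practical threshold $\beta(t,\delta)=\log((\log(t)+1)/\delta)$ from the experiments section; all names are ours:

```python
import numpy as np

def kl_gauss(x, y):
    # Kullback-Leibler divergence between N(x, 1) and N(y, 1)
    return (x - y) ** 2 / 2

def identify_above_threshold(mu, thr, delta, rng, max_steps=100_000):
    """Sketch of the sampling/stopping/decision rules for Thresholding
    Bandits: identify {a : mu_a > thr} with confidence 1 - delta."""
    K = len(mu)
    N = np.zeros(K)       # counts N_a(t)
    S = np.zeros(K)       # sums of observed rewards
    G = np.zeros(K)       # cumulative sub-gradients
    cum_w = np.ones(K)    # cumulative sum of the weights w'(s) for s <= K
    for a in range(K):    # initialization: pull each arm once
        N[a] += 1
        S[a] += rng.normal(mu[a], 1.0)
    for t in range(K, max_steps):
        mu_hat = S / N
        # stopping rule: min_a N_a d(mu_hat_a, thr) >= beta(t, delta)
        if np.min(N * kl_gauss(mu_hat, thr)) >= np.log((np.log(t) + 1) / delta):
            return {a for a in range(K) if mu_hat[a] > thr}  # decision rule
        # sub-gradient of F(., mu_hat(t)): one-hot at the active coordinate
        b = int(np.argmin((N / t) * kl_gauss(mu_hat, thr)))
        G[b] += kl_gauss(mu_hat[b], thr)
        # lazy mirror ascent in closed form: w~(t+1) proportional to exp(eta G)
        eta = 1.0 / np.sqrt(t + 1)
        wt = np.exp(eta * (G - G.max()))
        wt /= wt.sum()
        # forced exploration: mix with the uniform distribution pi
        gamma = 1.0 / (4 * np.sqrt(t))
        w_prime = (1 - gamma) * wt + gamma / K
        cum_w += w_prime
        # tracking: pull the arm lagging the most behind its weight budget
        a = int(np.argmax(cum_w - N))
        N[a] += 1
        S[a] += rng.normal(mu[a], 1.0)
    return None  # sampling budget exceeded
```

For instance, `identify_above_threshold(np.array([2.0, -2.0]), 0.0, 0.1, np.random.default_rng(0))` should return $\{0\}$ after only a handful of pulls, since both means are far from the threshold; the run is deterministic given the seed.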
In order to perform a gradient ascent we need the sub-gradient of $F(\cdot,\mu)$ to be bounded in a neighborhood of $\mu$. For the examples presented in Appendix \[app:examples\], or if the ${\mathcal{S}}_i$ are bounded, this assertion holds, but for some pathological examples it can fail (see Appendix \[app:counter\_example\]). That is why we make the following assumption, where we denote by ${\mathcal{B}}_\infty(x,\kappa)$ the ball of radius $\kappa$ for the infinity norm $|\cdot|_\infty$ centered at $x$.
\[assp:bounded\_gradient\] We assume that for all $\mu\in {\mathcal{S}}$ there exists $\kappa_0$ that may depend on $\mu$ such that: $$\forall w\in \Sigma_K,\ \forall\mu'\in {\mathcal{B}}_{\infty}(\mu,\kappa_0),\ \forall a\in[1,K],\,\qquad 0 \leq \nabla_a F(w,\mu')\leq L\,.$$
We can now state the main result of the paper.
For $\beta(N(t),\delta)$ given by , $\eta_t=1/\sqrt{t}$, $\gamma_t=1/(4\sqrt{t})$, Algorithm \[alg:gradient\_ascent\] is $\delta$-correct and asymptotically optimal, i.e. $$\limsup\limits_{\delta\rightarrow 0} \frac{\operatorname{\mathbb{E}}_{\mu}[\tau_\delta]}{\log(1/\delta)}\leq T^\star(\mu)\,.$$ \[th:asymptotic\_optimality\]
In the rest of this section we will present the main lines of the proof of Theorem \[th:asymptotic\_optimality\]. A detailed proof can be found in Appendix \[app:proof\_main\_result\].
$\delta$-correctness of Algorithm \[alg:gradient\_ascent\] {#sec:delta_correctness}
----------------------------------------------------------
The $\delta$-correctness of Algorithm \[alg:gradient\_ascent\] is a simple consequence of the following maximal inequality, see Appendix \[app:deviations\] for a proof.
For $\delta>0$ and the choice of the threshold $$\begin{aligned}
\beta\big(N(t),\delta\big)=\log(1/\delta)+K\log\!\big(4\log(1/\delta)+1\big)+6\sum_{a=1}^K \log\!\Big(\log\!\big(N_{a}(t)\big)+3\Big)+K\widetilde{C}\label{eq:def_beta}\end{aligned}$$ where $\widetilde{C}$ is a universal constant defined in the proof of Proposition \[prop:max\_ineq\_diag\] in Appendix \[app:deviations\], it holds $$\label{eq:maximal_inequality_chernoff}
\operatorname{\mathbb{P}}_\mu\!\!\left(\exists t\geq K,\, \sum_{a=1}^K N_a(t) {\mathrm{d}}({\widehat{\mu}}_a(t), \mu_a) \geq \beta\big(N(t),\delta\big)\right)\leq \delta\,.$$ \[prop:max\_ineq\]
Indeed, if the algorithm returns the wrong index ${\widehat{i}}\neq i(\mu)$, we know that the true parameter is in the set of alternatives at time $\tau_\delta$, i.e. $\mu\in{\mathcal{A}\textit{lt}}\big({\widehat{\mu}}(\tau_\delta)\big)$. Therefore, thanks to the stopping rule , it holds that $$\begin{aligned}
\operatorname{\mathbb{P}}_\mu\!\big({\widehat{i}}\neq i(\mu)\big)\leq \operatorname{\mathbb{P}}_\mu\!\Bigg( \sum_{a=1}^{K} N_a(\tau_\delta) {\mathrm{d}}\big({\widehat{\mu}}_a(\tau_\delta), \mu_a\big) \geq \beta\big(N(\tau_\delta),\delta\big)\!\!\Bigg)\leq \delta\,.\end{aligned}$$ We prove in the next section that $\tau_\delta$ is finite almost surely.
Asymptotic Optimality of Algorithm \[alg:gradient\_ascent\] {#sec:gradient_ascent_proof}
-----------------------------------------------------------
First, we need some regularity properties of the function $F$ around $\mu$ in order to prove a regret bound for the online lazy mirror ascent. In Appendix \[app:other\_proofs\] we derive the following proposition.
For all $\mu\in{\mathcal{S}}$ and ${\varepsilon}>0$ there exist constants $\kappa_{\varepsilon}\leq \kappa_0$ and $L >0$, which may depend on $\mu$, such that ${\mathcal{B}}_{\infty}(\mu,\kappa_{\varepsilon})\subset{\mathcal{S}}_{i(\mu)}$ and, for all $\mu',\mu''\in{\mathcal{B}}_{\infty}(\mu,\kappa_{\varepsilon})$ and all $w\in \Sigma_K$, it holds that $$|\mu'-\mu''|_{\infty}\leq \kappa_{\varepsilon}\Rightarrow |F(w,\mu')-F(w,\mu'')|\leq {\varepsilon}\,.$$ \[prop:regularity\]
Fix a real number ${\varepsilon}>0$ and consider the typical event $${\mathcal{E}}_{\varepsilon}(T)=\bigcap_{t= g(T)}^T\big\{{\widehat{\mu}}(t)\in{\mathcal{B}}_{\infty}(\mu,\kappa_{\varepsilon})\big\}\,,$$ where $g(T)\sim T^{1/4}$, for some horizon $T$. We want to prove that, for $T$ large enough, on the event ${\mathcal{E}}_{\varepsilon}(T)$, the difference between the maximum of $F$ for the true parameter, namely $T^\star(\mu)^{-1}=F\big({w^\star}(\mu),\mu\big)$, and its empirical counterpart at time $T$, $F\big(w(T),{\widehat{\mu}}(T)\big)$, is small, precisely of order ${\varepsilon}$. To this end we use the following regret bound for the online lazy mirror ascent, proved in Appendix \[app:proof\_online\_regret\].
\[prop:regret\_bound\] For the weights ${\widetilde{w}}(t)$ given by , and a constant $C_0$ that depends on $K, L,M$, on the event ${\mathcal{E}}_{\varepsilon}(T)$ it holds $$\sum_{t=g(T)}^T F\big({w^\star}(\mu),{\widehat{\mu}}(t)\big)-F\big({\widetilde{w}}(t),{\widehat{\mu}}(t)\big)\leq C_0\sqrt{T}\,.
\label{eq:regret_bound}$$ The expression of $C_0$ can be found in Appendix \[app:proof\_online\_regret\].
We then need a consequence of the tracking and of the forced exploration, proved in Appendix \[app:proof\_tracking\], to relate $F\big({\widetilde{w}}(t),{\widehat{\mu}}(t)\big)$ to $F\big(w(t),{\widehat{\mu}}(t)\big)$.
\[prop:tracking\_tw\] Thanks to the sampling rule, precisely the tracking and the forced exploration steps, for the choice $\gamma_t=1/(4\sqrt{t})$ it holds, for all $t\geq 1$, that $$\left|\sum_{s=1}^t {\widetilde{w}}(s) -N(t)\right|_{\infty}\leq 2K\sqrt{t}\,,\qquad N_a(t)\geq \frac{\sqrt{t}}{4K}-2K\ \forall a\in[1,K]\,.
\label{eq:tracking_tw}$$
Using Propositions \[prop:regularity\], \[prop:regret\_bound\] and \[prop:tracking\_tw\], one can prove that for $T\gtrsim 1/{\varepsilon}^2$, on the event ${\mathcal{E}}_{\varepsilon}(T)$, $$F\big(w(T),{\widehat{\mu}}(T)\big)\gtrsim F\big({w^\star}(\mu),\mu\big)-{\varepsilon}=T^\star(\mu)^{-1}-{\varepsilon}\,.$$ Hence, if we rewrite the stopping rule as $$\frac{\beta\big(N(T),\delta\big)}{T} \leq F\big(w(T),{\widehat{\mu}}(T)\big) \,,$$ then, since $\beta\big(N(T),\delta\big)\sim \log(1/\delta)$, the algorithm stops as soon as $T\gtrsim \log(1/\delta)/\big(T^\star(\mu)^{-1}-{\varepsilon}\big)$. Thus for such $T$ we have the inclusion ${\mathcal{E}}_{\varepsilon}(T) \subset \{\tau_\delta \leq T\}$. But thanks to the forced exploration, see Lemma \[lem:deviation\_E\_epsilon\], we know that $\operatorname{\mathbb{P}}_\mu\!\big({\mathcal{E}}_{\varepsilon}(T)^c\big) \lesssim e^{-C_{{\varepsilon}} T^{1/16}}$. Therefore we obtain $$\begin{aligned}
\operatorname{\mathbb{E}}_\mu[\tau_\delta]= \sum_{T=0}^{+\infty}\operatorname{\mathbb{P}}_\mu(\tau_\delta>T) &\lesssim \frac{\log(1/\delta)}{T^\star(\mu)^{-1}-{\varepsilon}}+1/{\varepsilon}^2+\sum_{T=1}^{\infty} e^{-C_{\varepsilon}T^{1/16}}\,.\end{aligned}$$ Dividing this inequality by $\log(1/\delta)$ and letting first $\delta$, then ${\varepsilon}$, go to zero allows us to conclude.
Numerical Experiments {#sec:experiments}
=====================
For the experiments we consider the Best Arm Identification problem described in Section \[sec:BAI\]. Precisely, we restrict our attention to the simple, arbitrary, 4-armed bandit problem $\mu=[1,\, 0.85,\, 0.8,\, 0.75]$. The optimal proportion of draws is ${w^\star}(\mu)=[0.403,\,0.366,\,0.147,\, 0.083]$. The experiments compare several algorithms: the Lazy Mirror Ascent (LMA) described in Algorithm \[alg:gradient\_ascent\], the same algorithm but with a constant learning rate (LMAc), the Best Challenger (BC) algorithm given in Section \[sec:BAI\], the Direct Tracking (DT) algorithm by @garivier2016optimal, Top Two Thompson Sampling (TTTS) by @russo2016simple and finally Uniform Sampling (Unif) as a baseline. See Appendix \[app:details\_nume\_exp\] for details. Note in particular that all of them use the same Chernoff stopping rule with the same threshold $\beta(t,\delta) = \log((\log(t)+1)/\delta)$ and the same decision rule . This allows a fair comparison between the sampling rules. Indeed, it is known (see @garivier2017thresholding) that the choice of the stopping rule is decisive in minimizing the expected number of samples. We only investigate the effects of the sampling rule here, because it is where the trade-off between uniform exploration and selective exploration takes place.
![Expected number of draws $\operatorname{\mathbb{E}}_\mu[\tau_\delta]$ (expectations are approximated over $1000$ runs) of various algorithms for the bandit problem $\mu=[1,\, 0.85,\, 0.8,\, 0.7]$. The black dots are the expected number of draws, the orange solid lines the medians.[]{data-label="fig:comp"}](fig_grad_asc.pdf){width="0.92\columnwidth"}
Algorithm BC TTTS LMAc LMA DT Unif
------------------ ---- ------ ------ ----- ---- ------
Time (in seconds)
: Average time (over 100 runs) of one step of various algorithms for the bandit problems $\mu=[1,\, 0.85,\, 0.8,\, 0.7]$.[]{data-label="tab:execution_time"}
Figure \[fig:comp\] displays the average number of draws of each of the aforementioned algorithms for two different confidence levels, $\delta=0.1$ and $\delta=0.01$. The associated theoretical expected numbers of draws are respectively $T^\star(\mu)\log(1/\delta) \approx 1066$ for $\delta=0.1$ and $T^\star(\mu)\log(1/\delta) \approx 2133$ for $\delta= 0.01$. Table \[tab:execution\_time\] displays the average execution time of one step of these algorithms. Unsurprisingly, all the algorithms perform better than uniform sampling. LMA is comparable to the other algorithms, but with slightly worse results. This may be due to the fact that lazy mirror ascent (with a learning rate of order $1/\sqrt{t}$) is less aggressive than, for example, the Frank-Wolfe algorithm. Indeed, using a constant learning rate (LMAc) we recover the same results as BC, but doing so we lose the guarantee of asymptotic optimality. The four mentioned algorithms share roughly the same (one-step) execution time, which is expected since they have the same complexity, see Appendix \[app:details\_nume\_exp\]. The Direct Tracking of the optimal proportion of draws performs slightly better than the other algorithms, but its execution time is much longer (approximately 100 times longer) due to the extra cost of computing the optimal weights. Note that TTTS also tends to be slow when the posteriors are well concentrated, since it is then hard to sample the challenger. But it is the only algorithm that does not explicitly force exploration.
Conclusion
==========
In this paper we developed a unified approach to Bandit Active Exploration problems. In particular, we provided a general, computationally efficient, asymptotically optimal algorithm. To avoid obfuscating technicalities, we treated only the case of Gaussian arms with known variance and unknown mean, but the results can easily be extended to other one-parameter exponential families. For this, we just need to replace the maximal inequality of Proposition \[prop:max\_ineq\] by the one of Theorem 14 by @kaufmann2018mixture and to adapt the threshold accordingly.
Several questions remain open. It would be interesting to provide an analysis for the moderate-confidence regime, as argued for by @simchowitz2017simulator. Another avenue of improvement could be to explore further the connection with the Frank-Wolfe algorithm. Nevertheless, the main open question, from the authors' point of view, is to find a natural way to explore instead of forcing the exploration. One possibility could be to use the principle of optimism in this setting. Indeed, even for Active Exploration problems there is a trade-off between uniformly exploring the distributions of the arms and selectively exploring the distributions of specific arms to find in which set the bandit problem lies.
Examples {#app:examples}
========
In this appendix we present some classical and less classical active exploration bandit problems that can be described by the general framework presented in Section \[sec:problem\_description\]. Note that Assumption \[assp:bounded\_gradient\] holds for all the examples presented below. For the first three examples it is a direct consequence of the expression of the sub-gradient. For the last one, one just needs to remark that the projection $\lambda$ of a certain $\mu$ on an alternative set ${\mathcal{S}}_i$ (for $i\neq i(\mu)$) is such that $\lambda_a$ belongs to the interval $[\min_{x\in\{\mu_1,\ldots,\mu_K,{\mathfrak{T}}\}} x,\, \max_{y\in\{\mu_1,\ldots,\mu_K,{\mathfrak{T}}\}} y]$ for all $a\in[1,K]$.
Thresholding Bandits {#sec:thresholding_bandit}
--------------------
We fix a threshold ${\mathfrak{T}}\in \operatorname{\mathbb{R}}$. The objective here is to identify the set of arms $a$ above this threshold, $\{ a :\, \mu_a > {\mathfrak{T}}\}$. Therefore, to see this problem as a particular case of the one presented in Section \[sec:problem\_description\], we choose ${\mathcal{I}}={\mathcal{P}}\big([1,K]\big)$ the power set of $[1,K]$ and $${\mathcal{S}}_i=\big\{\mu'\in{\mathcal{M}}:\ \{ a :\, \mu'_a > {\mathfrak{T}}\}=i\big\}\,.$$ For $\mu\in{\mathcal{S}}$, it turns out that there are explicit expressions for $F$ and the characteristic time in this particular case, $$\label{eq:F_threshold}
F(w,\mu) = \min_{a\in[1,K]} w_a {\mathrm{d}}(\mu_a, {\mathfrak{T}}) \quad\text{and}\quad T^\star(\mu)=\sum_{a=1}^K\frac{1}{{\mathrm{d}}(\mu_a,{\mathfrak{T}})}\,.$$ In the function $F$ we recognize the minimum of the costs (with respect to the weights $w$) of moving the mean of one arm to the threshold. Thanks to this rewriting, the computation of the sub-gradient is direct: $$\nabla F(w,\mu)=
\begin{bmatrix}
(0)\\
{\mathrm{d}}(\mu_a, {\mathfrak{T}})\\
(0)
\end{bmatrix}\!,$$ for an index $a$ that realizes the minimum in the expression of $F$ above (the non-zero coordinate is at position $a$).
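As a quick numeric check of these closed forms (a sketch, with arbitrary numbers of ours): the weights that equalize $w_a {\mathrm{d}}(\mu_a,{\mathfrak{T}})$ across arms achieve the value $1/T^\star(\mu)$, and no other point of the simplex does better.

```python
import numpy as np

def kl_gauss(x, y):
    # Kullback-Leibler divergence between N(x, 1) and N(y, 1)
    return (x - y) ** 2 / 2

mu = np.array([1.0, 0.5, -0.5])   # arbitrary bandit problem
thr = 0.2                         # arbitrary threshold
d = kl_gauss(mu, thr)             # per-arm costs d(mu_a, thr)
T_star = np.sum(1.0 / d)          # characteristic time
w_star = (1.0 / d) / T_star       # equalizing weights, sum to one
# F(w*, mu) = min_a w*_a d_a equals 1 / T_star ...
assert np.allclose(w_star * d, 1.0 / T_star)
# ... and any other w in the simplex yields a smaller value of F
rng = np.random.default_rng(0)
for _ in range(1000):
    w = rng.dirichlet(np.ones(len(mu)))
    assert np.min(w * d) <= 1.0 / T_star + 1e-12
```

The second assertion follows from $\sum_a w_a \geq \min_b(w_b d_b)\sum_a 1/d_a$, so the equalizing weights are indeed the maximizer.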
Best Arm Identification {#sec:BAI}
-----------------------
Here the objective is to identify the arm with the greatest mean. We set ${\mathcal{I}}=[1,K]$ and $${\mathcal{S}}_i=\big\{\mu'\in{\mathcal{M}}:\ \mu'_i>\mu'_a,\, \forall a\neq i\big\}\,.$$ For $\mu\in {\mathcal{S}}_i$, we can simplify a bit the expression of the characteristic time. Indeed, using well chosen alternatives, see @garivier2016optimal, we have $$\label{eq:F_best_arm}
F(w,\mu)=\min_{a\neq i}\, w_i {\mathrm{d}}(\mu_i,{\bar{\mu}}_{i,a}^w)+w_a {\mathrm{d}}(\mu_a,{\bar{\mu}}_{i,a}^w)\,,$$ where ${\bar{\mu}}_{i,a}^w$ is the weighted mean of the optimal mean $\mu_i$ and the mean $\mu_a$ with respect to the weights $w$: $${\bar{\mu}}_{i,a}^w=\frac{w_i}{w_i+w_a}\mu_i+\frac{w_a}{w_i+w_a}\mu_a\,.$$ We can see the weighted divergence that appears in as the cost of moving the mean of arm $a$ above the optimal one $\mu_i$, thus making arm $a$ optimal. Precisely, we move $\mu_i$ and $\mu_a$ at the same time to the weighted mean ${\bar{\mu}}_{i,a}^w$. The computation of the sub-gradient is also straightforward in this case: $$\nabla F(w,\mu)=
\begin{bmatrix}
(0)\\
{\mathrm{d}}(\mu_i,{\bar{\mu}}_{i,a}^w)\\
(0)\\
{\mathrm{d}}(\mu_a,{\bar{\mu}}_{i,a}^w)\\
(0)
\end{bmatrix}\!,$$ for active coordinates $a\neq i$ that realize the minimum in (the non-zero coordinates are at positions $i$ and $a$). A variant of the Best Challenger sampling rule introduced by @garivier2016optimal, see also @russo2016simple, is given by $$\begin{aligned}
& C_t \in \operatorname*{arg\,min}_{a\in[1,K]/i_t} w_{i_t}(t) {\mathrm{d}}\big({\widehat{\mu}}_{i_t}(t),{\bar{\mu}}_{i_t,a}^{w(t)}(t)\big)+w_a(t) {\mathrm{d}}\big({\widehat{\mu}}_a(t),{\bar{\mu}}_{i_t,a}^{w(t)}(t)\big)\nonumber\\
&A_{t+1} =\begin{cases}
i_t &\text{ if } {\mathrm{d}}\big({\widehat{\mu}}_{i_t}(t),{\bar{\mu}}_{i_t,C_t}^{w(t)}(t)\big)>{\mathrm{d}}\big({\widehat{\mu}}_{C_t}(t),{\bar{\mu}}_{i_t,C_t}^{w(t)}(t)\!\big)\\
C_t &\text{ else},
\end{cases}\label{eq:bai_best_challenger}\end{aligned}$$ where we denote by $i_t$ the current optimal arm (the one with the greatest empirical mean) at time $t$. At a high level, we select the best challenger $C_t$ of the current best arm $i_t$ with respect to the cost that appears in . Then we greedily choose, between $C_t$ and $i_t$, the one that increases this cost the most. Again, as in the previous example, this sampling rule rewrites as one step of the Frank-Wolfe algorithm for the function $F\big(\cdot,{\widehat{\mu}}(t)\big)$ $$\begin{aligned}
e_{A_{t+1}}&\in\operatorname*{arg\,max}_{w\in\Sigma_K} w\cdot \nabla F\big(w(t),{\widehat{\mu}}(t)\big)\nonumber\\
w(t+1) &= \frac{t}{t+1} w(t) + \frac{1}{t+1} e_{A_{t+1}}\label{eq:def_frank_wolfe_based}\,.\end{aligned}$$
Signed Bandits {#sec:signed_bandits}
--------------
This is a variant of the Thresholding Bandits problem where we add the assumption that all the means lie above, or all lie below, a certain threshold ${\mathfrak{T}}$. Thus we choose ${\mathcal{I}}=\{+,\,-\}$ and $${\mathcal{S}}_+=\{\mu' \in {\mathcal{M}}:\ \mu'_a>{\mathfrak{T}}\ \forall a\in[1,K]\}\quad {\mathcal{S}}_-=\{\mu' \in {\mathcal{M}}:\ \mu'_a<{\mathfrak{T}}\ \forall a\in[1,K]\}\,.$$ It is easy to see, for $\mu\in{\mathcal{S}}$, that the function $F$ and the characteristic time reduce to $$\label{eq:signed_bandit} F(w,\mu)= \sum_{a=1}^K w_a{\mathrm{d}}(\mu_a,{\mathfrak{T}}) \quad\text{and}\quad T^\star(\mu)=\frac{1}{\max_{a\in[1,K]}{\mathrm{d}}(\mu_a,{\mathfrak{T}})}\,.$$ In the function $F$ we recognize the cost (with respect to the weights $w$) of moving all the means to the threshold ${\mathfrak{T}}$. The sub-gradient of $F(\cdot,\mu)$ at $w$ is $$\nabla F(w,\mu)=
\begin{bmatrix}
{\mathrm{d}}(\mu_1,{\mathfrak{T}})\\
\vdots\\
{\mathrm{d}}(\mu_a, {\mathfrak{T}})\\
\vdots\\
{\mathrm{d}}(\mu_K, {\mathfrak{T}})
\end{bmatrix}\!.$$ This example is interesting because a sampling rule based on the Frank-Wolfe algorithm, see (which is equivalent to tracking the optimal proportion of draws in this case), would boil down to a kind of Follow the Leader sampling rule. And it is well known that such a rule can fail to sample asymptotically according to the optimal proportion of draws, which in this case is: $${w^\star}_a=\begin{cases} 1/L\text{ if }a \in \operatorname*{arg\,max}_b{\mathrm{d}}(\mu_b,{\mathfrak{T}})\\
0\text{ else}
\end{cases}\,,$$ where $L$ is the number of arms that attain the maximum appearing in the definition of the characteristic time, see . This highlights the necessity of forcing the exploration in some way.
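A small numeric illustration of these closed forms (numbers are ours), confirming that the optimal proportions spread uniformly over the arms attaining $\max_b {\mathrm{d}}(\mu_b,{\mathfrak{T}})$:

```python
import numpy as np

def kl_gauss(x, y):
    # Kullback-Leibler divergence between N(x, 1) and N(y, 1)
    return (x - y) ** 2 / 2

mu = np.array([0.9, 0.5, 0.9])    # all means above the threshold
thr = 0.1
d = kl_gauss(mu, thr)             # per-arm costs d(mu_a, thr)
T_star = 1.0 / d.max()            # characteristic time
leaders = np.isclose(d, d.max())  # arms attaining the maximum (here two)
w_star = leaders / leaders.sum()  # w* = [0.5, 0, 0.5]
# F(w*, mu) = sum_a w*_a d_a equals 1 / T_star
assert np.isclose(np.dot(w_star, d), 1.0 / T_star)
```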
Monotonous thresholding bandit {#sec:monotonous_bandit}
------------------------------
It is again a variant of the Thresholding Bandit problem with some additional structure. We fix a threshold ${\mathfrak{T}}$ and assume that the sequence of means is increasing. The objective is to identify the arm with the mean closest to the threshold. Hence, we choose ${\mathcal{I}}=[1,K]$ and $${\mathcal{S}}_i=\{\mu'\in {\mathcal{M}}:\ \mu'_1<\ldots<\mu'_K,\,|\mu'_i-{\mathfrak{T}}|< |\mu'_a-{\mathfrak{T}}|\ \forall a\neq i \}.$$ Unfortunately, there are explicit expressions neither for $F$ nor for the characteristic time in this problem. But it is possible to compute efficiently an element of the sub-gradient of $F$ using isotonic regressions, see @garivier2017thresholding.
Details on Numerical Experiments {#app:details_nume_exp}
================================
As stated in Section \[sec:experiments\], we consider the Best Arm Identification problem (see Appendix \[app:examples\]) for $\mu = [1, 0.85, 0.8, 0.75]$. For all the algorithms we used the same stopping rule with the threshold $\beta(t,\delta) = \log((\log(t)+1)/\delta)$ and decision rule . We consider the following sampling rules:
- *BC*: it is the sampling rule given by plus forced exploration as proposed by @garivier2016optimal (if the number of pulls of one arm is less than $\sim\sqrt{t}$ then this arm is automatically sampled). The complexity of one step is of order $O(K)$, see .
- *TTTS*: this is essentially the Top Two Thompson Sampling rule by @russo2016simple. We use a Gaussian prior $\operatorname{\mathcal{N}}(0,1)$ for each arm and we slightly alter the rule that chooses between the best sampled arm $I$ and its re-sampled challenger $J$. Inspired by , if we denote by $\mu'$ the sample from the posterior where $I$ is optimal and by $\mu''$ the re-sample where $J$ is optimal, we choose arm $I$ if ${\mathrm{d}}(\mu'_I,\mu''_I)>{\mathrm{d}}(\mu'_J,\mu''_J)$, and $J$ otherwise. Here the complexity of one step is dominated by the sampling phase, in particular the sampling of the challenger, which can be costly if the posteriors are concentrated.
- *LMA*: this is Algorithm \[alg:gradient\_ascent\]. We did not try to optimize the parameters. We choose a learning rate of the form $\eta_t=1/(L\sqrt{t})$, where $L$ is of the order of the norm of the sub-gradients, and the same exploration rate $\gamma_t$ as in Theorem \[th:asymptotic\_optimality\]. The complexity of one step is of order $O(K)$ (for computing the sub-gradient).
- *LMAc*: Exactly the same as above but with a constant learning rate.
- *DT*: this is the Direct Tracking (DT) algorithm by @garivier2016optimal; it tracks the optimal weights associated with the vector of empirical means, plus some forced exploration (same as BC). For the Best Arm Identification problem, computing the optimal weights requires finding the root of an increasing function, e.g. by the bisection method, each evaluation of which requires solving $K$ scalar equations.
- *Unif*: the arm is selected uniformly at random.
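The stopping threshold shared by all the rules above is simple to code; the following sketch shows a generic GLR-style stopping check (the statistic passed to it is a placeholder, not something defined in this excerpt):

```python
import math

def beta(t, delta):
    # stopping threshold beta(t, delta) = log((log(t) + 1) / delta)
    return math.log((math.log(t) + 1.0) / delta)

def should_stop(glr_statistic, t, delta):
    """Generic GLR-style stopping check: stop as soon as the statistic
    exceeds the threshold (glr_statistic is a stand-in input here)."""
    return glr_statistic >= beta(t, delta)
```

The threshold grows only doubly logarithmically in $t$, which is what keeps the sample complexity close to $\log(1/\delta)\,T^\star(\mu)$.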
Proof of Theorem \[th:asymptotic\_optimality\] {#app:proof_main_result}
==============================================
Fix a real number ${\varepsilon}>0$ and consider the typical event $${\mathcal{E}}_{\varepsilon}(T)=\bigcap_{t\geq g(T)}^T\big\{{\widehat{\mu}}(t)\in{\mathcal{B}}_{\infty}(\mu,\kappa_{\varepsilon})\big\}\,,$$ where $g(T):=\floor{T^{1/4}}$, for some horizon $T$ such that $T\geq K$ and $2 g(T)\leq T$ ($T\geq 3$ is sufficient). We also impose that $T$ be greater than the smallest integer $T_M$ such that $M\sqrt{g(T_M)}\geq L$. This condition allows us to get rid of the effects of clipping the gradient on ${\mathcal{E}}_{\varepsilon}(T)$. Using Proposition \[prop:regularity\] we can replace the vector of empirical means ${\widehat{\mu}}(t)$ by the true vector of means $\mu$ in the first sum of at cost ${\varepsilon}T$ $$\begin{aligned}
\sum_{t=g(T)}^T \Big|F\big({w^\star}(\mu),{\widehat{\mu}}(t)\big)-F\big({w^\star}(\mu),\mu\big) \Big|&\leq {\varepsilon}T \,,\end{aligned}$$ similarly, we can replace ${\widehat{\mu}}(t)$ by ${\widehat{\mu}}(T)$ in the second sum $$\begin{aligned}
\sum_{t=g(T)}^T \Big|F\big({\widetilde{w}}(t),{\widehat{\mu}}(t)\big)-F\big({\widetilde{w}}(t),{\widehat{\mu}}(T)\big) \Big|&\leq{\varepsilon}T\,.\end{aligned}$$ Hence, we deduce from , with ${\widetilde{T}}=(T-g(T)+1)$, on the event ${\mathcal{E}}_{\varepsilon}(T)$ $$\label{eq:regret_bound_2}
{\widetilde{T}}F\big({w^\star}(\mu),\mu\big)-\sum_{t=g(T)}^T F\big({\widetilde{w}}(t),{\widehat{\mu}}(T)\big)\leq C_0\sqrt{T}+2{\varepsilon}T\,.$$ Now we need to compare the sum in with the quantity ${\widetilde{T}}F\big(w(T),{\widehat{\mu}}(T)\big)$. To this end we will use Proposition \[prop:regret\_bound\], which is a consequence of the tracking and the forced exploration, see and . Thus, using the concavity of $F\big(\cdot,{\widehat{\mu}}(t)\big)$ then Proposition \[prop:regularity\] we have $$\begin{aligned}
\sum_{t=g(T)}^T F\big({\widetilde{w}}(t),{\widehat{\mu}}(T)\big) &\leq {\widetilde{T}}F\!\!\left(\frac{1}{{\widetilde{T}}}\sum_{t=g(T)
}^T{\widetilde{w}}(t),{\widehat{\mu}}(T)\right)\\
&\leq {\widetilde{T}}F\big(w(T),{\widehat{\mu}}(T)\big)+{\widetilde{T}}L K \left|w(T) -\frac{1}{{\widetilde{T}}} \sum_{t=g(T)
}^T{\widetilde{w}}(t) \right|_{\infty}\,.\end{aligned}$$ Before applying Proposition \[prop:tracking\_tw\] we need to handle the fact that the sum in the last inequality above begins at $g(T)$. But this is not harmful because $g(T)$ is small enough; one can prove $$\label{eq:get_ride_of_gT}
\left|w(T) -\frac{1}{{\widetilde{T}}} \sum_{t=g(T)
}^T{\widetilde{w}}(t) \right|_{\infty} \!\!\!\leq \left|w(T) -\frac{1}{T} \sum_{t=1
}^T{\widetilde{w}}(t) \right|_{\infty}\!\!\!\!+ \frac{2}{\sqrt{T}}.$$ Indeed, using the triangular inequality we have $$\begin{aligned}
\left|w(T)- \frac{1}{{\widetilde{T}}} \sum_{t=g(T)}^T{\widetilde{w}}(t)\right|_{\infty}\leq \left|w(T)- \frac{1}{T} \sum_{t=1}^T{\widetilde{w}}(t)\right|_{\infty}+ \left|\frac{1}{T} \sum_{t=1}^T{\widetilde{w}}(t)- \frac{1}{{\widetilde{T}}} \sum_{t=g(T)}^T{\widetilde{w}}(t)\right|_{\infty}\,.\end{aligned}$$ It remains to notice that $$\begin{aligned}
\left|\frac{1}{T} \sum_{t=1}^T{\widetilde{w}}(t) -\frac{1}{{\widetilde{T}}} \sum_{t=g(T)
}^T{\widetilde{w}}(t) \right|_{\infty}&\leq \left|\frac{1}{T} \sum_{t=1}^T{\widetilde{w}}(t) -\frac{1}{T} \sum_{t=g(T)
}^T{\widetilde{w}}(t) \right|_{\infty}+ \left|\frac{1}{T} \sum_{t=g(T)}^T{\widetilde{w}}(t) -\frac{1}{{\widetilde{T}}} \sum_{t=g(T)
}^T{\widetilde{w}}(t) \right|_{\infty}\\
&\leq \frac{g(T)}{T}+ \left(\frac{1}{{\widetilde{T}}}-\frac{1}{T}\right){\widetilde{T}}\leq 2\frac{g(T)}{T}\\
&\leq \frac{2}{\sqrt{T}}\,,\end{aligned}$$ where in the last line we used $g(T)\leq \sqrt{T}$, by definition. Now, using then we obtain $$\sum_{t=g(T)}^T F\big({\widetilde{w}}(t),{\widehat{\mu}}(T)\big) \leq {\widetilde{T}}F\big(w(T),{\widehat{\mu}}(T)\big)+{\widetilde{T}}\frac{4 L K^2 }{\sqrt{T}}\,.$$ Thus, using the above inequality in and dividing by ${\widetilde{T}}$ we get $$\begin{aligned}
F\big({w^\star}(\mu),\mu\big)-\!F\big(w(T),{\widehat{\mu}}(T)\big)\!&\leq \frac{C_0 \sqrt{T}}{{\widetilde{T}}}+ \frac{2{\varepsilon}T}{{\widetilde{T}}}+\frac{4 L K^2}{\sqrt{T}}\\
&\leq \underbrace{\big(2C_0 + 4 K^2 L\big)}_{:=C_1}\frac{1}{\sqrt{T}}+4{\varepsilon},\end{aligned}$$ where in the last line we used $T/{\widetilde{T}}\leq 2$, thanks to the choice of $T$. For $T\geq (C_1/{\varepsilon})^2$, we finally obtain the bound announced in Section \[sec:gradient\_ascent\_proof\] $$F\big(w(T),{\widehat{\mu}}(T)\big)\geq F\big({w^\star}(\mu),\mu\big)-5{\varepsilon}=T^\star(\mu)^{-1}-5{\varepsilon}\,.$$ Hence the algorithm will stop at $T$ if $\beta\big(N(T),\delta\big)/T\leq T^\star(\mu)^{-1}-5{\varepsilon}$. We use the following technical lemma (proved in Appendix \[app:other\_proofs\]) to characterize such $T$.
There exists a constant $C_3({\varepsilon})$ that depends on ${\varepsilon}$ and $K$, such that for $$T\geq \frac{\log(1/\delta)+K\log\!\big(4\log(1/\delta)+1\big)}{T^\star(\mu)^{-1}-6{\varepsilon}} +C_3({\varepsilon})\,,$$ it holds $$\beta\big(N(T),\delta\big)/T\leq T^\star(\mu)^{-1}-5{\varepsilon}\,.$$ \[lem:invers\_log\_log\]
We also need to use that ${\mathcal{E}}_{\varepsilon}(T)$ is a typical event. Quantitatively, using the consequence of the forced exploration , we can prove the following deviation inequality, see Appendix \[app:other\_proofs\].
There exist two constants $C_4({\varepsilon})$ and $C_5({\varepsilon})$, depending on ${\varepsilon}$, $\mu$ and $K$, such that $$\operatorname{\mathbb{P}}_\mu\big({\mathcal{E}}_{\varepsilon}(T)^{c}\big) \leq C_5({\varepsilon}) T e^{-C_4({\varepsilon}) T^{1/8}}\,.$$ \[lem:deviation\_E\_epsilon\]
Putting everything together, for $T$ large enough, for example $$\begin{aligned}
T\geq & \frac{\log(1/\delta)+K\log\!\big(4\log(1/\delta)+1\big)}{T^\star(\mu)^{-1}-6{\varepsilon}} +C_3({\varepsilon})+ (C_1/{\varepsilon})^2 +K+ T_M +3\,,\end{aligned}$$ we have the inclusion ${\mathcal{E}}_{\varepsilon}(T) \subset \{\tau_\delta \leq T\}$, hence using Lemma \[lem:deviation\_E\_epsilon\] $$\operatorname{\mathbb{P}}_\mu(\tau_\delta > T)\leq \operatorname{\mathbb{P}}\big( {\mathcal{E}}_{{\varepsilon}}(T)^c \big)\leq C_5({\varepsilon}) T e^{-C_4({\varepsilon}) T^{1/8}}\,.$$ It remains to remark that, using the above inequalities, $$\begin{aligned}
\operatorname{\mathbb{E}}_\mu[\tau_\delta]= \sum_{T=0}^{+\infty}\operatorname{\mathbb{P}}_\mu(\tau_\delta>T) &\leq \frac{\log(1/\delta)+K\log\!\big(4\log(1/\delta)+1\big)}{T^\star(\mu)^{-1}-6{\varepsilon}}+C_3({\varepsilon})+ (C_1/{\varepsilon})^2 +K\nonumber\\
&+T_M +3+\sum_{T=1}^{\infty}C_5({\varepsilon}) T e^{-C_4({\varepsilon}) T^{1/8}} \!.\label{eq:presque}\end{aligned}$$ Thus dividing by $\log(1/\delta)$ and letting $\delta$ go to zero, we obtain $$\limsup_{\delta \rightarrow 0}\frac{\operatorname{\mathbb{E}}_\mu[\tau_\delta]}{\log(1/\delta)}\leq \frac{1}{T^\star(\mu)^{-1}-6{\varepsilon}}\,,$$ letting ${\varepsilon}$ go to zero allows us to conclude.
Deviations Inequality {#app:deviations}
=====================
Let $\theta$ be a parameter in $\operatorname{\mathbb{R}}^d$. We consider the linear model $$X_t= \theta\cdot A_t+\eta_t\,,$$ where $\{\eta_t\}_{t\in\operatorname{\mathbb{N}}^\star}$ are i.i.d. from a Gaussian distribution $\operatorname{\mathcal{N}}(0,1)$ and $A_t\in\operatorname{\mathbb{R}}^d$ is a random variable $\sigma(A_1, X_1,\ldots,A_{t-1}, X_{t-1})$-measurable. Let $V_t:=\sum_{s=1}^t A_s A_s^\top$ be the Gram matrix and ${\widehat{\theta}}_t$ be the least-squares estimator of $\theta$ (defined when $V_t$ is invertible) $${\widehat{\theta}}_t=V_t^{-1}\sum_{s=1}^t A_s X_s\,.$$ We assume that $A_s=e_s$, the $s$-th vector of the canonical basis of $\operatorname{\mathbb{R}}^d$, for $1 \leq s \leq d$, so that $V_t$ is invertible for $t\geq d$. We want to prove a maximal inequality on the following self-normalized quantity $$\label{eq:def_S_t_V_t}
\frac{|{\widehat{\theta}}_t-\theta|^2_{V_t}}{2}=\frac{|S_t|^2_{V_t^{-1}}}{2}\,,$$ where $S_t:=\sum_{s=1}^t A_s \eta_s$ and $|x|_V:= \sqrt{x^{\top} V x}$ is the norm induced by the symmetric positive definite matrix $V$. In addition, we will assume that for all $t\geq1$ the random variable $A_t\in (e_l)_{l\in[1,d]}$ is an element of the canonical basis. Thus the Gram matrix $V_t$ is diagonal and, for all $l\in [1,d]$, $$V_{t,l,l}= N_{t,l}:=\sum_{s=1}^t {\mathds{1}}_{\{A_s = e_l\}}\,.$$
For $\delta>0$ and $0<\beta<1$, $$\operatorname{\mathbb{P}}\left(\exists n\geq t\geq d,\, |S_t|_{V_t^{-1}}^2/2 \geq \log(1/\delta)+(1+\beta)d\operatorname{\log\!\log}(n)+o_{\delta,\beta}\big(\operatorname{\log\!\log}(n)\big)\right)\leq \delta\,,
\label{eq:max_ineq_diag_n}$$ see the end of the proof for an explicit formula. And if we do not care about the constant in front of the term in $\operatorname{\log\!\log}(n)$, it holds $$\operatorname{\mathbb{P}}\left(\exists t\geq d,\, |S_t|_{V_t^{-1}}^2/2 \geq \log(1/\delta)+6\sum_{l=1}^d \log\!\big(\log(N_{t,l})+3\big)+d\widetilde{C}\right)\leq \delta\,,
\label{eq:max_ineq_diag_N}$$ see the end of the proof for an explicit expression of the constant $\widetilde{C}$. \[prop:max\_ineq\_diag\]
Proposition \[prop:max\_ineq\] is a simple rewriting of for $d=K$. Indeed, under the diagonal assumption on the Gram matrix, the Kullback-Leibler divergence in rewrites as $$\label{eq:S_t_V_t_in_chernoff}
\frac{|S_t|^2_{V_t^{-1}}}{2}=\sum_{l=1}^d N_{t,l}{\mathrm{d}}({\widehat{\theta}}_{t,l},\theta_l)\,.$$ The constant in front of the $\operatorname{\log\!\log}(n)$ in is optimal as $\beta$ goes to $0$ with respect to the Law of the Iterated Logarithm, for the particular case of uniform sampling, i.e. $A_t= t \mod K$, see Lemma 2 of @finkelstein1971law. The proof of Proposition \[prop:max\_ineq\_diag\] is a variation on the method of mixtures, see @pena2008self for an introduction to the method, and @lattimore2018bandit and @abbasi2011improved for the use of this method in the bandit setting. It turns out that the prior used here is very close to the one used by @balsubramani2014sharp in their proof of Lemma 12.
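The identity above is easy to check numerically in the Gaussian setting, where ${\mathrm{d}}(x,y)=(x-y)^2/2$. The dimensions, parameter values and seed in the following sketch are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
d_dim, t = 3, 60
theta = np.array([0.5, -1.0, 2.0])

# canonical-basis actions: visit each coordinate once, then sample at random
arms = np.concatenate([np.arange(d_dim), rng.integers(0, d_dim, size=t - d_dim)])
noise = rng.standard_normal(t)
X = theta[arms] + noise                           # X_s = theta . A_s + eta_s

N = np.bincount(arms, minlength=d_dim)            # N_{t,l} = diagonal of V_t
S = np.array([noise[arms == l].sum() for l in range(d_dim)])       # S_{t,l}
theta_hat = np.array([X[arms == l].mean() for l in range(d_dim)])  # least squares

lhs = 0.5 * np.sum(S ** 2 / N)                    # |S_t|^2_{V_t^{-1}} / 2
rhs = np.sum(N * 0.5 * (theta_hat - theta) ** 2)  # sum_l N_{t,l} d(theta_hat_l, theta_l)
```

The two sides agree up to floating-point rounding, since $S_{t,l}=N_{t,l}({\widehat{\theta}}_{t,l}-\theta_l)$ holds exactly here.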
We will use the method of mixtures with the prior on $\operatorname{\mathbb{R}}^d$ $${\widetilde{f}}(\lambda)= \prod_{l=1}^d f(\lambda_l)\,,$$ with $f$ a density on $\operatorname{\mathbb{R}}$ given by $$f(\lambda)=\frac{C_\beta}{|\lambda|\Big(\big|\log|\lambda|\big|+2\Big)^{1+\beta}}\,,$$ where $C_\beta$ is the normalizing constant. Hence, we consider the martingale $$M_t=\int e^{\lambda\cdot S_t -|\lambda|^2_{V_t}/2} {\widetilde{f}}(\lambda){\mathop{}\!\mathrm{d}}{\lambda}\,.$$ We can rewrite this martingale to make appear the quantity of interest $$M_t= e^{|S_t|_{V_t^{-1}}^2/2}\prod_{l=1}^d \int e^{-(S_{t,l}/N_{t,l}-\lambda_l)^2 N_{t,l}/2}f(\lambda_l){\mathop{}\!\mathrm{d}}{\lambda}\,.$$ Using that $f$ is symmetric and non-increasing on $\operatorname{\mathbb{R}}^+$, we can lower bound the martingale as follows $$\begin{aligned}
M_t &\geq e^{|S_t|_{V_t^{-1}}^2/2}\prod_{l=1}^d \int_{S_{t,l}/N_{t,l}-\sqrt{2/N_{t,l}}}^{S_{t,l}/N_{t,l}+\sqrt{2/N_{t,l}}} e^{-(S_{t,l}/N_{t,l}-\lambda_l)^2 N_{t,l}/2}f(\lambda_l){\mathop{}\!\mathrm{d}}{\lambda}\\
&\geq e^{|S_t|_{V_t^{-1}}^2/2}\prod_{l=1}^d \frac{2C_\beta e^{-1}}{\big(|S_{t,l}|/\sqrt{2 N_{t,l}} +1\big) \left(\Big|\log\big(|S_{t,l}|/N_{t,l}+\sqrt{2/N_{t,l}}\big)\Big|+2 \right)^{1+\beta}}\,.\end{aligned}$$
Thanks to the method of mixtures this lower bound leads to the following maximal inequality $$\begin{aligned}
\operatorname{\mathbb{P}}\Bigg( \exists t\geq d,\, &|S_t|_{V_t^{-1}}^2/2 \geq \log(1/\delta)+\sum_{l=1}^d \log\big(|S_{t,l}|/\sqrt{2 N_{t,l}} + 1\big)+\nonumber\\
&(1+\beta)\sum_{l=1}^d \log\!\left(\Big|\log\big(|S_{t,l}|/N_{t,l}+\sqrt{2/N_{t,l}}\big)\Big|+2 \right)+ d\Big(1+\log\big(1/(2C_\beta)\big)\Big)\Bigg)\leq \delta\,.\label{ineq_raw_diag}\end{aligned}$$ We can simplify the expression in using that $$\begin{aligned}
\log\!\left(\Big|\log\big(|S_{t,l}|/N_{t,l}+\sqrt{2/N_{t,l}}\big)\Big|+2 \right)&\leq \log\!\left(\big|\log(\sqrt{N_{t,l}/2})\big|+2+\log\big(|S_{t,l}|/\sqrt{2 N_{t,l}}+1\big) \right)\\
&\leq \log\big(\log(N_{t,l})+3\big)+\frac{\log\big(|S_{t,l}|/\sqrt{2 N_{t,l}}+1\big)}{2}\,,\end{aligned}$$ where we used in the last line the fact that $\log(x+y)\leq \log(x)+y/x$ for $x,y >0$. Indeed, injecting this inequality in we obtain $$\begin{aligned}
\operatorname{\mathbb{P}}\Bigg( \exists t\geq d,\, |S_t|_{V_t^{-1}}^2/2 \geq \log(1/\delta)&+\sum_{l=1}^d 2\log\big(|S_{t,l}|/\sqrt{2 N_{t,l}} + 1\big)+\nonumber\\
&(1+\beta)\sum_{l=1}^d \log\!\big(\log(N_{t,l})+3\big)+ d\Big(1+\log\big(1/(2C_\beta)\big)\Big)\Bigg)\leq \delta\,.\label{ineq_refined_diag}\end{aligned}$$ Now we bootstrap this inequality to get rid of the $|S_{t,l}|/\sqrt{2N_{t,l}}$ inside the $\log$. Noting that, by concavity of the logarithm and the inequality $\log(x+1)\leq x/2+\log(2)$ for $x>0$, $$\begin{aligned}
\sum_{l=1}^d 2\log\big(|S_{t,l}|/\sqrt{2 N_{t,l}} + 1\big)&\leq \sum_{l=1}^d \log\big(|S_{t,l}|^2/(2 N_{t,l}) + 1\big)+d\log(2)\\
&\leq d \log\big(|S_t|_{V_t^{-1}}^2/(2 d) + 1\big)+d\log(2)\\
& \leq |S_t|_{V_t^{-1}}^2/4+2d\log(2)\end{aligned}$$ we can degrade , with the choice $\beta=0.5$, to $$\begin{aligned}
\operatorname{\mathbb{P}}\Bigg( \exists t\geq d,\, |S_t|_{V_t^{-1}}^2/4 \geq \log(1/\delta)+
2\sum_{l=1}^d \log\!\big(\log(N_{t,l})+3\big)+ d\big(1+2\log(1/C_{1/2})\big)\Bigg)\leq \delta\end{aligned}$$ This last inequality implies the following one $$\label{ineq_bootstrap_diag}
\operatorname{\mathbb{P}}\Bigg( \exists t\geq d,\, |S_t|_{V_t^{-1}}^2/2 \geq 4 \log\!\left(\frac{\prod_{l=1}^d \big(\log(N_{t,l})+3\big) C}{\delta} \right)\Bigg)\leq \delta\,,$$ where $C$ is a constant such that $\log(C)=1-2\log(C_{1/2})$. Let $A$ be the event that appears in with $\delta/2$ instead of $\delta$, $B$ be the event that appears in with $\delta/2$ instead of $\delta$ and $D$ be such that $$\begin{aligned}
D:=\bigg\{\exists t\geq d,\ |S_t|_{V_t^{-1}}^2/2 &\geq \log(2/\delta)+\sum_{l=1}^d 2\log\big(|S_{t,l}|/\sqrt{2 N_{t,l}} + 1\big)+\nonumber\\
&d \log\left(\frac{4}{d} \log\!\left(2\frac{\prod_{l=1}^d \big(\log(N_{t,l})+3\big) C}{\delta} \right) + 1\right)+ d\Big(1+2\log\big(1/(C_\beta)\big)\Big)\bigg\}\,.\end{aligned}$$ By and , it holds $$\begin{aligned}
\operatorname{\mathbb{P}}(D)&\leq \operatorname{\mathbb{P}}(D\cap B^c) +\operatorname{\mathbb{P}}(B)\\
&\leq \operatorname{\mathbb{P}}(A)+\operatorname{\mathbb{P}}(B)\leq \delta\,.\end{aligned}$$ We just proved that $$\begin{aligned}
\operatorname{\mathbb{P}}\Bigg( \exists t\geq d,\, &|S_t|_{V_t^{-1}}^2/2 \geq \log(2/\delta)+(1+\beta)\sum_{l=1}^d \log\!\big(\log(N_{t,l})+3\big)+\nonumber\\
& d \log\left(\frac{4}{d} \log\!\left(2\frac{\prod_{l=1}^d \big(\log(N_{t,l})+3\big) C}{\delta} \right) + 1\right)+ d\Big(1+2\log\big(1/(C_\beta)\big)\Big)\Bigg)\leq \delta\label{ineq_final_diag} \,.\end{aligned}$$ To conclude we will specify in two ways. First if $t\leq n$, using that in this case $N_{t,l}\leq n$, we obtain $$\begin{aligned}
\operatorname{\mathbb{P}}\Bigg( \exists d\leq t\leq n,\, &|S_t|_{V_t^{-1}}^2/2 \geq \log(2/\delta)+(1+\beta)d \log\!\big(\log(n)+3\big)+\nonumber\\
& d \log\left(\frac{4}{d}\log\!\left(2\frac{\big(\log(n)+3\big) C}{\delta} \right) + 1\right)+ d\Big(1+2\log\big(1/(C_\beta)\big)\Big)\Bigg)\leq \delta\,.\end{aligned}$$ And using again $\log(x+y)\leq \log(x) +y/x$, for $\beta=1/2$ and $\widetilde{C}:= 5\log(2C)$ we get $$\operatorname{\mathbb{P}}\left(\exists t\geq d,\, |S_t|_{V_t^{-1}}^2/2 \geq \log(1/\delta)+d\log\!\big(4\log(1/\delta)+1\big)+6\sum_{l=1}^d \log\!\big(\log(N_{t,l})+3\big)+d\widetilde{C}\right)\leq \delta\,.$$
Tracking results {#app:proof_tracking}
================
This section is devoted to proving Proposition \[prop:tracking\_tw\]. We need one tool extracted from @garivier2016optimal, namely the following tracking lemma, which corresponds to Lemma 15 of that reference.
For all $t\geq 1$ $$\left| \sum_{s=1}^t w'(s) -N(t) \right|_{\infty}\leq K\,.$$ \[lem:tracking\]
Thanks to Lemma \[lem:tracking\] and the definition of the weights in we have $$\begin{aligned}
\left| \sum_{s=1}^t {\widetilde{w}}(s)- N(t) \right|_{\infty}&=\left|\sum_{s=1}^t w'(s) -\frac{\gamma_s}{1-\gamma_s}\pi+\frac{\gamma_s}{1-\gamma_s}w'(s) -N(t) \right|_{\infty}\\
&\leq \left|\sum_{s=1}^t w'(s) -N(t) \right|_{\infty}+\sum_{s=1}^t\frac{\gamma_s}{1-\gamma_s}|\pi - w'(s)|_{\infty}\\
&\leq K+\sum_{s=1}^t 2 \gamma_s\\
&\leq K+\sqrt{t}\leq 2 K \sqrt{t}\,,
\end{aligned}$$ where in the last line we used $\gamma_s=1/(4\sqrt{s})$ and a comparison between the series and the corresponding integral. This proves the first part of the proposition. For the second part, we use that $w'_a(t)\geq \gamma_t/K $ together with Lemma \[lem:tracking\]: $$\begin{aligned}
N_a(t)&\geq \sum_{s=1}^t w_a'(s) -\left| N_a(t) - \sum_{s=1}^t w'_a(s) \right|\\
&\geq \sum_{s=1}^t\frac{\gamma_s}{K}-K\geq\frac{\sqrt{t+1}-1}{2K}-K \geq \frac{\sqrt{t}}{4 K}-2K\,.
\end{aligned}$$
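As a sanity check of Lemma \[lem:tracking\], one can simulate a direct tracking rule, here assumed to pull the arm whose count lags furthest behind its cumulated weight (the exact rule is defined in @garivier2016optimal); the arm count, horizon, weight sequence and seed below are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(1)
K, T = 4, 500
cum_w = np.zeros(K)   # running sum of the weights w'(s)
N = np.zeros(K)       # pull counts N(t)
max_gap = 0.0
for t in range(T):
    w = rng.dirichlet(np.ones(K))   # an arbitrary weight sequence w'(t)
    cum_w += w
    a = int(np.argmax(cum_w - N))   # pull the most under-sampled arm
    N[a] += 1
    max_gap = max(max_gap, float(np.abs(cum_w - N).max()))
```

For this greedy rule the deficits sum to zero and each stays above $-1$, so the sup-norm gap never exceeds $K-1$, consistent with the lemma's bound of $K$.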
Online Concave Optimization {#app:proof_online_regret}
===========================
We consider the classical setting of online optimization on the simplex $\Sigma_K$. Consider a sequence of gains $f_t \in \operatorname{\mathbb{R}}^K$ such that $0\leq f_{t,a}\leq C_t$, for some constants $C_t$. The objective is to minimize the regret against any constant strategy ${w^\star}\in\Sigma_K$, $$\sum_{t=1}^T f_t\cdot ({w^\star}-w_t)\,.$$ To this end we can use the Exponential Weights algorithm: let $w_1=\pi$ be the uniform distribution and define the subsequent weights as follows $$w_{t+1} = \operatorname*{arg\,max}_{w\in\Sigma_K} \eta_{t+1} \sum_{s=1}^{t} w \cdot f_s-\operatorname{kl}(w,\pi)\,,$$ where $\eta_t$ is the learning rate. These weights admit the closed-form expression $$\label{eq:closed_formula_exp}
w_{t+1,a}=\frac{e^{\eta_{t+1} G_{t,a}}}{\sum_{b=1}^K e^{\eta_{t+1} G_{t,b}}}\,,$$ where $G_t=\sum_{s=1}^t f_s$, with the convention $G_0=0$. The next lemma is a simple adaptation of Theorem 2.4 of [@bubeck2011introduction]. We include its proof for the sake of completeness.
If $\eta_t$ is non-increasing, for all ${w^\star}\in\Sigma_K$, $$\sum_{t=1}^T f_t\cdot ({w^\star}-w_t)\leq \frac{\log(K)}{\eta_T}+\sum_{t=1}^T 2\eta_t C_{t}^2\,.$$ \[lem:regret\_online\_linear\]
We decompose the following quantity in two terms $$\label{eq:decomposition_regret}
-w_t\cdot f_t = \frac{1}{\eta_t} \log \operatorname{\mathbb{E}}_{a\sim w_t} e^{\eta_t (f_{t,a}-\operatorname{\mathbb{E}}_{b\sim w_t}f_{t,b})}-\frac{1}{\eta_t}\log\operatorname{\mathbb{E}}_{a\sim w_t} e^{\eta_t f_{t,a}}\,.$$ To bound the first term we use the Hoeffding inequality $$\label{eq:regret_first_term}
\frac{1}{\eta_t} \log \operatorname{\mathbb{E}}_{a\sim w_t} e^{\eta_t (f_{t,a}-\operatorname{\mathbb{E}}_{b\sim w_t}f_{t,b})}\leq 2\eta_t C_t^2\,.$$ For the second term, we consider the potential function $$\Phi_t(\eta)=\frac{1}{\eta}\log\left( \frac{1}{K} \sum_{a=1}^{K} e^{\eta G_{t,a}}\right)\,,$$ with the convention $\Phi_0(\eta)=0$. Thanks to we have $$\begin{aligned}
-\frac{1}{\eta_t}\log\operatorname{\mathbb{E}}_{a\sim w_t} e^{\eta_t f_{t,a}} &=-\frac{1}{\eta_t}\log\frac{\sum_{a=1}^K e^{\eta_t G_{t, a}}}{\sum_{a=1}^K e^{\eta_t G_{t-1, a}}} \nonumber\\
&= \Phi_{t-1}(\eta_t) -\Phi_t(\eta_t) \label{eq:regret_second_term}\,.\end{aligned}$$ Putting together , , and summing over $t$ we obtain $$\sum_{t=1}^T f_t\cdot ({w^\star}-w_t)\leq \sum_{t=1}^T 2\eta_t C_t^2+\sum_{t=1}^T \big(\Phi_{t-1}(\eta_t)-\Phi_{t}(\eta_t)\big)+\sum_{t=1}^T f_t\cdot {w^\star}\,.$$ An Abel transformation on the penultimate term of the previous inequality leads to $$\sum_{t=1}^T \big(\Phi_{t-1}(\eta_t)-\Phi_{t}(\eta_t)\big) = \sum_{t=1}^{T-1}\big( \Phi_t(\eta_{t+1})-\Phi_t(\eta_t)\big)-\Phi_T(\eta_T)\,,$$ where we used that $\Phi_0(\eta_1)=0$. Since it holds that $$\begin{aligned}
-\Phi_T(\eta_T)=\frac{1}{\eta_T}\log(K)-\frac{1}{\eta_T}\log\left(\sum_{a=1}^K e^{\eta_T G_{T,a}}\right)
\leq \frac{1}{\eta_T}\log(K)-\max_{a\in[1,K]}G_{T,a}\,,\end{aligned}$$ we get $$\sum_{t=1}^T f_t\cdot ({w^\star}-w_t)\leq \frac{\log(K)}{\eta_T}+\sum_{t=1}^T 2\eta_t C_t^2+\sum_{t=1}^{T-1} \big(\Phi_{t}(\eta_{t+1})-\Phi_{t}(\eta_t)\big)\,.$$ To conclude it remains to show that $\Phi_t(\cdot)$ is non-decreasing for all $t$ since $\eta_t$ is non-increasing. To this end we just check that $\Phi'_t(\eta)\geq 0$, $$\begin{aligned}
\Phi_t'(\eta)&=\frac{-1}{\eta^2}\log\left(\frac{1}{K}\sum_{a=1}^K e^{\eta G_{t,a}}\right)+\frac{1}{\eta}\frac{\sum_{a=1}^{K}e^{\eta G_{t,a}}G_{t,a}}{\sum_{a=1}^{K}e^{\eta G_{t,a}}}\\
&= \frac{1}{\eta^2}\operatorname{kl}(w_t^\eta,\pi)\geq 0\,,\end{aligned}$$ where $w_{t,a}^\eta= e^{\eta G_{t,a}}/(\sum_{b=1}^K e^{\eta G_{t,b}})$.
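The closed-form update can be sketched as follows (the gain sequence and learning-rate schedule below are illustrative):

```python
import numpy as np

def exp_weights(gains, eta_schedule):
    """Iterates of Exponential Weights on the simplex, via the closed
    formula w_{t+1,a} = exp(eta_{t+1} G_{t,a}) / sum_b exp(eta_{t+1} G_{t,b})."""
    K = len(gains[0])
    weights = [np.full(K, 1.0 / K)]        # w_1 = pi, the uniform distribution
    G = np.zeros(K)                        # cumulated gains, G_0 = 0
    for t, f in enumerate(gains, start=1):
        G += np.asarray(f, dtype=float)
        eta = eta_schedule(t + 1)          # eta_{t+1}
        z = np.exp(eta * (G - G.max()))    # shift by G.max() for numerical stability
        weights.append(z / z.sum())
    return weights

ws = exp_weights([[1.0, 0.0], [1.0, 0.0]], lambda t: 1.0 / np.sqrt(t))
```

Subtracting `G.max()` before exponentiating leaves the normalized weights unchanged but avoids overflow when the cumulated gains grow.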
We are now ready to prove Proposition \[prop:regret\_bound\].
We will use Lemma \[lem:regret\_online\_linear\] with the choices $$C_t=\begin{cases}
M\sqrt{t}&\text{ if } t< g(T)\\
L &\text{ else}
\end{cases},
\qquad \eta_t=\frac{1}{\sqrt{t}}\,.$$ Indeed thanks to Assumption \[assp:bounded\_gradient\] and the definition of ${\mathcal{E}}_{\varepsilon}(T)$ we know that $0\leq\nabla_a F\big({\widetilde{w}}(t),{\widehat{\mu}}(t)\big)\leq C_t $ on this event. Therefore, using Lemma \[lem:regret\_online\_linear\] up to a translation of all the indices by $K-1$, we obtain the following regret bound $$\label{eq:regret_beg_1}
\sum_{t=K}^T \operatorname{\mathrm{Clip}}_s\!\Big(\nabla F \big({\widetilde{w}}(t),{\widehat{\mu}}(t)\big)\Big)\cdot\big({w^\star}(\mu)-{\widetilde{w}}(t)\big)\leq \log(K)\sqrt{T}+\sum_{t=K}^{T}\frac{2 C_{t}^2}{\sqrt{t}}\,.$$ It remains to control the terms inside the sums for $t\leq g(T)$. Using that the clipped sub-gradient is bounded by $C_t$ and Hölder’s inequality, we have $$\begin{aligned}
\sum_{t=K}^{g(T)-1}\Big|\operatorname{\mathrm{Clip}}_s\!\Big(\nabla F \big({\widetilde{w}}(t),{\widehat{\mu}}(t)\big)\Big)\cdot\big({w^\star}(\mu)-{\widetilde{w}}(t)\big)\Big|&\leq \sum_{t=K}^{g(T)-1} K M \sqrt{t}\leq K M \int_{x=0}^{T^{1/4}} \sqrt{x}{\mathop{}\!\mathrm{d}}{x}\nonumber\\
&= K M \frac{2 T^{3/8}}{3}\leq K M \sqrt{T}\,.
\label{eq:first_sum_regret_gt}\end{aligned}$$ Similarly, one obtains, using the definition of $C_t$ $$\begin{aligned}
\sum_{t=1}^T \frac{2 C_t^2}{\sqrt{t}}&\leq \sum_{t=g(T)}^T \frac{2 L^2}{\sqrt{t}}+2 M^2\sum_{t=1}^{g(T)-1} \sqrt{t}\nonumber\\
&\leq \int_{0}^T \frac{2L^2}{\sqrt{x}}{\mathop{}\!\mathrm{d}}{x}+2M^2\int_{0}^{T^{1/4}}\sqrt{x}{\mathop{}\!\mathrm{d}}{x}\nonumber\\
&=4L^2\sqrt{T}+\frac{4 M^2}{3}T^{3/8}\leq (4 L^2+2 M^2)\sqrt{T}.
\label{eq:second_sum_regret_gt}\end{aligned}$$ Thus, combining , and , we get $$\sum_{t=g(T)}^T \nabla F \big({\widetilde{w}}(t),{\widehat{\mu}}(t)\big)\cdot\big({w^\star}(\mu)-{\widetilde{w}}(t)\big)\leq \underbrace{(\log(K)+K M + 4 L^2+2 M^2)}_{:=C_0}\sqrt{T}\,.$$ Note that the clipping has no effect since $t\geq g(T)$ and $T\geq T_M $. The concavity of $F\big(\cdot,{\widehat{\mu}}(t)\big)$ allows us to conclude $$\sum_{t=g(T)}^T F\big({w^\star},{\widehat{\mu}}(t)\big)- F\big({\widetilde{w}}(t),{\widehat{\mu}}(t)\big)\leq C_0\sqrt{T}\,.$$
Other Proofs {#app:other_proofs}
============
We regroup in this section the proofs of auxiliary results.
Technical lemmas
----------------
Let $C_3({\varepsilon})>0$ be a constant, depending on ${\varepsilon}$ and $K$, such that for all $T\geq C_3({\varepsilon})$, $$6K \log\big(\log(T)+3\big)+K\widetilde{C}\leq {\varepsilon}T\,.$$ Then, using that $N_a(T)\leq T$, for all $a$, for $$T\geq \max\left(C_3({\varepsilon}),\frac{\log(1/\delta)+K\log\!\big(4\log(1/\delta)+1\big)}{T^\star(\mu)^{-1}-6{\varepsilon}}\right)\,,$$ it holds $$\begin{aligned}
\frac{\beta\big(N(T),\delta\big)}{T}&\leq \frac{\log(1/\delta)+K\log\!\big(4\log(1/\delta)+1\big)+6 K \log\big(\log(T)+3)+K\widetilde{C}}{T}\\
&\leq \frac{\log(1/\delta)+K\log\!\big(4\log(1/\delta)+1\big)}{T}+{\varepsilon}\\
&\leq T^\star(\mu)^{-1}-5{\varepsilon}\,,
\end{aligned}$$ which concludes the proof.
This is an adaptation of the proof of Lemma 19 of @garivier2016optimal, using the Chernoff inequality for Gaussian distributions. We have $$\begin{aligned}
\operatorname{\mathbb{P}}_\mu\big({\mathcal{E}}_{\varepsilon}(T)^{c}\big)& \leq \sum_{t=g(T)}^{T}\operatorname{\mathbb{P}}_\mu\big({\widehat{\mu}}(t)\notin {\mathcal{B}}_{\infty}(\mu,\kappa_{\varepsilon})\big)\\
&= \sum_{t=g(T)}^T\sum_{a=1}^K \left(\operatorname{\mathbb{P}}_\mu\big({\widehat{\mu}}_a(t)\leq \mu_a-\kappa_{\varepsilon}\big)+\operatorname{\mathbb{P}}_\mu\big({\widehat{\mu}}_a(t)\geq \mu_a+\kappa_{\varepsilon}\big)\right)\,.
\end{aligned}$$ Thanks to we know that for all $a$, $\sqrt{t}/(4K)-2K\leq N_a(t)\leq t$. Denote by ${\widehat{\mu}}_{a,n}$ the empirical mean of the first $n$ samples from arm $a$ (so that ${\widehat{\mu}}_a(t)={\widehat{\mu}}_{a,N_a(t)}$). Using the union bound and then the Chernoff inequality, we get $$\begin{aligned}
\operatorname{\mathbb{P}}_\mu\big({\widehat{\mu}}_a(t)\leq \mu_a-\kappa_{\varepsilon}\big)&\leq \sum_{\sqrt{t}/(4K)-2K\leq n\leq t} \operatorname{\mathbb{P}}_\mu({\widehat{\mu}}_{a,n}\leq \mu_a-\kappa_{\varepsilon})\\
&\leq \sum_{\sqrt{t}/(4K)-2K\leq n\leq t} e^{-n \kappa_{\varepsilon}^2/2}\leq \frac{e^{-(\sqrt{t}/(4K)-2K) \kappa_{\varepsilon}^2/2}}{1- e^{-\kappa_{\varepsilon}^2/2}}\\
&\leq \frac{2}{\kappa_{\varepsilon}^2}e^{-(\sqrt{t}/(4K)-2K-1) \kappa_{\varepsilon}^2/2}\,.
\end{aligned}$$ Similarly, $$\operatorname{\mathbb{P}}_\mu\big({\widehat{\mu}}_a(t)\geq \mu_a+\kappa_{\varepsilon}\big)\leq \frac{2}{\kappa_{\varepsilon}^2}e^{-(\sqrt{t}/(4K)-2K-1) \kappa_{\varepsilon}^2/2}\,.$$ Thus, for the choice of constants $$C_4({\varepsilon}):= \frac{\kappa_{\varepsilon}^2}{16K}\qquad C_5({\varepsilon}):= \frac{4 K}{\kappa_{\varepsilon}^2}e^{(2K+1) \kappa_{\varepsilon}^2/2}$$ it holds that $$\operatorname{\mathbb{P}}_\mu\big({\mathcal{E}}_{\varepsilon}(T)^{c}\big) \leq \sum_{t=g(T)}^{T} C_5({\varepsilon})e^{-C_4({\varepsilon}) 4\sqrt{t}}\leq C_5({\varepsilon}) T e^{-C_4({\varepsilon}) 4\sqrt{g(T)}}\leq C_5({\varepsilon}) T e^{-C_4({\varepsilon}) T^{1/8}}\,.$$
Proof of Proposition \[prop:regularity\]
----------------------------------------
The fact that there exists $\kappa<\kappa_0$ such that ${\mathcal{B}}_{\infty}(\mu,\kappa)\subset{\mathcal{S}}_{i(\mu)}$ is just a consequence of ${\mathcal{S}}_{i(\mu)}$ being open. For such $\kappa$ we know that for any $\mu'\in{\mathcal{B}}_{\infty}(\mu,\kappa)$ $$F(w,\mu')=\min_{i\neq i(\mu)}\inf_{\lambda\in{\mathcal{S}}_i} \sum_{a=1}^K w_a{\mathrm{d}}(\mu_a',\lambda_a)\,.$$ Then, thanks to Theorem 4 of @degenne2019pure (1.), for all $i$, the functions $$(w,\mu')\to \inf_{\lambda\in{\mathcal{S}}_i} \sum_{a=1}^K w_a{\mathrm{d}}(\mu_a',\lambda_a)$$ are continuous on $\Sigma_K\times\Bar{{\mathcal{B}}}_{\infty}(\mu,\kappa/2)$ (where $\Bar{B}$ denotes the closure of the set $B$); thus $F$ is continuous, hence uniformly continuous, on this compact set. Therefore, for all ${\varepsilon}>0$ there exists $\kappa_{\varepsilon}\leq \kappa/2$ such that $$|\mu'-\mu''|_{\infty}\leq \kappa_{\varepsilon}\Rightarrow |F(w,\mu')-F(w,\mu'')|\leq {\varepsilon}\,.$$
Counter example for Assumption \[assp:bounded\_gradient\] {#app:counter_example}
---------------------------------------------------------
We now present an example of a problem where the sub-gradients can be unbounded. We set ${\mathcal{M}}= \operatorname{\mathbb{R}}^2$, ${\mathcal{I}}=[1,2]$ and $${\mathcal{S}}_1={\mathcal{B}}_\infty\big( (0,0), 1/4\big) \qquad {\mathcal{S}}_2 = \{ (x,y)\in{\mathcal{M}}:\ x>0,\,y>1/x\}\,.$$ For the bandit problem $\mu = (0,0)$ we have $i(\mu)=1$ and $$F(w,\mu) = \frac{1}{2} \frac{w_2^2}{w_1} + \frac{1}{2} \frac{w_1^2}{w_2}\,.$$ Thus the gradient of $F(\cdot,\mu)$ at $w$ in the interior of the simplex is $$\nabla F(w,\mu)=
\begin{bmatrix}
\frac{w_1}{w_2}-\frac{1}{2}\frac{w_2^2}{w_1^2}\\
\frac{w_2}{w_1}-\frac{1}{2}\frac{w_1^2}{w_2^2}\end{bmatrix}\!,$$ which is unbounded when, for example, $w_1$ goes to $0$ while $w_2$ stays fixed.
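A quick numerical check (with $K=2$ and $w=(w_1,1-w_1)$ on the simplex) confirms the blow-up:

```python
import numpy as np

def grad_F(w1, w2):
    # gradient of F(., mu) from the counter-example, on the interior of the simplex
    return np.array([w1 / w2 - 0.5 * w2 ** 2 / w1 ** 2,
                     w2 / w1 - 0.5 * w1 ** 2 / w2 ** 2])

# the gradient norm grows without bound as w1 -> 0
norms = [np.abs(grad_F(w1, 1.0 - w1)).max() for w1 in (1e-1, 1e-2, 1e-3)]
```

The dominating term is $-w_2^2/(2w_1^2)$, which scales like $1/w_1^2$, so no uniform bound $L$ on the sub-gradients can hold here.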
---
abstract: 'Several generic summarization algorithms were developed in the past and successfully applied in fields such as text and speech summarization. In this paper, we review and apply these algorithms to music. To evaluate this summarization’s performance, we adopt an extrinsic approach: we compare a Fado Genre Classifier’s performance using truncated contiguous clips against the summaries extracted with those algorithms on 2 different datasets. We show that , LexRank and all improve classification performance in both datasets used for testing.'
author:
- 'Francisco Raposo, Ricardo Ribeiro, David Martins de Matos, [^1] [^2] [^3] [^4] [^5]'
bibliography:
- 'on-the-application-of-generic-summarization-algorithms-to-music.bib'
title: On the Application of Generic Summarization Algorithms to Music
---
Introduction
============
Several algorithms to summarize music have been published [@Chai; @Cooper2003; @Peeters2002; @Peeters2003; @Chu2000; @Cooper2002; @Glaczynski2011; @Bartsch2005], mainly for popular music songs whose structure is repetitive enough. However, those algorithms were devised with the goal of producing a thumbnail of a song as its summary, the same way an image’s thumbnail is that image’s summary. Therefore, the goal is to output a shorter version of the original song so that people can quickly get the gist of the whole piece without listening to all of it. These algorithms usually extract continuous segments because of their human consumption-oriented purpose.
Generic summarization algorithms have also been developed and are usually applied in text summarization. Applying them to music to extract a thumbnail is not ideal, because a “good” thumbnail entails requirements such as coherence and clarity. The summaries these algorithms produce are composed of small segments from different parts of the song, which makes them unsuitable for human enjoyment (and thus may help evade copyright issues). Nevertheless, most of these algorithms produce summaries that are both concise and diverse.
We review several summarization algorithms in order to summarize music for automatic, instead of human, consumption. The idea is that a summary clip contains more relevant and less redundant information and, thus, may improve the performance of tasks that rely on processing just a portion of the whole audio signal. We evaluate the summarization’s contribution by comparing the performance of a Fado (a Portuguese music style) Genre Classifier [@Girao2014] using the extracted summaries of the songs against using contiguous clips (truncated from the beginning, middle and end of the song). We summarize music using MMR, LexRank and LSA, and also, for comparison purposes, with a music-specific method called Average Similarity. We present results on 2 datasets showing that MMR, LexRank and LSA improve classification performance under certain parameter combinations.
Section \[sec:related-work\] reviews related work on summarization. Specifically, the following algorithms are reviewed: Average Similarity in section \[sub:avg-sim\], MMR in section \[sub:mmr\], LexRank in section \[sub:lexrank\] and LSA in section \[sub:lsa\]. Section \[sec:experiments\] describes the details of the experiments we performed for each algorithm and introduces the Fado Classifier. Section \[sec:results\] reports and discusses our classification results and section \[sec:conclusions\] concludes this paper with some remarks and future work.
Summarization\[sec:related-work\]
=================================
Several algorithms for both generic and music summarization have been proposed. However, music summarization algorithms were developed to extract an audible summary so that any person can listen to it coherently. Our focus is on automatic consumption, so coherence and clarity are not mandatory requirements for our summaries.
LexRank [@Erkan2004] and TextRank [@Mihalcea2004] are centrality-based methods that rely on the similarity between every pair of sentences. These are based on Google’s PageRank [@Brin1998] algorithm for ranking web pages and are successfully applied in text summarization. GRASSHOPPER [@Zhu2007] is another method applied in text summarization, as well as social network analysis, focusing on improving diversity in ranking sentences. MMR [@Zechner2000; @Murray2005], applied in speech summarization, is a query-specific method that selects sentences according to their similarity to the query and to the sentences previously selected. LSA [@Gong2001] is another method used in text summarization, based on the mathematical technique of Singular Value Decomposition (SVD).
Music-specific summarization structurally segments songs and then selects which segments to include in the summary. This segmentation aims to extract meaningful segments (e.g. chorus, bridge). [@Chai] presents two approaches for segmentation: one detecting key changes between frames and one detecting repeating structure. In [@Cooper2003], segmentation is achieved by correlating a Gaussian-tempered “checkerboard” kernel along the main diagonal of the similarity matrix of the song, outputting segment boundaries. Then, a segment-indexed similarity matrix is built, containing the similarity between every pair of detected segments. SVD is applied to that matrix to find its rank-K approximation. Segments are then clustered to output the song’s structure. In [@Peeters2002; @Peeters2003], songs are segmented in 3 stages. First, a similarity matrix is built and analyzed for fast changes, outputting segment boundaries. These segments are clustered to output the “middle states”. Finally, an HMM is applied to these states, producing the final segmentation. These algorithms then follow some strategy to select the appropriate segments. [@Chu2000] groups similar segments of the song (based on a divergence measure), labels them, and then generates the summary by taking the longest sequence of segments belonging to the same cluster. In [@Cooper2002; @Glaczynski2011], a method called Average Similarity is used to extract a thumbnail $L$ seconds long that is most similar to the whole piece. Another method for this task is the Maximum Filtered Correlation [@Bartsch2005], which starts by building a similarity matrix and then a filtered time-lag matrix, which has the similarity between extended segments embedded in it. Finding the maximum value in the latter yields the starting position of the summary.
To apply generic summarization algorithms to music, first we need to segment the song into musical words/terms. This fixed segmentation differs a lot from the structural segmentation used in music-specific algorithms. Fixed segmentation does not take into account the human perception of musical structure. It simply allows us to look at the variability and repetition of the signal and use them to find the most important parts. Structural segmentation aims to find meaningful segments (to people) of the song so that we can later select those segments to include in the summary. This type of segmentation often leads to audible summaries which violate copyrights of the original songs. Fixed segmentation combined with generic summarization algorithms may help evade those issues.
In the following sections we review the algorithms we chose to evaluate: Average Similarity, MMR, LexRank, and LSA.
Average Similarity\[sub:avg-sim\]
---------------------------------
This approach to summarization has the purpose of finding a fixed-length continuous music segment, of duration $L$, most similar to the entire song. This method was introduced in [@Cooper2002] and later used in other research efforts such as [@Glaczynski2011].
The method consists of building a similarity matrix for the song and calculating an aggregated measure of similarity between the whole song and every $L$ seconds long segment.
In [@Cooper2002], 45 MFCCs are computed but only the 15 with highest variance are kept. The cosine distance is used to calculate pairwise similarities.
In [@Glaczynski2011], the first 13 MFCCs and the spectral centre of gravity (sound “brightness”) are used. The Tchebychev distance was selected for building the similarity matrix.
Once the similarity between every frame is calculated, we build a similarity matrix $S$ and embed the similarity values between feature vectors $v_{i}$ and $v_{j}$ in it: $S\left(i,j\right)=s\left(v_{i},v_{j}\right)$.
The average similarity measure can be calculated by summing up columns (or rows, since the similarity matrix is symmetric) of the similarity matrix, according to the desired summary length $L$, starting from different initial frames. The maximum score will correspond to the segment that is most similar to the whole song. To find the best summary of length $L$, we must compute the score $Q_{L}\left(i\right)$:
$$Q_{L}\left(i\right)=\bar{S}\left(i,i+L\right)=\frac{1}{NL}\sum_{m=i}^{i+L}\sum_{n=1}^{N}S\left(m,n\right)$$
$N$ is the number of frames in the entire piece. The index $1\leq i\leq
\left(N-L\right)$ of the best summary starting frame is the one that maximizes $Q_{L}\left(i\right)$.
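The score $Q_L$ above can be computed directly from the similarity matrix. The sketch below is a minimal illustration of ours, assuming cosine similarity over per-frame feature vectors (as in [@Cooper2002]); the function name and toy data are illustrative, not from the paper:

```python
import numpy as np

# Minimal Average Similarity sketch: cosine similarity over frame features.
def best_summary_start(feats, L):
    """Index i maximizing Q_L(i), the average similarity between the
    L-frame window starting at i and the whole piece."""
    X = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    S = X @ X.T                              # similarity matrix S(m, n)
    col = S.sum(axis=1)                      # inner sum over n
    N = len(feats)
    scores = [col[i:i + L].sum() / (N * L) for i in range(N - L)]
    return int(np.argmax(scores))

# Toy piece: 5 noise frames, 20 near-identical "theme" frames, 5 noise frames.
rng = np.random.default_rng(0)
theme = rng.normal(size=8)
feats = np.vstack([rng.normal(size=(5, 8)),
                   theme + 0.01 * rng.normal(size=(20, 8)),
                   rng.normal(size=(5, 8))])
start = best_summary_start(feats, L=5)
assert 5 <= start <= 20                      # best window lies inside the theme
```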
The evaluations of this method in the literature are subjective (human) evaluations that take into account whether the generated summaries include the most memorable part(s) of the song [@Cooper2002]. Other evaluations are averages of scores given by test subjects, regarding specific qualities of the summary such as Clarity, Conciseness and Coherence [@Glaczynski2011].
\[sub:mmr\]
-----------
MMR [@Carbonell1998] selects sentences from the signal according to their relevance and to their diversity with respect to the already selected sentences, in order to output low-redundancy summaries. This approach has been used in speech summarization [@Zechner2000; @Murray2005]. It is a query-specific summarization method, though it is possible to produce generic summaries by taking the centroid vector of all the sentences as the query (as in [@Murray2005]).
MMR iteratively selects the sentence $S_i$ that maximizes the following score:
$$\lambda\left({Sim_{1}}\left(S_{i},Q\right)\right)-\left(1-\lambda\right)\max_{S_{j}}Sim_{2}\left(S_{i},S_{j}\right)$$
$Sim_{1}$ and $Sim_{2}$ are the (possibly different) similarity metrics; $S_{i}$ are the unselected sentences and $S_{j}$ are the previously selected ones; $Q$ is the query; and $\lambda$ is a configurable parameter that allows the selection of the next sentence to be based on its relevance, its diversity, or a linear combination of both. Usually, sentences are represented as TF-IDF score vectors. The cosine similarity is frequently used for both $Sim_{1}$ and $Sim_{2}$.
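A minimal sketch of this selection loop, using the centroid-as-query variant of [@Murray2005] and cosine similarity for both $Sim_1$ and $Sim_2$ (function names and toy data are our own illustrative assumptions):

```python
import numpy as np

# Minimal MMR sketch: relevance to the centroid query traded off against
# redundancy with respect to already-selected sentences.
def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def mmr_select(sents, k, lam=0.5):
    query = sents.mean(axis=0)               # centroid as generic query
    selected, remaining = [], list(range(len(sents)))
    while remaining and len(selected) < k:
        def score(i):
            redundancy = max((cos(sents[i], sents[j]) for j in selected),
                             default=0.0)
            return lam * cos(sents[i], query) - (1 - lam) * redundancy
        best = max(remaining, key=score)
        selected.append(best)
        remaining.remove(best)
    return selected

# Sentences 0 and 1 are near-duplicates; MMR picks one of them, then the
# dissimilar sentence 2 instead of the redundant twin.
sents = np.array([[1.0, 0.0], [0.99, 0.1], [0.0, 1.0]])
assert mmr_select(sents, k=2) == [1, 2]
```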
LexRank\[sub:lexrank\]
----------------------
LexRank [@Erkan2004] is a centrality-based method that relies on the pairwise similarity between sentences. It is based on Google’s PageRank [@Brin1998] algorithm for ranking web pages. The output is a list of ranked sentences from which we can extract the most central ones to produce a summary.
First, we compare all sentences, normally represented as TF-IDF score vectors, to each other using a similarity measure. LexRank uses the cosine similarity. After this step, we build a graph where each sentence is a vertex and edges are created between sentences according to their pairwise similarity. Usually, the similarity score must be higher than some threshold to create an edge. LexRank can be used with both weighted and unweighted edges. Then, we perform the following calculation iteratively for each vertex until convergence is achieved (when the error rate of two successive iterations is below a certain threshold for every vertex):
$$S\left(V_{i}\right)=\frac{\left(1-d\right)}{N}+ S_{1}\left(V_i\right)$$
$$S_{1}\left(V_{i}\right)=d\times\sum_{V_{j}\in
adj\left[V_{i}\right]}\frac{Sim\left(V_{i},V_{j}\right)}{\sum_{V_{k}\in adj\left[V_{j}\right]}Sim\left(V_{j},V_{k}\right)}S\left(V_{j}\right)$$
$d$ is a damping factor to guarantee the convergence of the method, $N$ is the total number of vertices and $S\left(V_{i}\right)$ is the score of vertex $i$. This is the case where edges are weighted. When using unweighted edges, the equation is simpler:
$$S\left(V_{i}\right)=\frac{\left(1-d\right)}{N}+d\times\sum_{V_{j}\in
adj\left[V_{i}\right]}\frac{S\left(V_{j}\right)}{D\left(V_{j}\right)}$$
$D\left(V_{i}\right)$ is the degree (i.e., number of edges) of vertex $i$. We can construct a summary by taking the highest ranked sentences until a certain summary length is reached.
This method is based on the fact that sentences recommend each other. A sentence very similar to many other sentences will get a high score. Sentence score is also determined by the score of the sentences recommending it.
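The weighted-edge iteration above amounts to a damped power iteration over a row-normalized similarity matrix. A minimal sketch of ours (the threshold, damping factor and toy data are illustrative assumptions, and every sentence is assumed to keep at least one neighbour after thresholding):

```python
import numpy as np

# Minimal weighted LexRank sketch as a damped power iteration.
def lexrank(W, thresh=0.1, d=0.85, eps=1e-4):
    A = np.where(W >= thresh, W, 0.0)        # keep edges above the threshold
    np.fill_diagonal(A, 0.0)
    P = A / A.sum(axis=1, keepdims=True)     # P[j, i] = Sim(j,i) / sum_k Sim(j,k)
    N = len(W)
    s = np.full(N, 1.0 / N)
    while True:
        s_new = (1 - d) / N + d * (P.T @ s)  # the weighted-edge update above
        if np.abs(s_new - s).max() < eps:
            return s_new
        s = s_new

# Toy graph: sentence 0 is strongly similar to all others, so it is the most
# central and gets the highest score.
W = np.array([[1.0, 0.8, 0.8, 0.8],
              [0.8, 1.0, 0.2, 0.2],
              [0.8, 0.2, 1.0, 0.2],
              [0.8, 0.2, 0.2, 1.0]])
assert int(lexrank(W).argmax()) == 0
```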
\[sub:lsa\]
-----------
LSA is based on the mathematical technique SVD and was first used for text summarization in [@Gong2001]. SVD is used to reduce the dimensionality of an original matrix representation of the text. To perform LSA-based text summarization, we start by building a $T$ terms by $N$ sentences matrix $A$.
Each element of A, $a_{ij}=L_{ij}G_{i}$, has two weight components: a local weight and a global weight. The local weight is a function of the number of times a term occurs in a specific sentence and the global weight is a function of the number of sentences that contain a specific term.
Applying SVD to matrix $A$ results in a decomposition formed by three matrices: $U$, a $T\times N$ matrix of left singular vectors (its columns); $\Sigma$, an $N\times N$ diagonal matrix of singular values; and $V^{T}$, an $N\times N$ matrix of right singular vectors (its rows): $A=U\Sigma V^{T}$.
Singular values are sorted in descending order in matrix $\Sigma$ and are used to determine topic relevance. Each latent dimension corresponds to a topic. We calculate the rank-$K$ approximation by taking the first $K$ columns of $U$, the $K\times K$ sub-matrix of $\Sigma$ and the first $K$ rows of $V^{T}$. We can extract the most relevant sentences by iteratively selecting sentences corresponding to the indices of the highest values for each (most relevant) right singular vector.
In [@Steinberger2004], two limitations of this approach are discussed: the fact that $K$ is equal to the number of sentences in the summary, which, as it increases, tends to include less significant sentences; and that sentences with high values in several dimensions (topics), but never the highest, will never be included in the summary. To compensate for these problems, a sentence score was introduced and $K$ is chosen so that the $K^{th}$ singular value does not fall under half of the highest singular value: $score\left(j\right)=\sqrt{\sum_{i=1}^{k}v_{ij}^{2}\sigma_{i}^{2}}$.
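A minimal sketch of this ranking procedure, combining the SVD step with the Steinberger sentence score; the half-of-largest-singular-value cutoff for $K$ follows the text, while the binary term-by-sentence matrix and names are our own illustrative assumptions:

```python
import numpy as np

# Minimal LSA ranking sketch with the Steinberger score described above.
def lsa_rank(A):
    U, sigma, Vt = np.linalg.svd(A, full_matrices=False)
    K = int(np.sum(sigma >= 0.5 * sigma[0]))        # keep dominant topics
    # score(j) = sqrt(sum_{i <= K} v_ij^2 * sigma_i^2)
    scores = np.sqrt(((Vt[:K] ** 2) * (sigma[:K, None] ** 2)).sum(axis=0))
    return np.argsort(-scores)                      # best sentence first

# 4 terms x 3 sentences (binary weights); sentence 0 contains every term,
# so it dominates the leading topic and ranks first.
A = np.array([[1, 1, 0],
              [1, 0, 1],
              [1, 1, 0],
              [1, 0, 0]], dtype=float)
assert lsa_rank(A)[0] == 0
```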
Experiments\[sec:experiments\]
==============================
To evaluate these algorithms on music, we tested their impact on a Fado classifier. This classifier simply classifies a song as Fado or non-Fado. Fado is a Portuguese music genre whose instrumentation usually consists solely of stringed instruments, such as the classical guitar and the Portuguese guitar. The classifier is an SVM [@Chang2011].
The features used by the SVM consist of a 32-dimensional vector per song, which is a concatenation of 4 features: the average vector of the first 13 MFCCs of the song; energy; a 9-dimensional high-frequency rhythmic feature vector; and a 9-dimensional low-frequency rhythmic feature vector.
These rhythmic features are computed based on the FFT coefficients in the 20 Hz to 100 Hz range (low frequencies) and in the 8000 Hz to 11025 Hz range (high frequencies). Assuming $v$ is a matrix of FFT coefficients with frequency varying through columns and time through rows, each component of the 9-dimensional vector is: $maxamp$: max of the average $v$ along time; $minamp$: min of the average $v$ along time; number of $v$ values above 80% of $maxamp$; number of $v$ values above 15% of $maxamp$; number of $v$ values above $maxamp$; number of $v$ values below $minamp$; mean distance between peaks; standard deviation of distance between peaks; max distance between peaks.
These features capture rhythmic information in both low and high frequencies. Fado does not have much information in the low frequencies as it does not contain, for example, drum kicks. However, due to the string instruments used, Fado information content is higher in the high frequencies, making these features good for distinguishing it from other genres.
We used 2 datasets in our experiments, each consisting of 500 songs, half of which are Fado songs. The 250 Fado songs are the same in both datasets. The datasets are encoded as mono, 16-bit, 22050 Hz Microsoft WAV files. We will make the post-summarization datasets available upon request.
We used 5-fold cross validation when calculating classification performance. The classification performance was calculated first for the beginning, middle and end sections (of 30 s) of the songs to obtain a baseline, and then compared against classification using the summaries (also 30 s) for each parameter combination and algorithm.
For feature extraction we used OpenSMILE’s [@opensmile2013] MFCC implementation to extract feature vectors. We also used the Armadillo library [@armadillo2010] for matrix operations and the Marsyas library [@marsyas1999] for synthesizing the summaries.
For Average Similarity, we experimented with 3 different frame sizes (0.25, 0.5, and 1 s), with both 50% and no overlap. We also experimented with MFCC vector sizes of 12 and 24.
To use the generic summarization algorithms, however, we need additional processing steps. We adapted those algorithms to the music domain by mapping the audio signal frames (represented as vectors) to a discrete representation of words and sentences. For each piece being summarized, we cluster all of its frames using the mlpack’s [@mlpack2013] K-Means algorithm implementation which calculates the vocabulary for that song (i.e., each frame is now a word from that vocabulary). Then, we segment the whole piece into fixed-size sentences (e.g., 5-word sentences). This allows us to represent each sentence as a vector of word occurrences/frequencies (depending on the type of weighting chosen) which lets us compare sentences with each other using the cosine distance.
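The discretization pipeline just described (K-means vocabulary, fixed-size sentences, word-count vectors) can be sketched as follows. This is an illustrative sketch of ours: a toy Lloyd's-algorithm K-means stands in for the mlpack implementation the paper actually uses, and all names and data are hypothetical:

```python
import numpy as np

# Sketch of the discretization step: cluster frames into a vocabulary of
# "words", then count word occurrences per fixed-size "sentence".
def build_sentences(frames, vocab_size, sent_len, iters=20, seed=0):
    rng = np.random.default_rng(seed)
    cent = frames[rng.choice(len(frames), vocab_size, replace=False)]
    for _ in range(iters):                       # plain Lloyd iterations
        dist = ((frames[:, None, :] - cent[None]) ** 2).sum(-1)
        words = dist.argmin(1)                   # frame -> word id
        for k in range(vocab_size):
            if (words == k).any():
                cent[k] = frames[words == k].mean(0)
    n_sent = len(words) // sent_len
    sents = np.zeros((n_sent, vocab_size))
    for s in range(n_sent):                      # raw word-count vectors
        for w in words[s * sent_len:(s + 1) * sent_len]:
            sents[s, w] += 1
    return sents

frames = np.random.default_rng(1).normal(size=(100, 12))  # e.g. 12-dim MFCCs
sents = build_sentences(frames, vocab_size=5, sent_len=10)
assert sents.shape == (10, 5) and np.all(sents.sum(axis=1) == 10)
```

The resulting count vectors are what the TF-IDF-style weightings and the cosine comparisons in the generic algorithms operate on.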
In our implementation of MMR, we calculate the similarity between every pair of sentences only once and then apply the algorithm until the desired summary length is reached. We experimented using 3 different values for $\lambda$ (0.3, 0.5 and 0.7) and 4 different weighting types: raw (count of the term), binary (presence of the term), TF-IDF, and “dampened” TF-IDF (same as TF-IDF but takes the logarithm of TF instead of TF itself).
The damping factor used in LexRank was 0.85 and the convergence threshold was set to 0.0001. We also calculated the similarity between every pair of sentences only once, applying the iterative algorithm and picking sentences until the desired summary length is reached. We also tested LexRank using the same weighting types as for MMR.
We used Armadillo’s [@armadillo2010] implementation of the SVD operation to implement LSA. After sentence/word segmentation, we apply SVD to the term by sentences matrix (the column-wise concatenation of all sentence vectors). We then take the rank-K approximation of the decomposition, where the $K$th singular value is not smaller than half of the $\left(K-1\right)$th singular value. Then, we calculate the sentence score (as explained in section \[sub:lsa\]) for each sentence and pick sentences according to that ranking until the desired summary length is reached. We tested with both raw and binary weighting.
We tested MMR, LexRank and LSA with all combinations of the following parameter values: frame size of 0.5 s with no overlap and with 50% overlap (0.25 s hops); vocabulary sizes of 25, 50, and 100 words; and sentence sizes of 5, 10, and 20 words. We used MFCC vectors (of size 12) as features for these experiments, as they are widely used in many MIR tasks, including music summarization [@Cooper2003; @Chu2000; @Cooper2002; @Glaczynski2011].
Results\[sec:results\]
======================
We present only the most interesting results, since we tried many different parameter combinations for each algorithm. The Frame/Hop Size columns indicate the frame/hop sizes in seconds, which can be interpreted as overlap (e.g., the pair 0.5, 0.25 stands for frames of 0.5s duration with a hop size of 0.25s, which corresponds to a 50% overlap between frames). The classification accuracy results for the 30s contiguous segments which constitute the baseline are 95.8%, 96.2%, and 94% for the beginning, middle, and end sections, respectively, on dataset 1 and 85.2%, 92%, and 90.4%, on dataset 2.
The Average Similarity algorithm was successful in improving classification performance on dataset 1 (98.8% maximum accuracy, obtained with a frame size of 0.5 s, no overlap, and 24 MFCCs), but not on dataset 2 (90.8% maximum accuracy, with a frame size of 0.25 s, no overlap, and 12 MFCCs).
In table \[tab:results\], we can see that although not all parameter combinations for MMR yielded an increase in classification performance on both datasets, some combinations did. For example, the best combination on dataset 1 yielded 100% accuracy, but on dataset 2 it yielded only 90.8%, which is lower than the baseline (92%). However, all other parameter combinations presented in the table yield better results than the baseline on both datasets. We also noticed that smaller values of $\lambda$ resulted in worse accuracy scores.
We can also see that the best parameter combination for LexRank on dataset 1 was also the best on dataset 2. Besides that, all other presented combinations are better when compared to the corresponding baseline, which suggests that these parameter combinations might also be good for other datasets.
Our experiments show that LSA works best with binary weighting when applied to music. This has to do with the fact that some musical sentences, namely at the beginning of songs, contain very few repeating terms, which increases term-frequency scores. Moreover, those terms might not even appear anywhere else in the song, which will, in turn, decrease the document frequency of the term, thus increasing the inverse document frequency score. These issues are detected when LSA chooses those (unwanted) sentences because they have a high score on a certain latent topic. Binary weighting alleviates these problems because we only check for the presence of a term (not its frequency) and the document frequency of that term is not taken into account. LSA also achieved results above the baseline (table \[tab:results\]).
| Algorithm (Dataset) | Frame Size (s) | Hop Size (s) | Vocab. Size | Sentence Size | Weighting | $\lambda$ | Accuracy |
|---------------------|----------------|--------------|-------------|---------------|-----------|-----------|----------|
| MMR (1)     | 0.5 | 0.5  | 50  | 5  | dampTF | 0.7 | 100%  |
| MMR (1)     | 0.5 | 0.25 | 100 | 5  | Binary | 0.7 | 99.2% |
| MMR (1)     | 0.5 | 0.5  | 25  | 5  | Binary | 0.5 | 97.2% |
| MMR (1)     | 0.5 | 0.5  | 25  | 10 | dampTF | 0.7 | 97.6% |
| MMR (2)     | 0.5 | 0.5  | 50  | 5  | dampTF | 0.7 | 90.8% |
| MMR (2)     | 0.5 | 0.25 | 100 | 5  | Binary | 0.7 | 93.4% |
| MMR (2)     | 0.5 | 0.5  | 25  | 5  | Binary | 0.5 | 93.4% |
| MMR (2)     | 0.5 | 0.5  | 25  | 10 | dampTF | 0.7 | 93.4% |
| LexRank (1) | 0.5 | 0.5  | 25  | 5  | dampTF | -   | 99%   |
| LexRank (1) | 0.5 | 0.25 | 100 | 20 | Binary | -   | 97.4% |
| LexRank (1) | 0.5 | 0.5  | 25  | 10 | dampTF | -   | 97.6% |
| LexRank (1) | 0.5 | 0.5  | 25  | 10 | Raw    | -   | 97.6% |
| LexRank (2) | 0.5 | 0.5  | 25  | 5  | dampTF | -   | 94%   |
| LexRank (2) | 0.5 | 0.25 | 100 | 20 | Binary | -   | 93.8% |
| LexRank (2) | 0.5 | 0.5  | 25  | 10 | dampTF | -   | 93.8% |
| LexRank (2) | 0.5 | 0.5  | 25  | 10 | Raw    | -   | 93.4% |
| LSA (1)     | 0.5 | 0.5  | 100 | 20 | Binary | -   | 99.6% |
| LSA (1)     | 0.5 | 0.5  | 25  | 10 | Binary | -   | 99.4% |
| LSA (1)     | 0.5 | 0.5  | 50  | 10 | Binary | -   | 96.6% |
| LSA (1)     | 0.5 | 0.25 | 25  | 20 | Binary | -   | 97%   |
| LSA (2)     | 0.5 | 0.5  | 100 | 20 | Binary | -   | 91.2% |
| LSA (2)     | 0.5 | 0.5  | 25  | 10 | Binary | -   | 93.4% |
| LSA (2)     | 0.5 | 0.5  | 50  | 10 | Binary | -   | 93.4% |
| LSA (2)     | 0.5 | 0.25 | 25  | 20 | Binary | -   | 92.8% |

: MMR, LexRank and LSA (\#MFCCs = 12); (1) and (2) denote datasets 1 and 2 \[tab:results\]
Conclusions and Future Work\[sec:conclusions\]
==============================================
We evaluated summarization through classification for MMR, LexRank and LSA in the music domain. More experimentation is needed to find a set of parameter combinations that works for most music contexts. Future work includes testing other summarization algorithms, other similarity metrics, other types of features and other types of classifiers. The use of Gaussian Mixture Models may also help in finding more “natural” vocabularies, and Beat Detection might be used to find better values for fixed segmentation.
[^1]: Francisco Raposo is with Instituto Superior Técnico, Universidade de Lisboa, Av. Rovisco Pais, 1049-001 Lisboa, Portugal
[^2]: Ricardo Ribeiro is with Instituto Universitário de Lisboa (ISCTE-IUL), Av. das Forças Armadas, 1649-026 Lisboa, Portugal
[^3]: David Martins de Matos is with Instituto Superior Técnico, Universidade de Lisboa, Av. Rovisco Pais, 1049-001 Lisboa, Portugal
[^4]: Ricardo Ribeiro and David Martins de Matos are with L2F - INESC ID Lisboa, Rua Alves Redol, 9, 1000-029 Lisboa, Portugal
[^5]: This work was supported by national funds through FCT – Fundação para a Ciência e a Tecnologia, under project PEst-OE/EEI/LA0021/2013.
**Clifford V. Johnson$^\dagger$**
[*email: $^\dagger$[cvj@pa.uky.edu]{}*]{}
**Abstract**
Certain configurations of extended objects in string theory have become of considerable interest of late, as they enable the intricate interplay of duality, geometry, field theory and string theory to be explored. Typically, these configurations involve combinations of D–branes and NS–(five)branes, and sometimes the inclusion of orientifolds. The field theories are realized in the dimensions common to all of the world–volumes of the extended objects in question. The dynamics of the field theories encode much of the geometrical behaviour of the branes and [*vice–versa*]{}, yielding a powerful laboratory for the study of familiar dualities and the discovery of new ones.
These configurations are still somewhat novel, and many of their properties remain to be fully understood. The aspects which we will study in this paper are concerned with the question of how the physics —as encoded in the world–volume field theory— of a given configuration can arise from a very different configuration of extended objects. We are thus studying a sort of ‘dual pair’ realizing the same field theory, together with the properties of the transformation between the members of the pair.
Consider for a moment the properties of ‘T–duality’, acting on closed string backgrounds. In the target geometry we can replace a circle of radius $R$ by one of radius $\alpha^\prime/R$, where $\alpha^\prime$ is the inverse string tension. When the background fields have no non–trivial dependence on the compact coordinate (at least asymptotically), we understand what happens very well: winding and momentum modes exchange roles, leaving the physics invariant. (Of course, examining the action on space–time fermions, we see that the type IIA string theory is exchanged with the type IIB.)
However in the open string sector, T–duality exchanges free boundary conditions on the string endpoints with fixed ones (while exchanging the circles), changing a D$p$–brane into a D$(p{+}1)$–brane or [*vice–versa*]{}. Therefore, T–duality applied to the multi–brane configurations along one of the dimensions containing the field theory will change the dimension of the field theory. This is [*not*]{} the type of transformation which we wish to consider. We wish to find a transformation on the configuration which leaves the physical content of the field theory invariant, including its dimensionality. As a result we must consider transformations along a direction in which some branes are extended and some branes are localized.
Necessarily therefore, we will study a transformation of the brane configuration which is essentially a complicated version of T–duality. ‘Complicated’ because it will involve two situations where T–duality —as phrased above— is not well understood:
[*(i)*]{} It will involve a direction along which the background fields (such as the dilaton, metric and Kalb–Ramond field, all from the Neveu–Schwarz/Neveu–Schwarz (NS–NS) sector) have non–trivial dependence, because an NS–brane has its core there.
[*(ii)*]{} It will involve a direction along which the world–volume of a D–brane is only of finite extent, because the D–brane ends on the NS–brane. (This latter situation can be interpreted as a non–trivial dependence of the Ramond–Ramond (R–R) background fields on the coordinate in question.)
The end result of establishing the transformation will be a realization of the [*same field theory*]{} by either a brane configuration in type IIA string theory or a brane configuration in type IIB string theory. As in each configuration the dilaton (and hence the respective string couplings) varies from place to place in space–time, it is more precise to say that we have a dual realization involving M–theory and F–theory backgrounds.
The previous statement is the key to understanding just how we will proceed. In constructing the duality, we cannot use the strict definition of T–duality given above at all stages, as it is tied very much to the specific string theory context where the background field dependence is relatively trivial. Note however, that for very simple backgrounds we already know how we can embed our understanding of T–duality between type IIA and type IIB string theory into a larger context. First, recall that:
[*(a) Ten dimensional type IIA string theory is the zero radius limit of M–theory compactified on a circle.*]{}
Placing type IIA on a circle and shrinking it to zero size, we have by T–duality, an equivalent description in terms of ten dimensional type IIB string theory. The extra dimension is just the ‘T–dual’ dimension, which we understand very well in a stringy context as the infinite radius circle dual to the one of zero radius upon which the type IIA theory is compactified.
Thinking of this two–step process as a single operation on M–theory we arrive at the following conclusion:
[ *(b) Ten dimensional type IIB string theory is the zero size limit of M–theory compactified on a torus.*]{}
We will thus reinterpret T–duality between type IIA and type IIB string theory as those statements about how to arrive at each theory from M–theory.
Nearly all of the D–branes in either theory have a simple understanding in terms of the above geometrical statements ([*(a)*]{} and [*(b)*]{}) together with the fact that M–theory contains two basic branes, the M2–brane and the M5–brane.
In type IIA string theory, the D2–brane is a direct descendant of the M2–brane, while the D4–brane is the double reduction of the M5–brane, one dimension being wrapped on the circle. The F1–brane ([*i.e.,*]{} the fundamental type IIA string) is the double reduction of the M2–brane, while the F5–brane (NS–brane) is the direct descendant of the M5–brane. The D0–brane and D6–brane have a Kaluza–Klein origin as electric and magnetic sources.
Meanwhile, in the type IIB string theory, the D1–brane and the F1–brane come from wrapping one dimension of the M2–brane entirely on one or the other cycle of the $T^2$. Similarly, the D5–brane and the F5–brane come from wrapping a dimension of the M5–brane on one or the other cycle of the $T^2$. These partial wrappings explain why the respective D– and F–branes are mapped into each other under the $\tau{\to}-1/\tau$ transformation of $T^2$ which exchanges the two cycles. Labelling them with integers (0,1) and (1,0) respectively, the full $SL(2,\IZ)$ non–perturbative symmetry produces a family of $(p,q)$ branes. The D3–brane comes from wrapping two dimensions of the M5–brane on the $T^2$, which explains why it is mapped to itself under $SL(2,\IZ)$.
Understanding the existence of D7–branes in this geometrical picture is the launching point for understanding the origins of F–theory. There, the configuration of seven–branes in the non–perturbative type IIB theory is given by the degeneration of an auxiliary torus fibred over the ten physical dimensions of the theory. The origin of this auxiliary torus is clear in the context of this discussion. Once we have arrived at the type IIB string theory (using [*(b)*]{} above), we must not forget the torus upon which we compactified M–theory. We shrank the area of the torus but we had a choice about the complex structure, $\tau$. Indeed, the type IIB theory ‘remembers’ the complex structure of the torus, and this is frozen into the resulting configuration. Im($\tau$) is identified with the inverse type IIB coupling $\lambda_{B}^{-1}{=}{\rm e}^{-\Phi}$, ($\Phi$ is the dilaton field), while Re($\tau$) is the R–R scalar field $A^{(0)}$. The degeneration of the auxiliary torus fibration is a jump in the value of $A^{(0)}$, which signals the presence of a magnetic source of it, a seven–brane. There is a $(p,q)$ family of these branes too, related by $SL(2,\IZ)$, and the $(0,1)$ member of this family is the D7–brane of perturbative type IIB string theory.
We will take the position here that this is the geometrical origin of F–theory: An elliptic fibration, defining a consistent type IIB background, is simply a concise way of specifying consistently a collection of data about a [*family of tori*]{} upon which M–theory has been compactified before ultimately shrinking them away.
In M–theory, the D6–brane is a Kaluza–Klein monopole, which from a ten dimensional point of view is a circle fibration which degenerates over the position of the D6–brane. This family of circles becomes part of the family of tori which specify the data in F–theory, as we will see. The degeneration of the circles (from the ten dimensional point of view) —signalling the presence of D6–branes in type IIA— is inherited by the tori, ultimately indicating the presence of D7–branes in type IIB. We will also see how other structures in type IIA/M–theory give rise to some of the other types of seven–brane of type IIB/F–theory. In this way, we see that F–theory backgrounds are simply a subset of the possible M–theory compactifications.
So far, we have employed rather heavy machinery to carry out a task which we can perform with simpler and sharper tools. We have recalled the rephrasing of T–duality and the taxonomy of branes in terms of the geometry of M–theory. We already understand T–duality very well in the terms laid out earlier, concerning the momentum and winding modes of closed strings, and boundary conditions for open strings.
However, the simple geometric restating of T–duality reiterated here is more readily adaptable to generalisation than the original terminology. Indeed, we should be able to incorporate features which we do not know how to handle well in the purely stringy context and we will do so in what follows.
We can proceed to understand relationships between non–trivial brane configurations in type IIA and brane configurations in type IIB as follows: Interpret the type IIA brane configuration as an M–theory background. This renders harmless many features which are hard to handle in string theory (such as branes ending on other branes) by turning them into smooth M–theory configurations. Next, compactify that M–theory background upon a family of tori, chosen in a way which respects the symmetries of the brane configuration, and shrink the tori. The resulting background will be an F–theory background, corresponding to a type IIB configuration of extended objects with non–trivial NS–NS and R–R background fields given by the data of the shrunken tori.
Thus, the real use of the technique will become apparent when we try to study the analogues of T–duality in directions where there is non–trivial behaviour. The route described above will allow us to realize an effective duality transformation which would have been more difficult to determine using purely stringy techniques alone.
The plan of this paper is as follows. In section 2, we will start by describing the configuration of branes we wish to consider, in the type IIA string theory. It is essentially a review. Although it is a classical discussion, it is a good starting point to orient ourselves, and it will sometimes be useful to return to the classical ten dimensional description for guidance.
In section 3, we review and follow the observation made in refs. that to go beyond the classical physics, it will be useful to go to a smooth description of the branes as a configuration in M–theory, recovering within the brane geometry the spectral curve which controls the (Coulomb branch) dynamics of the field theory.
The detailed procedures for constructing such smooth descriptions were presented in ref., and we follow that presentation quite closely, specializing to the case in hand, recovering the smooth M–theory configuration as an M5–brane with topology $\IR^4{\times}T^2$ in a multi–Taub–NUT geometry.
In section 4, we depart from what has gone before, walking the path from M–theory to F–theory while carrying over the data of the M5–brane/multi–Taub–NUT configuration. We arrive thus at section 5, describing the F–theory configuration we expect to arrive at. Indeed, the spectral curve for the field theory under consideration has been previously recognized as controlling the dynamics of a seven–brane configuration in type IIB/F–theory, and we make contact with that description. It has also been pointed out that the $\N{=}2$, four dimensional field theory arises naturally on the world–volume of a D3–brane probe moving around in the seven–brane geometry. In our case, the D3–brane probe arises naturally as the remains of the M5–brane we found in the M–theory: Its toroidal part was wrapped on a space–time torus, which was subsequently shrunken away.
In section 6 we discuss the type IIB string theory ([*i.e.*]{}, classical) limit of the F–theory background, revisiting the work of refs., recognizing and interpreting certain aspects of the ‘dual’ type IIA configuration in the new context.
We close with some remarks in section 7.
(This and the next section constitute a review —tailored to our needs— and are included in order to set the scene, establish a few conventions, and attempt a self–contained discussion.)
In this section the statements which we shall make will be essentially classical ones, based on treating the fluctuations of flat branes. We will revisit this configuration in section 3, taking into account the branes’ deformations away from flatness caused by the forces they exert on each other. As a result, the field theory content we will deduce will also be true only classically.
Let us start with the following brane configuration in type IIA string theory:
*Table 1.*
In Table 1 (and in a similar one in section 6), a dash ‘—’ represents a direction [*along*]{} a brane’s world–volume while a dot ‘$\bullet$’ is transverse to it. For the special case of the D4–branes’ $x^6$ direction, where a world–volume is a finite interval, we use the symbol ‘[\[—\]]{}’. (A ‘$\bullet$’ and a ‘—’ in the same column indicates that one object is living inside the world–volume of the other in that direction, and so they can’t avoid one another. Two ‘$\bullet$’s in the same column reveal that the objects are point–like in that direction, and need not coincide in that direction, except for the specific case where they share identical values of that coordinate.)
In the configuration the D4–branes are stretched, in the $x^6$ direction, between the two NS–branes which are a distance $x^6_1{-}x^6_2{=}L_6$ apart, where $x^6_{1,2}$ denote the positions of the first and second NS–brane in the $x^6$ direction. The remaining dimensions of their world–volumes, and that of all other branes, are fully extended, filling the directions in which they lie.
Consider the directions common to the world–volumes of all of the branes. There is a four dimensional field theory living on this common space–time (with coordinates $(x^0,x^1,x^2,x^3)$). This field theory has $\N{=}2$ supersymmetry, as the 32 supercharges are reduced by half due to the presence of the NS–branes, and by a half again due to the presence of the D4–branes. The presence of the D6–branes does not break any more supersymmetries.
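The supercharge counting in this paragraph is simple arithmetic; a minimal bookkeeping sketch (the identification of eight supercharges with $\N{=}2$ in four dimensions, via the four real components of a minimal $d{=}4$ spinor, is standard):

```python
# Bookkeeping sketch: each mutually non-BPS stack of half-BPS branes
# halves the number of preserved supercharges.
supercharges = 32          # type IIA / M-theory starting point

supercharges //= 2         # NS-branes: half-BPS  -> 16
supercharges //= 2         # D4-branes: half-BPS  ->  8
# D6-branes are mutually BPS with the above: no further breaking.

# A minimal four dimensional supercharge has 4 real components,
# so the number of four dimensional supersymmetries is:
N = supercharges // 4
print(supercharges, N)     # 8 2 -> the N=2, d=4 theory of the text
```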
The (classical) field content of the four dimensional theory is easily determined by the usual D–brane calculus: The excitations of open strings stretching between the D4–branes (‘4–4 strings’) supply some of the fields in the theory. Fluctuations parallel to the world–volume supply a family of fields transforming as vectors under the $SO(1,3)$ Lorentz symmetry. These vectors form $U(2)$ gauge bosons (when the D4–branes are coincident). Excitations transverse to the world–volume represent the movement of the D4–branes. The D4–branes must share the same position as the NS–branes in order to stay tethered to them, and therefore there are no fluctuations in the $(x^6,x^7,x^8,x^9)$ directions. The only transverse fluctuations are therefore in the $(x^4,x^5)$ directions, which gives a set of complex massless scalars in the field theory. Taking into account their transformation properties under the gauge symmetry, it is clear that they form the complex adjoint scalar $\phi$, which lives in the $\N{=}2$ vector multiplet. The strength of the gauge coupling $g$ is a function of the distance between the NS–branes: $g^2\propto\lambda_{A}/L_6$. Here, $\lambda_{A}$ is the type IIA string coupling, appearing in this way because the gauge kinetic term arises in open string theory ([*i.e.*]{}, the D–brane sector) as a disc amplitude.
The ‘matter’ multiplets of the gauge theory are $N_f({\leq}4)$ families of ‘quarks’: scalars in the fundamental of $U(2)$, which come from the ‘6–4 strings’ connecting the D6–branes to the D4–branes. The masses of these quarks are set by the distance (in $(x^4,x^5)$) between the D6–branes and the D4–branes.
The Higgs branch of the theory is reached by first making the quarks massless by moving the D6–branes to be coincident with the D4–branes. The D4–branes may now split, letting them have new endpoints on the D6–branes, and the segments are now free to move independently inside the D6–branes’ world–volumes. The $(x^7,x^8,x^9)$ positions parameterize the vacuum expectation values (‘vevs’) of the quarks. In this way the gauge symmetry can be completely Higgsed away.
The Coulomb branch of the theory (our concern for most of the paper) is reached by giving the adjoint scalar $\phi$ a vev, with values in the Abelian subalgebra of $U(2)$. This breaks the gauge symmetry down to $U(1){\times}U(1)$ and corresponds to moving the D4–branes apart in the $(x^4,x^5)$ directions. When a D4–brane encounters a D6–brane in $(x^4,x^5)$, a quark becomes massless.
We need to understand this complicated brane configuration much better. For example, the ending of the D4–branes on the NS–branes is a somewhat singular situation. One might expect this feature to be smoothed out in a way which corresponds to quantum corrections to the field theory statements we have made in this section. Ultimately, the geometry reproduces the structure of the spectral curves which govern the quantum moduli space of the gauge theories under discussion. This was anticipated and exploited in ref., and independently in ref. In ref., the mechanisms by which the corrections to the brane configurations may be deduced were explained, and the consequences explored quite extensively.
The starting point for correcting our classical configuration of the previous section is to realize that the definite position assigned to the NS–branes in the $x^6$ direction is modified considerably. The D4–branes, which are finite in that direction and suspended between the NS–branes, are pulling the $(x^4,x^5)$ portion of the NS–branes’ world–volume out of shape, giving asymptotically the shape of (say) the first NS–brane world–volume as: $$x^6=k\left(\ln|v-a_1|+\ln|v-a_2|\right)+{\rm const.},$$ where $v{=}x^4{+}ix^5$, and $k$ is a constant which depends upon the string coupling. Here, $a_1$ and $a_2$ are the positions of the two D4–branes in the $(x^4,x^5)$ plane.
In order for the NS–brane’s kinetic energy integral to converge, we have $$a_1+a_2=C,$$ where $C$ is some constant characteristic of the NS–brane. It can be set to zero after a shift of the origin in $(x^4,x^5)$ space. As discussed before, the $a$ positions are the scalar components of the gauge supermultiplet in the field theory. The sum $a_1{+}a_2$ controls the overall $U(1)$ factor of the gauge group $U(2)$, and therefore this condition freezes out the $U(1)$, making our gauge group $SU(2)$. Considering the opposite D4–brane ends, on the other NS–brane, leads to the same equation and no additional conditions on the gauge group.
Turning to the gauge coupling, we revise our earlier formula to make it a function of $v$: $$\frac{1}{g^2(v)}\propto\frac{x^6_1(v)-x^6_2(v)}{\lambda_{A}},$$ and so we see that it is behaving as it should for a gauge theory, varying as a function of some ‘mass scale’ set by $|v|$: the quantity $1/g^2$ diverges logarithmically as $|v|{\to}\infty$.
The next step is to recognize that this type IIA situation of D4–branes ending on and deforming NS–branes should have a better description in M–theory. This is because on going to M–theory an extra dimension unfolds, revealing that there the D4–branes have a hidden world–volume dimension, and so become M5–branes. The NS–branes also become M5–branes, with a definite position in this new ‘M–direction’, $x^{10}$. The parts of the D4–branes we described in section 2 as lines in $x^6$ are actually cylinders connecting the NS–branes. The final justification for going to M–theory was pointed out in ref.: Looking at the formula $g^2\propto\lambda_A/L_6$, it is clear that if we increase the string coupling $\lambda_{A}$ while simultaneously increasing the inter–NS–brane distance, the field theory is completely unaffected by this. Therefore, we can go to the M–theory limit, where we grow an extra dimension, $x^{10}$, of radius $R{\sim}\lambda_A^{2/3}$, as measured in type IIA units.
We now recognize that the formulae above were the real part of a complex story. Giving the NS–branes positions in the $x^{10}$ direction promotes $x^6$ in the shape formula to the complex combination $x^6{+}ix^{10}$, and we may define the complexified gauge coupling (measuring now in M–theory units of length), which acquires a real part, the angle $\theta$, set by the $x^{10}$ separation of the NS–branes. The angle $\theta$ changes harmlessly by $\pm2\pi$ as an $x^{10}$ position of an NS–brane changes by $2\pi R$, as it should.
We can quickly compute the $\beta$–function of our field theory using the above formula as follows: Following the arguments of ref., made in the context of string theory ([*i.e.*]{}, the language of section 2), we know that we can move all of the D6–branes past one of the NS–branes (let us choose the second one), resulting in a D4–brane stretched from the NS–brane (starting on the other $x^6$–side of it from the gauge D4–branes) to a D6–brane, one for each D6–brane.
As the D6–branes are more massive than the D4–branes, 4–4 strings entirely in the new D4–brane sector do not contribute to the gauge group. However, the quarks are still present, as they now arise as $N_f$ types of 4–4 string which connect the new D4–branes across the NS–brane to the old D4–branes. Since the D4–branes on the other side of the NS–brane pull the other way, the asymptotic shape of the NS–brane with the extra branes is given by: $$x^6=k\left(\ln|v-a_1|+\ln|v-a_2|-\sum_{i=1}^{N_f}\ln|v-m_i|\right)+{\rm const.},$$ where the $m_i$ are the D6–brane $(x^4,x^5)$ positions, or equivalently those of the new D4–branes. They are the classical masses of the quarks.
Looking at the large $|v|$ behaviour of the coupling using this formula, we get a logarithm whose coefficient is proportional to $(4{-}N_f)$, displaying the one–loop $\beta$–function. When $N_f{=}4$ it vanishes, as it ought to for the scale invariant theory.
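As a numerical illustration of the running just described, one can tabulate the logarithmic slope of $1/g^2$ in $\ln|v|$ for various $N_f$; the overall constant $k$ below is an illustrative placeholder, not fixed by the text:

```python
import math

def inv_g_squared(v_abs, n_f, k=1.0, const=0.0):
    """Asymptotic 1/g^2 read off from the NS-brane bending: two gauge
    D4-branes pull each NS-brane one way, N_f flavour branes pull the
    other way, leaving a log with coefficient ~ (4 - N_f).
    (k and const are illustrative placeholders.)"""
    return k * (4 - n_f) * math.log(v_abs) + const

# The one-loop beta-function coefficient is the slope in ln|v|:
for n_f in range(5):
    slope = inv_g_squared(math.e ** 2, n_f) - inv_g_squared(math.e, n_f)
    print(n_f, slope)   # slope = 4 - N_f; it vanishes for N_f = 4
```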
The way to incorporate the D6–branes in this set–up directly in the M–theory picture is to recognize that they are Kaluza–Klein monopoles: The M–coordinate $x^{10}$ is not simply a circle with which we form a product with the $(x^4,x^5,x^6)$ directions to get the full space–time. Instead, it is fibred over them in a Hopf–like fashion. The metric geometry of this situation is that of multi–Taub–NUT. The positions of the D6–branes are the positions in the base where the Killing vector for translations in the $x^{10}$ circle vanishes, giving us a singularity in the D6–brane metric when we reduce to ten dimensional type IIA string theory.
It is now clear that the type IIA string theory configuration of branes is a much less singular affair when viewed at strong coupling, in M–theory. The D4–branes and NS–branes are just different glimpses of the history of a single M5–brane’s life–time. If we add a point representing infinity to the $(x^4,x^5)$ world–volumes of the NS–branes, we see that in the full M–theory interpretation, the world–volume of the M5–brane has topology $\IR^4{\times}T^2$, where the $T^2$ is described as a surface embedded in the four dimensional space $Q_{N_f}$. Here, $Q_{N_f}$ denotes the multi–Taub–NUT space of multiplicity $N_f$, the M–theory origin of the $N_f$ D6–branes. In particular, $Q_0$ is just the product $\IR^3{\times}S^1$ with coordinates $(x^4,x^5,x^6,x^{10})$. As pointed out in ref., it will suffice (for study of the Coulomb branch of the field theories) to represent $Q_{N_f}$ as an equation of the form: $$yz=\prod_{i=1}^{N_f}(v-m_i),$$ where $(y,z,v)$ are coordinates on a three complex dimensional space with the structure of $\IC^3$. As before, $v{=}x^4{+}ix^5$. Defining the coordinate $s{=}(x^6+ix^{10})/R$, we have that for fixed $z$, large $y$ corresponds to $t=\exp(-s)$ while for fixed $y$, large $z$ corresponds to $t^{-1}$. The parameters $m_i$ are the $(x^4,x^5)$ positions of the D6–branes. We will require that the $N_f$ D6–branes are located [*between*]{} the NS–branes, and nowhere else. The specification misses (among other things) the $x^6$ positions of the D6–branes.
The world–volume of the M5–brane may be specified as a further constraint equation in the coordinates $(y,v)$: $F(y,v){=}0$. Giving $Q_{N_f}$ a complex structure and requiring holomorphicity in $v$ and $y$ (very natural when viewed from the point of view of the field theory) specifies the metric structure on $T^2$ as a complex Riemann surface.
As a polynomial, the function $F$ must be quadratic in $y$ for a ($v{=}{\rm const.}$) slice to yield two NS–branes in the ten dimensional picture, and our constraint equation is thus of the form: $$A(v)\,y^2+B(v)\,y+C(v)=0,$$ where $A,B$ and $C$ are relatively prime polynomials.
There are no components of D4–branes extended outside the $x^6_1$ — $x^6_2$ interval; these would necessarily be semi–infinite (as they have nothing else to end on), and as such would show up in our solution as a divergence in $y$ for some definite value of $v$. The absence of such behaviour fixes $A$ to be a constant, which we can choose to be 1. The same requirement also removes the possibility of $z$ diverging for some particular value of $v$, and this translates into a condition on the form of $C$ and $B$ also: $C$ must have the same zeros (with the same multiplicity) in the $v$ plane as has the defining polynomial of $Q_{N_f}$, and $B$ must be quadratic in $v$ in order to yield two D4–branes at fixed $y$ in the ten dimensional picture.
Our torus is thus of the form: $$y^2+B(v)\,y+f\prod_{i=1}^{N_f}(v-m_i)=0,$$ where $f$ is an arbitrary complex constant. We can remove terms linear in $v$ from $B(v)$ by a shift in $v$, which would shift the bare masses $m_i$. For the case $N_f{=}0$, the last term should simply be a constant, which we can set to 1 without loss of generality. In terms of ${\tilde y}{=}y{+}B/2$, we have: $$\tilde y^2=\frac{B(v)^2}{4}-f\prod_{i=1}^{N_f}(v-m_i),$$ a standard form for the spectral curve controlling the Coulomb branch of $\N{=}2$ supersymmetric four dimensional $SU(2)$ gauge theory with $N_f$ quarks. The details of the polynomial can be fixed by comparing to various field theory limits as done in ref.
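The completion of the square taking the curve to standard form, via ${\tilde y}{=}y{+}B/2$, can be checked numerically at a sample point; the particular polynomial $B(v)$, constant $f$ and masses below are arbitrary test data, not field-theory values:

```python
import cmath

# Arbitrary illustrative data: a quadratic B(v), constant f, masses m_i.
masses = [0.3, -1.1, 2.0 + 0.5j]        # N_f = 3 sample masses
f = 0.7 - 0.2j

def B(v):
    return v * v - 1.5 * v + 0.25       # an arbitrary quadratic in v

def C(v):
    prod = f
    for mi in masses:
        prod *= (v - mi)
    return prod

v = 1.3 + 0.4j                          # a sample point on the base

# Solve y^2 + B(v) y + C(v) = 0 for one root y...
disc = cmath.sqrt(B(v) ** 2 - 4 * C(v))
y = (-B(v) + disc) / 2

# ...and check that y_tilde = y + B/2 satisfies the standard form
# y_tilde^2 = B(v)^2 / 4 - C(v) of the spectral curve.
y_t = y + B(v) / 2
assert abs(y_t ** 2 - (B(v) ** 2 / 4 - C(v))) < 1e-12
print("curve forms agree at the sample point")
```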
At the present stage, we have an M–theory background consisting of an M5–brane with topology $\IR^4{\times}T^2$ propagating in the multiplicity–$N_f$ Taub–NUT space $Q_{N_f}$. The torus $T^2$ and the space $Q_{N_f}$ are both described in terms of constraint equations in an auxiliary six dimensional space.
Consider now the following. Let us ask instead for a slightly different situation, which will differ from this one in ways which are invisible in the field theory. Interpret the equation as not only specifying the $T^2$ giving the shape of the M5–brane in the four dimensional space $Q_{N_f}$, but also specifying two of the space–time coordinates of the M–theory configuration. In other words, [*we have wrapped the M5–brane we have been discussing on a space–time torus of the same shape.*]{}
The manipulations following the constraint equation, and resulting in the final curve, serve to find us a smooth description of the wrapped M5–brane on a space–time torus $T^2$, where the torus is fibred over a base with topology $\IR^2$. Some of the fibration data is inherited from that of the multi–Taub–NUT geometry: The information about the positions where the D6–branes live translates into a contribution to the information about the location of zeros of the discriminant of the torus fibration.
Let us return to the type IIA description for a moment. As the Kaluza–Klein monopoles feel no forces amongst themselves, it is not problematic to have toroidally compactified one of the directions in which they are point–like. The wrapping of the M5–brane on the torus is already partially performed from the start: the D4–branes are a piece of an M5–brane wrapped on the periodic $x^{10}$ direction. So at any $x^6$ position where there is a D4–brane, we know that there is a hidden part of an M5–brane wrapped on $x^{10}$. What we have effectively done is a further compactification of eleven dimensional space–time. Focusing on the world–volume of an NS–brane, we must make some combination of $(x^4,x^5)$ compact in order to get the complete toroidal topology. We know from our experience with the branes just how to do this: We simply add the space–time point at infinity to the $(x^4,x^5)$ plane making it a $\IP^1$, just as we did to the world–volume of the NS–branes in those directions. The $\IP^1$ has cuts or punctures in it due to the presence of the D4–branes.
We have already seen that the size of the M–direction does not affect the physics of the field theory if we rescale the separation of the NS–branes accordingly. Similarly, the fact that we have a $\IP^1$ for the $(x^4,x^5)$ direction (instead of $\IR^2$) should not enter as a parameter in the field theory if we rescale the positions of the D4– and D6–branes to absorb any changes we make in the overall size of the $\IP^1$.
Returning to M–theory where the complete, smooth description is to be found, we may now consider shrinking the $T^2$ part of the M5–brane wrapped space–time. We hold the complex structure of the torus (and hence the field theory data) fixed and shrink its size away to zero.
As described in the introduction, we know from simpler situations that we have a type IIB description of this situation (M–theory on a shrunken torus) where:
[[*(i)*]{} We have a new direction, $\hat x$, which restores us to a ten dimensional theory.]{}
[[*(ii)*]{} The wrapped M5–brane becomes a D3–brane.]{}
[[*(iii)*]{} The data describing the shape of the torus which we shrink to zero size is not lost, but is ‘remembered’ by the final configuration: It is frozen into an auxiliary torus, fibred over the ten dimensions of the IIB theory. This is longhand for ‘F–theory’. ]{}
As we know, the ‘data torus’, or more precisely the family of such tori, is that which specifies the Coulomb branch of the $\N{=}2$ four dimensional $SU(2)$ gauge theory with $N_f$ quarks. Described as an elliptic fibration over a base $\cal B$, with topology $\IR^2$, it is singular over up to six points (depending upon $N_f$) in $\cal B$. From the point of view of our F–theory background, these points are the positions of magnetic sources of the R–R background field $A^{(0)}$, as the modular parameter of the torus fibre specifies type IIB string background fields [*via*]{} the relation: $$\tau(v)=A^{(0)}(v)+\frac{i}{\lambda_B(v)},$$ where the type IIB string coupling $\lambda_B(v)$ is related to the dilaton $\Phi$ as $\lambda_B{=}{\rm e}^\Phi$. Such a magnetic source is an object which is point–like in $\cal B$ and extended in the other eight directions. It is therefore a seven–brane of type IIB theory. In the case where we can describe the background with perturbative type IIB strings, the seven–brane is either a D7–brane or an O7–plane (orientifold fixed plane). More generally, it can be any of the infinite family of seven–branes which can appear in the type IIB theory by virtue of the $SL(2,\IZ)$ non–perturbative symmetry.
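That a degeneration point is a magnetic source of $A^{(0)}$ can be illustrated by tracking $\tau(v){=}A^{(0)}(v){+}i/\lambda_B(v)$ around a single unit-charge seven-brane. Using the standard local behaviour $\tau\approx\frac{1}{2\pi i}\ln(v-m)$ near such a brane (assumed here as the leading approximation), the monodromy is $\tau\to\tau{+}1$, a unit jump in $A^{(0)}$:

```python
import cmath
import math

m = 0.5 + 0.5j          # seven-brane position in the base (sample value)

def d_tau(v, dv):
    # d tau = (1 / 2*pi*i) d ln(v - m): the assumed local behaviour
    return dv / (2j * math.pi * (v - m))

# Integrate d tau around a small circle enclosing v = m:
n, r = 2000, 0.1
total = 0
for k in range(n):
    t0 = 2 * math.pi * k / n
    t1 = 2 * math.pi * (k + 1) / n
    v0 = m + r * cmath.exp(1j * t0)
    v1 = m + r * cmath.exp(1j * t1)
    total += d_tau((v0 + v1) / 2, v1 - v0)

# total ~ 1: tau -> tau + 1, i.e. the R-R scalar A^(0) jumps by one unit,
# identifying the degeneration point as a magnetic source of A^(0).
print(total)
```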
The connection between precisely this family of tori (describing $D{=}4$, $\N{=}2$ $SU(2)$ gauge theory with $N_f$ quarks) and an F–theory background was noticed in ref. It was pointed out there that close to the perturbative type IIB limit of F–theory compactified on K3 ([*i.e.,*]{} the orbifold limit of the K3), the background describes four identical families of six seven–branes. Focusing on one family, in the limit two of the six possible singularities merge to become an O7–plane while the rest become $N_f$ D7–branes. Furthermore, the four dimensional field theory is naturally realized on the world–volume of a D3–brane probe, as pointed out in ref. The fact that the D3–brane has an $SU(2)$ living on it instead of just $U(1)$ is T–dual to the fact that it is really [*two*]{} D3–branes, plus an orientifold projection which forces them to move together as a single object, projecting the expected $U(2)$ (resulting from their coincidence) to $SU(2)$.
We see here that the D3–brane probe appears unbidden in this framework as the wrapped M5–brane! We also know that the $N_f$ D7–branes have their origins in the presence of $N_f$ D6–branes, while the O7–plane is an additional structure which was frozen into the torus because of the non–trivial way (from the type IIA picture) the D4–branes end on the NS–branes. We can trace the origins of the O7–planes to the D4/NS–brane system and not the D6–branes because the case of no flavours has precisely two O7–planes and no other singularities (not counting the point at infinity).
In the next section we shall describe this further in the type IIB limit.
Let us choose to label the coordinates of the base $\cal B$ by $v{=}{x}^4{+}i{x}^5$. (We should be careful here. This is not exactly the $(x^4,x^5)$ pair of the type IIA configuration.) Let us also denote by ${\hat x}^6$ the new, ‘dual direction’ (which we briefly called $\hat x$ in the last section).
We have the following brane configuration in type IIB string theory:
*Table 2.*
Comparing Table 1 and Table 2, we see that from a string theory point of view we have performed a sort of T–duality, in the $x^6$ direction. As one might expect, under it the D6–branes have turned into D7–branes, as they should. Ignoring for a moment the finite extent of the D4–branes in the $x^6$ direction, we see that they have turned into a pair of D3–branes, as one might hope naively. The complication of the presence of the cores of two NS–branes, together with the ending of a D4–brane on them, turns out to be ‘$T_6$–dual’ to an orientifold background. The orientifold procedure glues the two D3–branes into one dynamical object carrying an $SU(2)$ gauge group, and introduces an O7–plane.
This perturbative type IIB string background describes aspects of the classical limit of the Coulomb branch of the $SU(2)$ gauge theory. The position of the D3–brane in the $(x^4,x^5)$ plane parameterizes the Coulomb branch of the gauge theory on its world volume, where the gauge group is generically $U(1)$. As it moves around the plane, it sees $N_f$ D7–branes each of charge 1 (in D7–brane units), and one fixed plane, which is the O7–plane, the fixed plane of the orientifold symmetry, which is $\Omega R_{45}$ on the bosonic sector. If the D3–brane probe is coincident with the O7–plane, the $SU(2)$ is restored. (Here, $\Omega$ is world–sheet parity, and $R_{45}$ is $v{\to}{-}v$. The O7–plane has charge $-4$ as can be deduced from requirements of $A^{(0)}$ charge cancellation in the full compact situation: In total there are four O7–planes and sixteen D7–branes.)
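The $A^{(0)}$ charge-cancellation count quoted in parentheses above is simple arithmetic:

```python
# Global R-R charge cancellation in the full compact situation (sketch):
o7_planes, o7_charge = 4, -4     # four O7-planes, charge -4 each
d7_branes, d7_charge = 16, +1    # sixteen D7-branes, charge +1 each

total = o7_planes * o7_charge + d7_branes * d7_charge
print(total)    # 0: the net A^(0) charge on the compact base cancels
```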
(As explained a while ago in ref., this is understood in the $T_{45}$–dual type I language as follows: The D5–brane has gauge group $SU(2)$, resulting from a projection with $\Omega$, in constructing the type I theory. It has part of its world–volume in the directions $(x^4,x^5)$ before doing the $T_{45}$–duality to the present situation. This allows the possibility of introducing $(x^4,x^5)$ Wilson lines (when making them toroidal in preparation for the T–duality) to break the $SU(2)$ to $U(1)$. These Wilson lines are $T_{45}$–dual to the positions of the D3–brane probe here.)
Using the charge assignments just given, and the fact that the number of transverse directions is two, one expects that the couplings are given by: $$\tau(v)=\frac{1}{2\pi i}\left(\sum_{i=1}^{N_f}\ln(v-m_i)-4\ln v\right)+{\rm const.},$$ where $m_i$ are the classical positions of the D7–branes and we have placed the O7–plane at the origin.
The similarity with the equations describing the asymptotic shape of the NS–branes as they are pulled on by the D4–branes (in section 3) should not escape our notice. Combining those equations, we have (placing the D4–branes at the origin): $$\frac{1}{g^2(v)}\propto 4\ln|v|-\sum_{i=1}^{N_f}\ln|v-m_i|.$$ The $m_i$ are the $(x^4,x^5)$ positions of the D6–branes. The similarity between the two formulae is not an accident. It is part of the ‘dual’ properties of the brane configurations. Let us list some observations about these:
[*(i)*]{} In both cases there is $\N{=}2$ supersymmetry in four dimensions. The original thirty–two supercharges are reduced to eight. In the type IIA case this is done by introducing NS–branes, and then D4–branes. Adding D6–branes to the mix places no further constraints on the number of supercharges. Similarly, in the type IIB situation, there is a $\IZ_2$ orientifold (which introduces an O7–plane), followed by the introduction of a D3–brane. Adding D7–branes to these does not ‘break’ any more supersymmetry.
[*(ii)*]{} In both cases, the logarithmic form of the two equations above is a consequence of there being two relevant directions in which a Laplace–Poisson equation is solved. In the type IIA situation, it is the two directions on the NS–brane in which the incident D4–branes make a point, pulling in a transverse $x^6$ direction. In the type IIB scenario, it is the two directions transverse to both the seven–branes and the D3–brane probe.
[*(iii)*]{} The main sources of non–trivial behaviour of the dilaton in the type IIA theory are the cores of the NS–branes, at the place where the D4–branes meet them. The shape equation of section 3 encodes the asymptotic shape of the NS–branes’ world–volumes, deformed in the $x^6$ direction, and implicitly the distribution of background NS–NS and R–R fields there. The ‘dual’ configuration in type IIB makes this explicit: The D7–branes and O7–planes are NS–NS sources for the dilaton and R–R sources for the field $A^{(0)}$, and the coupling formula gives their asymptotic form, while the branes themselves remain undeformed.
[*(iv)*]{} We can deduce that the D4/NS–brane system, non–trivial in the $(x^4,x^5,x^6)$ sector, acts as an electric source for the R–R form $A^{(7)}$ in type IIA, and hence has some effective D6–brane charge, as measured by enclosing that part of the configuration with a two–sphere at infinity.
There are a number of ways to see that this is true:
[**(a)**]{} These charges are ultimately responsible for the O7–plane (two extra seven–branes) in the ‘dual’ type IIB (F–theory) picture. Interpreting our configurations as effectively $T_6$–dual to each other, the O7–plane, carrying $A^{(0)}$ charge, is the image under $T_6$ of the D4/NS–brane junctions.
[**(b)**]{} This charge assignment is consistent with the fact that adding D6–branes, positioned precisely in the $(x^4,x^5,x^6)$ directions, does not break any of the supersymmetries already preserved by the D4/NS–brane configuration. From the point of view of the D6–branes, adding them to the configuration is no different from adding them to a system of parallel D6–branes.
[**(c)**]{} Possessing electric charge of $A^{(7)}$ is equivalent to having some magnetic $A^{(1)}$ (D0–brane) charge. It is clear that the D4/NS–brane configuration has such charge by considering the nature of the $x^6$ end–point of the D4–brane in the $(x^4,x^5)$ part of the NS–brane’s world–volume: It is a ‘vortex’ or monopole. As one circles a D4–brane’s end–point once in $(x^4,x^5,x^6)$ space and returns to the same position, some winding has been acquired in the $x^{10}$ direction. This is the only way to make local sense of the smoothing out of the D4/NS–brane IIA system into a Riemann surface in M–theory. This non–trivial winding is akin to the behaviour which we attribute to a D6–brane in assigning it the role of a Kaluza–Klein monopole of $A^{(1)}$.
[**(d)**]{} The $A^{(7)}$ charge observation is also consistent with the observation that moving a D6–brane through an NS–brane will result in a new D4–brane stretched between them. Indeed, if we had moved the D6–branes off to infinity, obtaining the quarks from the resulting $N_f$ semi–infinite D4–branes instead, the final equation for the shape of the M5–brane would have been precisely the same as the one obtained here. Hence, the F–theory result would have been the same, and consequently so would be the final dual type IIB configuration in Table 2. Therefore, the effective $T_6$ duality treats the D4/NS–brane junction as an object with D6–brane charge.
[*(v)*]{} As pointed out in ref., the equation can only be correct classically, or far away from the O7–plane. Close to the orientifold, the imaginary part of $\tau$ would appear to be able to go negative, which is not acceptable in a theory which is supposed to be unitary. This is simply a reflection of the fact that there are non–perturbative corrections to the formula as one approaches the orientifold. The full solution is obtained by returning to the complete F–theory background. The new non–perturbative data are precisely those encoded in the spectral curve, which yields the correct solution for $\tau$ everywhere and hence the non–perturbative positions of the seven–branes. An important fact is that the singularity at $v_0$, representing the O7–plane, splits into two pieces, separated by a distance ${\rm e}^{\pi i\tau}$. This corresponds to the O7–plane splitting into two $(p,q)$ seven–branes in the full non–perturbative theory. Similarly, the form for the shape of the NS–branes is only true asymptotically; the complete data are in the M5–brane M–theory configuration in the shape of the spectral curve.
Note that we can move from the theory with $N_f{=}4$ quarks to a lower number of quarks by the scaling limits described in ref. For example, we send a D7– (or D6–) brane (corresponding to a quark of mass $m$) off to infinity in the $(x^4,x^5)$ plane. At the same time, we take the limit $\tau{\to}i\infty$, and hold the product $\Lambda{=}{\rm e}^{\pi i\tau}m$ fixed, defining the mass scale of the $N_f{<}4$ theory.
We have found that a type IIA configuration of D4–branes, NS–branes and D6–branes on whose intersection there lives an $\N{=}2$ four dimensional $SU(2)$ gauge theory is related to a type IIB configuration of parallel D3–branes, D7–branes and O7–planes, realizing the same gauge theory. The spectral curve controlling the dynamics of the gauge theory appears naturally in the topology and geometry of M–branes in M–theory on the one side, and as F–theory data on the other.
We have studied a very non–trivial example of how F–theory brane configurations may arise as M–theory ones, realizing an effective T–duality in the process.
It seems that generalising the reverse process is always possible: We should be able to start with an F–theory background and shrink a direction over which the data torus is not varying much. This should yield an M–theory background where the torus has now become physical. If there were D3–branes in the F–theory background, they will become M5–branes with two of their dimensions in the shape of that torus. Returning to a type IIA background by shrinking an appropriate circle will yield a configuration of intersecting branes of various sorts. This procedure should always be possible locally, and therefore we can understand (at least piece–wise) all F–theory backgrounds in terms of M–theory brane configurations.
The generalisation of the M–theory to F–theory route (along the lines of this paper) might be more challenging, however. It would be interesting to study how the example presented here might generalise, providing a useful relation between certain type IIA/M–theory brane configurations and (pieces of) type IIB/F–theory ones. There are many reasons why this would be desirable. Much of the technology of F–theory is very well organised in terms of the well–developed geometry of elliptically fibred complex manifolds. However, the study of complicated M–theory/type IIA brane configurations is still a relatively new area, so being able to relate them to F–theory backgrounds should help in sharpening certain aspects of their analysis.
However, it is not clear that all relevant M–theory brane configurations can be converted to F–theory ones in the specific way done here. Considering the case of higher rank gauge groups, where the spectral curves are of higher genus than that of a torus, is already interesting: the path to F–theory will probably involve multiple wrappings of the M5–brane on the space–time torus, resulting in many D3–branes in the final dual model, with additional discrete projections.
It will be interesting to study such issues further. The benefits of finding a dictionary between M– and F–theory configurations will be of tremendous value in the study of the dynamics of gauge theories.
[**Acknowledgments:**]{}
CVJ was supported in part by family, friends and music. Thanks to E. G. Gimon and W. Lerche for comments on the manuscript.
---
abstract: 'The function spaces $Ces_p=[\ces,L^p]$, $1\le p\le\infty$, have received renewed attention in recent years. Many properties of $[\ces,L^p]$ are known. Less is known about $\cx$ when the operator takes its values in a rearrangement invariant (r.i.) space $X$ other than $L^p$. In this paper we study the spaces $\cx$ via the methods of vector measures and vector integration. These techniques allow us to identify the absolutely continuous part of $\cx$ and the Fatou completion of $\cx$; to show that $\cx$ is never reflexive and never r.i.; to identify when $\cx$ is weakly sequentially complete, when it is isomorphic to an AL-space, and when it has the Dunford-Pettis property. The same techniques are used to analyze the operator $\ces\colon\cx\to X$; it is never compact but it can be completely continuous.'
address:
- 'Facultad de Matemáticas & IMUS, Universidad de Sevilla, Aptdo. 1160, Sevilla 41080, Spain'
- 'Math.–Geogr. Fakultät, Katholische Universität Eichstätt–Ingolstadt, D–85072 Eichstätt, Germany'
author:
- 'Guillermo P. Curbera'
- 'Werner J. Ricker'
title: 'Abstract Cesàro spaces: Integral representations'
---
[^1]
Introduction {#introduction .unnumbered}
============
Cesàro function spaces have attracted much attention in recent times; see for example the papers [@astashkin-maligranda0], [@astashkin-maligranda1], [@astashkin-maligranda2] by Astashkin and Maligranda and [@lesnik-maligranda-1], [@lesnik-maligranda-2] by Leśnik and Maligranda and the references therein. These spaces arise when studying the behavior, in certain function spaces, of the operator $$\ces:f\mapsto \ces(f)(x):=\frac{1}{x} \int_0^x f(t)\,dt.$$ A classical result of Hardy motivated the study of the operator $\ces$ in the $\elp$ spaces, thereby leading to the spaces $Ces_p:=\{f: \ces(|f|)\in \elp\}$. It was then natural to extend the investigation to the so-called abstract spaces $\cx$, where the role of $\elp$ is replaced by a more general function space $X$, namely, the Banach function space (B.f.s.) $$\cx:=\big\{f: \ces(|f|)\in X\big\},$$ equipped with the norm $$\|f\|_{\cx}:=\|\ces(|f|)\|_X,\quad f\in\cx.$$ We will focus our attention on those spaces $X$ which are rearrangement invariant (r.i.) on $[0,1]$.
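For the reader's convenience, the classical result of Hardy alluded to above can be recorded explicitly (this is the standard formulation of Hardy's inequality, included here only as background and not taken from the present paper):

```latex
% Hardy's inequality: for 1 < p < \infty and f \in L^p[0,1],
\[
  \|\ces(f)\|_{L^p}
  = \Bigl( \int_0^1 \Bigl| \frac{1}{x}\int_0^x f(t)\,dt \Bigr|^p dx \Bigr)^{1/p}
  \;\le\; \frac{p}{p-1}\,\|f\|_{L^p},
\]
% so \ces maps L^p boundedly into itself and, in particular,
% L^p \subseteq Ces_p with continuous inclusion.
```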
It is known that $[\ces,L^p]=Ces_p$ is not reflexive, [@astashkin-maligranda0 Theorem 1, Remark 1]. In Theorem \[reflexive\] it is shown that $\cx$ is never reflexive. This result is established via techniques from a different area. It turns out, for every r.i. space $X\not=\linf$, that the $X$-valued set function $$\mx:A\mapsto \mx(A):=\mathcal{C}(\chi_A),
\quad A\subseteq [0,1] \text{ measurable},$$ is $\sigma$-additive, i.e., it is a *vector measure*. This fact can be successfully used for studying the function space $\cx$. Indeed, the norm of $\cx$ is not necessarily absolutely continuous (a.c.). Actually, the a.c. part $\cx_a$ of $\cx$ is precisely the well understood space $\lmx$ consisting of all the $\mx$-integrable functions (in the sense of Bartle, Dunford and Schwartz, [@bartle-dunford-schwartz]). Moreover, $\cx$ need not have the Fatou property. It turns out that the Fatou completion $\cx''$ of $\cx$ is precisely the space $\wlmx$ consisting of all the weakly $\mx$-integrable functions.
A further relevant point is that the integration operator $\imx\colon\lmx\to X$ given by $f\mapsto\int f\,d\mx$ is precisely the restriction to $\cx_a$ of the operator $\ces\colon\cx\to X$. Moreover, $\lmx$ is the *largest* B.f.s. over $[0,1]$ with a.c. norm on which $\ces$ acts with values in $X$. In addition, the scalar variation measure $|\mx|$ of the vector measure $\mx$ is always $\sigma$-finite and possesses a strongly measurable, Pettis integrable density $F\colon[0,1]\to X$ relative to Lebesgue measure. A relevant feature for the operator $\ces$ (which a priori is only given by a pointwise expression on $\cx$) is that *integral representations* become available. First, for $\ces$ restricted to $\cx_a$, such a representation is given by $$\label{representationBDS}
\ces(f)=\int_{[0,1]} f\,d\mx,\quad f\in\lmx=\cx_a,$$ via the Bartle-Dunford-Schwartz integral for vector measures. Actually, it turns out specifically for $\mx$ that $$\label{representation}
\ces(f)=\int_{[0,1]} f(y)\,F(y)\,dy,\quad f\in\lmx ,$$ which is defined more traditionally as a Pettis integral. Furthermore, for the class of r.i. spaces $X$ where the variation measure $\mxv$ is finite, the representation when restricted to $\lmxv$ is actually given via a *Bochner integrable density* $F$.
The paper is organized as follows.
In Section 1 we present the preliminaries on Banach function spaces, rearrangement invariant spaces and vector integration that are needed in the sequel.
Section 2 is devoted to establishing the main properties of the vector measure $\mx$. A large class of r.i. spaces $X$ for which $\mxv$ is a finite measure is identified; see Proposition \[variation-L\] and Corollary \[condition var\].
In Section 3 the study of the space $\cx$ is undertaken with the vector measure $\mx$ and its space of integrable functions $\lmx$ as main tools. As mentioned above, in Theorem \[reflexive\] it is proved that $\cx$ is never reflexive. It is also established as part of that result that $\cx$ fails to be r.i. (this was proved for $[\ces,L^p]$ in [@astashkin-maligranda1 Theorem 1] and conjectured in [@lesnik-maligranda-1 Remark 3]). The problem of when $\cx$ is order isomorphic to an AL-space, that is, to a Banach lattice where the norm is additive over disjoint functions, is also considered. It is shown (cf. Theorem \[L1\](a)), for a large class of Lorentz spaces $\laf$, that $[\ces,\laf]$ is order isomorphic to $L^1(|m_{\laf}|)$ with $|m_{\laf}|$ a finite, non-atomic measure. Crucial for the proof of the existence of this order isomorphism is an identification, due to Leśnik and Maligranda, [@lesnik-maligranda-1], of the associate space $\cx'$ of the B.f.s. $\cx$ (under some restrictions on the r.i. space $X$).
In Section 4 we analyze the operator $\ces\colon\cx\to X$. The identification of the restriction of $\ces$, via $\imx$, is used to show that the operator $\ces\colon\cx\to X$ is never compact; see Proposition \[compact\]. For r.i. spaces $X$ satisfying $X\subseteq\lmxv$, which forces both $\mx$ to have finite variation and $\ces\colon X\to X$ to act boundedly, it follows (cf. Proposition \[cc lmxv\]) that $\ces\colon X\to X$ is necessarily completely continuous. This result is quite useful in view of the fact that $\ces\colon X\to X$ is never compact (whenever it is a bounded operator). The complete continuity of the restricted integration operator $\imx\colon\lmxv\to X$ can be ‘lifted’ to the complete continuity of $\ces\colon\cx\to X$, under some conditions on the r.i. space $X$; see Proposition \[cc lmxv\]. This property of $\ces\colon\cx\to X$ is related to $\cx$ being order isomorphic to an AL-space; see Proposition \[cc lmx\]. The section ends with another extension of a result valid for $X=L^p$. It was shown in [@astashkin-maligranda1 §6, Corollary 1] that the spaces $Ces_p$, $1<p<\infty$, fail to have the Dunford-Pettis property. This result is extended to include all reflexive r.i. spaces $X$ having a non-trivial upper Boyd index; see Proposition \[4.7\].
In the final section we discuss in fine detail the role of the Fatou property in relation to $\cx$, and derive some consequences for $\cx$; see Proposition \[5.2\].
We only consider r.i. spaces $X\not=\linf$ because $Ces_\infty=[\ces,\linf]$, known as the Korenblyum-Kreĭn-Levin space, has already been thoroughly investigated; see [@astashkin-maligranda1], [@astashkin-maligranda2] and the references therein.
Preliminaries
=============
A *Banach function space* (B.f.s.) $X$ on \[0,1\] is a Banach space of classes of measurable functions on \[0,1\] satisfying the ideal property, that is, $g\in X$ and $\|g\|_X\le\|f\|_X$ whenever $f\in X$ and $|g|\le|f|$ $\lambda$–a.e., where $\lambda$ is the Lebesgue measure on \[0,1\]. The *associate space* $X'$ of $X$ consists of all functions $g$ satisfying $\int_0^1|f(t)g(t)|\,dt<\infty$, for every $f\in X$. The space $X'$ is a subspace of the Banach space dual $X^*$ of $X$. The *absolutely continuous* (a.c.) part $X_a$ of $X$ is the space of all functions $f\in X$ satisfying $\lim_{\lambda(A)\to0}\|f\chi_A\|_X=0$; here $\chi_A$ is the characteristic function of the set $A\in\mathcal{M}$, with $\mathcal{M}$ denoting the $\sigma$-algebra of all Lebesgue measurable subsets of $[0,1]$. If $\linf\subseteq X_a$, then the closure of $\linf$ in $X$ coincides with $X_a$ and $(X_a)'=X'$. The space $X$ is said to have a.c. norm if $X=X_a$. In this case, $X'=X^*$. The space $X$ satisfies the *Fatou property* if $\{f_n\}\subseteq X$ with $0\le f_n\le f_{n+1}\uparrow f$ $\lambda$-a.e. and $\sup_n\|f_n\|_X<\infty$ imply that $f\in X$ and $\|f_n\|_X\to\|f\|_X$. The second associate space $X''$ of $X$ is defined as $X''=(X')'$. The space $X$ has the Fatou property if and only if $X''=X$. Unless specifically stated, it is not assumed that the Fatou property holds in $X$.
A *rearrangement invariant* (r.i.) space $X$ on \[0,1\] is a B.f.s. on $[0,1]$ such that if $g^*\le f^*$ and $f\in X$, then $g\in X$ and $\|g\|_X\le\|f\|_X$. Here $f^*$ is the *decreasing rearrangement* of $f$, that is, the right continuous inverse of its distribution function: $\lambda_f(\tau):=\lambda(\{t\in [0,1]:\,|f(t)|>\tau\})$. The associate space $X'$ of a r.i. space $X$ is again a r.i. space. A r.i. space $X$ satisfies $\linf\subseteq X\subseteq \ele$. If $X\not=\linf$, then $(X_a)'=X'$. The *fundamental function* $\fix$ of $X$ is defined via $\varphi_X(t):=\|\chi_{[0,t]}\|_X$. For $X\not=\linf$ we have $\lim_{t\to0}\fix(t)=0$, [@rodin-semenov Lemma 3, p.220].
Important classes of r.i. spaces are the Lorentz and Marcinkiewicz spaces. Let $\varphi\colon[0,1]\to[0,\infty)$ be an increasing, concave function with $\varphi(0)=0$. The Lorentz space $\Lambda(\varphi)$ consists of all measurable functions $f$ on \[0,1\] satisfying $$\|f\|_{\Lambda(\varphi)}:=\int_0^1f^*(s)\,d\varphi(s) <\,\infty.$$ Let $\varphi\colon[0,1]\to[0,\infty)$ be a quasi-concave function, that is, $\varphi$ is increasing, the function $t\mapsto\varphi(t)/t$ is decreasing and $\varphi(0)=0$. The Marcinkiewicz space $\marf$ consists of all measurable functions $f$ on \[0,1\] satisfying $$\|f\|_{\marf}:=\sup_{0<t\le 1}\, \frac{\varphi(t)}{t}\,\int_0^tf^*(s)
\, ds<\infty.$$
The Marcinkiewicz space $M(\varphi)$ and the Lorentz space $\Lambda(\varphi)$ are, respectively, the largest and the smallest r.i. spaces having the fundamental function $\varphi$. That is, for any r.i. space $X$ we have $\Lambda(\fix)\subseteq X\subseteq M(\fix)$. The associate spaces are given by $\laf'=M(\psi)$ and $\marf'=\Lambda(\psi)$, where $\psi(t):=t/\varphi(t)$. In the notation of [@krein-petunin-semenov p.144], observe that $\marf=M_\psi$.
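A standard example may help to fix ideas (these are well-known identifications, stated up to equivalence of quasi-norms and normalization conventions, and added here only for orientation): taking $\varphi(t)=t^{1/p}$ with $1<p<\infty$ recovers the classical Lorentz spaces.

```latex
% For \varphi(t) = t^{1/p}, 1 < p < \infty, one has d\varphi(s) = (1/p) s^{1/p - 1} ds and
% the dilation index \delta_\varphi = 1/p < 1, so that
\[
  \|f\|_{\Lambda(\varphi)}
     = \frac{1}{p}\int_0^1 f^*(s)\, s^{\frac{1}{p}-1}\,ds
     \asymp \|f\|_{L^{p,1}},
  \qquad
  \|f\|_{M(\varphi)}
     \asymp \sup_{0<t\le 1} t^{1/p} f^*(t)
     = \|f\|_{L^{p,\infty}} .
\]
% Hence \Lambda(t^{1/p}) = L^{p,1} and M(t^{1/p}) = L^{p,\infty} are the smallest and
% largest r.i. spaces with fundamental function t^{1/p}, and L^{p,1} \subseteq L^p \subseteq L^{p,\infty}.
```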
If $\phi$ is a positive function defined on \[0,1\], then its lower and upper dilation indices are, respectively, defined by $$\gamma_\phi := \lim_{t\to 0^+} \frac{\log\big(\sup_{\,0<s\le 1}
\frac{\phi(st)}{\phi(s)}\big)}{\log t}, \qquad
\delta_\phi := \lim_{t\to +\infty} \frac{\log\big(\sup_{\,0<s\le
1/t} \frac{\phi(st)}{\phi(s)}\big)}{\log t}.$$ For a quasi-concave function $\vfi$ it is known that $0\le \gamma_\vfi \le \delta_\vfi\le1$. Whenever $\delta_\varphi<1$ the following equivalence for the above norm in $\marf$ holds (see [@krein-petunin-semenov Theorem II.5.3]): $$\label{norm-marz}
\|f\|_{\marf}\asymp\sup_{0<t\le 1}\, \varphi(t)f^*(t).$$ The notation $A\asymp B$ means that there exist constants $C>0$ and $c>0$ such that $c{\cdot}A\le B\le C{\cdot}A$. For further details concerning r.i. spaces we refer to [@bennett-sharpley], [@krein-petunin-semenov], [@lindenstrauss-tzafriri]; care should be taken with [@bennett-sharpley] as all r.i. spaces there are assumed to have the Fatou property. General references for B.f.s.’ include [@okada-ricker-sanchez], [@zaanen1 Ch.15].
We recall briefly the theory of integration of real functions with respect to a vector measure, initially due to Bartle, Dunford and Schwartz, [@bartle-dunford-schwartz]. Let $(\Omega,\Sigma)$ be a measurable space, $X$ a Banach space and $m\colon\Sigma\to X$ a $\sigma$-additive vector measure. For each $x^*\in X^*$, denote the ${\mathbb R}$–valued measure $A\mapsto
\langle x^*,m(A)\rangle$ by $x^*m$ and its variation measure by $|x^*m|$. A measurable function $f\colon\Omega\to{\mathbb R}$ is said to be *integrable with respect to* $m$ if $f\in L^1(|x^*m|)$, for every $x^*\in X^*$, and for each $A\in\Sigma$ there exists a vector in $X$ (denoted by $\int_Af\,dm$) satisfying $\langle\int_Af\,dm,x^*\rangle=\int_Af\,d\, x^*m$, for every $x^*\in X^*$. The $m$–integrable functions form a linear space in which $$\label{norm-m}
\|f\|_{L^1(m)} : =\sup\left\{\int |f|\,
d|x^*m| \colon x^*\in X^*, \|x^*\|\le1\right\}$$ is a seminorm. A set $A\in\Sigma$ is called $m$–*null* if $|x^*m|(A)=0$ for every $x^*\in X^*$. Identifying functions which differ only on an $m$–null set, we obtain a Banach space (of classes) of $m$–integrable functions, denoted by $L^1(m)$. It is a B.f.s. for the $m$–a.e. order and has a.c. norm. The simple functions are dense in $L^1(m)$ and the space $L^\infty(m)$ of all $m$–essentially bounded functions is contained in $L^1(m)$. The *integration operator* $I_{m}$ from $L^1(m)$ to $X$ is defined by $f\mapsto\int f\,dm:=\int_\Omega f\,dm$. It is continuous, linear and has operator norm at most one. No assumptions have been made on the *variation measure* $|m|$ of $m$ (cf. [@okada-ricker-sanchez §3.1]) in the definition of $L^1(m)$. In general $\lnuv\subseteq\lnu$. We will repeatedly use the following property: let $Y$ be the closed linear subspace of $X$ generated by the range $m(\Sigma)$ of the vector measure $m$. Then $\lmx=L^1(m_Y)$ and $\lmxv=L^1(|m_Y|)$, where $m_Y\colon\Sigma\to Y$ is given by $m_Y(A):=m_X(A)$ for all $A\in\Sigma$.
The B.f.s.’ $\lnu$ can be quite different to the classical $L^1$–spaces of scalar measures and may be difficult to identify explicitly. Indeed, every Banach lattice with a.c. norm and having a weak unit (e.g. $L^2([0,1])$) is the $L^1$–space of some vector measure, [@curbera1 Theorem 8]. For further details concerning $L^1(m)$ and $I_m$ see, for example, [@okada-ricker-sanchez Ch.3] and the references therein.
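A minimal illustrative example of these notions (a standard one in the vector-measure literature, recorded here only for orientation; the identification below is stated at the level of sets of functions): let $X$ be a B.f.s. on $[0,1]$ with a.c. norm, so that $X^*=X'$, and consider the $X$-valued measure given by characteristic functions.

```latex
% Define m(A) := \chi_A \in X for A \in \mathcal{M}. Since X has a.c. norm,
% A_n \downarrow \emptyset implies \|\chi_{A_n}\|_X \to 0, so m is \sigma-additive.
% For x^* = g \in X' = X^*, the scalar measure is (x^*m)(A) = \int_A g\,d\lambda. Then
\[
  L^1(m) = X \quad\text{(as sets of functions)},
  \qquad
  \int_A f\,dm = f\chi_A \ \ (A\in\mathcal{M}),
  \qquad
  I_m = \operatorname{id}_X ,
\]
% while the seminorm from the definition of L^1(m) is
% \|f\|_{L^1(m)} = \sup_{\|g\|_{X'}\le 1} \int_0^1 |f|\,|g|\,d\lambda = \|f\|_{X''}.
```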
The vector measure induced by $\ces$
====================================
The vector measure associated to the operator $\ces$ is defined by $$\label{measure-m}
m \colon A\longmapsto m(A):=\mathcal{C}(\chi_A),\qquad A \in\mathcal{M}.$$ Since $\mathcal{C}$ maps $\linf$ into itself, we have $m(\mathcal{M})\subseteq \linf$. So, $m$ is a well defined, finitely additive vector measure with values in $\linf$ but it is not $\sigma$-additive as an $\linf$-valued measure, [@ricker]. For every r.i. space $X$ we have $\linf\subseteq X$. Accordingly, $m$ is also well defined and finitely additive with values in $X$. We will denote $m$ by $m_X$ whenever it is necessary to indicate that the values of $m$ are considered to be in $X$.
\[measure\] Let $X\not=\linf$ be a r.i. space.
- The measure $m_X$ is $\sigma$-additive.
- The measure $m_X$ has a strongly measurable, $X$-valued, Pettis $\lambda$-integrable density $F$ given by $$\label{density}
F: y\in[0,1]\mapsto F_y\in X \textrm{ with } F_y(x):=\frac1x \chi_{[y,1]}(x),\quad 0<x\le1.$$
- The measure $m_X$ has $\sigma$-finite variation given by $$\label{var}
\mxv(A)=\int_A\|F_y\|_X\,dy,\quad A\in\mathcal{M}.$$ In the event that $\mx$ has finite variation, $F$ is actually Bochner $\lambda$-integrable.
- The range $\mx(\mathcal{M})$ of $m_X$ is a relatively compact set in $X$.
\(a) Let $(A_n)$ be a sequence of sets with $A_n\downarrow\emptyset$. Then the functions $(\chi_{A_n})$ decrease pointwise to zero. Since $\mathcal{C}$ is a positive operator, the sequence $(\mathcal{C}(\chi_{A_n}))$ is also decreasing; by the Dominated Convergence Theorem applied to $\chi_{A_n}\downarrow0$ it follows that $(\mathcal{C}(\chi_{A_n}))$ actually decreases to zero a.e. Recall that $\mx(\mathcal{M})\subseteq \linf\subseteq X_a$. But, $X_a$ has a.c. norm and so $\|\mathcal{C}(\chi_{A_n})\|_{X_a}\to0$. Since the norms of $X_a$ and $X$ coincide, we have $\|\mathcal{C}(\chi_{A_n})\|_{X}\to0$, i.e., $m_X(A_n)\to0$ in $X$.
\(b) Consider the $X$-valued vector function $F$ given by . It is a.e. well defined since, for each $0<y\le 1$, we have $F_y\in\linf\subset X$. To prove that it is strongly measurable it suffices to verify that $y\in(0,1]\mapsto F_y\in X$ is continuous. Fix $0<t<s\le 1$, in which case $$\|F_t-F_s\|_X= \left\|\frac1x\chi_{[t,s)}\right\|_X \le \frac1t\varphi_X(s-t).$$ Since $X\not=\linf$, it follows that $\varphi_X(s-t)\to0$ as $(s-t)\to0$.
Next we check the Pettis $\lambda$-integrability of $F$. Note that $F_y\in\ X_a$ for $y\in(0,1]$. For any $0\le g\in (X_a)'$ we have, via Fubini’s theorem, that $$\int_0^1 \big\langle F_y,g\big\rangle\,dy =
\int_0^1 \int_0^1 \frac1x\chi_{[y,1]}(x)g(x)\,dx\,dy
=
\int_0^1 g(x)\,dx,$$ which is surely finite as $(X_a)'\subseteq L^1$. Since elements of $X^*$ restricted to $X_a$ belong to $(X_a)^*$ and $(X_a)^*=(X_a)'$, it follows that $y\mapsto \langle F_y,x^*\rangle\in\ele$ for every $x^*\in X^*$.
It remains to check that $F$ is the Pettis $\lambda$-integrable density for $\mx$. Fix $A\in\mathcal{M}$ and recall that $m_X(A)=\mathcal{C}(\chi_A)\in X_a$. For $0\le g\in (X_a)'$ an application of Fubini’s theorem yields $$\begin{aligned}
\big\langle m_X(A),g\big\rangle&=&\int_0^1g(x)
\left(\int_0^1\frac1x\chi_{[0,x]}(y)\chi_A(y)\,dy\right)\,dx
\\
&=&
\int_A \int_0^1g(x)\frac1x\chi_{[y,1]}(x)\,dx\,dy
\\
&=&\int_A \Big\langle \frac1x\chi_{[y,1]}(x), g(x)\Big\rangle\,dy
\\
&=&
\int_A \Big\langle F_y, g\Big\rangle\,dy .\end{aligned}$$ Since this is valid for every $0\le g\in(X_a)'$ and $(X_a)^*=(X_a)'$, it follows that $F\colon y\mapsto F_y$ is Pettis $\lambda$-integrable with $$\label{aaa}
\int_A F_y\,dy:=m_X(A)\in X_a\subset X,\quad A\in\mathcal{M} .$$
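As a quick consistency check of this density formula (not part of the original argument; a direct verification for an interval $A=[0,a]$ with $0<a\le1$), both sides can be computed explicitly:

```latex
% For 0 < x \le 1, \chi_{[y,1]}(x) = 1 exactly when y \le x, so
\[
  \int_0^a F_y(x)\,dy
  = \frac{1}{x}\int_0^a \chi_{[y,1]}(x)\,dy
  = \frac{\lambda\big(\{y\in[0,a]: y\le x\}\big)}{x}
  = \frac{\min(a,x)}{x}
  = \frac{1}{x}\int_0^x \chi_{[0,a]}(t)\,dt
  = \ces(\chi_{[0,a]})(x),
\]
% which is precisely m_X([0,a])(x).
```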
\(c) Fix $0<a<1$. Consider the measure $m_X$ restricted to $[a,1]$. Since $F$ is continuous on the compact set $[a,1]$, we have $\int_{[a,1]} \|F_y\|_X\,dy<\infty$. According to , $y\mapsto F_y$ is then a Bochner $\lambda$-integrable density for $\mx$ on $[a,1]$. Accordingly, $$\label{equality}
\mxv(A)
=\int_A \|F_y\|_X\,dy,\quad A\in\mathcal{M},\quad A\subseteq[a,1] .$$
Let now $A\in\mathcal{M}$. Set $A_n:=[1/n,1]\cap A$, for $n\ge2$, in which case $\mxv(A_n)<\infty$. Observing that $\chi_{A_n}(y)\|F_y\|_X\uparrow\chi_A(y)\|F_y\|_X$ $\lambda$-a.e., it follows from and the $\sigma$-additivity of $\mxv$ that $$\mxv(A)=\lim_n\mxv(A_n)=\lim_n\int_{A_n}\|F_y\|_X\,dy = \int_{A}\|F_y\|_X\,dy .$$ This establishes and the $\sigma$-finiteness of the variation.
In the event that $\mx$ has finite variation, it follows that $y\mapsto\|F_y\|_X$ belongs to $L^1$ and hence $F$, being strongly measurable, is Bochner $\lambda$-integrable.
\(d) Set $D_n=(1/2^n,1/2^{n-1}]$, for $n\ge1$. Then for each $n\ge1$ we have $|m_X|(D_n)<\infty$. Moreover, the density $F$ is Bochner $\lambda$-integrable over each $D_n$. Hence, the range $m_X(\mathcal{M}_{D_n})$ is relatively compact in $X$, [@okada-ricker-sanchez p.148], where $\mathcal{M}_{D_n}:=\{A\in\mathcal{M}:A\subseteq D_n\}$. Thus, $$m_X(\mathcal{M})=\sum_{n=1}^\infty m_X(\mathcal{M}_{D_n})
:=\Big\{\sum_{n=1}^\infty f_n:f_n\in\mx(\mathcal{M}_{D_n}) \textrm{ for } n\in\mathbb{N}\Big\}.$$ Arguing as in the proof of Corollary 2.43 (see also part II of Proposition 3.56) in [@okada-ricker-sanchez] we deduce that $m_X(\mathcal{M})$ is relatively compact in $X$.
\[2.2\] (a) The $\sigma$-finiteness of $\mxv$ also follows from a general result on Pettis integration, [@van-dulst Proposition 5.6(iv)]. Since $([0,1],\mathcal{M},\lambda)$ is a perfect measure space, the relative compactness of $\mx(\mathcal{M})$ in $X$ is also a general result (due to C. Stegall), [@van-dulst Proposition 5.7].
\(b) It follows from that $\lambda$ and $\mx$ have the same null sets.
For certain r.i. spaces $X$ it is possible to compute $\mxv$ precisely.
\[variation-L\] For the Lorentz space $\Lambda(\varphi)$ we have $$\label{normL}
\|F_y\|_{\Lambda(\varphi)}= \int_0^{1-y} \frac{\varphi'(t)}{t+y}\,dt,\quad y\in(0,1],$$ and $$|m_{\Lambda(\varphi)}|([0,1]) =\int_0^1\log(1/t)\,\varphi'(t)\,dt .$$
Consequently, $m_{\Lambda(\varphi)}$ has finite variation precisely when $\log(1/t)\in\Lambda(\varphi)$ and, in that case, $|m_{\Lambda(\varphi)}|([0,1])=\big\|\log(1/t)\big\|_{\Lambda(\varphi)}$.
For $y\in(0,1]$ the decreasing rearrangement of $F_y(\cdot)$ is given by $$\label{density*}
(F_y)^*(t)=F_y(t+y)=\frac{1}{t+y}\chi_{[0,1-y]}(t),\quad 0\le t\le1.$$ It follows that $$\|F_y\|_{\Lambda(\varphi)}= \int_0^{1-y} \frac{\varphi'(t)}{t+y}\,dt,\quad y\in(0,1].$$ Then, from we can conclude that $$|m_{\Lambda(\varphi)}|(A)=
\int_A\|F_y\|_{\Lambda(\varphi)}\,dy =
\int_A \left(\int_0^{1-y} \frac{\varphi'(t)}{t+y}\,dt\right)\,dy .$$ For $A=[0,1]$ an application of Fubini’s theorem yields $$|m_{\Lambda(\varphi)}|([0,1])
=
\int_0^1 \left(\int_0^{1-y} \frac{\varphi'(t)}{t+y}\,dt\right)\,dy
=
\int_0^1\log(1/t)\,\varphi'(t)\,dt .$$ Since $t\mapsto\log(1/t)$ is decreasing, it is clear that $m_{\laf}$ has finite variation precisely when $\log(1/t)\in\laf$ in which case $|m_{\laf}|([0,1])=\|\log(1/t)\|_\laf$.
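The Fubini step in the last computation can be spelled out (an elementary verification added for the reader's convenience): the region of integration $\{(y,t): 0\le t\le 1-y\}$ coincides with $\{(y,t): 0\le y\le 1-t\}$, and hence

```latex
\[
  \int_0^1\!\int_0^{1-y} \frac{\varphi'(t)}{t+y}\,dt\,dy
  = \int_0^1 \varphi'(t) \int_0^{1-t} \frac{dy}{t+y}\,dt
  = \int_0^1 \varphi'(t)\,\bigl[\log(t+y)\bigr]_{y=0}^{\,y=1-t}\,dt
  = \int_0^1 \log(1/t)\,\varphi'(t)\,dt ,
\]
% since \log(t + (1-t)) - \log(t) = \log(1) - \log(t) = \log(1/t).
```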
\[variation-L-rem\] The Zygmund spaces of exponential integrability $L^p_{\textrm{exp}}$, for $p>0$, are “close” to $\linf$; see [@bennett-sharpley Definition IV.6.11]. The classical space $\elexp$ (i.e. $p=1$) is a particular case. The space $L^p_{\textrm{exp}}$ coincides with the Marcinkiewicz space $M(\varphi_p)$ for $\varphi_p(t):=\log^{-1/p}(e/t)$. For $X=\Lambda(\varphi_p)$ we have that $\log(1/t)\in\Lambda(\varphi_p)$ if and only if $0<p<1$. Hence, in view of Proposition \[variation-L\], $m_{\Lambda(\varphi_p)}$ has finite variation if and only if $0<p<1$.
Let $X, Y$ be r.i. spaces with $X\subseteq Y$, in which case there exists $K>0$ such that $\|f\|_Y\le K \|f\|_X$ for $f\in X$. In particular, $\|m_Y(A)\|_Y\le K\|m_X(A)\|_X$ for $A\in\mathcal{M}$. Hence, $m_Y$ has finite variation whenever $m_X$ does. This observation, together with Proposition \[variation-L\] and Example \[variation-L-rem\] establishes the following result.
\[condition var\] Let $X\not=\linf$ be a r.i. space. Suppose that $\laf\subseteq X$ for some increasing, concave function $\vfi$ satisfying $\vfi(0)=0$ and $$\int_0^1\log(1/t)\,\varphi'(t)\,dt<\infty,$$ that is, $\log(1/t)\in\laf$. Then $\mx$ has finite variation.
In particular, since $\Lambda(\varphi_p)\subseteq M(\varphi_p)=L^p_{\textrm{exp}}$, this is the case if $L^p_{\textrm{exp}}\subseteq X$ for some $0<p<1$.
\[variation-L-rem2\] According to Corollary \[condition var\], $\mx$ has finite variation whenever $X$ is a Lorentz space $L^{p,q}$ on $[0,1]$ for $(p,q)\in(1,\infty)\times[1,\infty]$ or for $p=q=1$ (see [@bennett-sharpley Definition IV.4.1]), and whenever $X$ is an Orlicz space $L^\Phi$ satisfying $\Phi(t)\le e^{t^p}$, $t\ge t_0$, for some $p\in(0,1)$.
The Cesàro space $\cx$ {#3}
======================
In [@curbera-ricker3] a study of optimal domains for kernel operators $Tf(x)=\int_0^1 f(y)K(x,y)\,dy$ was undertaken. Although the conditions imposed on the kernel $K(x,y)$ in [@curbera-ricker3 §3] do not apply to the kernel $(x,y)\mapsto(1/x)\chi_{[0,x]}(y)$ generating the operator $\ces$, a detailed analysis of the arguments given there shows that the only condition needed for the results to remain valid for r.i. spaces $X\not=\linf$ is that the partial function $K_x\colon y\mapsto K(x,y)$ belongs to $L^1$ for a.e. $x\in[0,1]$. The remaining conditions were aimed purely at guaranteeing that the vector measure associated with the kernel was $\sigma$-additive as an $\linf$-valued measure which, in turn, was the way of ensuring the $\sigma$-additivity of the measure when interpreted as an $X$-valued measure (for every r.i. space $X\not=\linf$). This last condition of $\sigma$-additivity is obtained, for the case when $T$ is the operator $\ces$, by other means; see Theorem \[measure\](a).
\[3.1\] Let $X\not=\linf$ be a r.i. space. The following assertions hold.
- If $f\in \lmx$, then $f\in\cx$ and $\|f\|_{\lmx}= \|f\|_{\cx}$.
- If $X$ has a.c. norm, then $\cx$ has a.c. norm and $\cx=\lmx$.
- $[\mathcal{C},X_a]=\cx_a$.
Consequently, the following chain of inclusions holds $$\label{inclusions}
\lmxv\subseteq \lmx=L^1(m_{X_a})=[\mathcal{C},X_a]=\cx_a\subseteq\cx.$$
In this section we will study various properties of $\cx$ and examine certain connections between the spaces appearing in .
The containment $\lmx\subseteq\cx$ can be strict, as seen by the following result.
\[several\] Let $\vfi$ be an increasing, concave function with $\vfi(0)=0$ and upper dilation index $\delta_\vfi<1$. For the corresponding Marcinkiewicz space $M(\varphi)$ the containment $L^1(m_{M(\varphi)})\subseteq [\mathcal{C},M(\varphi)]$ is strict.
The a.c. part of the space $M(\varphi)$ is $$\marf_a=M(\varphi)_0:=\left\{f:\lim_{t\to0} \frac{\varphi(t)}{t}\int_0^tf^*(s)\,ds=0\right\}.$$ The condition $\delta_\vfi<1$ allows us to use the equivalent expression for the norm in $\marf$ given by . The function $1/\varphi$ is decreasing and so $(1/\vfi)^*=1/\vfi$. It follows that $\|1/\varphi\|_{\marf}\asymp 1$ and hence $1/\varphi\in\marf$. On the other hand, $$\frac{\varphi(t)}{t}\int_0^t\bigg(\frac{1}{\varphi}\bigg)^*(s)\,ds\ge
\frac{\varphi(t)}{t}\frac{t}{\varphi(t)}=1,\quad t\in(0,1],$$ showing that $1/\varphi\not\in\marf_0$. So, $1/\varphi\in\marf\setminus \marf_0$.
Verifying that $\mathcal{C}(1/\varphi)\asymp 1/\varphi$ is equivalent to showing that $(\vfi(t)/t)\int_0^tds/\vfi(s) \asymp 1$. Since $1/\varphi$ is decreasing (i.e., $(1/\vfi)^*=1/\vfi$), this is equivalent to verifying $\|1/\varphi\|_{\marf}\asymp 1$, that is, to showing that $1/\varphi\in\marf$. But we have just proved that this is indeed the case, due to the condition $\delta_\varphi<1$. Hence, $\ces(1/\varphi)\in\marf\setminus \marf_0$ which implies that $1/\varphi\in [\ces,\marf]\setminus [\ces,\marf_0]$. From we have that $L^1(m_{M(\varphi)})=L^1(m_{M(\varphi)_0})=[\mathcal{C},M(\varphi)_0]$. Consequently, $1/\varphi\in [\mathcal{C},M(\varphi)]\setminus L^1(m_{M(\varphi)})$.
We now establish two properties of $\cx$ that were alluded to in the Introduction.
\[reflexive\] Let $X\not=\linf$ be any r.i. space.
- The space $\lmx$ is not reflexive. Hence, the Cesàro space $[\mathcal{C},X]$ is not reflexive either.
- The Cesàro space $[\mathcal{C},X]$ is not r.i. Moreover, neither is $\lmx=\cx_a$.
\(a) A general result concerning the $L^1$-space of a vector measure $m$ asserts that if $m$ has $\sigma$-finite variation and no atoms, then $L^1(m)$ is not reflexive, [@curbera3 Remark p.3804], [@okada-ricker-sanchez Corollary 3.23(ii)]. Since this is the case for $\lmx$, which is a closed subspace of $\cx$, it follows that $\cx$ is not reflexive either.
\(b) Let $\varphi:=\fix$ be the fundamental function of $X$. Set $f(t):=(-2\varphi^{-1/2})'(1-t)=\varphi'(1-t)/\varphi^{3/2}(1-t)$. Since $f$ is an increasing function, $f^*(t)=\varphi'(t)/\varphi^{3/2}(t)$. Direct computation shows that $\mathcal{C}f^*\equiv\infty$. Thus, $f^*\not\in[\mathcal{C},X]$.
On the other hand, $$\begin{aligned}
\mathcal{C}f(x)&=&\frac{1}{x}\int_0^x\frac{\varphi'(1-t)}{\varphi^{3/2}(1-t)}dt
= \frac{1}{x}\int_{1-x}^1\frac{\varphi'(s)}{\varphi^{3/2}(s)}\,ds
\\
&=&\frac{2}{x}
\left(\frac{1}{\varphi^{1/2}(1-x)}-\frac{1}{\varphi^{1/2}(1)}\right)
\\
&=& \frac{2}{\varphi^{1/2}(1)}\left(\frac{\varphi^{1/2}(1)-\varphi^{1/2}(1-x)}{x}\right)
\frac{1}{\varphi^{1/2}(1-x)}
\\
&:=&\frac{h(x)}{\varphi^{1/2}(1-x)}.\end{aligned}$$ Both of the functions $h$ and $1/h$ are bounded on $[0,1]$. Accordingly, $\ces(f)$ is equivalent to $x\mapsto 1/\varphi^{1/2}(1-x)$. So, $(\mathcal{C}f)^*(t)\asymp 1/\varphi^{1/2}(t)$. It follows that $$\|\mathcal{C}f\|_{\Lambda(\varphi)} \asymp \int_0^1 \frac{1}{\varphi^{1/2}(t)} \vfi'(t)\,dt
=2 \varphi^{1/2}(1)<\infty.$$ Hence, $(\mathcal{C}f)^*\in \Lambda(\varphi)\subseteq X$, which implies that $f\in[\mathcal{C},X]$. So, $\cx$ is not r.i.
According to we have $L^1(m_{X_a})=[\ces,X_a]$. Since $[\ces,X_a]$ fails to be r.i., so does $L^1(m_{X_a})$. But, $L^1(m_{X_a})=\lmx$. Accordingly, the closed subspace $\lmx$ of $\cx$ is never r.i.
\[wsc\] (a) A reasonable ‘substitute’ for reflexivity is weak sequential completeness. If $X$ is weakly sequentially complete, then $\cx$ is also weakly sequentially complete. Indeed, the weak sequential completeness of $X$ implies that of $\lmx$, [@curbera1 Corollary to Theorem 3]. But, $\lmx=\cx$; see Proposition \[3.1\](a).
\(b) Some further examples of vector measures $m$ for which the spaces $L^1(m)$ are known not to be r.i. arise from Rademacher functions, [@curbera4 Theorem 1], and from fractional integrals, [@curbera-ricker1 Example 5.15(b)].
We now address the question of when $\cx$ is order isomorphic to an *AL-space*, that is, to a Banach lattice in which the norm is additive over disjoint functions. In this regard, the space $X=\ele$ exhibits a particular feature, namely, that $$\label{caseL1}
[\ces,L^1]=L^1(m_{L^1})=L^1(|m_{L^1}|)=L^1(\log(1/t)) .$$ We point out that not only do the three spaces $[\ces,L^1]$, $L^1(m_{L^1})$ and $L^1(|m_{L^1}|)$ coincide, but that $[\ces,L^1]$ is also an AL-space.
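For orientation, both assertions are made transparent by a Tonelli argument (a standard computation, recorded here only for the reader's convenience): for measurable $f\ge0$, $$\|\mathcal{C}f\|_{L^1}=\int_0^1\frac{1}{x}\int_0^x f(y)\,dy\,dx
=\int_0^1 f(y)\int_y^1\frac{dx}{x}\,dy
=\int_0^1 f(y)\log(1/y)\,dy,$$ so the $[\ces,L^1]$-norm is exactly the norm of the weighted space $L^1(\log(1/t))$, which is clearly additive over disjoint functions.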
\[AL\] Let $X\not=\linf$ be a r.i. space. The following conditions are equivalent.
- The space $\cx$ is order isomorphic to an AL-space.
- The spaces $\lmx$ and $\lmxv$ are order isomorphic via the natural inclusion (this latter condition is written as $\lmx\simeq\lmxv$).
- The function $y\mapsto \|F_y\|_X$, $y\in[0,1]$, belongs to the associate space $\cx'$.
If any one of these conditions holds, then $$\cx=\lmx\simeq\lmxv.$$
\(a) $\Rightarrow$ (b) If $\cx$ is order isomorphic to an AL-space, then it is a.c., [@lindenstrauss-tzafriri Theorem 1.a.5 and Proposition 1.a.7]. Hence, by we have that $\cx=\cx_a=\lmx$ and so $\lmx$ is order isomorphic to an AL-space. This last condition implies that $\lmx$ is order isomorphic (via the natural inclusion) to $\lmxv$; see Proposition 2 of [@curbera2] and its proof.
\(b) $\Rightarrow$ (a) Suppose that $\lmx\simeq\lmxv$. According to we have $\cx_a=\lmx$ and so $\cx_a\simeq\lmxv$. Since $\lmxv$ is weakly sequentially complete, it follows that $\cx_a$ has the Fatou property and hence, that $\cx_a=(\cx_a)''$. Since $\cx_a\not=\{0\}$, we have $(\cx_a)'=\cx'$ and hence, $(\cx_a)''=\cx''$. Thus, $\cx_a=\cx''$ which, together with the chain of inclusions $\cx_a\subseteq \cx \subseteq \cx''$, yields $\cx=\cx_a$. Accordingly, $\cx\simeq\lmxv$ and this last space is an AL-space.
\(b) $\Leftrightarrow$ (c) Due to we have $\lmx=[\ces,X_a]$. Hence, the condition $\lmx\simeq\lmxv$ is equivalent to $[\ces,X_a]\simeq\lmxv$. This, in turn, is equivalent to the requirement $$\int_0^1|f(y)| \cdot\|F_y\|_{X}\,dy<\infty, \quad f\in [\ces,X_a],$$ which is precisely the condition that the function $y\mapsto\|F_y\|_{X}$ belongs to the associate space $[\ces,X_a]'=\cx'$.
In the sequel we will repeatedly use the fact that $\mathcal{C}\colon X\to X$ (necessarily boundedly) if and only if $X\subseteq \cx$. For r.i. spaces $X$ this corresponds precisely to the upper Boyd index $\overline{\alpha}_X$ of $X$ satisfying $\overline{\alpha}_X<1$; see [@krein-petunin-semenov II.6.7, Theorem 6.6] or [@maligranda Remark 5.13]. Note that the proof given in [@bennett-sharpley Theorem III.5.15] uses the Fatou property of $X$. Observe that if $\mathcal{C}\colon X\to X$, then also $\ces\colon X_a\to X_a$.
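As a classical illustration of the Boyd-index criterion (a standard fact, included here only for orientation): for $X=L^p$ with $1<p<\infty$ one has $\underline{\alpha}_{L^p}=\overline{\alpha}_{L^p}=1/p<1$, and the boundedness of $\ces\colon L^p\to L^p$ is precisely Hardy's inequality $$\bigg(\int_0^1\Big(\frac{1}{x}\int_0^x |f(t)|\,dt\Big)^p dx\bigg)^{1/p}\le \frac{p}{p-1}\,\|f\|_{L^p},\quad f\in L^p[0,1],$$ whereas for $X=L^1$ (where $\overline{\alpha}_{L^1}=1$) the operator $\ces$ fails to map $L^1$ into itself.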
\[L1\] Let $\vfi$ be an increasing, concave function with $\vfi(0)=0$ and having non-trivial dilation indices $0<\gamma_\vfi\le\delta_\vfi<1$.
- For $X=\Lambda(\vfi)$ the B.f.s. $\cx$ is order isomorphic to an AL-space.
- For $X=\marf$ the B.f.s. $\cx$ is not order isomorphic to an AL-space.
Via Proposition \[AL\], we need to decide whether or not $\lmx\simeq\lmxv$.
In [@lesnik-maligranda-1 Corollary 13] Leśnik and Maligranda identify the associate space of $\cx$ in the case when $X$ has the Fatou property and both $\ces,\ces^*$ act boundedly on $X$. Here $f\mapsto \ces^*(f)(x):=\int_x^1\frac{f(t)}{t}\,dt$, $x\in[0,1]$, for any a.e. finite measurable function $f$ (denoted by $f\in L^0$) for which it is meaningfully defined, is the *Copson operator*. Then $$\label{c*}
\cx'= \left(X'\Big(\frac{1}{1-x}\Big)\right)^{\widetilde{}}
=\bigg\{f: y\mapsto\frac{\tilde{f}(y)}{1-y}\in X'\bigg\},$$ where $\tilde{f}$ is the decreasing majorant of $f$, defined by $\tilde{f}(y):=\sup_{x\geqslant y}|f(x)|$ and, for a weight function $0<w$ on $[0,1]$ and a B.f.s. $Y$, we set $Y(w):=\{h:wh\in Y\}$ and $\tilde{Y}:=\{g:\tilde{g}\in Y\}$.
\(a) The identification applies to $X=\laf$ as $X$ possesses the Fatou property and because $\underline{\alpha}_X=\gamma_\vfi$ and $\overline{\alpha}_X=\delta_\vfi$, together with the given index assumptions, imply that $0<\underline{\alpha}_X \le \overline{\alpha}_X<1$ which, in turn, guarantees that $\ces,\ces^*\colon \laf\to \laf$ boundedly.
Since $\Lambda(\varphi)'=M(\psi)$, for $\psi(t):=t/\vfi(t)$, we have from that $$[\ces,\laf]'
= \left(M(\psi)\Big(\frac{1}{1-y}\Big)\right)^{\widetilde{}}
=\bigg\{f:\sup_{0<t\le1}\frac{1}{\vfi(t)}
\int_0^t\Big(\frac{\tilde{f}(y)}{1-y}\Big)^*(s)\,ds<\infty\bigg\}.$$ The condition $0<\gamma_\vfi$ implies that $\delta_{\psi}<1$ which allows us, via , to simplify the previous description to $$\label{dual-simple}
[\ces,\laf]'
=\bigg\{f:\sup_{0<t\le1}\frac{t}{\vfi(t)}
\Big(\frac{\tilde{f}(y)}{1-y}\Big)^*(t)<\infty\bigg\}.$$
We need to verify that $y\mapsto \|F_y\|_{\Lambda(\varphi)}\in [\ces,\laf]'$; see Proposition \[AL\]. From it follows that $$\|F_y\|_{\Lambda(\varphi)}= \int_0^{1-y} \frac{\varphi'(s)}{y+s}\,ds.$$ This function is decreasing (as a function of its variable $y$), so it coincides with its decreasing majorant, that is, $(\|F_y\|_{\Lambda(\varphi)})^{\widetilde{}}=\|F_y\|_{\Lambda(\varphi)}$. Moreover, for $0<y\le1$, we have $$\begin{aligned}
\frac{\|F_y\|_{\Lambda(\varphi)}}{1-y}
&\le&
2 \chi_{[0,1/2]}(y)\int_0^{1} \frac{\varphi'(s)}{y+s}\,ds
+
\chi_{[1/2,1]}(y)\frac{2}{1-y}\int_0^{1-y} \varphi'(s)\,ds
\\ &\le &
2\int_0^{1} \frac{\varphi'(s)}{y+s}\,ds +2 \frac{\varphi(1-y)}{1-y}
\\ &:= &
g(y)+h(y).\end{aligned}$$ In this last sum, $g$ is decreasing and $h$ is increasing due to the quasi-concavity of $\vfi$ (which implies that $\vfi(t)/t$ is decreasing), i.e., $g^*=g$ and $h^*(t)=h(1-t)$. Using the property $(g+h)^*(t)\le g^*(t/2)+h^*(t/2)$ (see (2.23) in [@krein-petunin-semenov Ch.II §2, p.67]), it follows that $$\bigg(\frac{\|F_y\|_{\Lambda(\varphi)}}{1-y}\bigg)^*(t)\le
g\Big(\frac{t}{2}\Big) + h\Big(1-\frac{t}{2}\Big) =
2 \int_0^{1} \frac{\varphi'(s)}{\frac t2+s}\,ds +2 \frac{\varphi(t/2)}{t/2}.$$ Accordingly, $$\sup_{0<t\le1}\frac{t}{\vfi(t)}\bigg(\frac{\|F_y\|_{\Lambda(\varphi)}}{1-y}\bigg)^*(t)
\le
2
\sup_{0<t\le1}\frac{t}{\vfi(t)} \int_0^{1} \frac{\varphi'(s)}{\frac t2+s}\,ds
+
4\sup_{0<t\le1}\frac{t}{\vfi(t)}\frac{\varphi(t/2)}{t } .$$ The last term on the right-hand side is bounded (as $\vfi$ increasing implies $\vfi(t/2)/\vfi(t)\le1$) and so we concentrate on the first term. Due to the quasi-concavity of $\vfi$ we have $t\vfi'(t)\le \vfi(t)$. This, together with a change of variables, yields, for $t\in(0,1]$, that $$\begin{aligned}
\frac{t}{\vfi(t)} \int_0^{1} \frac{\varphi'(s)}{\frac t2+s}\,ds
&\le&
\int_0^{1} \frac{\varphi(s)}{\vfi(t)}\frac{t}{s(\frac t2+s)} \,ds
\\ &\le&
\int_0^{1/t} \frac{\varphi(tu)}{\vfi(t)}\frac{2du}{u(1+u)} =:I_t.\end{aligned}$$ The conditions $0<\gamma_\varphi\le \delta_\varphi<1$ imply that there exist $\alpha, \beta\in(0,1)$, and $u_0, u_1$ with $0<u_0< 1<u_1<\infty$ such that $$\frac{\vfi(tu)}{\vfi(t)}\le u^\alpha,\quad 0<u<u_0,
\qquad \frac{\vfi(tu)}{\vfi(t)}\le u^\beta,\quad u>u_1,$$ [@krein-petunin-semenov pp.53-56]. Since $\vfi(tu)/\vfi(t) \le \max\{1,u_1\}=u_1$, for $u_0<u<u_1$ (via the quasi-concavity of $\vfi$), it follows that $$I_t \le \int_0^{u_0} \frac{2u^\alpha du}{u(1+u)}
+
\int_{u_0}^{u_1} u_1\frac{2du}{u(1+u)}
+
\int_{u_1}^{\infty} \frac{2u^\beta du}{u(1+u)},$$ which is finite as $0<\alpha,\beta<1$. Thus, the function $y\mapsto\|F_y\|_{\Lambda(\varphi)}$ belongs to $[\ces,\laf]'$ and so $L^1(m_{\laf})\simeq L^1(|m_{\laf}|)$. Hence, $[\ces,\laf]$ is order isomorphic to an AL-space.
\(b) For $X=\marf$ the identification can again be applied, for the same reasons that it was applied in the case of $\laf$; see part (a). In particular, both $\ces,\ces^*\colon \marf\to \marf$ boundedly.
Since $\marf'=\Lambda(\psi)$, for $\psi(t):=t/\vfi(t)$, we have from that $$[\ces,\marf]'
=
\left(\Lambda(\psi)\Big(\frac{1}{1-y}\Big)\right)^{\widetilde{}}
=
\bigg\{f:\int_0^1 \Big(\frac{\tilde{f}(y)}{1-y}\Big)^*(t)\, \psi'(t)\,dt<\infty\bigg\}.$$
We need to verify that $y\mapsto \|F_y\|_{\marf}\not\in [\ces,\marf]'$; see Proposition \[AL\]. Since the upper dilation index of $\varphi$ satisfies $\delta_\varphi<1$, we can use the equivalent expression for the norm in $\marf$ to obtain from that $$\|F_y\|_{\marf}\asymp \sup_{0\le s\le 1-y} \frac{\vfi(s)}{s+y} .$$ This function is decreasing (as a function of its variable $y$) and so it coincides with its decreasing majorant, $(\|F_y\|_{\marf})^{\widetilde{}}=\|F_y\|_{\marf}$. Moreover, for each $y\in[0,1]$, we have $$\|F_y\|_{\marf}\asymp\sup_{0\le s\le 1-y} \frac{\vfi(s)}{s+y} \ge \vfi(1-y) ,$$ and hence, modulo a positive constant, $$\frac{\|F_y\|_{\marf}}{1-y} \ge \frac{\vfi(1-y)}{1-y} .$$ Since $\vfi$ is quasi-concave, $\vfi(t)/t$ is decreasing and so $\big(\frac{\vfi(1-y)}{1-y}\big)^*(t)=\vfi(t)/t$, i.e., $$\left(\frac{\|F_y\|_{\marf}}{1-y}\right)^*(t) \ge \frac{\vfi(t)}{t} = \frac{1}{\psi(t)}.$$ Accordingly, modulo a positive constant, we have $$\int_0^1 \bigg(\frac{(\|F_y\|_{\marf})^{\widetilde{}}}{1-y}\;\bigg)^*(t)\, \psi'(t)\,dt
\ge
\int_0^1 \frac{\psi'(t)}{\psi(t)}\,dt =\infty.$$ Hence, the function $y\mapsto\|F_y\|_{\marf}$ does not belong to $[\ces,\marf]'$ and so $L^1(m_{\marf})\not\simeq L^1(|m_{\marf}|)$. Consequently, $[\ces,\marf]$ is not order isomorphic to an AL-space.
A precise description of when $\cx$ is a weighted $L^1$-space (in particular, an AL-space) can be deduced from [@schep Theorem 3.3].
\(a) The argument at the beginning of the proof of $(a)\Rightarrow (b)$ in Proposition \[AL\] shows that also $[\ces,\marf_0]$ is not order isomorphic to an AL-space.
\(b) If $\cx$ is order isomorphic to an AL-space, then Proposition \[AL\] implies that $\cx=\lmx\simeq\lmxv$. Thus, $\chi_{[0,1]}\in\lmxv$ and so $\mx$ has finite variation. Hence, whenever $\mx$ has infinite variation (e.g. $X=L^p_{\textrm{exp}}$, $p\ge1$, or if $\log(1/t)\not\in\laf$), then $\cx$ cannot be order isomorphic to an AL-space.
\(c) Further examples of when $\cx$ fails to be order isomorphic to an AL-space occur in Proposition \[cc lmx\] below.
The final results of this section address the question of when $X$ is contained in $\lmx$ or in $\lmxv$. In the first case, we have the integral representation for $\ces\colon X\to X$ as given in via the Bartle-Dunford-Schwartz integral. In the latter case, the representation for $\ces\colon X\to X$ is via the Bochner integral as given by and .
\[3.6\] (a) Let $X\not=\linf$ be a r.i. space such that $\overline{\alpha}_X<1$. Then each of the containments $X\subseteq\cx$ and $X_a\subseteq L^1(\mx)$ is proper. Indeed, since $\overline{\alpha}_X<1$, we have $X\subseteq\cx$, where $X$ is r.i. and $\cx$ is not; see Theorem \[reflexive\](b). Thus, $X=\cx$ is impossible.
Applying the previous argument to $X_a$ (in place of $X$) shows that $X_a\subseteq[\ces,X_a]$ properly. But, $[\ces,X_a]=\lmx$; see .
If, in addition, $X$ has a.c. norm, then $X_a=X$ and so $X\subseteq\lmx=\cx$ properly.
\(b) Unlike for the containment $X_a\subseteq\lmx$, it is not true in general (with $\overline{\alpha}_X<1$) that $X\subseteq\lmx$. Indeed, for $X=L^{p,\infty}$, $1<p<\infty$, we have $$X_a=L^{p,\infty}_0=\left\{f:\lim_{t\to0} t^{-1/q}\int_0^tf^*(s)\,ds=0\right\},\quad \frac1p+\frac1q=1.$$ Since $\overline{\alpha}_X=1/p<1$, it follows from part (a) that $L^{p,\infty}_0\subseteq L^1(m_{L^{p,\infty}})$ properly. To see that $L^{p,\infty}\not\subseteq L^1(m_{L^{p,\infty}})$ we consider, as in the proof of Proposition \[several\], the decreasing function $x^{-1/p}\in L^{p,\infty}\setminus L_0^{p,\infty}$. Since $\mathcal{C}(x^{-1/p})=qx^{-1/p}$, it follows that $x^{-1/p}\not\in [\mathcal{C},L_0^{p,\infty}]$. From we have $[\mathcal{C},L_0^{p,\infty}]=L^1(m_{L^{p,\infty}})$. Accordingly, $x^{-1/p}\in L^{p,\infty}$ and $x^{-1/p}\not\in L^1(m_{L^{p,\infty}})$.
Let $X=\Lambda(\vfi)$ satisfy $0<\gamma_\vfi\le\delta_\vfi<1$. It follows from Proposition \[L1\](a) that $\cx\simeq\lmxv$. Since $\overline{\alpha}_X=\delta_\vfi<1$, we also have $X\subseteq\cx$ and hence, $X\subseteq\lmxv$. On the other hand, for $X=L^1$ we have the contrary situation that $L^1\not\subseteq L^1(\log(1/t))=L^1(m_{L^1})$; see . Of course, here $\overline{\alpha}_X=\delta_\vfi=1$. A similar situation occurs for $X=L^p$, $1<p<\infty$, namely $L^p\not\subseteq L^1(|m_{L^p}|)$, [@ricker Theorem 1.1(ii)]. The following result exhibits additional facts concerning whether or not we have $X\subseteq\lmxv$.
\[marf x and lmxv\] Let $X\not=\linf$ be a r.i. space.
- It is always the case that $L^1\not\subseteq\lmxv$.
- Suppose that $X\subseteq\lmxv$. Then the containment is necessarily proper.
- The containment $X\subseteq\lmxv$ holds if and only if the function $$y\mapsto \|F_y\|_X=\Big\|t\mapsto\frac{1}{t+y}\chi_{[0,1-y]}\Big\|_X, \quad y\in(0,1],$$ belongs to the associate space $X'$ of $X$.
- For $X=\marf$, it is the case that $\marf\not\subseteq L^1(|m_{\marf}|)$.
\(a) If $L^1\subseteq\lmxv$ holds, then $\int_0^1|f(y)|\cdot\|F_y\|_X\,dy<\infty$ for all $f\in L^1$ and so $\sup_{0<y\le1}\|F_y\|_X<\infty$. However, this is impossible since, for each $y\in(0,1]$, we have $$\|F_y\|_X\ge \|F_y\|_{L^1}=\int_0^1\frac{1}{x}\chi_{[y,1]}(x)\,dx=\log(1/y).$$
\(b) If $X\simeq\lmxv$ holds, then $X$ is a r.i. space which is order isomorphic to an AL-space. Then, for some constants $C_1,C_2>0$, we have $C_1\|f\|_{\lmxv}\le \|f\|_X\le C_2 \|f\|_{\lmxv}$, $f\in X$. So, for $0\le s<t\le1$, we have $$\begin{aligned}
\|\chi_{[0,t]}\|_X &\le& C_2 \|\chi_{[0,t]}\|_{\lmxv} =C_2\big(\|\chi_{[0,s]}\|_{\lmxv}+\|\chi_{[s,t]}\|_{\lmxv}\big)
\\ &\le&
\frac{C_2}{C_1}\big(\|\chi_{[0,s]}\|_X+\|\chi_{[s,t]}\|_X\big)
=
\frac{C_2}{C_1}\big(\|\chi_{[0,s]}\|_X+\|\chi_{[0,t-s]}\|_X\big)
.\end{aligned}$$ In a similar way we can obtain the corresponding lower bound. It follows that the fundamental function $\fix$ satisfies $\fix(s+t)\asymp \fix(s)+\fix(t)$ for $s,t,s+t\in[0,1]$. This, together with the continuity of $\fix$ on $[0,1]$ and $\fix(0)=0$, implies that $\fix(ta)\asymp t\fix(a)$ for $t, a, ta\in[0,1]$. Hence, $\fix(t)\asymp t$ for $t\in[0,1]$, which implies that $X$ is order isomorphic to $L^1$. But, this contradicts part (a).
\(c) Note that $X\subseteq \lmxv$ if and only if $f\in\lmxv$ for all $f\in X$, that is (via Theorem \[measure\]), $$\int_0^1|f(y)| \cdot\|F_y\|_X\,dy<\infty, \quad f\in X,$$ which corresponds to the function $y\mapsto\|F_y\|_X$ belonging to the space $X'$. Since $X$ is r.i., it follows from that this is equivalent to the function $$y\mapsto \|(F_y)^*\|_X=\Big\|t\mapsto\frac{1}{t+y}\chi_{[0,1-y]}\Big\|_X, \quad y\in(0,1],$$ belonging to $X'$.
\(d) Applying part (c) to $X=\marf$ we need to show that $y\mapsto\|F_y\|_{\marf}$ does not belong to $\marf'=\Lambda(\psi)$, for $\psi(t):=t/\vfi(t)$.
The function $y\mapsto\|F_y\|_{\marf}$ can be estimated below, using , for the values $0\le y\le 1/2$ (in which case $y\le 1-y$), namely $$\begin{aligned}
\|F_y\|_{\marf} &=& \sup_{0<t\le1} \frac{\vfi(t)}{t} \int_0^t (F_y)^*(s)\,ds
\\ &\ge&
\sup_{0<t\le1} \vfi(t)(F_y)^*(t)
=
\sup_{0<t\le1-y} \frac{\vfi(t)}{y+t}
\ge
\frac{\vfi(y)}{2y}.\end{aligned}$$ Hence, we have that $$(\|F_y\|_{\marf})^*(t)\ge \frac{\vfi(t)}{2t}=\frac12 \frac{1}{\psi(t)},\quad 0<t\le \frac12.$$
Consequently, $$\big\|y\mapsto\|F_y\|_{\marf}\;\big\|_{\Lambda(\psi)}
=
\int_0^{1}(\|F_y\|_{\marf})^*(t)\, \psi'(t)\,dt
\ge
\frac12 \int_0^{1/2} \frac{\psi'(t)}{\psi(t)}\,dt
= \infty.$$
The proof of Proposition \[marf x and lmxv\](d) shows that the result also applies to the a.c. part $\marf_0$ of $\marf$. More generally, for a r.i. space $X\not=\linf$ we have $X_a\not\subset L^1(|m_{X_a}|)$ if and only if $X\not\subset\lmxv$ since $X_a$ and $X$ have the same norm and $X_a'=X'$.
Regarding the separability of $\cx$, it is known that the B.f.s.’ $Ces_p([0,1])$, $1<p<\infty$, are *separable*, [@astashkin-maligranda1 Theorem 1], [@astashkin-maligranda2 Theorem 3.1(b)]. As pointed out in [@astashkin-maligranda2 p.18], this is due to the fact that $Ces_p([0,1])$, which coincides with $[\ces,L^p]_a=[\ces,L^p]$, contains $\linf$ and has a.c. norm. More generally, since $\linf\subseteq X_a$ for any r.i. space $X\not=\linf$ and $\ces\colon\linf\to\linf$, we necessarily have that $\linf\subseteq[\ces,X_a]=[\ces,X]_a$ (cf. Proposition \[3.1\]) and hence, $[\ces,X]_a$ is separable by the a.c. of its norm. In particular, if $X$ itself has a.c. norm, then $\cx$ is separable; see . Since the $\sigma$-algebra $\mathcal{M}$ is $\lambda$-essentially countably generated and $\lmx=\cx_a$, via , this also follows from a general result on the separability of $L^1(m)$, [@ricker2 Proposition 2].
The Cesàro operator acting on $\cx$
===================================
It is known that the operator $\ces\colon \elp\to\elp$, for $1<p<\infty$, is not compact, [@leibowitz p.28]. Actually, this is a rather general feature.
\[4.1\] Let $X\not=\linf$ be a r.i. space satisfying $\overline{\alpha}_X<1$. Then the continuous operator $\ces\colon X\to X$ is not compact.
For each $\alpha\ge0$, direct calculation shows that the continuous function $x^\alpha$ (on $[0,1]$) satisfies $\ces(x^\alpha)=x^\alpha/(\alpha+1)$ and so $1/(\alpha+1)$ is an eigenvalue of $\ces$. Accordingly, the interval $(0,1]$ is contained in the spectrum of $\ces$ and so $\ces$ cannot be compact.
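For completeness, the direct calculation behind this is the one-line computation $$\ces(x^\alpha)(x)=\frac{1}{x}\int_0^x t^\alpha\,dt=\frac{x^\alpha}{\alpha+1},\quad x\in(0,1],$$ so that $\{1/(\alpha+1):\alpha\ge0\}=(0,1]$ consists entirely of eigenvalues of $\ces$; since the spectrum of a compact operator can accumulate only at $0$, compactness is impossible.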
Since the operator $\ces\colon X\to X$, whenever it is available, factorizes through $\lmx$ via $\imx\colon\lmx\to X$, it follows that also $\imx$ is not compact. By the same argument also $\ces\colon\cx\to X$ fails to be compact. Actually, the requirement that $\ces$ map $X$ into $X$ is unnecessary.
\[compact\] Let $X\not=\linf$ be any r.i. space. Then the operator $\ces\colon\cx\to X$ is not compact.
According to [@okada-ricker-rpiazza1 Theorem 4], the bounded variation of $\mx$ is a necessary condition for $\imx\colon\lmx\to X$ to be compact. Thus, if $\mx$ has infinite variation, then $\imx\colon\lmx\to X$ is not compact. Since the restriction of $\ces\colon\cx\to X$ to the closed subspace $\lmx$ is $\imx$, also $\ces\colon\cx\to X$ fails to be compact.
Suppose now that $\mx$ has finite variation. Then a further condition is necessary for $\imx\colon\lmx\to X$ to be compact: the existence of a Bochner integrable density, in our case the function $F\colon y\mapsto F_y$, with the property that the set $\mathcal{B}:=\{G(y):=F_y/\|F_y\|_X, 0\le y\le1\}$ is relatively compact in $X$, [@okada-ricker-rpiazza1 Theorem 1]. So, assume then that this last condition holds. Choose a sequence $\{y_n\}\subseteq[0,1]$ which increases to 1. Since $\{G_{y_n}\}\subseteq \mathcal{B}$, there is a subsequence, again denoted by $\{G_{y_n}\}$ for convenience, which converges in $X$. Let $\psi\in X$ be the limit of $\{G_{y_n}\}$. By passing to a subsequence, if necessary, we can assume that $G_{y_n}(x)\to \psi(x)$ for a.e. $x\in[0,1]$. Recall that $F_y$ is given by $F_y(x)=(1/x)\chi_{[y,1]}(x)$; see . Thus, as $\{y_n\}$ increases to $1$ we have $F_{y_n}(x)\to0$ for a.e. $x\in[0,1]$. The same property occurs also for $\{G_{y_n}\}$. As a consequence, $\psi=0$ a.e. This contradicts $\|\psi\|_X=1$ as $\|G_{y_n}\|_X=1$ for all $n\ge1$.
A useful substitute for compactness is the *complete continuity* of an operator, that is, one which maps weakly convergent sequences to norm convergent sequences. In view of the Eberlein-Smulian Theorem, this is equivalent to mapping relatively weakly compact sets to relatively norm compact sets. For the particular case of the Cesàro operator, due to the fact that the vector measure $\mx$ has relatively compact range and $\sigma$-finite variation (cf. Theorem \[measure\](c), (d)) it is the case that the (restricted) integration operator $\imx\colon\lmxv\to X$ is always completely continuous, [@okada-ricker-sanchez Proposition 3.56]. This fact will have important consequences.
The following result should be compared with Proposition \[4.1\].
\[cc lmxv\] Let $X\not=\linf$ be a r.i. space such that the function $y\mapsto\|F_y\|_X$ belongs to $X'$. Then $\ces\colon X\to X$ is completely continuous.
In particular, this occurs for $X=\Lambda(\vfi)$ if $\varphi$ satisfies $0<\gamma_\varphi\le \delta_\varphi<1$.
By Proposition \[marf x and lmxv\](c) the function $y\mapsto\|F_y\|_X$ belonging to $X'$ implies that $X\subseteq\lmxv$. According to we have $X\subseteq\cx$ and so the operator $\ces\colon X\to X$ is continuous. Moreover, it can be factorized via the continuous inclusion $X\subseteq\lmxv$ and the restricted integration operator $\imx\colon\lmxv\to X$. But, as noted above, $\imx\colon\lmxv\to X$ is necessarily completely continuous. The ideal property of completely continuous operators then implies that $\ces\colon X\to X$ is also completely continuous.
The particular case of $X=\Lambda(\vfi)$ with $0<\gamma_\varphi\le \delta_\varphi<1$ follows from Proposition \[AL\] and Theorem \[L1\](a).
Any r.i. space $X$ for which the function $y\mapsto\|F_y\|_X$ belongs to $X'$ cannot be reflexive. For, if so, then $\ces\colon X\to X$ is a completely continuous operator defined on a reflexive Banach space and hence, it is necessarily compact (which contradicts Proposition \[4.1\]). For $X=L^p$, $1<p<\infty$, this was shown explicitly in (the proof of) Theorem 1.1(i) in [@ricker].
We deduce some further consequences from the complete continuity of the restricted integration operator $\imx\colon\lmxv\to X$.
\[cc lmx\] Let $X\not=\linf$ be a r.i. space.
- If $\cx$ is order isomorphic to an AL-space, then $\ces\colon\cx\to X$ is completely continuous.
- Let $X$ be reflexive and satisfy $\overline{\alpha}_X<1$. Then $\ces\colon\cx\to X$ is not completely continuous. In particular, $\cx$ cannot be order isomorphic to an AL-space.
- Suppose that $X$ does not contain a copy of $\ell^1$. If $\ces\colon\cx\to X$ is completely continuous, then $\cx$ is order isomorphic to an AL-space.
\(a) From Proposition \[AL\] we have $\cx\simeq\lmxv$ which, together with the operator $\imx\colon\lmxv\to X$ being completely continuous, establishes the claim.
\(b) From $\overline{\alpha}_X<1$ we have $\ces\colon X\to X$ and so $X\subseteq\cx$. Then, $\ces\colon X\to X$ can be factorized via $\ces\colon\cx\to X$. Suppose that $\ces\colon\cx\to X$ is completely continuous. Then also $\ces\colon X\to X$ is completely continuous. Since $X$ is reflexive, we conclude that $\ces\colon X\to X$ is compact, which is a contradiction to Proposition \[4.1\].
Suppose now that $\cx$ is order isomorphic to an AL-space. Then part (a) implies that $\ces\colon\cx\to X$ is completely continuous. But, we have just proved that this is not possible.
\(c) Suppose that $\cx$ is not order isomorphic to an AL-space. Then it follows from Proposition \[AL\] that $\lmxv\not=\lmx$. On the other hand, the complete continuity of $\ces\colon\cx\to X$ implies (via factorization through $\lmx$) that $\imx\colon\lmx\to X$ is also completely continuous. Combining Corollary 1.4 of [@calabuig-etal] with Proposition 1.1 of [@okada-ricker-rpiazza2], it follows that $\lmxv\simeq\lmx$. Contradiction!
For an example of a reflexive r.i. space with $\overline{\alpha}_X=1$ we refer to [@maligranda Example 12, p.29].
Recall that a Banach space $X$ has the *Dunford-Pettis property* if every Banach-space-valued, weakly compact linear operator defined on $X$ is completely continuous. The classical example of a space with this property is $L^1$. In Theorem \[L1\](a) it was established, for certain Lorentz spaces $\laf$, that $[\ces,\laf]\simeq L^1(|m_{\laf}|)$ with $|m_{\laf}|$ a finite, non-atomic measure. Hence, $[\ces,\laf]$ has the Dunford-Pettis property in this case. However, as noted in the Introduction, $Ces_p=[\ces,L^p]$, $1<p<\infty$, fails the Dunford-Pettis property, [@astashkin-maligranda1 §6, Corollary 1]. The proof of this given in [@astashkin-maligranda1] relies on some results concerning certain Banach space properties particular to $Ces_p$. The following extension of this result is established via the methods of vector measures.
\[4.7\] Let $X$ be any reflexive r.i. space with $\overline{\alpha}_X<1$. Then, $\cx$ fails the Dunford-Pettis property.
Since $X$ has a.c. norm, we have $\lmx=\cx$ (cf. ) and hence, because of $\overline{\alpha}_X<1$, it follows that $\ces\colon X\to X$ and so $X\subseteq\cx=\lmx$. Suppose that $\cx$ has the Dunford-Pettis property. Then the weakly compact operator $\imx\colon\lmx\to X$ (recall that $X$ is reflexive) is necessarily completely continuous. Since $\ces\colon X\to X$ is the composition of $\imx\colon\lmx\to X$ and the natural inclusion of $X$ into $\lmx$, it follows that $\ces\colon X\to X$ is completely continuous. The reflexivity of $X$ then ensures that $\ces\colon X\to X$ is actually compact. But, this contradicts Proposition \[4.1\]. Accordingly, $\cx$ fails the Dunford-Pettis property.
The Fatou property for $\cx$
============================
In [@lesnik-maligranda-1 Theorem 1(d)] it was noted that if $X$ has the Fatou property, then also $\cx$ has the Fatou property. As explained in the beginning of §3, the results on optimal domains for kernel operators given in [@curbera-ricker3 §3] also apply to the kernel generating the Cesàro operator (and to many other operators). In [@curbera-ricker3], a fine analysis of the Fatou property was undertaken. Proposition \[3.1\] above presents a partial view of the relations between the various function spaces involved. The complete picture of the results in [@curbera-ricker3 §3] is presented below. It involves the space $\wlmx$ consisting of all the functions which are *weakly integrable* with respect to the vector measure $\mx$, that is, of all measurable functions $f\colon[0,1]\to{\mathbb R}$ such that $f\in L^1(|x^*m_X|)$, for every $x^*\in X^*$. It is a B.f.s. for the same norm as used in $\lmx$ and contains $\lmx$ as a closed subspace, [@okada-ricker-sanchez Ch.3, §1]. The Copson operator $\ces^*$ was defined in the proof of Theorem \[L1\]. Whenever $X$ has a.c. norm and $\overline{\alpha}_X<1$ it is the dual operator to $\ces\colon X\to X$.
The following result is a summary of facts that occur in [@curbera-ricker3], specialized to the Cesàro operator. Parts (a), (f) already occur in Proposition \[3.1\] and (b) also occurs in [@lesnik-maligranda-1 Theorem 1]. Part (k) is Theorem 3.1 of [@schep]; it provides an alternate description of $\cx'$ to that given in .
\[all Indag\] Let $X\not=\linf$ be a r.i. space.
- If $X$ has a.c. norm, then $\cx$ has a.c. norm and $\cx=\lmx$.
- If $X$ has the Fatou property, then $\cx$ has the Fatou property.
- If $X$ has the weak Fatou property, then $\cx$ has the weak Fatou property.
- If $X'$ is a norming subspace of $X^*$, then $\cx'$ is a norming subspace of $\cx^*$.
- If $X'$ is a norming subspace of $X^*$, then $\cx''=[\mathcal{C},X'']$.
- If $f\in \lmx$, then $f\in\cx$ and $\|f\|_{\lmx}= \|f\|_{\cx}$.
- If $f\in \cx$, then $f\in\wlmx$ and $\|f\|_{\wlmx}\le\|f\|_{\cx}$.
- If $f\in \wlmx$, then $f\in[\mathcal{C},X'']$ and $\|f\|_{[\mathcal{C},X'']}\le\|f\|_{\wlmx}$.
- $\cx''=\wlmx$ with equality of norms.
- If $X'$ is a norming subspace of $X^*$, then $\wlmx=[\mathcal{C},X'']$.
- If $X$ has a.c. norm, the Fatou property and satisfies $\overline{\alpha}_X<1$, then $\cx'$ equals the ideal in $L^0$ generated by the range $\{\ces^*(f):f\in X'\}$ where $\ces^*$ acts in $X'$.
In the event that $X'$ is a norming subspace of $X^*$, there is equality of norms in (g) and (h).
The following chain of inclusions, which refines , summarizes the situation (cf. (9) on p.199 of [@curbera-ricker3]): $$\label{eq fatou}
\lmx \subseteq \cx \subseteq \cx'' = \wlmx = \lmx'' \subseteq [\mathcal{C},X''] .$$ If $X$ has a.c. norm, then the first and last containments are equalities and the second containment an isometric embedding. On the other hand, if $X$ has the Fatou property (i.e., $X=X''$), then the second and last containments are equalities. Finally, in case $X$ has both a.c. norm and the Fatou property (i.e., $X$ is weakly sequentially complete), then all spaces involved coincide.
It should be stressed that the space $\wlmx$ plays a crucial role. Recall that whenever a B.f.s. $X$ does not have the Fatou property, it is always possible to identify its *‘Fatou completion’*, that is, the smallest of all B.f.s.’ which contain $X$ and have the Fatou property, [@zaanen1 §71, Theorem 2]. This space coincides with $X''$. Proposition \[all Indag\](i) shows that the space $\wlmx$ is precisely the Fatou completion of the space $\cx$, whereas $\lmx$ is the a.c. part of $\cx$.
We conclude with two relevant results. Recall (cf. Theorem \[reflexive\](a)) that the space $\cx$ is never reflexive. Weak sequential completeness of a B.f.s. is often a good substitute when the space fails to be reflexive.
\[5.2\] Let $X\not=\linf$ be a r.i. space.
- If the integration operator $\imx\colon\lmx\to X$ is weakly compact, then $\cx$ is weakly sequentially complete.
If, in addition, $\overline{\alpha}_X<1$, then $\ces\colon X\to X$ is also weakly compact.
- If the integration operator $\imx\colon\lmx\to X$ is completely continuous, then $\cx$ is weakly sequentially complete.
If, in addition, $\overline{\alpha}_X<1$, then $\ces\colon X\to X$ is also completely continuous.
\(a) If $\imx\colon\lmx\to X$ is weakly compact, then Corollary 2.3 of [@curbera-ricker4] asserts that $\lmx=\wlmx$ and hence, $\lmx$ has the Fatou property; see . Being also a.c., it follows that $\lmx$ is weakly sequentially complete. Again according to we then have $\lmx=\cx =\wlmx$.
If, in addition, $\ces\colon X\to X$, then $\ces$ factorizes through $\lmx$ via $\imx$ and so is itself also weakly compact.
\(b) If $\imx\colon\lmx\to X$ is completely continuous, then again it is known that necessarily $\lmx=\wlmx$, [@delcampo-etal Theorem 3.6]. A similar argument as in the proof of (a) establishes the result.
S.V. Astashkin and L. Maligranda, *Cesàro function spaces fail the fixed point property*, Proc. Amer. Math. Soc. **136** (2008), 4289–4294.
S.V. Astashkin and L. Maligranda, *Structure of Cesàro function spaces*, Indag. Math. (N.S.) **20** (2009), 329–379.
S.V. Astashkin and L. Maligranda, *Structure of Cesàro function spaces: a survey*. In: Function spaces X, Banach Center Publ., **102**, Polish Acad. Sci. Inst. Math., Warsaw, (2014), 13–40.
R. G. Bartle, N. Dunford and J. Schwartz, *Weak compactness and vector measures*, Canad. J. Math. **7** (1955), 289–305.
C. Bennett and R. Sharpley, *Interpolation of Operators*, Academic Press Inc., Boston (1988).
J. M. Calabuig, J. Rodríguez and E. A. Sánchez-Pérez, *On completely continuous integration operators of a vector measure*, J. Convex Anal., **21** (2014), 811–818.
R. del Campo, S. Okada and W. J. Ricker, *$L^p$-spaces and ideal properties of integration operators for Fréchet-space-valued measures*, J. Operator Theory **68** (2012), 463–485.
G. P. Curbera, *Operators into $L^1$ of a vector measure and applications to Banach lattices*, Math. Ann. **293** (1992), 317–330.
G. P. Curbera, *When $L^1$ of a vector measure is an AL–space*, Pacific J. Math. **162** (1994), 287–303.
G. P. Curbera, *Banach space properties of $L^1$ of a vector measure*, Proc. Amer. Math. Soc. **123** (1995), 3797–3806.
G. P. Curbera, *A note on function spaces generated by Rademacher series*, Proc. Edinburgh Math. Soc. **40** (1997), 119–126.
G. P. Curbera and W. J. Ricker, *Optimal domains for kernel operators via interpolation*, Math. Nachr. **244** (2002), 47–63.
G. P. Curbera and W. J. Ricker, *Banach lattices with the Fatou property and optimal domains of kernel operators*, Indag. Math. (N. S.), **17** (2006), 187–204.
G. P. Curbera, O. Delgado and W. J. Ricker, *Vector measures: where are their integrals?*, Positivity, **13** (2009), 61–87.
S. G. Krein, Ju. I. Petunin and E. M. Semenov, *Interpolation of Linear Operators*, Amer. Math. Soc., Providence, (1982).
G.M. Leibowitz, *Spectra of finite range Cesàro operators*, Acta Sci. Math. (Szeged), **35** (1973), 27–29.
K. Leśnik and L. Maligranda, *On abstract Cesàro spaces. Duality*, J. Math. Anal. Appl., **424** (2015), 932–951.
K. Leśnik and L. Maligranda, *On abstract Cesàro spaces. Optimal range*, Integral Equations Operator Theory, **81** (2015), 227–235.
J. Lindenstrauss and L. Tzafriri, *Classical Banach Spaces* vol. II, Springer-Verlag, Berlin, (1979).
L. Maligranda, *Indices and interpolation*, Dissert. Math. **234** (1985), 1–54.
S. Okada, W. J. Ricker and L. Rodríguez-Piazza, *Compactness of the integration operator associated with a vector measure*, Studia Math. **150** (2002), 133–149.
S. Okada, W. J. Ricker and L. Rodríguez-Piazza, *Operator ideal properties of vector measures with finite variation*, Studia Math. **205** (2011), 215–249.
S. Okada, W. J. Ricker and E. Sánchez-Pérez, *Optimal Domain and Integral Extension of Operators acting in Function Spaces*, Operator Theory: Advances and Applications **180**, Birkhäuser Verlag, Basel-Berlin-Boston, (2008).
W.J. Ricker, *Separability of the $L^1$-space of a vector measure*, Glasgow Math. J., **34** (1992), 1–9.
W.J. Ricker, *Optimal extension of the Cesàro operator in $L^p([0,1])$*, Bull. Belg. Math. Soc. Simon Stevin, **22** (2015), 343–352.
V.A. Rodin and E.M. Semenov, *Rademacher series in symmetric spaces*, Anal. Math., **1** (1975), 207–222.
A.R. Schep, *When is the optimal domain of a positive linear operator a weighted $L^1$-space?*, Vector Measures, Integration and Related Topics, pp. 361–369, Oper. Theory Adv. Appl., vol. 201, Birkhäuser, Basel, 2010.
D. van Dulst, *Characterizations of Banach spaces not containing $\ell^1$*, CWI Tract No. 59, Centrum voor Wiskunde en Informatica, Amsterdam, 1989.
A. C. Zaanen, *Integration*, 2nd rev. ed. North Holland, Amsterdam; Interscience, New York Berlin (1967).
[^1]: The first author acknowledges the support of the International Visiting Professor Program 2015, via the Ministry of Education, Science and Art, Bavaria (Germany).
---
abstract: 'We use an unbiased, continuous-time quantum Monte Carlo method to address the possibility of a zero-temperature phase without charge-density-wave (CDW) order in the Holstein and, by extension, the Holstein-Hubbard model on the half-filled square lattice. In particular, we present results spanning the whole range of phonon frequencies, allowing us to use the well understood adiabatic and antiadiabatic limits as reference points. For all parameters considered, our data suggest that CDW correlations are stronger than pairing correlations even at very low temperatures. These findings are compatible with a CDW ground state that is also suggested by theoretical arguments.'
author:
- 'M. Hohenadler'
- 'G. G. Batrouni'
title: |
Dominant charge-density-wave correlations in the Holstein model\
on the half-filled square lattice
---
Introduction {#sec:introduction}
============
Charge-density-wave (CDW) and superconducting (SC) phases are ubiquitous in quasi-two-dimensional (quasi-2D) materials and often arise from electron-phonon coupling. Holstein’s molecular-crystal model [@Ho59a] of electrons coupled to quantum phonons has played a central role for the investigation of such phenomena. However, even after decades of research, fundamental questions are still unanswered. Quantum Monte Carlo (QMC) approaches have played a key role in the study of this problem. Despite important recent methodological advances [@PhysRevB.98.085405; @arXiv:1704.07913; @BaSc2018; @PhysRevB.98.041102; @Ka.Se.So.18; @li2019accelerating], simulations of electron-phonon problems remain significantly more challenging than, for example, those of purely fermionic Hubbard models.
The Holstein-Hubbard model captures the interplay of electron-phonon coupling ($\sim\lambda$) and electron-electron repulsion ($\sim U$). For $U=0$, it reduces to the Holstein model simulated in the following. For the much studied case of a half-filled square lattice, earlier work [@PhysRevB.52.4806; @PhysRevLett.75.2570; @PhysRevB.75.014503; @PhysRevB.92.195102; @PhysRevLett.109.246404; @PhysRevB.87.235133; @ohgoe2017competitions] agreed on either long-range CDW or antiferromagnetic (AFM) order at $T=0$ depending on $\lambda$ and $U$, consistent with theoretically expected instabilities of the Fermi liquid. In contrast, relying on variational QMC simulations, two recent papers [[@ohgoe2017competitions; @1709.00278]]{} reported the existence of an intermediate metallic phase with neither CDW nor AFM order; see Fig. \[fig:phasediagrams\]. Instead, it was characterized as either SC or paramagnetic [[@ohgoe2017competitions; @1709.00278]]{}. The prospect of a metallic ground state has to be distinguished from metallic behavior emerging at finite temperatures [@PhysRevB.87.235133] simply via the thermal destruction of AFM order [@PhysRevB.98.085405]. The predicted existence of the metallic state even at $U=0$ (see Fig. \[fig:phasediagrams\]), [i.e.]{}, the Holstein model, appears to rule out competing interactions as its origin.
![\[fig:phasediagrams\] Ground-state phase diagram of the Holstein-Hubbard model on a half-filled square lattice for $\omega_0/t=1$, as suggested by variational QMC calculations in Ref. [@ohgoe2017competitions] (open symbols) and Ref. [@1709.00278] (filled symbols). The shaded regions were reported to be either metallic or superconducting, without long-range CDW or AFM order. The dashed line corresponds to $U=\lambda W$ with $W=8t$. The definition of $\lambda$ in Ref. [@1709.00278] differs from ours and Ref. [@ohgoe2017competitions] by a factor of $8$. Here, we focus on the Holstein limit $U=0$. ](fig1.pdf){width="42.50000%"}
Apart from the challenges due to small gaps and order parameters at weak coupling, the methods used in Refs. [[@ohgoe2017competitions; @1709.00278]]{} are variational and involve an ansatz for the ground-state wave function that may bias the results. A similar controversy regarding metallic behavior in the 1D Holstein-Hubbard model was recently resolved. For the latter, approximate strong-coupling results in combination with unfounded conclusions from numerical simulations [@Hirsch83a] as well as insufficiently accurate renormalization-group (RG) approaches were contradicted by unbiased numerical simulations and functional RG calculations. For the 1D case, a disordered phase has been firmly established [@MHHF2017], although claims of dominant pairing correlations [@ClHa05] in this regime were refuted [@PhysRevB.92.245132]. Whereas the Fermi liquid is expected to be unstable at $T=0$ in the particular 2D setting considered, Refs. [[@ohgoe2017competitions; @1709.00278]]{} also suggest that the non-CDW region could have SC order. SC correlations have been found to be enhanced in Holstein models with next-nearest-neighbor hopping [@PhysRevB.46.271], dispersive phonons [@PhysRevLett.120.187003], frustration [@li2018superconductivity], or finite doping [@PhysRevB.40.197]. An extended semimetallic phase is supported by theory and numerics in the Holstein model on the honeycomb lattice [@PhysRevLett.122.077601; @PhysRevLett.122.077602].
Here, to provide further insight into this problem in the limit $U=0$, we exploit the properties of the continuous-time interaction-expansion (CT-INT) QMC method [@Rubtsov05]. Compared to other approaches, it can in principle access rather low temperatures in the weak-coupling regime. Although simulations are partially restricted by a sign problem, we obtain evidence for long-range CDW order at very low temperatures for parameters where a non-CDW phase was predicted [[@ohgoe2017competitions; @1709.00278]]{}.
The paper is organized as follows. In Sec. \[sec:model\], we define the model and summarize previous work and theoretical arguments. Section \[sec:method\] provides the necessary details about the CT-INT simulations. Our results are discussed in Sec. \[sec:results\], followed by our conclusions in Sec. \[sec:conclusions\].
Model {#sec:model}
=====
References [@ohgoe2017competitions; @1709.00278] presented phase diagrams for the Holstein-Hubbard model on the half-filled square lattice. Selected results are reproduced in Fig. \[fig:phasediagrams\]. Because the purported non-CDW region is most extended for a vanishing Hubbard repulsion ($U=0$), we focus on the simpler Holstein Hamiltonian [@Ho59a] $$\begin{aligned}
\label{eq:model}
\hat{H}
=
-t \sum_{{\langle}i,j{\rangle}\sigma} \hat{c}^\dag_{i\sigma} \hat{c}^{{\phantom{\dag}}}_{j\sigma}
+
\sum_{i}
\left[
\mbox{$\frac{1}{2M}$} {\hat{P}}^2_{i}
+
\mbox{$\frac{K}{2}$} {\hat{Q}}_{i}^2
\right]
-
g
\sum_{i} \hat{Q}_{i}
\hat{\rho}_i
\,.\end{aligned}$$ Here, $\hat{c}^\dag_{i\sigma}$ creates an electron with spin $\sigma$ at lattice site $i$ and the first term describes nearest-neighbor hopping with amplitude $t$. Lattice vibrations are modeled in terms of independent harmonic oscillators with frequency $\omega_0=\sqrt{K/M}$, displacements $\hat{Q}_i$, and momenta $\hat{P}_i$. The electron-phonon interaction takes the form of a density-displacement coupling to local fluctuations of the electron number; we have $\hat{\rho}_i={\hat{n}}_i-1$ with ${\hat{n}}_{i} =
\sum_\sigma {\hat{n}}_{i\sigma}$ and ${\hat{n}}_{i\sigma} = \hat{c}^\dag_{i\sigma}\hat{c}^{{\phantom{\dag}}}_{i\sigma}$. All simulations were done on $L\times L$ square lattices with periodic boundary conditions and for a half-filled band (${\langle}{\hat{n}}_{i}{\rangle}=1$, chemical potential $\mu=0$). We define a dimensionless coupling constant $\lambda=g^2/(W K)$ ($W=8t$ is the free bandwidth), set $\hbar$, ${k_\text{B}}$, and the lattice constant to one and use $t$ as the energy unit.
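For orientation, the dimensionless quantities just introduced follow from the bare couplings by simple arithmetic. The following minimal Python sketch (our illustration, not part of the paper) encodes only the definitions $\omega_0=\sqrt{K/M}$ and $\lambda=g^2/(WK)$ with $W=8t$:

```python
import math

def holstein_params(g, K, M, t):
    """Dimensionless Holstein parameters from the bare couplings:
    phonon frequency omega0 = sqrt(K/M) and coupling
    lambda = g^2 / (W*K), with the free bandwidth W = 8*t."""
    W = 8.0 * t
    return math.sqrt(K / M), g**2 / (W * K)

# Example: K = M = t = 1 gives omega0/t = 1; choosing g = sqrt(0.6)
# yields lambda = 0.075, one of the couplings studied below.
omega0, lam = holstein_params(g=math.sqrt(0.6), K=1.0, M=1.0, t=1.0)
```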
Hamiltonian (\[eq:model\]) has been the subject of numerous QMC investigations [@PhysRevB.40.197; @PhysRevB.42.2416; @PhysRevB.42.4143; @PhysRevLett.66.778; @PhysRevB.43.10413; @PhysRevB.46.271; @PhysRevB.48.7643; @PhysRevB.48.16011; @PhysRevB.55.3803]. With the exception of Refs. [[@ohgoe2017competitions; @1709.00278]]{}, a CDW ground state was assumed to exist for any $\lambda>0$, as suggested by several theoretical arguments. First, for classical phonons, corresponding to $\omega_0=0$, mean-field theory is exact at $T=0$ and reveals a gap and CDW order for any $\lambda>0$ [@PhysRevLett.66.778]. The origin of this weak-coupling instability is the combination of perfect nesting on a half-filled square lattice with nearest-neighbor hopping and a zero-energy Van Hove singularity in the density of states [@PhysRevLett.56.2732; @PhysRevB.42.2416]. These features give rise to a noninteracting charge susceptibility \[defined in Eq. (\[eq:chi\]) below\] that diverges as $\chi^{(0)}_\text{CDW}\sim\ln^2\beta t$ [@PhysRevLett.56.2732; @PhysRevB.42.2416], where $\beta=1/T$. Both at the mean-field level and in numerical simulations, such a divergence produces CDW order for any $U<0$ in the attractive Hubbard model [@Hirsch85]. The latter is an exact limit of the Holstein model for $\omega_0\to\infty$ ($M\to 0$), with $U=\lambda W$. Hence, long-range CDW order at $T=0$ is established for the Holstein model both for $\omega_0=0$ and $\omega_0=\infty$. In contrast, the s-wave pairing susceptibility \[Eq. (\[eq:chip\])\] has a weaker divergence, $\chi^{(0)}_\text{SC}\sim\ln\beta t$, because nesting plays no role. This is consistent with the observation that for $\omega_0<\infty$ SC correlations are weaker than CDW correlations at half-filling [@PhysRevLett.66.778; @PhysRevB.40.197] but not with an SC phase [[@ohgoe2017competitions; @1709.00278]]{}. However, earlier work did not consider the weak-coupling regime, and unbiased, high-precision finite-size scaling analyses only appeared recently [@PhysRevB.98.085405; @PhysRevB.98.041102; @BaSc2018; @li2019accelerating].
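The $\ln^2\beta t$ divergence can be made concrete numerically. The sketch below (our illustration, not from the paper) evaluates the noninteracting charge susceptibility at $\bm{Q}=(\pi,\pi)$ on a large but finite lattice; perfect nesting, $\varepsilon_{\bm{k}+\bm{Q}}=-\varepsilon_{\bm{k}}$, reduces the Lindhard sum to $\tanh(\beta\varepsilon_{\bm{k}}/2)/(2\varepsilon_{\bm{k}})$ per momentum:

```python
import numpy as np

def chi0_cdw(L, beta, t=1.0):
    """Free-electron charge susceptibility at Q = (pi, pi) on the
    half-filled L x L square lattice.  Perfect nesting gives
    eps(k+Q) = -eps(k), so the Lindhard sum reduces to
    tanh(beta*eps/2)/(2*eps) per k-point (limit beta/4 at eps = 0);
    the overall factor 2 accounts for spin."""
    k = 2.0 * np.pi * np.arange(L) / L
    kx, ky = np.meshgrid(k, k)
    eps = -2.0 * t * (np.cos(kx) + np.cos(ky))
    safe = np.where(np.abs(eps) > 1e-12, eps, 1.0)  # avoid 0/0
    term = np.where(np.abs(eps) > 1e-12,
                    np.tanh(0.5 * beta * safe) / (2.0 * safe),
                    0.25 * beta)
    return 2.0 * term.sum() / L**2

# The susceptibility keeps growing as beta increases
# (~ ln^2(beta*t) in the thermodynamic limit):
growth = [chi0_cdw(L=512, beta=b) for b in (4.0, 16.0, 64.0)]
```

Each term is monotonically increasing in $\beta$, so the growth is visible already on modest lattices; the $\ln^2$ law itself emerges only for $L\to\infty$.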
In Refs. [[@ohgoe2017competitions; @1709.00278]]{}, the challenging problem of determining the ground-state phase diagram was approached using zero-temperature variational QMC methods. In contrast, most other work (for exceptions see Refs. [@PhysRevB.52.4806; @li2018superconductivity]) infers ground-state properties from simulations at low but finite temperatures. Whereas the AFM phase of the Holstein-Hubbard model shown in Fig. \[fig:phasediagrams\] exists only at $T=0$, long-range CDW order is associated with an Ising order parameter and persists up to a critical temperature $T^\text{CDW}_c$ [@PhysRevLett.66.778; @PhysRevB.98.085405]. Similarly, the U(1) SC order parameter also permits a nonzero transition temperature $T^\text{SC}_c$. In both cases, given an ordered ground state, we therefore expect a finite-temperature phase transition. An important exception is the limit $\omega_0=\infty$, corresponding to the attractive Hubbard model. The latter has an enhanced symmetry that combines the CDW and SC order parameters into an SU(2) vector [@Hirsch85]. According to the Mermin-Wagner theorem [@PhysRevLett.17.1133], long-range order is therefore confined to $T=0$.
Let us address the purported intermediate phase at weak coupling and $\omega_0>0$ reported in Refs. [[@ohgoe2017competitions; @1709.00278]]{} in the light of these arguments. The overall size of this phase increases with increasing $\omega_0/t$ in Refs. [[@ohgoe2017competitions; @1709.00278]]{}, similar to the case of the 1D Holstein-Hubbard model [@MHHF2017]. In 1D, quantum lattice fluctuations promote the proliferation of domain walls in the Ising CDW order parameter. There, the ground state is metallic up to $\lambda=\lambda_c$ with $\lambda_c\to\infty$ for $\omega_0\to\infty$ (attractive Hubbard model) [@MHHF2017]. In contrast, the 2D Holstein model is CDW-ordered in the antiadiabatic limit $\omega_0\to\infty$. An explicit comparison of data for $\omega_0=\infty$ and $\omega_0<\infty$ will be made in Sec. \[sec:results\]. No theoretical arguments were given in Refs. [[@ohgoe2017competitions; @1709.00278]]{} against the weak-coupling instability expected from the divergence of $\chi^{(0)}_\text{CDW}$. Interestingly, although $\chi_\text{CDW}^{(0)}$ and $\chi_\text{AFM}^{(0)}$ diverge in the same way, the methods of Refs. [[@ohgoe2017competitions; @1709.00278]]{} successfully detect the weak-coupling AFM instability at $\lambda=0$ but not the CDW instability at $U=0$ (see Fig. \[fig:phasediagrams\]). Another apparent inconsistency is that the non-CDW region at $U=0$ is significantly larger for $\omega_0/t=8$ than for $\omega_0/t=1$ in Ref. [@ohgoe2017competitions], whereas it remains virtually unchanged between $\omega_0/t=1$ and $\omega_0/t=15$ in Ref. [@1709.00278]. For the value $\omega_0/t=1$ analyzed in both works and shown in Fig. \[fig:phasediagrams\], Ref. [@ohgoe2017competitions] predicts a non-CDW ground state up to $\lambda\approx 0.11$ at $U=0$, whereas Ref. [@1709.00278] reports a critical value of $\lambda\approx 0.125$ (using our definition of $\lambda$).
Method {#sec:method}
======
The application of the CT-INT method [@Rubtsov05] to electron-phonon models goes back to the work by Assaad and Lang [@Assaad07]. For investigations of 2D Holstein and Holstein-Hubbard models, see Refs. [@PhysRevB.98.085405; @PhysRevLett.122.077601]. Its general, action-based formulation makes it suitable for retarded fermion-fermion interactions that arise naturally from electron-phonon problems after integrating out the phonons in the path-integral representation of the partition function [@Assaad07]. The weak-coupling expansion can be shown to converge for fermionic systems in a finite space-time volume [@Rubtsov05], so that the method is exact apart from statistical errors. General reviews have been given in Refs. [@Gull_rev; @Assaad14_rev].
The numerical effort scales cubically with the average expansion order $n$, where $n\approx {O}(\beta\lambda L^2)$ for the Holstein model. While other methods formally scale linearly in $\beta$ [@Assaad18_rev], CT-INT is typically less limited by autocorrelation times. For the present work, its use is motivated by a significant speedup at small $\lambda$ that permits us to study reasonably large system sizes up to $L\leq12$ at inverse temperatures $\beta t\leq 96$. For intermediate phonon frequencies, the method is ultimately limited by a sign problem that arises from the absence of an exact symmetry between the spin-${\uparrow}$ and spin-${\downarrow}$ sectors [@PhysRevB.98.085405]. The results for $\omega_0=\infty$ were obtained by directly simulating the attractive Hubbard model with the CT-INT method. We used 1000 single-vertex updates and 8 Ising spin flips per sweep for all simulations. Although our method is entirely unbiased, as opposed to the algorithms of Refs. [[@ohgoe2017competitions; @1709.00278]]{}, limitations arise regarding model parameters, temperatures, and system sizes.
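As a rough guide to the scaling statements above, the average expansion order and the resulting effort can be estimated as follows (our back-of-the-envelope sketch with purely illustrative prefactors, not a statement from the paper):

```python
def ctint_cost(beta, lam, L, prefactor=1.0):
    """Rough CT-INT cost model from the scaling quoted in the text:
    average expansion order n ~ beta * lambda * L^2, with effort
    per sweep growing like n^3.  The prefactor is unknown and
    purely illustrative."""
    n = prefactor * beta * lam * L**2
    return n, n**3

# Halving lambda at fixed beta and L reduces the estimated effort
# by a factor of 2^3 = 8 -- the speedup at weak coupling noted above.
n1, w1 = ctint_cost(beta=96.0, lam=0.075, L=12)
n2, w2 = ctint_cost(beta=96.0, lam=0.0375, L=12)
```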
Results {#sec:results}
=======
To detect CDW and/or s-wave SC order, we carried out a finite-size scaling analysis based on the charge and pairing susceptibilities (with $\hat{\Delta}_i=\hat{c}_{i{\uparrow}}\hat{c}_{i{\downarrow}}$) $$\begin{aligned}
\label{eq:chi}
\chi_\text{c}(\bm{q})
&=
\frac{1}{L^2} \sum_{ij} e^{{\text{i}}(\bm{r}_i-\bm{r}_j)\cdot\bm{q}} \int_0^\beta {\text{d}}\tau
{\langle}\hat{n}_{i}(\tau) \hat{n}_{j}{\rangle}\,,
\\\label{eq:chip}
\chi_\text{p}(\bm{q})
&= \frac{2}{L^2} \sum_{ij}
e^{{\text{i}}(\bm{r}_i-\bm{r}_j)\cdot\bm{q}} \int_0^\beta {\text{d}}\tau
{\langle}\hat{\Delta}^\dag_{i}(\tau) \hat{\Delta}^{{\phantom{\dag}}}_{j}{\rangle}\,.\end{aligned}$$ We define $\chi_\text{CDW}\equiv\chi_\text{c}(\bm{Q}_\text{CDW})$ with $\bm{Q}_\text{CDW} = (\pi,\pi)$ and $\chi_\text{SC}\equiv\chi_\text{p}(\bm{Q}_\text{SC})$ with $\bm{Q}_\text{SC}= (0,0)$. The factor $2$ in the definition of $\chi_\text{p}$ ensures $\chi_\text{CDW}\equiv\chi_\text{SC}$ for the attractive Hubbard model ($\omega_0\to\infty$) and at $\lambda=0$.
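Given a Monte Carlo estimate of the integrated real-space correlator, the momentum-space susceptibility of Eq. (\[eq:chi\]) is a double Fourier sum. A minimal sketch (our illustration; the site indexing $i = x + Ly$ is an assumption, not the authors' convention):

```python
import numpy as np

def chi_q(C, q, L):
    """chi(q) = (1/L^2) sum_ij exp(i q.(r_i - r_j)) C[i, j] for a
    real-space table C[i, j] of integrated correlators on an L x L
    lattice, with sites indexed as i = x + L*y."""
    idx = np.arange(L * L)
    r = np.stack([idx % L, idx // L], axis=1).astype(float)
    p = np.exp(1j * (r @ np.asarray(q, dtype=float)))  # exp(i q . r_i)
    return float(np.real(p @ C @ p.conj())) / (L * L)

# Sanity check: a perfectly staggered pattern C_ij = s_i s_j with
# s_i = (-1)^(x+y) gives chi((pi, pi)) = L^2 and chi((0, 0)) = 0.
L = 4
idx = np.arange(L * L)
s = (-1.0) ** (idx % L + idx // L)
C = np.outer(s, s)
```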
Long-range CDW order can be detected by the renormalization-group invariant correlation ratio $$\begin{aligned}
\label{eq:Rchic}
R^\chi_\text{CDW}
&= 1-\frac{\chi_\text{c}(\bm{Q}_\text{CDW}-\delta{\bm q})}{\chi_\text{c}(\bm{Q}_\text{CDW})}\,,\quad |\delta{\bm q}|=\frac{2\pi}{L}\,.\end{aligned}$$ At a fixed $\lambda$, $R^\chi_\text{CDW}$ depends only on $L^{1/\nu}(T-T^\text{CDW}_c)$, so that data for different $L$ are expected to intersect (up to corrections to scaling) at the transition temperature $T^\text{CDW}_c$. By definition, $R^\chi_\text{CDW}\to 0$ as $L\to\infty$ in the absence of long-range CDW order, whereas $R^\chi_\text{CDW}\to 1$ as $L\to\infty$ if $\chi_\text{CDW}$ diverges with $L$. The ratio $R^\chi_\text{CDW}$ can be expected to have smaller scaling corrections than $\chi_\text{CDW}$ itself [@Binder1981]. The use of susceptibilities rather than static structure factors suppresses background contributions to critical fluctuations. We will also consider the finite-size scaling of the susceptibility itself, which is described near the Ising critical point by the scaling form $$\chi_\text{CDW} = L^{2-\eta} f[L^{1/\nu}(T-T^\text{CDW}_c)]\,,$$ with $\nu=1$ and $\eta=0.25$ known from the exact solution of the 2D classical Ising model.
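In practice, $T^\text{CDW}_c$ is read off from where the $R^\chi_\text{CDW}(T)$ curves for two system sizes intersect. A small helper along these lines, applied here to synthetic data (our sketch, not the authors' analysis code):

```python
import numpy as np

def crossing_temperature(T, R_small, R_large):
    """Locate the crossing of the correlation ratio for two system
    sizes: find where R_large - R_small changes sign and interpolate
    linearly between the bracketing temperatures."""
    T = np.asarray(T, dtype=float)
    d = np.asarray(R_large, dtype=float) - np.asarray(R_small, dtype=float)
    for i in range(len(d) - 1):
        if d[i] == 0.0:
            return float(T[i])
        if d[i] * d[i + 1] < 0.0:
            f = d[i] / (d[i] - d[i + 1])
            return float(T[i] + f * (T[i + 1] - T[i]))
    return None  # no crossing inside the scanned temperature window

# Synthetic ratios that cross at T/t = 0.2; the larger L is steeper,
# as expected when approaching an ordered phase from above.
T   = [0.10, 0.15, 0.20, 0.25, 0.30]
R8  = [0.80, 0.70, 0.60, 0.50, 0.40]   # "L = 8"
R12 = [0.90, 0.75, 0.60, 0.45, 0.30]   # "L = 12"
```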
A potential transition to an SC phase should be in the Berezinskii-Kosterlitz-Thouless universality class with power-law correlations below the critical temperature $T^\text{SC}_c$. In the absence of long-range order and hence a divergence of $\chi_\text{SC}$, we exploit the finite-size scaling form exactly at the critical temperature $$\label{eq:scalingSC}
\chi_\text{SC} = L^{2-\eta}$$ with $\eta=0.25$ [@kosterlitz1974critical]. Equation (\[eq:scalingSC\]) again implies a crossing point of results for different $L$ at $T=T^\text{SC}_c$, as recently observed for the half-filled Holstein model on the frustrated triangular lattice [@li2018superconductivity].
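Equation (\[eq:scalingSC\]) translates into a simple numerical criterion: dividing $\chi_\text{SC}$ by $L^{2-\eta}$ removes the $L$ dependence exactly at $T^\text{SC}_c$, so rescaled curves for different sizes cross there. A one-line sketch (ours, not from the paper):

```python
def rescaled_chi_sc(chi_sc, L, eta=0.25):
    """chi_SC / L^(2 - eta): size-independent exactly at T = T_c^SC,
    so curves for different L cross there (eta = 1/4 for a
    Berezinskii-Kosterlitz-Thouless transition)."""
    return chi_sc / L ** (2.0 - eta)

# If chi_SC = L^(2 - eta) holds (i.e., exactly at T_c^SC), the
# rescaled value is 1 for every system size.
v8 = rescaled_chi_sc(8 ** 1.75, 8)
v12 = rescaled_chi_sc(12 ** 1.75, 12)
```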
![\[fig:om1.0\_beta24\] (a) CDW and (b) SC susceptibilities as a function of $\lambda$ for $\omega_0/t=1$ and $T/t=1/24$.](fig6.pdf){width="47.50000%"}
![\[fig:lambda0.075\_diffomgea\] CDW and SC susceptibilities for different $\omega_0/t$ and $L$. Here, $\lambda=0.075$. Results for $\omega_0=\infty$ were obtained directly from simulations of the attractive Hubbard model.](fig7.pdf){width="47.50000%"}
To detect gaps for long-wavelength charge and spin fluctuations, we also consider the static (uniform) charge and spin susceptibilities $$\begin{aligned}
\label{eq:localsusc}
\chi_\text{c} &=\beta \left({\langle}\hat{N}^2{\rangle}- {\langle}\hat{N}{\rangle}^2\right)\,,&&\hspace*{-2em}\hat{N} = \sum_i \hat{n}_i\,, \\
\chi_\text{s} &=\beta \left({\langle}\hat{M}^2{\rangle}- {\langle}\hat{M}{\rangle}^2\right)\,,&&\hspace*{-2em}\hat{M} = \sum_i \hat{S}^x_i\,. \end{aligned}$$ Here, $\hat{S}^x_i=\hat{c}^\dag_{i{\uparrow}}\hat{c}^{{\phantom{\dag}}}_{i{\downarrow}}+\hat{c}^\dag_{i{\downarrow}}\hat{c}^{{\phantom{\dag}}}_{i{\uparrow}}$, giving a maximum magnetization per site ${\langle}\hat{M}{\rangle}/L^2=1$.
Based on the arguments in Sec. \[sec:model\], we expect CDW order rather than SC order at half-filling. Mean-field theory predicts a CDW transition temperature $T^\text{CDW}_c\sim e^{-1/\sqrt{\lambda}}$ that appears consistent with simulations for $\omega_0=0$ [@PhysRevB.98.085405] and renders the weak-coupling regime challenging. Moreover, $T^\text{CDW}_c$ decreases with increasing $\omega_0$, and vanishes for $\omega_0=\infty$ [@Hirsch85]. Before discussing the case of $\omega_0/t=1$ depicted in Fig. \[fig:phasediagrams\], we consider $\omega_0/t=0.1$ (close to the mean-field limit) and $\omega_0/t=\infty$ (the attractive Hubbard model) as useful reference points. To address the findings of Refs. [[@ohgoe2017competitions; @1709.00278]]{} for $\omega_0/t=1$, we consider the couplings $\lambda=0.075$ and $\lambda=0.025$, both inside the purported non-CDW phase in Fig. \[fig:phasediagrams\].
Results for $\omega_0/t=0.1$ and $\lambda=0.25$ are shown in Fig. \[fig:om0.1\_lambda0.075\]. The CDW transition for these parameters was previously investigated using equal-time correlation functions [@PhysRevB.98.085405], so that we can benchmark our diagnostics. The data reveal a strong increase of the CDW susceptibility at low temperatures \[Fig. \[fig:om0.1\_lambda0.075\](a)\], whereas the SC susceptibility is virtually independent of $L$ \[Fig. \[fig:om0.1\_lambda0.075\](b)\]. The rescaled CDW susceptibility in Fig. \[fig:om0.1\_lambda0.075\](c) exhibits a clean crossing point at $T^\text{CDW}_c/t\approx0.2$, in agreement with previous findings [@PhysRevB.98.085405]. This crossing is consistent with that observed in the results for the correlation ratio in Fig. \[fig:om0.1\_lambda0.075\](e), a close-up of which is shown in Fig. \[fig:om0.1\_lambda0.075\](f). In contrast, the rescaled SC susceptibility \[Fig. \[fig:om0.1\_lambda0.075\](d)\] is strongly suppressed for $T\lesssim T_c^\text{CDW}$. Finally, the uniform charge and spin susceptibilities in Figs. \[fig:om0.1\_lambda0.075\](g) and (h), respectively, reveal a gap in both sectors at sufficiently low temperatures, as expected for a CDW insulator.
In the opposite, antiadiabatic regime $\omega_0=\infty$, we can rely on previous results for the ground state of the attractive Hubbard model [@Hirsch85; @Scalettar89] to interpret our finite-temperature data. To gain insight into the behavior expected in the weak-coupling regime, we consider $\lambda=0.075$ and focus on low temperatures $T/t\leq 0.1$. The CT-INT data in Fig. \[fig:ominf\_lambda0.075\] exhibit significant differences compared to Fig. \[fig:om0.1\_lambda0.075\]. The CDW and SC susceptibilities in Fig. \[fig:ominf\_lambda0.075\](a) and Fig. \[fig:ominf\_lambda0.075\](b) are identical due to the SO(4) symmetry of the Hubbard model. Whereas a crossing point is not visible for the rescaled susceptibilities in Figs. \[fig:ominf\_lambda0.075\](c) and (d) at the temperatures considered, the CDW correlation ratio \[Figs. \[fig:ominf\_lambda0.075\](e),(f)\] again approaches 1 for $T\to 0$. Such behavior is consistent with long-range CDW order at $T=0$. Finally, the charge susceptibility in Fig. \[fig:ominf\_lambda0.075\](g) is consistent with metallic behavior (due to the coexistence of CDW and SC order), whereas Fig. \[fig:ominf\_lambda0.075\](h) reveals the expected spin gap.
Having established the physics, but also the limitations of our simulations, in the undisputed adiabatic and antiadiabatic limits, we turn to parameters that are directly relevant for Fig. \[fig:phasediagrams\], specifically $\omega_0/t=1$ and $\lambda=0.075$, where Refs. [[@ohgoe2017competitions; @1709.00278]]{} predict a paramagnetic or SC state. The corresponding results are shown in Fig. \[fig:om1.0\_lambda0.075\]. Comparing the CDW and SC susceptibilities in Figs. \[fig:om1.0\_lambda0.075\](a) and (b) reveals that CDW correlations are significantly stronger than SC correlations at a given $T$. Whereas a crossing point in the rescaled CDW susceptibility \[Fig. \[fig:om1.0\_lambda0.075\](c)\] at temperatures below the accessible range is plausible, we are unable to reach temperatures comparable to $T_c^\text{CDW}$. On the other hand, a critical point signaling SC order is not expected based on the results of Fig. \[fig:om1.0\_lambda0.075\](d), especially upon comparison with Fig. \[fig:ominf\_lambda0.075\](d) for $\omega_0/t=\infty$. The latter case has stronger pairing correlations than observed for $\omega_0/t=1$ even though $T^\text{SC}_c=0$. The CDW correlation ratio \[Figs. \[fig:om1.0\_lambda0.075\](e),(f)\] is also consistent with CDW order at $T=0$. The uniform susceptibilities in Figs. \[fig:om1.0\_lambda0.075\](g) and (h) are not entirely conclusive but consistent with a gap for charge and spin excitations at $T=0$.
We also simulated a weaker coupling $\lambda=0.025$, deep inside the predicted intermediate phase in Fig. \[fig:phasediagrams\]. Of course, any type of order will be extremely delicate to detect at such weak interactions on finite systems. Moreover, CDW and SC correlations are necessarily degenerate at $\lambda=0$ (free fermions). Nevertheless, Fig. \[fig:om1.0\_lambda0.025\] does indicate somewhat stronger CDW than SC correlations, which again seems to contradict the claims of Refs. [[@ohgoe2017competitions; @1709.00278]]{}. At the same time, the expected spin gap is only visible in Fig. \[fig:om1.0\_lambda0.025\](h) at the lowest temperatures, whereas the expected charge gap is beyond the accessible temperature range in Fig. \[fig:om1.0\_lambda0.025\](g).
The dependence of the CDW and SC susceptibilities on the coupling strength $\lambda$ at $T/t=1/24$ can be seen more clearly in Fig. \[fig:om1.0\_beta24\]. Starting from identical values at $\lambda=0$, $\chi_\text{CDW}$ increases significantly with $\lambda$, whereas $\chi_\text{SC}$ flattens after a weak initial increase.
Finally, Fig. \[fig:lambda0.075\_diffomgea\] compares the temperature-dependent CDW and SC susceptibilities at different phonon frequencies. The CDW susceptibility in Figs. \[fig:lambda0.075\_diffomgea\](a),(b) evolves continuously, with values for intermediate $\omega_0$ falling between those for $\omega_0/t=0.1$ and $\omega_0/t=\infty$. For the SC susceptibility, Figs. \[fig:lambda0.075\_diffomgea\](c),(d), the data suggest the possibility of non-monotonic behavior: $\chi_\text{SC}$ for $\omega_0/t=1$ in Fig. \[fig:lambda0.075\_diffomgea\](d) is equal to that for $\omega_0/t=\infty$ at intermediate temperatures yet still smaller than $\chi_\text{CDW}$.
Conclusions {#sec:conclusions}
===========
Although limitations regarding lattice size and temperature preclude definitive conclusions regarding the ground state, we believe that our unbiased results point rather strongly toward long-range CDW order in the half-filled Holstein model on the square lattice. By extension, it seems reasonable to expect only CDW and AFM ground states in the Holstein-Hubbard model.
Our main arguments are as follows.
\(i) For the parameters considered, including those where Refs. [[@ohgoe2017competitions; @1709.00278]]{} predict no CDW order, we find that CDW correlations are stronger than SC correlations, consistent with long-range CDW order at $T=0$.
\(ii) CDW (SC) correlations are stronger (weaker) than for the attractive Hubbard model with the same effective interaction $U=\lambda W$. The latter corresponds to the Holstein model in the antiadiabatic limit $\omega_0\to\infty$. Because the Hubbard model is known to have long-range CDW order at $T=0$, this suggests long-range CDW order also for the Holstein model with $\omega_0<\infty$. Weaker SC correlations do not rule out SC order at $T=0$. However, the coexistence of CDW and SC order in the attractive Hubbard model is linked to an enhanced symmetry that is absent in the Holstein case for $\omega_0<\infty$ [@Hirsch83a]. Even if SC order exists at $T=0$, the stronger CDW order conflicts with the claims of Refs. [[@ohgoe2017competitions; @1709.00278]]{}.
\(iii) Since we infer the nature of the ground state from simulations at $T>0$, there is in principle a possibility of a non-monotonic temperature dependence, with a phase transition to an SC phase at even lower temperatures. However, we do not observe any signatures or precursor effects of this scenario, such as a decrease of the CDW susceptibility at low temperatures.
\(iv) Our results are consistent with the theoretical arguments for a weak-coupling CDW instability due to nesting and a Van Hove singularity, which should apply to the weak-coupling regime where a non-CDW region was reported in Refs. [[@ohgoe2017competitions; @1709.00278]]{}.
It is beyond the scope of this work to determine the origin of the different findings in Refs. [[@ohgoe2017competitions; @1709.00278]]{}. However, the necessity of choosing a variational wave function seems the most likely source of the discrepancy when trying to distinguish a paired Fermi liquid from either a pair crystal (CDW state) or a pair condensate (SC state). In the weak-coupling regime that is of interest here, different states are expected to be close in energy. Moreover, the deviations between the critical values estimated with the same QMC method in Refs. [[@ohgoe2017competitions; @1709.00278]]{} (visible in Fig. \[fig:phasediagrams\], also relative to the strong-coupling phase boundary $U=\lambda W$) suggest uncertainties that significantly exceed the reported error bars.
We expect the $T=0$ phase diagram in the ($\lambda$,$U$) plane to contain a single line of critical points that emanates from the point $\lambda=U=0$ and separates CDW and AFM phases. For further progress on this problem, functional RG calculations with a suitable treatment of the energy and momentum dependence of the interaction appear promising [@Barkim2015] to detect CDW order at weak coupling. The combination of projective QMC simulations with improved updates based on recent ideas [@BaSc2018; @PhysRevB.98.041102; @li2019accelerating] could yield $T=0$ results without the variational approximation of Refs. [[@ohgoe2017competitions; @1709.00278]]{}. Finally, the use of pinning fields together with an extrapolation to the thermodynamic limit may also prove useful [@Assaad13].
We thank F. F. Assaad for helpful discussions and the DFG for support via SFB 1170. We gratefully acknowledge the computing time granted by the John von Neumann Institute for Computing (NIC) and provided on the supercomputer JURECA [@jureca] at the Jülich Supercomputing Centre.
[46]{}
https://doi.org/10.1103/PhysRevB.98.085405
https://doi.org/10.1103/PhysRevLett.119.097401
https://doi.org/10.1103/PhysRevB.99.035114
https://doi.org/10.1103/PhysRevB.98.041102
https://doi.org/10.1103/PhysRevB.98.201108
https://doi.org/10.1103/PhysRevB.100.020302
https://doi.org/10.1103/PhysRevB.52.4806
https://doi.org/10.1103/PhysRevLett.75.2570
https://doi.org/10.1103/PhysRevB.75.014503
https://doi.org/10.1103/PhysRevB.92.195102
https://doi.org/10.1103/PhysRevLett.109.246404
https://doi.org/10.1103/PhysRevB.87.235133
https://doi.org/10.1103/PhysRevLett.119.197001
https://doi.org/10.1103/PhysRevB.96.205145
https://doi.org/10.1103/PhysRevB.92.245132
https://doi.org/10.1103/PhysRevB.46.271
https://doi.org/10.1103/PhysRevLett.120.187003
https://doi.org/10.1103/PhysRevB.40.197
https://doi.org/10.1103/PhysRevLett.122.077601
https://doi.org/10.1103/PhysRevLett.122.077602
https://doi.org/10.1103/PhysRevB.42.2416
https://doi.org/10.1103/PhysRevB.42.4143
https://doi.org/10.1103/PhysRevLett.66.778
https://doi.org/10.1103/PhysRevB.43.10413
https://doi.org/10.1103/PhysRevB.48.7643
https://doi.org/10.1103/PhysRevB.48.16011
https://doi.org/10.1103/PhysRevB.55.3803
https://doi.org/10.1103/PhysRevLett.56.2732
https://doi.org/10.1103/PhysRevLett.17.1133
https://doi.org/10.1103/PhysRevB.76.035116
https://doi.org/10.1103/RevModPhys.83.349
https://doi.org/10.1007/BF01293604
https://doi.org/10.1103/PhysRevB.91.085114
https://doi.org/10.1103/PhysRevX.3.031010
http://dx.doi.org/10.17815/jlsrf-2-121
|
---
abstract: 'Two well-known quantum corrections to the area law have been introduced in the literature, namely, logarithmic and power-law corrections. The logarithmic correction arises in loop quantum gravity from thermal equilibrium fluctuations and quantum fluctuations, while the power-law correction appears in dealing with the entanglement of quantum fields inside and outside the horizon. Inspired by Verlinde’s argument on the entropic force, and assuming the quantum-corrected relation for the entropy, we propose an entropic origin for Coulomb’s law in this note. We also investigate the Uehling potential as a radiative correction to the Coulomb potential at $1$-loop order and show that for some values of the distance the entropic corrections to Coulomb’s law are compatible with the vacuum-polarization correction in QED. We thus derive the modified Coulomb’s law as well as the entropy-corrected Poisson’s equation which governs the evolution of the scalar potential $\phi$. Our study further supports the unification of gravity and electromagnetic interactions based on the holographic principle.'
author:
- 'S. H. Hendi$^{1,2}$[^1] and A. Sheykhi$^{2,3} $[^2]'
title: 'Entropic Corrections to Coulomb’s Law'
---
Introduction
============
The profound connection between gravity and thermodynamics has a long history, going back to the discovery of black hole thermodynamics in the 1970s [@HB; @B; @D]. It was discovered that a black hole emits Hawking radiation with a temperature proportional to its surface gravity at the horizon, and carries an entropy proportional to its horizon area [@B]. The Hawking temperature and horizon entropy, together with the black hole mass, obey the first law of black hole thermodynamics [@D]. Studies of the connection between gravity and thermodynamics continued until, in 1995, Jacobson showed that the Einstein field equation is just an equation of state for spacetime; in particular, it can be derived from the first law of thermodynamics together with the relation between the horizon area and the entropy [@Jac]. Following Jacobson, several recent investigations have shown that there is indeed a deeper connection between gravitational dynamics and horizon thermodynamics. It has been shown that the gravitational field equations in a wide variety of theories, when evaluated on a horizon, reduce to the first law of thermodynamics and vice versa. This result, first pointed out in [@Pad1], has now been demonstrated in various theories, including $f(R)$ gravity [@Elin], cosmological setups [@Cai2; @Cai3; @CaiKim; @Wang; @Cai33; @Shey0], and braneworld scenarios [@Shey1; @Shey2]. For a recent review of the thermodynamical aspects of gravity and a complete list of references see [@Padrev]. Although Jacobson’s derivation is logically clear and theoretically sound, the statistical-mechanical origin of the thermodynamic nature of gravity remains obscure.
A constructive new idea on the relation between gravity and thermodynamics was recently proposed by Verlinde [@Ver] who claimed that gravity is not a fundamental interaction and can be interpreted as an entropic force arising from the change of information when a material body moves away from the holographic screen. Verlinde postulated that when a test particle approaches a holographic screen from a distance $\triangle x$, the magnitude of the entropic force on this body has the form $$\label{F}
F\triangle x=T \triangle S,$$ where $T$ and $\triangle S$ are the temperature and the entropy change on the screen, respectively (see Fig. \[Rec\]).
Focusing on the physical explanation of Verlinde’s interesting proposal, it has been argued that his idea is problematic [@Hossenfelder2010; @Gao2011]. In other words, although Verlinde’s derivation is mathematically correct, it does not prove that gravity is, physically, an entropic force. We should note that a general objection to viewing gravity as an entropic force has been presented [@Gao2011], while it has also been argued that Verlinde’s idea is supported by a mathematical argument based on a discrete group theory [@Winkelnkemper]. In addition, considering a modified entropic force with the covariant entropy bound, one may obtain the Newtonian force law [@Myung2011]. Also, following the controversial hypothesis in Ref. [@Chaichian2011], it has been shown that gravity is an entropic force.
Verlinde’s derivation of the laws of gravitation opens a new window to understanding gravity from first principles. The entropic interpretation of gravity has been used to extract the Friedmann equations at the apparent horizon of the Friedmann-Robertson-Walker universe [@Cai4], modified Friedmann equations [@Sheykhi], a modified Newton’s law [@Modesto], the Newtonian gravity in loop quantum gravity [@smolin], the holographic dark energy [@Mli], the thermodynamics of black holes [@Tian] and the extension to the Coulomb force [@Twang]. Other studies on the entropic force have been carried out in [@Other].
In addition, the derivation of Newton’s law of gravity in Verlinde’s approach depends on the entropy-area relationship $S=A/4\ell _{p}^{2}$ of black holes in Einstein’s gravity, where $A=4\pi R^{2}$ represents the area of the horizon and $\ell _{p}^{2}=G\hbar /c^{3}$ is the Planck length. However, this definition can be modified by the inclusion of quantum effects. Two well-known quantum corrections to the area law have been introduced in the literature, namely, logarithmic and power-law corrections. The logarithmic correction arises in loop quantum gravity from thermal equilibrium fluctuations and quantum fluctuations [@Meis; @Zhang], $$S=\frac{A}{4\ell _{p}^{2}}-\beta \ln {\frac{A}{4\ell
_{p}^{2}}}+\gamma \frac{\ell _{p}^{2}}{A}+\mathrm{const},
\label{S1}$$ where $\beta $ and $\gamma $ are dimensionless constants of order unity. The exact values of these constants are not yet determined and remain an open issue in quantum gravity.
The power-law correction appears in dealing with the entanglement of quantum fields inside and outside the horizon. The entanglement entropy of the ground state obeys the Bekenstein-Hawking area law. However, a correction term proportional to a fractional power of the area results when the field is in a superposition of ground and excited states [@Sau]. In other words, the excited state contributes the power-law correction, and more excitations produce more deviation from the area law [@sau1]. The power-law corrected entropy is written as [@Sau; @pavon1] $$S=\frac{A}{4\ell _{p}^{2}}\left[ 1-K_{\alpha }A^{1-\alpha
/2}\right] \label{plec}$$ where $\alpha $ is a dimensionless constant whose value ranges as $2<\alpha<4 $ [@Sau], and $$K_{\alpha }=\frac{\alpha (4\pi )^{\alpha /2-1}}{(4-\alpha
)r_{c}^{2-\alpha }} \label{kalpha}$$ where $r_{c}$ is the crossover scale. The second term in Eq. (\[plec\]) can be regarded as a power-law correction to the area law, resulting from entanglement, when the wave-function of the field is chosen to be a superposition of the ground state and an excited state [@Sau]. Taking the corrected entropy-area relation into account, the corrections to Newton’s law of gravitation as well as the modified Friedmann equations were derived [@Sheykhi].
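As a quick numerical illustration of how these corrections behave (our own sketch, not part of the original derivation), the following script evaluates Eqs. (\[S1\]) and (\[plec\]) in Planck units, with assumed order-unity values $\beta=\gamma=1$, $\alpha=3$ and $r_c=\ell_p$; the fractional deviation from the bare area law shrinks as the area grows, so both corrections matter only near the Planck scale.

```python
import math

L_P = 1.0  # Planck length (working in Planck units)

def entropy_log(A, beta=1.0, gamma=1.0):
    """Logarithmically corrected entropy, Eq. (S1), with the constant set to 0."""
    x = A / (4.0 * L_P**2)
    return x - beta * math.log(x) + gamma * L_P**2 / A

def entropy_power(A, alpha=3.0, r_c=1.0):
    """Power-law corrected entropy, Eqs. (plec)-(kalpha)."""
    K = alpha * (4.0 * math.pi)**(alpha / 2.0 - 1.0) \
        / ((4.0 - alpha) * r_c**(2.0 - alpha))
    return A / (4.0 * L_P**2) * (1.0 - K * A**(1.0 - alpha / 2.0))

def relative_correction(S_corrected, A):
    """Fractional deviation from the bare area law S = A / (4 l_p^2)."""
    S0 = A / (4.0 * L_P**2)
    return abs(S_corrected - S0) / S0

for A in (10.0, 1e4, 1e8):
    print(f"A = {A:10.1e}  log: {relative_correction(entropy_log(A), A):.3e}"
          f"  power: {relative_correction(entropy_power(A), A):.3e}")
```

Both relative corrections decrease monotonically with the area, mirroring the remark above that the corrections are only significant at very small scales.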
In this paper, we would like to extend the study to the electromagnetic interaction. We will derive the general quantum corrections to the Coulomb’s law, Poisson’s equation and the general form of the modified Newton-Coulomb’s law by assuming the entropic origin for the electromagnetic interaction.
Entropic corrections to Coulomb’s law
=====================================
In order to derive the corrections to Coulomb’s law of electromagnetism, we consider the modified entropy-area relationship in the following form $$S=\frac{A}{4\ell _{p}^{2}}+{s}(A), \label{S2}$$ where $s(A)$ represents the general quantum correction terms in the entropy expression. We assume there are two charged particles, one a test particle with mass $m$ and charge $q$, and the other the source with charge $Q$ and mass $M$ located at the center (see Fig. \[Spherical\] for more details). Centered around the source mass $M$ with charge $Q$ is a spherically symmetric surface $\mathcal{S}$ with certain properties that will be specified later. To derive the entropic law, the surface $\mathcal{S}$ lies between the test mass and the source mass, and the test mass is assumed to be very close to the surface compared to its reduced Compton wavelength $\lambda _{m}=\frac{\hbar }{mc}$. When the test mass $m$ is a distance $\triangle x=\eta \lambda _{m}$ away from the surface $\mathcal{S}$, the entropy of the surface changes by one fundamental unit $\triangle S$ fixed by the discrete spectrum of the area of the surface via the relation $$\triangle S=\frac{\partial S}{\partial A}\triangle A=\left(
\frac{1}{4\ell _{p}^{2}}+\frac{\partial {s}(A)}{\partial A}\right)
\triangle A. \label{S3}$$
We find that in order to interpret the entropic origin of the electromagnetic force, we should set aside the relativistic rest mass energy $E=Mc^{2}$ and, in a similar manner, propose the relativistic rest electromagnetic energy of the source $Q$ as $$E=\Gamma Q c^{2}, \label{Ec}$$ where $\Gamma =\chi q/m$, and $\chi $ is a constant of known dimension ($[\chi ]=\frac{[k]}{[G]}$, where $k$ and $G$ are the Coulomb and Newtonian constants, respectively). Although the physical interpretation of assumption (\[Ec\]) is not yet fully clear to us, as we will see it leads to reasonable results. It is worth mentioning that the charge-to-mass ratio ($q/m$) is a physical quantity widely used in the electrodynamics of charged particles. When a charged particle follows a circular path caused by a magnetic field, the magnetic force acts as a centripetal force. It is easy to find that the charge-to-mass ratio ($q/m=V/Br$) is a constant, which we set equal to $\Gamma/\chi $.
Considering the relativistic rest mass energy relation, and motivated by the analogy between mass in gravity and charge in electromagnetic interactions, one may consider $E_{EM}=\mathcal{M}_{EM}c^{2}$, in which $E_{EM}$ is the electromagnetic energy and $\mathcal{M}_{EM}=\Gamma Q$ is its corresponding mass, which we call the electromagnetic mass. It is notable that there are other concepts of mass in special relativity, such as the longitudinal mass and the transverse mass.
We should mention that we are working in geometrized units of charge, in which Coulomb’s law takes almost the same form as Newton’s law except for the difference in sign. On the surface $\mathcal{S}$ there lives a set of “bytes” of information whose number scales with the area of the surface, so that $$A=\xi N, \label{AQN}$$ where $N$ represents the number of bytes and $\xi $ is a fundamental constant to be determined later. Assuming the temperature on the surface is $T$, then according to the equipartition law of energy [@Pad3], the total energy on the surface is $$E=\frac{1}{2}Nk_{B}T. \label{E}$$ Finally, we assume that the electric force on the charged particle $q$ follows from the generic form of the entropic force governed by the thermodynamic equation of state $$F=T\frac{\triangle S}{\triangle x}, \label{F2}$$ where $\triangle S$ is one fundamental unit of entropy when $|\triangle x|=\eta \lambda _{m}$, and the entropy gradient points radially from the outside of the surface to the inside. Note that $N$ is the number of bytes and thus $\triangle N=1$; hence from (\[AQN\]) we find $\triangle A=\xi $. Now we are in a position to derive the entropy-corrected Coulomb’s law. Combining Eqs. (\[S3\])-(\[F2\]), we reach $$\begin{aligned}
F &=&\frac{2\Gamma Qc^{2}}{Nk_{B}}\frac{\Delta A}{\Delta x}\left(
\frac{
\partial S}{\partial A}\right) \nonumber \\
&=&\frac{2\Gamma Q\xi mc^{3}}{Nk_{B}\eta \hbar }\left(
\frac{\partial S}{
\partial A}\right) \nonumber \\
&=&\frac{Qq}{R^{2}}\left( \frac{\chi \xi ^{2}c^{3}}{8\pi k_{B}\eta
\hbar \ell _{p}^{2}}\right) \left[ 1+4\ell _{p}^{2}\frac{\partial
{s}}{\partial A} \right] _{A=4\pi R^{2}}, \label{F3}\end{aligned}$$ This is nothing but Coulomb’s law of electromagnetism to first order, provided we define $\xi ^{2}=8\pi k_{B}\eta \ell _{p}^{4}$ and $\chi =1/(4\pi \varepsilon _{0}G)=\hbar /(4\pi \varepsilon _{0}\ell _{p}^{2}c^{3})$. Thus we write the general quantum-corrected Coulomb’s law as $$F_{\mathrm{em}}= \frac{1}{4 \pi \varepsilon
_{0}}\frac{Qq}{R^{2}}\left[ 1+4\ell _{p}^{2}\frac{\partial {s}}{
\partial A}\right] _{A=4\pi R^{2}}. \label{F4}$$ In order to specify the correction terms explicitly, we use the two well-known kinds of entropy corrections. It is easy to show that $$F_{\mathrm{em1}}=\frac{1}{4 \pi \varepsilon _{0}}
\frac{Qq}{R^{2}}\left[ 1-\frac{\beta }{\pi }\frac{\ell
_{p}^{2}}{R^{2}}-\frac{\gamma }{4\pi ^{2}}\frac{\ell
_{p}^{4}}{R^{4}}\right] , \label{F5}$$ $$F_{\mathrm{em2}}=\frac{1}{4 \pi \varepsilon _{0}}
\frac{Qq}{R^{2}}\left[ 1-\frac{\alpha }{2}\left(
\frac{r_{c}}{R}\right) ^{\alpha -2}\right] , \label{F55}$$ where $F_{\mathrm{em1}}$ and $F_{\mathrm{em2}}$ are, respectively, the logarithmic and power-law corrected Coulomb’s laws. Thus, with the corrections in the entropy expression, we see that Coulomb’s law is modified accordingly. Since the correction terms in Eqs. (\[F5\]) and (\[F55\]) can be comparable to the first term only when $R$ is very small (i.e. $R\ll l_{p}$ and $R\ll r_{c}$ for Eqs. (\[F5\]) and (\[F55\]), respectively), the corrections make sense only at very small distances (note that $\alpha >2$). For large distances (i.e. $R\gg l_{p}$ for (\[F5\]) and $R\gg r_{c}$ for (\[F55\])), the entropy-corrected Coulomb’s law reduces to the usual Coulomb’s law of electromagnetism.
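To make the large-distance limit concrete, here is a minimal numerical sketch (ours, with assumed values $\beta=\gamma=1$, $\alpha=3$, and $\ell_p=r_c=1$ as the unit of length) of the bracketed correction factors in Eqs. (\[F5\]) and (\[F55\]):

```python
import math

def log_factor(R, beta=1.0, gamma=1.0, l_p=1.0):
    """Bracketed factor of Eq. (F5): logarithmic entropy correction."""
    return 1.0 - (beta / math.pi) * (l_p / R)**2 \
               - (gamma / (4.0 * math.pi**2)) * (l_p / R)**4

def power_factor(R, alpha=3.0, r_c=1.0):
    """Bracketed factor of Eq. (F55): power-law entropy correction."""
    return 1.0 - (alpha / 2.0) * (r_c / R)**(alpha - 2.0)

# Both factors approach 1 (the bare Coulomb force) once R exceeds
# the Planck / crossover scale, and deviate strongly below it.
for R in (2.0, 10.0, 1000.0):
    print(f"R = {R:7.1f}  log: {log_factor(R):.6f}  power: {power_factor(R):.6f}")
```

Both factors drop below unity at short range, consistent with the negative signs of the correction terms in Eqs. (\[F5\]) and (\[F55\]).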
Uehling Correction to Coulomb’s law
===================================
In order to compare the entropic correction with the QED correction to Coulomb’s law, we introduce the so-called Uehling potential [@Uehling] as a radiative correction to the Coulomb potential at $1$-loop order (the vacuum-polarization correction for an electron in a nuclear Coulomb field). Using the Born approximation, the relation between the scattering amplitude $M$ and the potential is given by $$\langle p^{\prime }\left\vert iM\right\vert p\rangle =-i2\pi V(\mathbf{q})\delta
(E_{p^{\prime }}-E_{p}), \label{Ampl}$$ where $p$ ($p^{\prime }$) and $E_{p}$ ($E_{p^{\prime }}$) are the momenta and energy of the incoming (outgoing) particles, respectively, and $\mathbf{q}=\mathbf{p}^{\prime}-\mathbf{p}$. For ordinary QED, the amplitude of a particle-antiparticle scattering is given by [@Peskin] $$iM\sim -\frac{ie^{2}}{\left\vert \mathbf{p}^{\prime
}-\mathbf{p}\right\vert ^{2}}. \label{iM}$$ Comparing (\[iM\]) with (\[Ampl\]), one can show that the attractive classical Coulomb potential $V(\mathbf{q})$ is given by $$V(\mathbf{q})=-\frac{e^{2}}{\left\vert \mathbf{q}\right\vert
^{2}}, \label{Vq}$$ where $\left\vert \mathbf{q}\right\vert =\left\vert
\mathbf{p}-\mathbf{p}^{\prime }\right\vert $. Using a Fourier transformation into the coordinate space, one can find $$V(\mathbf{x})=\int \frac{d^{3}q}{(2\pi
)^{3}}V(\mathbf{q})e^{i\mathbf{q}. \mathbf{x}}=-\frac{\alpha
^{\prime }}{R}, \label{Vx1}$$ where $R=|\mathbf{x}|$ and $\alpha ^{\prime }$ is the fine-structure constant. Furthermore, to include the quantum correction in the result, the modified Coulomb potential can be calculated from $$V(\mathbf{x})=-e^{2}\int \frac{d^{3}q}{(2\pi
)^{3}}\frac{e^{i\mathbf{q}. \mathbf{x}}}{\mathbf{q}^{2}[1-\Pi
(\mathbf{q}^{2})]}, \label{Vx2}$$ where $\Pi (\mathbf{q})$ in the ordinary QED is defined by the vacuum polarization tensor $$\Pi ^{\mu \nu }(q)=(q^{2}g^{\mu \nu }-q^{\mu }q^{\nu })\Pi
(q^{2}), \label{Pimn}$$ and is given by $$\Pi (q^{2})=-\frac{2\alpha ^{\prime }}{\pi
}\int\limits_{0}^{1}x(1-x)\log \left(
\frac{m^{2}}{m^{2}-x(1-x)q^{2}}\right) dx. \label{Pi}$$ Choosing $q_{0}=0$ and inserting this relation into (\[Vx2\]), after some straightforward calculation [@Peskin], one can obtain the so called Uehling potential $$V(R)=-\frac{\alpha ^{\prime }}{R}\left( 1+\frac{\alpha ^{\prime
}}{4\sqrt{\pi }}\frac{e^{-2mR}}{(mR)^{3/2}}+...\right) ,
\label{Uehling}$$ and after differentiation we can obtain the corresponding Uehling force $$F_{Ueh}=\frac{\alpha ^{\prime }}{R^{2}}\left( 1-\frac{\alpha
^{\prime }}{8 \sqrt{\pi
}}\frac{e^{-2mR}}{(mR)^{3/2}}(4mR+5)+...\right) . \label{UehlingF}$$
In order to compare the results of the entropic and the Uehling corrections, we can plot the corresponding forces for different values of the distance $R$. We draw three logarithmic figures at different scales. Figure \[FigS\], which is drawn for a very small scale ($0<R<10^{-5}$), shows that for small values of $R$, $F_{em2}$ is compatible with $F_{Ueh}$, and for a special value of $R$ they are equal; also, $F_{em1}$ is close to the Coulomb force, $F_{col}$. When we examine figure \[FigM\], which is drawn for a medium scale ($0<R<0.1$), we find that on this scale $F_{em2}$ is far from the others. One finds in figure \[FigL\], which is plotted for a large scale ($0<R<3$), that $F_{em1}$, $F_{Ueh}$ and $F_{col}$ overlap one another while $F_{em2}$ is separated. These figures show that for small values of the distance $F_{em2}$ is more compatible with the Uehling force, but for large values of $R$, $F_{em1}$ is closer to $F_{Ueh}$. As a result, it is interesting to study the entropic force arising from the change of information.
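The qualitative comparison can also be checked numerically (our illustration; we work in units where the electron mass $m=1$ and use the same assumed entropic parameters as before), evaluating the bracketed correction factor of Eq. (\[UehlingF\]) next to that of Eq. (\[F5\]):

```python
import math

ALPHA_FS = 1.0 / 137.036  # fine-structure constant

def uehling_factor(R, m=1.0):
    """Bracketed factor of Eq. (UehlingF), keeping the leading 1-loop term."""
    x = m * R
    return 1.0 - (ALPHA_FS / (8.0 * math.sqrt(math.pi))) \
               * math.exp(-2.0 * x) / x**1.5 * (4.0 * x + 5.0)

def log_factor(R, beta=1.0, gamma=1.0, l_p=1.0):
    """Bracketed factor of Eq. (F5): logarithmic entropic correction."""
    return 1.0 - (beta / math.pi) * (l_p / R)**2 \
               - (gamma / (4.0 * math.pi**2)) * (l_p / R)**4

# The vacuum-polarization correction is exponentially suppressed beyond
# the electron Compton scale, while the entropic corrections fall off
# as inverse powers of R; all factors tend to 1 (pure Coulomb) at large R.
for R in (0.1, 1.0, 10.0):
    print(f"R = {R:5.1f}  Uehling: {uehling_factor(R):.6f}"
          f"  entropic (log): {log_factor(R):.6f}")
```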
Generalized Equipartition Rule and Newton-Coulomb’s Law
=======================================================
In this section we would like to generalize the discussion of the previous section to the case where the electromagnetic force as well as the gravitational force is considered. We will study two approaches to the problem.
First Approach
--------------
In the first approach we identify the total relativistic rest energy as $$E=Mc^{2}+\Gamma Qc^{2}, \label{equip0}$$ and thus the equipartition rule (\[Ec\]) is replaced with $$Mc^{2}+\Gamma Qc^{2}=\frac{1}{2}Nk_{B}T. \label{equip1}$$ Inserting Eqs. (\[equip0\])-(\[equip1\]) into Eq. (\[F2\]), after using Eqs. (\[S3\]) and (\[AQN\]), we find $$\begin{aligned}
F_{\mathrm{g,em}} &=&\frac{2\left( M+\Gamma Q\right) c^{2}}{Nk_{B}}\frac{%
\Delta A}{\Delta x}\left( \frac{\partial S}{\partial A}\right) \nonumber \\
&=&\frac{2\left( M+\Gamma Q\right) \xi mc^{3}}{Nk_{B}\eta \hbar
}\left(
\frac{\partial S}{\partial A}\right) \nonumber \\
&=&\frac{\left( mM+\chi qQ\right) }{R^{2}}\left( \frac{\xi
^{2}c^{3}}{8\pi
k_{B}\eta \hbar \ell _{p}^{2}}\right) \left[ 1+4\ell _{p}^{2}\frac{\partial {%
s}}{\partial A}\right] _{A=4\pi R^{2}} \label{FM&Q}\end{aligned}$$ Again, if we define $\xi ^{2}=8\pi k_{B}\eta \ell _{p}^{4}$ and $\chi =k/G=\hbar /(4\pi \varepsilon _{0}\ell _{p}^{2}c^{3})$, then after also using Eqs. (\[S1\]) and (\[plec\]) we directly obtain the modified Newton-Coulomb’s law corresponding to the logarithmic and power-law corrections, respectively, $$F_{\mathrm{g,em}}=\frac{GmM+kqQ}{R^{2}}\left[ 1-\frac{\beta }{\pi }\frac{%
\ell _{p}^{2}}{R^{2}}-\frac{\gamma }{4\pi ^{2}}\frac{\ell _{p}^{4}}{R^{4}}%
\right] , \label{FM&Q2}$$$$F_{\mathrm{g,em}}=\frac{GmM+kqQ}{R^{2}}\left[ 1-\frac{\alpha
}{2}\left( \frac{r_{c}}{R}\right) ^{\alpha -2}\right] .
\label{FM&Q22}$$ These are the total entropy-corrected forces between a test particle with charge $q$ and mass $m$ at a distance $R$ from a source particle with charge $Q$ and mass $M$. We see that the correction terms have the same form for both the gravitational and electromagnetic forces. If one of the particles is uncharged, i.e. $q=0$ or $Q=0$, then Eq. (\[FM&Q2\]) reduces to the quantum-corrected Newton’s law of gravitation [@Sheykhi]. Again we see that the corrections play a significant role only at very small distances $R$.
Second Approach
---------------
The second approach is very simple. It is sufficient to add the modified electromagnetic force obtained in Eq. (\[F4\]) to the modified Newton’s law of gravitation derived in [@Sheykhi], which in its general form reads $$F_{\mathrm{g}}= G\frac{mM}{R^{2}}\left[ 1+4\ell _{p}^{2}\frac{\partial {s}}{%
\partial A}\right] _{A=4\pi R^{2}}.$$Since the emergent directions of gravity and electromagnetic forces coincide, we can obtain $$\begin{aligned}
F_{\mathrm{g,em}} &=&F_{\mathrm{g}}+F_{\mathrm{em}} \\
&=& \frac{GmM+kqQ}{R^{2}}\left[ 1+4\ell _{p}^{2}\frac{\partial
{s}}{\partial A}\right] _{A=4\pi R^{2}}.\end{aligned}$$Using Eqs. (\[S1\]) and (\[plec\]), it is straightforward to recover Eqs. (\[FM&Q2\]) and (\[FM&Q22\]).
Entropy corrected Poisson’s Equation
====================================
We can also derive the modified Poisson’s equation for the electric potential $\phi $, provided we define a new wavelength $\lambda _{q}=\frac{%
\delta \hbar }{qc}$ instead of Compton wavelength, $\lambda
_{m}=\frac{\hbar }{mc}$, where $\delta =\sqrt{4\pi \varepsilon
_{0}G}$. This definition may be understood if one accepts a correspondence between the role of the mass $m$ in the gravitational force and the role of the charge $q$ in the electromagnetic force. Consider the differential form of Gauss’s law $$\overrightarrow{\nabla }.\overrightarrow{E}=\frac{\rho }{\varepsilon _{0}},$$ and the fact that the electric field has zero curl, so that $\overrightarrow{E}=-\overrightarrow{\nabla }\phi $, where $\phi $ is the electric potential; it is then easy to obtain the familiar Poisson’s equation $$\nabla ^{2}\phi =-\frac{\rho }{\varepsilon _{0}}.$$ In this section, by assuming the modified entropy-area relation (\[S2\]), we want to obtain the modified Poisson’s equation. It was argued in [@Ver] that the holographic screens correspond to equipotential surfaces, so it is natural to define $$-\frac{\delta N}{2c^{2}}\nabla \phi =\frac{\Delta S}{\Delta x},
\label{P1}$$where $\frac{\Delta S}{\Delta x}=\left( \frac{\partial S}{\partial
A}\right)
\frac{\Delta A}{\Delta x}$. Substituting $N=\frac{A}{\ell _{p}^{2}}$, $%
\Delta A=\ell _{p}^{2}$ and $\Delta x=\frac{\lambda _{q}}{8\pi }$, where $%
\lambda _{q}=\frac{\delta \hbar }{qc}$ in Eq. (\[P1\]), we can rewrite it in the differential form $$-\frac{\sqrt{\pi \varepsilon _{0}G}}{\ell _{p}^{2}c^{2}}\nabla \phi dA=\frac{%
4\pi c\ell _{p}^{2}}{\sqrt{\pi \varepsilon _{0}G}\hbar }\left( \frac{%
\partial S}{\partial A}\right) dq. \label{P2}$$Using the divergence theorem, we find $$-\frac{\sqrt{\pi \varepsilon _{0}G}}{\ell _{p}^{2}c^{2}}\int
\nabla ^{2}\phi dV=\frac{4\pi c\ell _{p}^{2}q}{\sqrt{\pi
\varepsilon _{0}G}\hbar }\left( \frac{\partial S}{\partial
A}\right) . \label{P3}$$Now, we are in a position to extract the modified Poisson’s equation $$\nabla ^{2}\phi =-\frac{4\pi \ell _{p}^{4}c^{3}}{\pi \varepsilon _{0}G\hbar }%
\left( \frac{\partial S}{\partial A}\right) \frac{dq}{dV},
\label{P4}$$ Using Eq. (\[S3\]), the above equation can be further rewritten as $$\nabla ^{2}\phi =-\frac{\ell _{p}^{2}c^{3}}{\varepsilon _{0}G\hbar }\rho %
\left[ 1+4l_{p}^{2}\frac{\partial s}{\partial A}\right] ,
\label{ModP0}$$ where we have defined the charge density $\rho =dq/dV$. Finally, using the fact that $c^{3}\ell _{p}^{2}/\hbar =G$, we can write the modified Poisson’s equation in the following manner $$\nabla ^{2}\phi =-\frac{\rho }{\varepsilon _{0}}\left[ 1+4l_{p}^{2}\frac{%
\partial s}{\partial A}\right] _{A=4\pi R^{2}}, \label{ModP1}$$ which reduces to $$\nabla ^{2}\phi =-\frac{\rho }{\varepsilon _{0}}\left[
1-\frac{\beta }{\pi } \frac{\ell _{p}^{2}}{R^{2}}-\frac{\gamma
}{4\pi ^{2}}\frac{\ell _{p}^{4}}{ R^{4}}\right] , \label{ModP}$$ and $$\nabla ^{2}\phi =-\frac{\rho }{\varepsilon _{0}}\left[
1-\frac{\alpha }{2} \left( \frac{r_{c}}{R}\right) ^{\alpha
-2}\right] , \label{ModP2}$$ for logarithmic and power-law corrections, respectively. In this way, one can derive the quantum correction to Poisson’s equation.
Conclusions\[Sum\]
==================
To conclude, taking into account the quantum corrections to the area law of black hole entropy, we derived the modified Coulomb’s law of electromagnetism as well as the generalized Newton-Coulomb’s law in the presence of the correction terms. In addition, we investigated the vacuum-polarization correction in QED (the Uehling potential) and found that the entropic corrections to Coulomb’s law are close to the Uehling result for some distances. This compatibility motivated us to investigate the entropic force in other electromagnetic field equations. We also obtained the entropy-corrected Poisson’s equation which governs the evolution of the scalar potential $\phi$. Our study is a natural generalization of Verlinde’s argument on the gravitational force to the electromagnetic interaction. According to Verlinde’s discussion, the gravitational force has a holographic origin. In this work we proposed a similar nature for the electromagnetic interaction. Our motivation is the strong apparent similarity between Newton’s law and Coulomb’s law. If both gravity and the electromagnetic interaction can be extracted from the holographic principle, this can be regarded as a form of unification of gravity and the electromagnetic force. Interestingly enough, we found that the correction terms have a similar form for both Newton’s law and Coulomb’s law. This implies that at very small distances these two fundamental forces have the same behavior. This fact further supports the unification of gravity and electromagnetic interactions based on the holographic principle.
Acknowledgements {#acknowledgements .unnumbered}
================
We thank the referees for constructive comments. This work has been supported financially by Research Institute for Astronomy and Astrophysics of Maragha, Iran.
[99]{} J. D. Bekenstein, Phys. Rev. D 7, 2333 (1973);
S. W. Hawking, Commun Math. Phys. 43, 199 (1975);
S. W. Hawking, Nature 248, 30 (1974).
J. M. Bardeen, B. Carter and S. W. Hawking, Commun. Math. Phys. 31, 161 (1973).
P. C. W. Davies, J. Phys. A: Math. Gen. 8, 609 (1975);
W. G. Unruh, Phys. Rev. D 14, 870 (1976);
L. Susskind, J. Math. Phys. 36, 6377 (1995).
T. Jacobson, Phys. Rev. Lett. **75**, 1260 (1995).
T. Padmanabhan, Class. Quantum. Grav. **19**, 5387 (2002).
C. Eling, R. Guedens and T. Jacobson, Phys. Rev. Lett. **96**, 121301 (2006).
M. Akbar and R. G. Cai, Phys. Rev. D **75**, 084003 (2007).
R. G. Cai and L. M. Cao, Phys.Rev. D **75**, 064008 (2007).
R. G. Cai and S. P. Kim, JHEP **0502**, 050 (2005).
B. Wang, E. Abdalla and R. K. Su, Phys. Lett. B **503**, 394 (2001);
B. Wang, E. Abdalla and R. K. Su, Mod. Phys. Lett. A **17**, 23 (2002).
R. G. Cai, L. M. Cao and Y. P. Hu, JHEP **0808**, 090 (2008).
S. Nojiri and S. D. Odintsov, Gen. Relativ. Gravit. **38**, 1285 (2006);
A. Sheykhi, Class. Quantum Grav. **27**, 025007 (2010);
A. Sheykhi, Eur. Phys. J. C **69**, 265 (2010).
A. Sheykhi, B. Wang and R. G. Cai, Nucl. Phys. B **779**, 1 (2007);
R. G. Cai and L. M. Cao, Nucl. Phys. B **785**, 135 (2007).
A. Sheykhi, B. Wang and R. G. Cai, Phys. Rev. D **76**, 023515 (2007);
A. Sheykhi, B. Wang, Phys. Lett. B **678**, 434 (2009).
T. Padmanabhan, Rept. Prog. Phys. **73**, 046901 (2010).
E. P. Verlinde, JHEP **1104**, 029 (2011).
S. Hossenfelder, \[arXiv:1003.1015\];
A. Kobakhidze, Phys. Rev. D 83, 021502 (2011);
B. L. Hu, Int. J. Mod. Phys. D 20, 697 (2011);
A. Kobakhidze, \[arXiv:1108.4161\].
S. Gao, Entropy, 13, 936 (2011).
H. E. Winkelnkemper, AP Theory V: Thermodynamics in Topological Disguise, Gravity from Holography and Entropic Force as Dynamic Dark Energy. Available online:http://www.math.umd.edu/ hew/ (accessed on 7 April 2011); Preprint, February 2011
Y. S. Myung, Eur. Phys. J. C 71, 1549 (2011).
M. Chaichian, M. Oksanen and A. Tureanu, \[arXiv:1109.2794\].
R. G. Cai, L. M. Cao and N. Ohta, Phys. Rev. D **81**, 061501(R) (2010);
Y. Ling and J. P. Wu, JCAP **1008**, 017 (2010).
A. Sheykhi, Phys. Rev. D **81**, 104011 (2010);
A. Sheykhi and S. H. Hendi, Phys. Rev. D **84**, 044023 (2011).
L. Modesto and A. Randono, \[arXiv:1003.1998\].
L. Smolin, \[arXiv:1001.3668\].
M. Li and Y. Wang, Phys. Lett. B **687**, 243 (2010);
D. A. Easson, P. H. Frampton and G. F. Smoot, Phys. Lett. B **696**, 273 (2011);
U. H. Danielsson, \[arXiv:1003.0668\].
Y. Tian and X. Wu, Phys. Rev. D **81**, 104013 (2010);
T. Wang, Phys. Rev. D **81**, 104045 (2010).
Y. X. Liu, Y. Q. Wang, S. W. Wei, Class. Quantum Grav. **27**, 185002 (2010);
V. V. Kiselev and S. A. Timofeev, Mod. Phys. Lett. A **25**, 2223 (2010);
R. A. Konoplya, Eur. Phys. J. C **69**, 555 (2010);
R. Banerjee and B. R. Majhi, Phys. Rev. D **81**, 124006 (2010);
P. Nicolini, Phys. Rev. D **82**, 044030 (2010);
C. Gao, Phys. Rev. D **81**, 087306 (2010);
Y. S. Myung and Y.W Kim, Phys. Rev. D **81**, 105012 (2010);
H. Wei, Phys. Lett. B **692**, 167 (2010);
D. A. Easson, P. H. Frampton and G. F. Smoot, \[arXiv:1003.1528\];
S. W. Wei, Y. X. Liu and Y. Q. Wang, *to be published in Commun. Theor. Phys.* \[arXiv:1001.5238\].
K. A. Meissner, Class. Quantum Grav. **21**, 5245 (2004);
A. Ghosh and P. Mitra, Phys. Rev. D **71**, 027502 (2004);
A. Chatterjee and P. Majumdar, Phys. Rev. Lett. **92**, 141301 (2004).
J. Zhang, Phys. Lett. B **668**, 353 (2008);
R. Banerjee and B. R. Majhi, Phys. Lett. B **662**, 62 (2008);
R. Banerjee and B. R. Majhi, JHEP **0806**, 095 (2008);
S. Nojiri and S. D. Odintsov, Int. J. Mod. Phys. A **16**, 3273 (2001).
S. Das, S. Shankaranarayanan and S. Sur, Phys. Rev. D **77**, 064013 (2008).
S. Das, S. Shankaranarayanan and S. Sur, \[arXiv:1002.1129\];
S. Das, S. Shankaranarayanan and S. Sur, \[arXiv:0806.0402\].
N. Radicella, D. Pavon, Phys. Lett. B **691**, 121 (2010).
T. Padmanabhan, Class. Quantum Grav. **21**, 4485 (2004);
T. Padmanabhan, Mod. Phys. Lett. A **25**, 1129 (2010);
T. Padmanabhan, Phys. Rev. D **81**, 124040 (2010).
E. A. Uehling, Phys. Rev. **48**, 55 (1935);
E. H. Wichmann and N. H. Kroll, Phys. Rev. **101**, 843 (1956);
A. Bonanno and M. Reuter Phys. Rev. D **62**, 043008 (2000);
W. Dittrich and M. Reuter, *Effective Lagrangians in Quantum Electrodynamics*, Springer-Verlag (1985).
M. E. Peskin and D. V. Schroeder, *An Introduction to Quantum Field Theory*, Reading, USA: Addison-Wesley (1995).
[^1]: E-mail: hendi@mail.yu.ac.ir
[^2]: E-mail: sheykhi@mail.uk.ac.ir
|
---
abstract: 'In simulations of a 12.5PW laser (focussed intensity $I=4\times10^{23}$Wcm$^{-2}$) striking a solid aluminum target, $10\%$ of the laser energy is converted to gamma-rays. A dense electron-positron plasma is generated with a maximum density of $10^{26}$m$^{-3}$; seven orders of magnitude denser than pure e$^-$e$^+$ plasmas generated with 1PW lasers. When the laser power is increased to 320PW ($I=10^{25}$Wcm$^{-2}$), 40% of the laser energy is converted to gamma-ray photons and 10% to electron-positron pairs. In both cases there is strong feedback between the QED emission processes and the plasma physics; the defining feature of the new ‘QED-plasma’ regime reached in these interactions.'
author:
- 'C.P. Ridgers$^1$, C.S.Brady$^{2}$, R. Duclous$^3$, J.G. Kirk$^4$, K. Bennett$^2$, T.D. Arber$^2$, A.R. Bell$^1$'
title: 'Dense electron-positron plasmas and bursts of gamma-rays from laser-generated QED plasmas'
---
Introduction
============
With the advent of next generation 10PW-100PW lasers [@Mourou_07] a new frontier will be reached in high-power laser-plasma physics. These lasers will create strong enough electromagnetic fields to access strong-field quantum electrodynamics (QED) processes thought to be responsible for cascades of antimatter production in the relativistic winds from pulsars and black holes [@Goldreich_69]. Strong-field QED processes have typically been investigated using particle accelerators in experiments arranged such that these QED scattering processes can be studied in isolation [@Bula_96] (and their cross-sections compared to QED calculations [@Erber_66]). This is also true of laser-solid experiments where photon and pair production occur in the electric fields of the nuclei of high-$Z$ materials far from the laser focus [@Chen_10]. By contrast, the fields in a $>10$PW laser’s focus will cause strong-field QED reactions directly [@Bell_08]. In this case the QED processes strongly modify the basic plasma dynamics. Conversely, the rates of the QED reactions depend on the electromagnetic fields which are determined by the plasma dynamics. As a result of this feedback neither the QED nor the plasma physics may be considered in isolation, but both must be treated self-consistently in the resulting ‘QED-plasma’ [@Ridgers_12].
The important strong-field QED emission processes are [@Erber_66]: (1) quantum-corrected synchrotron radiation; (2) multiphoton Breit-Wheeler pair production. In (1) electrons and positrons in the plasma radiate energetic gamma-ray photons when accelerated by the electromagnetic fields of the laser. In process (2) these photons interact with the laser fields and generate electron-positron pairs. The controlling parameter for these processes is $\eta\approx\gamma\theta\sqrt{I/I_s}$, where $I$ is the laser intensity & $I_s=\epsilon_0{}cE_s^2/2=2\times10^{29}$Wcm$^{-2}$ is the laser intensity at which the average field in the laser focus is equal to the Schwinger field $E_s=1.3\times10^{18}$Vm$^{-1}$; i.e. the field required to break down the vacuum into electron-positron pairs [@Sauter_31]. $\theta\in[0,2]$ depends on the interaction geometry. As $\eta$ reaches 0.1 (for an optical laser) the electrons in the plasma radiate a significant fraction of their energy as gamma-ray photons; the plasma enters the ‘radiation dominated’ regime [@Sokolov_10]. In this regime the radiation reaction force [@Dirac_38] must be included in the equation of motion of the electrons & positrons. As laser intensities increase from the current maximum of $10^{22}$Wcm$^{-2}$ [@Bahk_04] to exceed $I\sim10^{23}$Wcm$^{-2}$ the ratio $I/I_s$ and the $\gamma$ factor to which the laser accelerates the electrons increase in step and $\eta=0.1$ is reached. 10PW lasers should be able to push well into this regime; if all the energy of a 10PW laser pulse is focussed into a laser spot of radius one micron, $I=3\times10^{23}$Wcm$^{-2}$. When $\eta=1$ the following quantum corrections to the gamma-ray radiation become important: (1) the gamma-ray photon spectrum is modified to account for the recoil of the electron as it emits [@Erber_66]; (2) the emitted photon energy becomes a significant fraction of the emitting particle's energy and therefore the emission becomes stochastic [@Shen_72].
In addition, a significant fraction of the radiated photons generate electron-positron pairs. Therefore when $\eta=1$ the ‘QED dominated’ regime is reached [@Sokolov_10]. In optical laser-plasma interactions this will occur as intensities increase to $I\sim10^{24}$Wcm$^{-2}-10^{25}$Wcm$^{-2}$, the limit attainable with 30PW-300PW lasers.
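As a back-of-the-envelope illustration of these thresholds, the sketch below evaluates $\eta\approx\gamma\theta\sqrt{I/I_s}$ for a 10PW pulse focussed to a one-micron-radius spot. The standard normalized-amplitude scaling $\gamma\sim a_0\approx 0.85\sqrt{I_{18}\lambda_{\mu m}^2}$ used for the electron energy, and the choice $\theta=1$, are our assumptions for illustration, not quantities fixed by the text.

```python
# Rough estimate of the eta parameter, eta ~ gamma * theta * sqrt(I / I_s).
# The gamma ~ a0 scaling below is an assumed standard estimate, not from the paper.
import math

I_S = 2e29  # W/cm^2: intensity at which the average focal field equals E_s


def eta(I, gamma, theta):
    """eta ~ gamma * theta * sqrt(I / I_s), for laser intensity I in W/cm^2."""
    return gamma * theta * math.sqrt(I / I_S)


def a0(I, wavelength_um=1.0):
    """Normalised vector potential a0 ~ 0.85 * sqrt(I_18 * lambda_um^2)."""
    return 0.85 * math.sqrt((I / 1e18) * wavelength_um**2)


I = 3e23  # W/cm^2: all of a 10PW pulse in a one-micron-radius spot
g = a0(I)
print(f"a0 ~ {g:.0f}, eta (theta = 1) ~ {eta(I, g, 1.0):.2f}")
```

With these assumptions $\eta\sim0.5$, well past the $\eta=0.1$ radiation-dominated threshold, consistent with the regime the text assigns to 10PW pulses.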
Due to the complexity of the feedback between the QED emission processes and plasma physics effects in a realistic laser-produced QED-plasma, numerical simulations of these interactions are essential. The appropriate simulation tool is obtained by augmenting a particle-in-cell (PIC) code by including QED emission processes (1) & (2) above. A classical model of gamma-ray emission and the resulting radiation reaction has previously been included in several PIC codes [@Zhidkov_02; @Chen_11; @Nakamura_12]. A classical model is only valid in the relatively narrow intensity range defining the radiation dominated regime. A quantum treatment of gamma-ray photon and pair generation, valid in the radiation and QED dominated regimes, has only recently been coupled to PIC codes [@Ridgers_12; @Sokolov_09; @Timhokin_10; @Nerush_11].
The resulting QED-PIC codes have been used to self-consistently simulate cascades of electron-positron pair production, where a critical density pair plasma can be generated from a single seed electron, in pulsar atmospheres [@Timhokin_10] and in the interaction of two counter-propagating 100PW ($I=3\times10^{24}$Wcm$^{-2}$) laser pulses [@Nerush_11]. QED-PIC simulations have also recently been used to show that prolific gamma-ray photon and pair production is possible in 10PW laser-solid interactions [@Ridgers_12]. Here the interaction of the laser pulse with a dense plasma, combined with the fact that the solid reflects the laser pulse and so doubles the electric field, compensates for the expected QED rate reductions due to the ten times lower intensity. In this paper we present a study of the interaction of $O$(10PW) and $O$(100PW) laser pulses with solid aluminum targets, exploring this promising configuration for pair production. We will use QED-PIC [@Brady_11] simulations to elucidate the details of the feedback between QED emission processes and plasma physics effects. In doing so we will attempt to outline a theoretical framework for laser-solid interactions in the QED-plasma regime, which will be important not only for determining the most effective way to produce copious numbers of gamma-ray photons and pairs, but also in determining the viability of any proposed applications of $>$10PW laser-solid interactions such as ion acceleration to multi-GeV energies [@Esirkepov_04; @Robinson_09] and high-harmonic generation [@Dromey_06].
QED Emission Model
==================
In this section we will discuss the model used for the QED emission processes and its coupling to a PIC code. This is dramatically simplified by the fact that in high intensity laser-plasma interactions the macroscopic laser fields are effectively unchanged in the QED interactions [@Heinzl_11]. In this case the laser’s electromagnetic field may be treated classically [@Bagrov_90] and the QED reactions treated in the strong-field QED framework [@Furry_51]. Two approximations may be made concerning the classical ‘macroscopic’ laser fields. (1) They are quasi-static. The length scale over which photons or pairs are formed is a factor of $a=eE_l\lambda_l/2\pi m_e c^2$ ($E_l$ is the electric field of the laser) times smaller than the laser wavelength $\lambda_l$ [@Kirk_09]. For $I>10^{23}$Wcm$^{-2}$, $a\gg 1$, and the laser’s fields may be treated as constant during the QED emission processes. (2) The laser’s fields are much weaker than the Schwinger field. In this case the QED reaction rates in the general fields in the plasma are the same as those in plane wave fields and depend only on the Lorentz-invariant parameters: $\eta=(e\hbar/m_e^3c^4)|F_{\mu\nu}p^{\nu}|$ and $\chi=(e\hbar^2/2m_e^3c^4)|F_{\mu\nu}k^{\nu}|$ [@Erber_66]. $p^{\mu}$ ($k^{\mu}$) is the electron’s (photon’s) 4-momentum. For $I<10^{25}$Wcm$^{-2}$, $E/E_s<5\times10^{-3}$ and so approximation (2) holds.
Physically $\eta$ is the field perpendicular to the electron motion, boosted into its rest frame and measured in units of the Schwinger field. This can be seen clearly by writing $\eta$ in terms of three-vectors in the ultra-relativistic limit: $\eta\approx{}(\gamma/E_s)|\mathbf{E}_{\perp}+\bm{\beta}\times{}c\mathbf{B}|$; similarly $\chi=(\hbar\omega_{\gamma}/2m_ec^2)|\mathbf{E}_{\perp}+\hat{\mathbf{k}}\times{}c\mathbf{B}|$. Here $\mathbf{E}_{\perp}$ is the component of the electric field perpendicular to the particle’s motion (in the direction of $\hat{\mathbf{k}}$ for gamma-ray photons and $\hat{\mathbf{v}} = \bm{\beta}c/|\mathbf{v}|$ for electrons & positrons). The three-vector form for $\eta$ shows the origin of the geometrical factor $\theta$. In the case of an underdense plasma being struck by a single high-intensity laser pulse the electrons are rapidly accelerated to $\sim{}c$ in the direction of propagation of the laser, the $\mathbf{E}_{\perp}$ and $\bm{\beta}\times{}c\mathbf{B}$ terms in $\eta$ almost exactly cancel, and so $\theta \ll 1$, $\eta$ is small and emission of photons and pairs is consequently reduced. In the case of an ultra-relativistic beam of electrons colliding with the laser pulse the terms add and $\theta\approx2$, dramatically increasing the level of emission. In the case of counter-propagating laser pulses $\theta\approx1$ and the situation is also favorable for gamma-ray photon and pair production. A similar configuration is found in laser-solid interactions, with the counter-propagating beam provided by the reflected wave. We will compare the effectiveness of the counter-propagating beam and laser-solid configurations in producing gamma-rays and pairs in section \[plasma\_QED\].
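The cancellation and addition of the $\mathbf{E}_{\perp}$ and $\bm{\beta}\times{}c\mathbf{B}$ terms can be checked directly from the three-vector form. A minimal sketch, assuming an ideal plane wave with $|\mathbf{E}|=c|\mathbf{B}|$ in normalised units (so the result is the factor multiplying $\gamma|\mathbf{E}|/E_s$):

```python
# Geometric factor in eta ~ (gamma/E_s)|E_perp + beta x cB| for a plane wave
# propagating along +x, with E along y and cB along z, |E| = |cB| = 1.
import numpy as np


def eta_factor(beta_vec, E, cB):
    """|E_perp + beta x cB| for an ultra-relativistic particle (|beta| ~ 1)."""
    k = beta_vec / np.linalg.norm(beta_vec)
    E_perp = E - np.dot(E, k) * k
    return np.linalg.norm(E_perp + np.cross(beta_vec, cB))


E = np.array([0.0, 1.0, 0.0])
cB = np.array([0.0, 0.0, 1.0])

co = eta_factor(np.array([1.0, 0.0, 0.0]), E, cB)        # co-propagating
counter = eta_factor(np.array([-1.0, 0.0, 0.0]), E, cB)  # head-on collision
print(co, counter)  # ~0 (terms cancel, theta << 1) and ~2 (terms add, theta ~ 2)
```

The co-propagating case reproduces the near-exact cancellation described for a single pulse in underdense plasma, while the head-on case gives the factor of 2 quoted for a colliding electron beam.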
Emission Rates & Monte-Carlo Model
----------------------------------
For a plane electromagnetic wave the (spin & polarization averaged) rates of photon production by electrons and positrons of energy $\gamma{}m_ec^2$ & pair production by photons of energy $\hbar\omega_{\gamma}$ are, respectively: $\lambda_{\gamma}(\eta) = (\sqrt{3}\alpha_fc/\lambda_c)(\eta/\gamma)h(\eta)$ & $\lambda_{\pm}(\chi) = (2\pi\alpha_fc/\lambda_c)(m_ec^2/\hbar\omega_{\gamma})\chi{}T_{\pm}(\chi)$ [@Erber_66]. $\alpha_f$ is the fine-structure constant, $\lambda_c$ is the Compton wavelength. $h(\eta) = \int_0^{\eta/2}d\chi{}F(\eta,\chi)/\chi$, $F(\eta,\chi)$ is the quantum-corrected synchrotron spectrum [@Erber_66]. $T_{\pm}(\chi)\approx{}0.16K_{1/3}^2(2/3\chi)/\chi$.
The quantum (stochastic) nature of the emission [@Shen_72] can be modelled using a Monte-Carlo technique [@Sokolov_09; @Duclous_11]. The cumulative probability that a particle emits when passing through a plasma of optical depth $\tau$ is $P(\tau)=1-e^{-\tau}$; $\tau=\int_0^t\lambda[\eta(t')]dt'$ and $\lambda$ is the reaction rate. To determine the optical depth each particle traverses before emitting, $P$ is assigned a random value between 0 and 1 and the equation for $P$ above is then inverted to yield $\tau$. For each particle the optical depth evolves according to the rate equations above until $\tau$ is reached and emission occurs. The cumulative probability that an electron or positron with parameter $\eta$ emits a photon with $\chi$ (i.e. an energy $\hbar\omega_{\gamma}=2\gamma{}m_ec^2\chi/\eta$) is $P_{\chi}(\eta,\chi)=[1/h(\eta)]\int_0^{\chi}d\chi'F(\eta,\chi')/\chi'$. When a photon creates a pair it is annihilated and its energy shared between the generated electron and positron. The cumulative probability that the positron takes fraction $f$ of the energy (parameterized by $\chi$) is $P_f(f,\chi)=\int_0^{f}df'p_f(f',\chi)$ [@Daugherty_83]. $P_{\chi}$ & $P_f$ are assigned to the emitted photon or positron at random in the range \[0,1\]. The corresponding values of $\chi$ or $f$ are determined by inverting the equations for $P_{\chi}$ or $P_f$. After emitting a photon the emitting electron or positron recoils, its momentum changing by $\Delta{}\mathbf{p}=-(\hbar\omega_{\gamma}/c)\hat{\mathbf{p}}$, providing the quantum equivalent of the radiation reaction force [@Sokolov_10; @DiPiazza_10].
Note that in the limit $\Delta\mathbf{p}\ll \mathbf{p}$ many photons must be emitted before the particle's energy changes appreciably, so the whole synchrotron spectrum is sampled. This is identical to a classical treatment in which the particle instantaneously emits the entire synchrotron spectrum; thus the Monte-Carlo algorithm agrees with a classical treatment of radiation reaction in the classical limit.
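The optical-depth sampling described above can be sketched as follows. The constant per-step emission rate is a hypothetical stand-in for $\lambda_{\gamma}[\eta(t)]$, which in a real code varies with the local fields along each particle's trajectory:

```python
# Minimal sketch of the optical-depth Monte-Carlo: draw P uniform in [0,1),
# invert P = 1 - exp(-tau) to get the optical depth at which emission occurs,
# then accumulate tau = integral(lambda dt) along the trajectory until it is
# reached. The constant rate here is a placeholder for lambda_gamma(eta(t)).
import math
import random


def sample_optical_depth(rng):
    """Invert P(tau) = 1 - exp(-tau) for a uniform random P."""
    return -math.log(1.0 - rng.random())


def steps_to_emission(rate, dt, rng):
    """Advance the accumulated optical depth until the sampled value is hit."""
    target, tau, n = sample_optical_depth(rng), 0.0, 0
    while tau < target:
        tau += rate * dt  # in a real code the rate depends on the local eta
        n += 1
    return n


rng = random.Random(0)
# With a constant rate the mean step count should approach 1/(rate*dt).
mean = sum(steps_to_emission(0.1, 1.0, rng) for _ in range(20000)) / 20000
print(mean)  # ~ 10
```

Recovering the expected exponential waiting time in this constant-rate limit is a useful sanity check before coupling the sampler to time-varying fields.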
QED-PIC
-------
The basis of the PIC technique [@Dawson_62] is the representation of the plasma as macroparticles, each representing many real particles such that the number of macroparticles is amenable to simulation. Particle interactions are mediated by: (1) interpolating the charge and current densities resulting from the positions and velocities of the macroparticles onto a spatial grid; (2) solving Maxwell’s equations for the $\mathbf{E}$ & $\mathbf{B}$ fields; (3) interpolating these fields onto the particles’ positions and pushing the particles using the Lorentz force law. The inclusion of the QED processes is simplified by two facts: the macroscopic fields may be treated classically, so step (2) remains unchanged; and the macroscopic fields are quasi-static, so the QED interactions are point-like, occur instantaneously on the timescale of the PIC code and are consequently not resolved by the code. Therefore we include the QED emission processes as a new step (0).
During emission macrophotons and macropairs are created. The pairs are treated in an equivalent way to the original electrons in the PIC code. The photons are treated as massless, chargeless macroparticles which propagate ballistically. The placement of the QED emission step at (0) ensures that the feedback defining QED-plasmas is simulated self-consistently. Radiation reaction exerts a drag force, altering the velocity of the electrons and positrons and therefore the current in the plasma; pair production acts as a current source. The inclusion of the QED processes before the PIC code solves Maxwell’s equations means that this change in the current is included when the fields are updated. The updated fields are then passed back to the QED routines and used to calculate emission during the next time-step.
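The ordering described above can be sketched structurally. All four routines below are hypothetical stubs (they only record the call order); a real QED-PIC code would implement them on macroparticles and a field grid, but the sketch shows why placing QED emission at step (0) lets radiation-reaction drag and the pair-creation current source feed into the field solve:

```python
# Structural sketch of one QED-PIC time-step. The routines are placeholders
# that record call order, not a real PIC implementation.
calls = []


def qed_emission(particles, fields, dt):
    # (0) evolve optical depths, emit macrophotons, create macropairs,
    #     apply the recoil Delta_p = -(hbar*omega_gamma/c) p_hat
    calls.append("qed")


def deposit_currents(particles, grid):
    # (1) interpolate charge and current densities onto the spatial grid;
    #     these currents already include the QED feedback from step (0)
    calls.append("deposit")


def solve_maxwell(grid, dt):
    # (2) advance the E and B fields from Maxwell's equations
    calls.append("maxwell")


def push_particles(particles, grid, dt):
    # (3) interpolate fields to particles and apply the Lorentz force;
    #     macrophotons are chargeless and simply propagate ballistically
    calls.append("push")


def qed_pic_step(state, dt):
    particles, fields, grid = state
    qed_emission(particles, fields, dt)
    deposit_currents(particles, grid)
    solve_maxwell(grid, dt)
    push_particles(particles, grid, dt)


qed_pic_step(([], None, None), 1.0)
print(calls)  # ['qed', 'deposit', 'maxwell', 'push']
```

The updated fields produced in step (2) are what the QED routines see at the start of the next time-step, closing the feedback loop the text describes.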
$>$10PW Laser-Solid Interactions {#sims_sect}
================================

Two-dimensional QED-PIC simulations of 12.5PW and 320PW laser pulses striking solid aluminum targets at normal incidence have been performed which demonstrate the most important aspects of QED-plasma physics in both the radiation and QED dominated regimes. In both cases the laser is linearly p-polarized and focussed to a spot of radius $1\mu$m on the target’s surface \[i.e. the spatial profile of the laser intensity here is $\propto\exp(-y^2/1\mu\mbox{m}^2)$\]. Temporally the laser power $P=P_0$ for $0<t<30$fs and $P=0$ otherwise. Therefore the intensity on-target is $4\times10^{23}$Wcm$^{-2}$ for $P_0=$12.5PW and $1\times10^{25}$Wcm$^{-2}$ for $P_0=$320PW. The target is a fully-ionized aluminum foil of thickness $1\mu$m and initial density profile $\rho(x,y)=$2700kgm$^{-3}$ for $0<x<L$ (where $L=1\mu$m is the foil thickness), $\rho(x,y)=0$ otherwise. The target is discretized on a spatial grid with cell size 10nm and is represented by 1000 macroelectrons and 32 macroions per cell (12.5PW case) or 1857 macroelectrons and 142 macroions per cell (320PW case).
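The quoted on-target intensities are consistent with the simple flat-top estimate $I_0\approx P_0/(\pi r^2)$ with $r=1\mu$m. This estimate is our assumption, used here only as a consistency check against the numbers in the text:

```python
# Consistency check of the quoted on-target intensities from the laser power,
# using the assumed flat-top estimate I0 ~ P0 / (pi r^2) with r = 1 micron.
import math


def peak_intensity(P0_watts, r_m=1e-6):
    """I0 ~ P0 / (pi r^2), converted from W/m^2 to W/cm^2."""
    return P0_watts / (math.pi * r_m**2) / 1e4


print(f"12.5PW: {peak_intensity(12.5e15):.1e} W/cm^2")  # ~4e23, as quoted
print(f"320PW:  {peak_intensity(320e15):.1e} W/cm^2")   # ~1e25, as quoted
```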
Fig. \[Sim\_res\](a) shows the results for the 12.5PW laser pulse. This shows that prolific gamma-ray (2D blue) and positron (red contours) production occur as the laser bores into the solid (3D grey). $4.8\times10^{13}$ gamma-ray photons with an average energy of 4.8MeV are produced, corresponding to 10% conversion of laser energy to gamma-rays, and therefore this interaction is in the radiation dominated regime. $10^{10}$ positrons are produced. Despite the large number generated, the positrons are a minority species in the plasma, and therefore the sheath is generated by the ‘fast’ electrons launched into the target [@Wilks_92]. In this case the positrons pass through the sheath and readily escape the target. A pure electron-positron plasma is formed behind the target with a maximum positron number density of $10^{26}$m$^{-3}$, $0.1$ times the non-relativistic critical density for optical lasers. For comparison the highest positron density outside the target obtained in 1PW laser plasma experiments is $\sim10^{19}$m$^{-3}$. It should be noted that similar numbers of pairs are generated in each case and that the dramatic increase in density is entirely due to the much smaller volume over which the pairs are generated in 10PW than in PW laser-plasma interactions ($\sim{}1\mu$m$^3$ compared to $\sim1$mm$^3$).
The average positron energy is 320MeV. This is much higher than the average photon energy and approximately twice that of the fast-electrons (140MeV). This suggests that the positrons are born with relatively low energy and are rapidly accelerated by the laser to an energy equal to that of the fast-electrons and further accelerated by the sheath fields on leaving the target. The sheath field acts to confine the fast-electrons inside the target and so accelerates positrons [@Chen_10], doing work $\Phi{}m_ec^2$ approximately equal to the fast electron energy [@Ridgers_11]. In this case we expect the average positron energy to be $\langle\gamma\rangle m_ec^2$, with $\langle\gamma\rangle\approx a^{sol}_{HB}+\Phi\approx 2a^{sol}_{HB}=2eE_{HB}^{sol}\lambda_{lHB}/2\pi m_ec^2$, i.e. approximately 300MeV, which is consistent with the simulations. In total 0.01% of the laser energy is converted to positron energy and so their relative effect on the plasma dynamics is small.
Figure \[Sim\_res\](b) shows simulation results for a 320PW laser striking a solid aluminum target. $10^{16}$ gamma-ray photons and $10^{13}$ positrons are produced with average energies of 92MeV & 2.2GeV respectively. The maximum positron density is $1.8\times10^{30}$m$^{-3}$, an increase of four orders of magnitude for only a factor of 25 increase in laser intensity. 40% of the energy is converted to gamma-rays and 10% to electron-positron pairs. Therefore at this extreme laser intensity both gamma-ray photon and pair production are crucial to the plasma dynamics and therefore the interaction is in the QED dominated regime.
The Effect of Plasma Physics Processes on the QED Rates {#plasma_QED}
-------------------------------------------------------
The key feature of a QED-plasma is the feedback between QED and classical plasma physics processes. In this section we discuss the effect plasma physics processes have on the rates of the QED reactions in laser-solid interactions. This is best illustrated by comparing pair production in the interaction of a laser of intensity $I$ with a solid target to the alternative configuration consisting of two counter-propagating laser pulses of intensity $I/2$ interacting with a low-density gas [@Bell_08; @Nerush_11], since in the laser-gas case complicating plasma effects are less important. The laser-solid configuration has the clear advantages that the peak electric field is double that of the laser-gas case due to reflection and that the pulse interacts with a dense plasma, so that many pairs and photons may be produced even when the rates of reaction are low.
A parameter scan of the effect of increasing laser intensity on the number of pairs produced by each configuration is conducted using one-dimensional simulations. The targets considered are: solid aluminum (density 2700kgm$^{-3}$) and a hydrogen gas-jet (density 0.02kgm$^{-3}$). The solid targets are semi-infinite to avoid complications caused by the laser pulse breaking through the target. Figure \[n\_gamma\_pairs\](a) shows the number of positrons produced by each configuration. Due to the advantages previously mentioned the laser-solid case produces more positrons for $I<8\times10^{23}$Wcm$^{-2}$. For $I>8\times10^{23}$Wcm$^{-2}$ the gas-jet configuration continues to behave as expected, with increasing intensity leading to increased pair production and when $\eta\sim1$ a large fraction of the pairs generated go on to produce additional pairs, the reaction runs away and a cascade of antimatter production ensues. This is in good agreement with the results of Nerush *et al* [@Nerush_11]. In contrast pair production in the laser-solid case peaks when $I=8\times10^{23}$Wcm$^{-2}$ and then decreases for further increases in laser intensity.
![\[n\_gamma\_pairs\] (Color online) (a) Number of positrons generated in the interaction of a laser pulse of intensity $I$ with solid and gas targets. (b) The rate of positron production in each of these cases.](Ridgers_fig3_2.jpg)

The difference between the laser-solid and laser-gas configurations is more marked when considering the rate of pair production, shown in Figure \[n\_gamma\_pairs\](b). The rate is substantially lower for the solid than the gas target at all intensities. Several plasma effects have been proposed as being responsible for this reduction, namely: relativistic hole boring, the skin effect & relativistic transparency [@Zhidkov_02; @Ridgers_12]. Qualitatively the reduction when $I<8\times10^{23}$Wcm$^{-2}$ is due to hole-boring & the skin effect. When the pulse strikes the solid surface its radiation pressure accelerates the surface to speed $v_{HB}=\beta_{HB}c$ in a process known as hole-boring. The laser is reflected in the rest frame of the surface, in which its intensity is reduced by a factor of $(1-\beta_{HB})/(1+\beta_{HB})$, where $\beta_{HB}=\sqrt{\Xi}/(1+\sqrt{\Xi})$ and $\Xi=I/\rho{}c^3$ is the pistoning parameter for a laser of intensity $I$ and a target of density $\rho$ [@Robinson_09]. From this formula one can see that in the 12.5PW interaction $v_{HB}\approx0.2c$ and the intensity in the rest frame of the surface is 0.7 times that in the lab frame. The electric field of the laser is evanescent inside the overdense solid and is reduced to $E^{sol}_{HB}=2(n_c/n_{eHB})^{1/2}E_{HB}^{max}$ (the skin effect). $n_c=\gamma{}m_e\epsilon_0\omega_l^2/e^2$ is the relativistically corrected critical density for the plasma ($\gamma$ is the Lorentz factor to which the electrons are accelerated by the laser pulse); $E_{HB}^{max}$ is the peak laser electric field and $n_{eHB}$ the electron number density, both in the hole-boring frame. The reduction in the rate of pair production is consistent with the reduction in the field in the solid target to $E^{sol}_{HB}$ [@Ridgers_12].
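The hole-boring numbers quoted for the 12.5PW case follow directly from the formulas above. A quick numerical check (with constants lightly rounded):

```python
# Numerical check of the hole-boring estimates for the 12.5PW case:
# Xi = I/(rho c^3), beta_HB = sqrt(Xi)/(1 + sqrt(Xi)), and the intensity
# reduction factor (1 - beta_HB)/(1 + beta_HB) in the hole-boring frame.
import math

c = 2.998e8       # m/s
I = 4e23 * 1e4    # W/m^2  (4e23 W/cm^2, the 12.5PW on-target intensity)
rho = 2700.0      # kg/m^3, solid aluminum

Xi = I / (rho * c**3)
beta_HB = math.sqrt(Xi) / (1.0 + math.sqrt(Xi))
reduction = (1.0 - beta_HB) / (1.0 + beta_HB)
print(f"beta_HB ~ {beta_HB:.2f}, frame intensity factor ~ {reduction:.2f}")
# beta_HB ~ 0.19 (i.e. v_HB ~ 0.2c) and factor ~ 0.68 (~0.7), as in the text
```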
The Lorentz factor reached by the electrons in the solid is $\propto\sqrt{I}$; therefore, if the laser intensity is increased, eventually $n_c>n_e$ and the solid becomes underdense and therefore transparent [@Kaw_70]. This occurs when $I>8\times10^{23}$Wcm$^{-2}$. In this case the electrons are pushed forwards at $c$, the situation is similar to an underdense plasma illuminated by a single laser pulse and emission is drastically curtailed. Therefore gamma-ray and pair production are maximized in laser-solid interactions when the solid is marginally overdense. When this is the case the ratio $n_e/n_c$ is minimized and this overcomes the effect of the increased reduction of $E_{HB}$ due to the increase in $v_{HB}$ (resulting from the decrease in $\rho$ and corresponding increase in $\Xi$). This was shown with simple quantitative estimates in [@Ridgers_12]. However some aspects of the model used there are inconsistent with the work in [@Chen_11] and further work is required.
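An order-of-magnitude estimate of this transparency threshold can be made for fully ionized aluminum. The classical critical density at $1\mu$m ($\sim1.1\times10^{27}$m$^{-3}$) and the scaling $\gamma\sim a_0\approx0.85\sqrt{I_{18}}$ are our assumed inputs, not values taken from the text:

```python
# Rough relativistic-transparency estimate for fully ionized aluminum:
# the target goes underdense when gamma * n_c(classical) exceeds n_e.
# The gamma ~ a0 scaling is an assumption for this sketch.
m_u = 1.66e-27                # kg, atomic mass unit
n_i = 2700.0 / (27 * m_u)     # ion density of aluminum (A = 27)
n_e = 13 * n_i                # electron density, Z = 13, fully ionized
n_c = 1.1e27                  # m^-3, classical critical density at 1 micron

gamma_needed = n_e / n_c
# Invert a0 ~ 0.85 sqrt(I_18) for 1 micron light: I ~ (gamma/0.85)^2 * 1e18
I_transparent = (gamma_needed / 0.85) ** 2 * 1e18  # W/cm^2
print(f"gamma ~ {gamma_needed:.0f}, I ~ {I_transparent:.1e} W/cm^2")
```

The resulting threshold of a few times $10^{23}$Wcm$^{-2}$ is close to the $8\times10^{23}$Wcm$^{-2}$ turnover seen in the simulations.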
The trend of decreasing levels of pair production with increasing intensity does not continue. Prolific pair production was seen in the 320PW laser-solid interaction, despite the fact that at the intensity reached in this interaction ($10^{25}$Wcm$^{-2}$) the solid target is relativistically underdense. This could be due to a new QED-mediated laser absorption mechanism which operates in underdense plasmas proposed by Brady *et al* [@Brady_12]. However care should be taken when comparing to the results in this paper, which discussed the radiation dominated regime. In the QED dominated regime reached in the 320PW laser-solid interaction pair production is expected to lead to the generation of critical density pair plasmas over the duration of the laser pulse [@Nerush_11] which will clearly influence the plasma physics and as a result the macroscopic fields and therefore the QED rates. For example, it has recently been shown that in this regime a dense pair-plasma can form in front of the solid surface [@Kirk_12]. This pair plasma can absorb the laser. In this case we expect the QED rates in the solid to be reduced. Much more work is required to fully understand the influence of plasma physics processes on the emission rates in laser-solid interactions in the QED dominated regime.
The Effect of QED Emission on the Plasma Physics
------------------------------------------------
The substantial amount of energy converted to gamma-ray photons and pairs profoundly alters the laser energy absorbed by the electrons and ions in a QED-plasma and so the plasma processes which are driven by this energy. First we consider the interaction of the 12.5PW laser pulse with the solid aluminum target. In this case 10% of the laser energy is converted to gamma-ray photons. The effect that this has on the energy spectra of the electrons and ions is shown in Figures \[energy\_specs\](a) & \[energy\_specs\](b). The average energy of the fast electrons drops from 150MeV to 140MeV. It is clear that the most energetic electrons are most affected as they emit photons most strongly. Gamma-ray emission and the resulting radiation reaction causes a substantial difference in the ion spectrum, which develops two peaks. Preliminary qualitative discussions of the modification of some plasma processes caused by this change in the energy spectra are given in Ref. [@Chen_11].
Next consider the $I=1\times10^{25}$Wcm$^{-2}$ laser-solid interaction. This interaction is in the QED dominated regime and a significant fraction of the laser energy is converted to both gamma-ray photons and pairs (40% & 10% respectively). Figure \[energy\_specs\](c) shows the effect QED processes have on the electron energy. The average fast electron energy is reduced from 4.5GeV to 2.0GeV. As before, the high energy tail is preferentially damped. However, in this case pair production is a significant source of electrons, which are generated at moderate energies and significantly enhance the spectrum here. Figure \[energy\_specs\](d) shows that QED effects do not significantly alter the ion spectrum. In this underdense case the ions gain energy by coupling to electrostatic fields generated by the electrons as they are pushed forwards. As we have seen, emission is not strong for electrons undergoing such motion. We expect the dramatic modification of the electron and ion spectra caused by QED emission in the QED dominated regime to strongly affect the plasma processes; however, very little work has been done to elucidate this.
Conclusions
===========
Next generation high-power lasers, operating at intensities $>10^{23}$Wcm$^{-2}$, will generate a qualitatively new plasma state on interacting with matter. These QED-plasmas are defined by feedback between QED emission processes and classical plasma physics effects. We have described how the important QED processes, synchrotron-like gamma-ray photon emission & multiphoton Breit-Wheeler pair production, can be included in a PIC code and used the resulting QED-PIC code to simulate this feedback in $>$10PW laser-solid interactions self-consistently. We have shown that the rates of reaction in the simulations can only be explained when plasma effects are included and that QED emission modifies these plasma effects by strongly altering the electron and ion energy spectra. This alteration of the energy budget of laser-solid interactions may be important for proposed applications of 10PW lasers, such as ion acceleration or harmonic generation. Simulation of a 12.5PW laser pulse striking a solid aluminum target demonstrates the conversion of a significant fraction (10%) of the laser energy to gamma-ray photons. In addition a pure electron-positron plasma is generated in the simulation with density seven orders of magnitude higher than currently achievable in laser-matter interactions. Simulations of a 320PW laser-solid aluminum target interaction demonstrate that in this case we expect not only efficient (40%) conversion of laser energy to gamma-rays, but also (10%) to pairs. This prolific production of gamma-ray photons and pairs may find application as efficient and bright sources of these particles.
Acknowledgements {#acknowledgements .unnumbered}
================
We acknowledge the support of the Centre for Scientific Computing, University of Warwick. This work was funded by EPSRC grant numbers EP/GO55165/1 and EP/GO5495/1.
---
abstract: 'In particularly noisy environments, transient loud intrusions can completely overpower parts of the speech signal, leading to an inevitable loss of information. Recent algorithms for noise suppression often yield impressive results but tend to struggle when the signal-to-noise ratio (SNR) of the mixture is low or when parts of the signal are missing. To address these issues, we introduce an end-to-end framework for the retrieval of missing or severely distorted parts of the time-frequency representation of speech from the short-term context, i.e., speech inpainting. The framework is based on a convolutional U-Net trained via deep feature losses, obtained through speechVGG, a deep speech feature extractor pre-trained on a word classification task. Our evaluation results demonstrate that the proposed framework is effective at recovering large portions of missing or distorted speech. Specifically, it yields notable improvements in the STOI and PESQ objective metrics, as assessed using the LibriSpeech dataset.'
address: |
$^{1}$Logitech Europe S.A., 1015, Lausanne, Switzerland\
$^{2}$Imperial College London (ICL), SW7 2AZ, London, UK\
$^{3}$Ecole Polytechnique Federale de Lausanne (EPFL), 1015, Lausanne, Switzerland
bibliography:
- 'references\_fixed.bib'
title: 'Deep speech inpainting of time-frequency masks'
---
Speech processing, speech retrieval, speech enhancement, deep learning, deep feature losses
Introduction {#sec:introduction}
============
Tremendous developments in the field of speech enhancement (SE) in the past years have been mainly attributed to deep learning [@Wang2017SupervisedOverview]. In particular, recent approaches outperform traditional statistical SE systems, especially for high-variance noises [@Kumar2016SpeechNetworks]. SE algorithms based on deep neural networks typically belong to one of the following groups: (i) causal systems that maintain the conventional approach of speech and noise estimation using regression methods [@xu2015regression; @Mirsamadi2016; @valin2018hybrid] and (ii) end-to-end systems, including generative approaches, that are usually non-causal and require longer temporal integration windows [@Pascual2019TowardsNetworks]. While the latter tend to have higher latency, they are capable of suppressing highly non-stationary noise maskers, including brief, loud intrusions. Specifically, an application of such a non-causal approach to the source separation problem yielded almost 20 dB of signal-to-distortion ratio improvement [@Liu2019].
The task of generative, context-based information retrieval has traditionally been investigated in the field of computer vision, where it is referred to as image completion or inpainting [@Iizuka2017GloballyCompletion; @Yu2018GenerativeAttention; @Liu2018ImageConvolutions]. A similar problem was recently considered in the context of audio processing and termed audio inpainting [@adler2011audio; @Perraudin2016InpaintingGraphs; @Marafioti2018AInpainting]. While recent studies reported promising results in the retrieval of missing audio information, they typically consider relatively simple signals, unlike natural speech. This work extends the existing audio inpainting framework to speech processing and proposes how it could be incorporated into general-purpose SE systems.
In this paper, we introduce the task of speech inpainting, focused on the context-based recovery of missing or degraded information in time-frequency representations of natural speech. Specifically, we considered a broad range of distortions, including (i) time gaps, similar to packet loss [@Lee2016PacketTransmission], but with non-uniformly distributed holes up to 400 ms in duration, (ii) frequency gaps, similar to the bandwidth extension problem [@Iser2008BandwidthSignals], but with missing frequency bins of cumulatively up to 3200 Hz in bandwidth, and (iii) irregular, random gaps disrupting up to 40% of the overall time-frequency representation of speech. To the best of our knowledge, this problem, at such a scale, has not been investigated to date.
To tackle the problem of speech inpainting, we propose an end-to-end framework based on the U-Net architecture [@Ronneberger2015U-Net:Segmentation] trained via deep feature losses [@Liao2019; @Germain2019; @Sahai2019SpectrogramSeparation]. According to recent studies, the use of deep feature losses for training SE systems can improve their overall performance, depending on the deep feature extraction approach [@Liao2019; @Germain2019]. We hypothesize that a specialized deep feature extractor, tailored specifically for speech processing, will lead to the best performance of our system. Thus, we introduce speechVGG, a deep speech feature extractor based on the classic VGG-16 architecture [@Simonyan2015VERYRECOGNITION], trained on a word classification task. We explore different configurations of the extractor for the training of the framework and compare it with conventional training based on the $L_1$ loss and other control conditions.
We introduce two configurations of the framework, for informed or blind inpainting, depending on the availability of masks indicating the missing or degraded parts of the speech signal. In the blind case, we evaluated the system using different types of masking, such as replacing the masked parts of the time-frequency representation of speech with high-amplitude noise, or adding such noise to them.
The paper is structured as follows: section \[sec:methods\] introduces our framework, section \[sec:Results\] describes the evaluation of the proposed system, and section \[sec:conclusion\] includes critical discussion, proposes the direction of future work and concludes the paper.
Materials & Methods {#sec:methods}
===================
![Speech inpainting framework. The proposed framework is composed of the U-Net for speech inpainting (left, see \[sec:UNet\] for details) and VGG-like deep feature extractor (right, see \[sec:sVGG\] for details). Deep feature losses $L_{DF}$ for training the U-Net are obtained by computing the $L_1$ distance between representations of the recovered and actual STFT magnitudes at pooling layers, at the end of each block in the feature extractor (see \[sec:DFL\] for details).[]{data-label="fig:architecture"}](SystemArchitecture_minimal.pdf){width="\linewidth"}
Data & preprocessing {#sec:experiment}
--------------------
We performed all of our experiments using the LibriSpeech corpus, an open dataset of read speech sampled at 16 kHz [@Panayotov2015Librispeech:Books]. We used the *train-clean-100* subset for training, the *test-clean* subset to validate training performance and *dev-clean* as held-out data to evaluate all the models explored in this work. All available speech recordings were divided into 1024 ms long segments. The magnitude of each segment was obtained by taking absolute values of the complex short-time Fourier transform (STFT, 256-sample window with 128-sample overlap, 128 frequency bins), resulting in a 128 x 128 matrix. The natural logarithm was then applied, followed by global mean and variance normalization of each frequency channel, with the mean and variance both obtained from the training data.
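The preprocessing above can be sketched in a few lines of NumPy. This is an illustrative re-implementation (not the authors' code): function names are our own, and the segment is padded so that a 1024 ms waveform yields exactly a 128 x 128 log-magnitude matrix.

```python
import numpy as np

FS = 16000
WIN, HOP, N_BINS = 256, 128, 128  # keep the first 128 frequency bins

def log_magnitude_stft(segment):
    """Return a (freq, time) log-magnitude STFT of a 1-D waveform."""
    window = np.hanning(WIN)
    padded = np.pad(segment, (0, WIN - HOP))  # so 16384 samples -> 128 frames
    n_frames = 1 + (len(padded) - WIN) // HOP
    frames = np.stack([padded[i * HOP:i * HOP + WIN] * window
                       for i in range(n_frames)])
    spec = np.abs(np.fft.rfft(frames, axis=1))[:, :N_BINS].T
    return np.log(spec + 1e-8)  # small floor avoids log(0)

def normalize(spec, mean, std):
    """Global per-frequency-channel mean/variance normalization."""
    return (spec - mean[:, None]) / std[:, None]

segment = np.random.randn(int(1.024 * FS))   # one 1024 ms segment
S = log_magnitude_stft(segment)
mean, std = S.mean(axis=1), S.std(axis=1)    # from the training set in practice
S_norm = normalize(S, mean, std)
```

In practice the normalization statistics would be accumulated over the whole training set rather than a single segment, as stated above.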
U-Net for speech inpainting {#sec:UNet}
---------------------------
The main part of the proposed framework is a deep neural network with the U-Net architecture, originally applied to the problem of biomedical image segmentation [@Ronneberger2015U-Net:Segmentation]. Here, the U-Net was trained to recover missing or degraded parts of the log-magnitude STFT of short segments of speech, obtained as specified in \[sec:experiment\]. The complete framework is illustrated in Fig. \[fig:architecture\].
The U-Net for speech inpainting is composed of six encoding (blue, Fig. \[fig:architecture\]) and seven decoding blocks (green, Fig. \[fig:architecture\]). Each encoding block consists of a 2D convolution layer with ReLU activation. Each decoding block first upsamples its input, using a kernel of size 2 with stride 2, effectively doubling its size; the upsampled input is subsequently concatenated with the output from the corresponding encoding block. The concatenated set of features is then processed through a 2D convolution with leaky ReLU activation ($\alpha=0.2$). Batch normalization [@Ioffe2015BatchShift] is applied after each 2D convolution layer across the network. The final decoding block is a 1D convolution with a linear activation function outputting the recovered version of the distorted input. Parameters of the convolution layers in subsequent blocks of the U-Net are listed in table \[tab:conv-params\].
\[tab:conv-params\]
-- ------------------- --------------------
    Encoding - *conv*   Decoding - *dconv*
            -                  (1, 1)
         (7, 16)               (3, 1)
         (5, 32)              (3, 16)
         (5, 64)              (3, 32)
        (3, 128)              (3, 64)
        (3, 128)             (3, 128)
        (3, 128)             (3, 128)
-- ------------------- --------------------
: Model parameters. The table contains parameters used in convolution layers of the U-Net network for speech inpainting. Block 0 corresponds to the final 1D convolution (*conv1d*, Fig. \[fig:architecture\]).
We considered two versions of the U-Net, performing informed and blind speech inpainting, respectively. In the informed case, the mask is available at the U-Net input and all 2D convolutions in the network are replaced with partial convolutions (*PC*) [@Liu2018ImageConvolutions], which process only the valid (not masked) parts of their input and ignore the rest. In the blind case, we wanted to explore whether our framework can both identify and restore missing or degraded parts of the input without access to the mask; here, all convolution layers in the U-Net perform standard, full 2D convolutions (*FC*). Otherwise, the two configurations of the U-Net are identical in terms of their architecture, training and evaluation.
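The key mechanism of the informed variant, the partial convolution of Liu et al., can be illustrated with a minimal NumPy sketch: only unmasked entries contribute, the response is renormalized by the fraction of valid pixels under the kernel, and the validity mask shrinks after each layer. This is a single-channel illustration of the idea, not the framework's actual layer.

```python
import numpy as np

def partial_conv2d(x, mask, kernel):
    """One partial-convolution step. x, mask: (H, W); mask is 1 where valid."""
    k = kernel.shape[0]
    pad = k // 2
    xp = np.pad(x * mask, pad)   # masked entries contribute zero
    mp = np.pad(mask, pad)
    H, W = x.shape
    out = np.zeros_like(x, dtype=float)
    new_mask = np.zeros_like(mask, dtype=float)
    for i in range(H):
        for j in range(W):
            win = xp[i:i + k, j:j + k]
            mwin = mp[i:i + k, j:j + k]
            valid = mwin.sum()
            if valid > 0:
                # renormalize by the number of valid pixels under the kernel
                out[i, j] = (win * kernel).sum() * (k * k) / valid
                new_mask[i, j] = 1.0  # at least one valid pixel seen
    return out, new_mask
```

With an averaging kernel on a constant input, every output pixel that sees at least one valid neighbor recovers the constant exactly, which is the renormalization property the informed U-Net relies on.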
As phase information is discarded in our framework, we apply the local weighted sums algorithm [@LeRoux2010DAFx09; @LeRoux2010ASJ09] to reconstruct speech waveforms directly from the recovered STFT magnitudes. Several attempts to incorporate the STFT phase as an input feature for the framework were unsuccessful.
SpeechVGG for deep speech feature extraction {#sec:sVGG}
---------------------------------------------
To train the U-Net for speech inpainting via deep feature losses [@Liao2019; @Germain2019; @Sahai2019SpectrogramSeparation], instead of a per-pixel $L_1$ loss, we employed a deep feature extractor. Insights from computer vision suggest that outputs from pooling layers in subsequent blocks of convolutional neural networks represent different features of the input image [@Yosinski2015UnderstandingVisualization]. We hypothesized that, analogously, a similar architecture could be applied to extract high-dimensional representations of distinct speech-specific features, such as harmonics, formants or phonemes.
Maintaining the same architecture, consisting of five main blocks with max-pooling layers (yellow, Fig. \[fig:architecture\]), we adapted the classic VGG-16 network [@Simonyan2015VERYRECOGNITION] to become a deep speech feature extractor. The network was re-trained to classify words from the LibriSpeech corpus, instead of images from the ImageNet dataset; we refer to it as speechVGG (*sVGG*). In particular, we extracted speech segments corresponding to the 1000 most frequent words that were at least 4 letters long from the training data (*train-clean-100*). The speech segments were preprocessed in the same way as specified in section \[sec:experiment\] to obtain their log-magnitude STFTs. We then applied SpecAugment [@Park2019SpecAugment:Recognition] to improve the performance of the classifier, by randomly replacing blocks of time and frequency bins in the training data with mean values. To accommodate the varying length of different words, the spectrograms were randomly padded with zeros to match the U-Net's output shape of 128 x 128.
Deep feature losses in training {#sec:DFL}
-------------------------------
The *sVGG* was employed to train our U-Net for speech inpainting with deep feature losses (Fig. \[fig:architecture\]). For each training batch of time-frequency masked speech segments, the U-Net was applied to recover the missing or degraded information. Here, instead of computing a per-pixel $L_{1}$ loss between the original ($Y$) and the reconstructed ($\hat{Y}$) STFTs to train the network, both were processed through a deep feature extractor $E$. Then, the $L_1$ distances between activations from a set $N$ of the extractor's pooling layers, at the end of each block, were accumulated to obtain the deep feature loss $L_{DF}$: $$L_{DF} = \sum_{n \in N} L_{1}(E_{n}(Y), E_{n}(\hat{Y}))$$ To investigate the influence of the depth of the *sVGG* on the U-Net performance, we compared several configurations of blocks used to compute $L_{DF}$ in training. Specifically, we set $N$ to be only the first three blocks, presumably corresponding to lower-level speech features such as harmonics (*low\_sVGG*), only the last two, possibly related to higher-level features like phonemes (*high\_sVGG*), or all five available blocks (*full\_sVGG*). To determine whether the *sVGG* is capable of extracting speech-specific features, we compared it to the original VGG-16 network pre-trained on the ImageNet data (*imgVGG*). U-Nets trained with deep feature losses were each compared to their counterpart optimized using the per-pixel $L_1$ loss between the original and reconstructed speech representations (*noVGG*).
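The loss computation can be sketched generically. In this illustration the "extractor" is a stand-in (repeated average pooling), not the trained speechVGG; only the accumulation of $L_1$ distances over a set of block activations follows the definition above.

```python
import numpy as np

def avg_pool2(x):
    """2x2 average pooling, the stand-in for one extractor block."""
    h, w = x.shape[0] // 2 * 2, x.shape[1] // 2 * 2
    x = x[:h, :w]
    return 0.25 * (x[0::2, 0::2] + x[1::2, 0::2] + x[0::2, 1::2] + x[1::2, 1::2])

def toy_extractor(x, n_blocks=3):
    """Return activations after each of n_blocks pooling stages."""
    feats = []
    for _ in range(n_blocks):
        x = avg_pool2(x)
        feats.append(x)
    return feats

def deep_feature_loss(extract, Y, Y_hat):
    """L_DF = sum over selected blocks of L1(E_n(Y), E_n(Y_hat))."""
    return sum(np.mean(np.abs(a - b))
               for a, b in zip(extract(Y), extract(Y_hat)))
```

Swapping `toy_extractor` for the pooling-layer outputs of a trained network, restricted to the first three, last two, or all five blocks, reproduces the *low\_*, *high\_* and *full\_sVGG* configurations described above.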
Results {#sec:Results}
=======
The whole framework was implemented in Python using TensorFlow. All of the explored models were trained using ADAM optimizer [@Kingma2014Adam:Optimization]. Data were fed to each model in mini-batches of size 32. Each considered configuration of the U-Net for speech inpainting was trained for 30 full epochs using either per-pixel ($L_1$) or deep feature losses ($L_{DF}$). The *sVGG* classifier for deep feature extraction was trained using a cross-entropy loss for 50 full epochs.
![Sample speech inpainting results. Top: Masked speech spectrograms. Masked values are either missing (replaced with zeros, left) or replaced (middle), as well as, mixed with high-amplitude uniform noise (right). Middle: Results of the blind speech inpainting with U-Net trained using speechVGG-based deep feature losses (*full\_sVGG*). Bottom: Original speech spectrograms.[]{data-label="fig:reconstructions"}](Reconstructions_img.pdf){width="\linewidth"}
Informed speech inpainting {#sec:informed}
--------------------------
The first set of experiments was performed to assess the performance of our framework in the informed speech inpainting task, when the exact mask is available at the input (see \[sec:UNet\] for details). Here, we replaced the masked parts of the input STFT representation with zeros. We considered three shapes of masks, covering: time segments, time segments & frequency bins, or random parts of the spectrogram of elliptical shapes, resembling brush strokes (Fig. \[fig:reconstructions\], top: right, left and middle, respectively). Each mask was designed to cover a specific portion of information in the time and/or frequency domain, distributed across one to four intrusions, none shorter than 3 bins (24 ms or 187.5 Hz in bandwidth).
We considered different configurations of the framework trained with deep feature losses, obtained using either the *sVGG* (*low\_-, high\_-, full\_- sVGG*) or the VGG-16 pre-trained on the ImageNet data (*imgVGG*). We compared them with the case where the U-Net was trained using the per-pixel $L_1$ loss (*noVGG*). In each configuration, the U-Net was trained using input STFT magnitudes with time & frequency or random masks, chosen randomly with equal probability for each training sample. The coverage of the mask was each time drawn from the normal distribution $N(\mu=29.4\%, \sigma=9.9\%)$. We evaluated the system performance on the held-out *dev-clean* data, considering mask sizes ranging from 10% up to 40% of missing information. For time and time & frequency masks, the proportion corresponds to the number of time and/or frequency bins removed; for the random case, it corresponds to the overall area of the input STFT covered by the mask.
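Mask generation of the kind described above can be sketched as follows. These helpers are our own illustration (names and the exact sampling of intrusion widths are assumptions); they only reproduce the stated constraints: whole time columns or elliptical "brush strokes" on a 128 x 128 grid, with a target coverage fraction and no intrusion thinner than 3 bins.

```python
import numpy as np

SIZE = 128
rng = np.random.default_rng(0)

def time_mask(coverage, n_intrusions=2):
    """Mask whole time columns; 1 = masked. Total width ~= coverage * SIZE."""
    m = np.zeros((SIZE, SIZE))
    total = max(int(coverage * SIZE), 3 * n_intrusions)  # >= 3 bins each
    width = total // n_intrusions
    for _ in range(n_intrusions):
        start = rng.integers(0, SIZE - width)
        m[:, start:start + width] = 1
    return m

def random_mask(coverage):
    """Union of elliptical blobs until the target area fraction is covered."""
    m = np.zeros((SIZE, SIZE))
    yy, xx = np.mgrid[0:SIZE, 0:SIZE]
    while m.mean() < coverage:
        cy, cx = rng.integers(0, SIZE, 2)
        ry, rx = rng.integers(3, 20, 2)  # semi-axes of at least 3 bins
        m[((yy - cy) / ry) ** 2 + ((xx - cx) / rx) ** 2 <= 1] = 1
    return m
```

A time & frequency mask is simply the union of a `time_mask` and its transpose-style counterpart over frequency rows.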
We assessed the performance of different configurations of the framework using the short-term objective intelligibility (STOI) [@Taal2010ASpeech] and perceptual evaluation of speech quality (PESQ) [@PESQ1998] measures, computed between the original and recovered speech segments. For each setup, the results were compared to the baseline of the masked (unprocessed) case. We used an additional control condition, where the missing parts of the input STFT were filled with speech-shaped noise derived from the original voice sample.
Results of the evaluation are presented in table \[tab:partial\_conv\]. Each configuration of the proposed framework improved both STOI and PESQ for all the considered shapes and sizes of intrusions (by up to 0.2 and 1.35, respectively), as compared to both the masked and control cases. The use of deep feature losses in training yielded better results than training the U-Net with the $L_1$ loss (*noVGG*). However, only speechVGG (*sVGG*), and not the VGG-16 pre-trained on the ImageNet data (*imgVGG*), led to notable improvements in STOI & PESQ when used as the feature extractor in training. In particular, speechVGG utilizing all five blocks (*full\_sVGG*) to obtain $L_{DF}$ provided the best overall performance (table \[tab:partial\_conv\], in bold).
\[tab:partial\_conv\]
Blind speech inpainting {#sec:blind}
-----------------------
Blind speech inpainting refers to the case when the mask is not available at the input and reflects a more generalized application of our framework. Since partial convolutions are not used, the U-Net needs to both identify and recover the masked parts of the input speech. Therefore, we hypothesized that the type of distortion may influence framework performance.
To address this issue, we considered three different cases by setting the masked values in the input STFT magnitude to either zero (*-gaps*), as in the informed case, white noise (*-noise*), or a mixture of the original information and the noise (*-additive*). The noise was added directly to the input frequency bins and its amplitude was set to produce transient mixtures at very low SNR, below -10 dB (i.e., a very disruptive noise level).
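Scaling the intrusion to a target transient SNR can be done as in the sketch below. This is our own helper (the authors' exact procedure is not specified beyond the -10 dB figure): the noise is scaled so that the ratio of signal to added-noise power inside the masked region equals the requested SNR.

```python
import numpy as np

def mix_at_snr(speech, noise, mask, snr_db):
    """Add noise inside mask (1 = masked) at the requested transient SNR."""
    s_pow = np.mean(speech[mask == 1] ** 2)          # signal power in the mask
    n_pow = np.mean(noise[mask == 1] ** 2)           # raw noise power there
    scale = np.sqrt(s_pow / (n_pow * 10 ** (snr_db / 10.0)))
    return speech + mask * scale * noise             # untouched outside mask
```

By construction the region outside the mask is left intact, matching the *-additive* condition where only the intruded bins are corrupted.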
In this experiment, we selected the best configuration of the framework from the previous experiment, namely *full\_sVGG*, using all five blocks to compute $L_{DF}$. The network setup was the same, except that all partial convolutions (*PC*) were replaced with regular, full 2D convolutions (*FC*). The framework was re-trained and evaluated separately for each type of intrusion (gaps, noise, additive noise). The evaluation procedure was kept the same as in the case of informed speech inpainting (see \[sec:informed\] for details).
Averaged evaluation results are presented in table \[tab:full-conv\] for different configurations of the framework trained to perform both informed and blind speech inpainting. All of the considered framework configurations successfully recovered missing or degraded parts of the input speech, as confirmed by increased STOI and PESQ scores relative to the case when the masked information was left missing (*Masked-gaps*). While the improvements were notable, the framework for informed inpainting (*PC-gaps*) outperformed its blind counterparts when parts of the input STFT were set to zeros or noise (*FC -gaps, -noise*, respectively). Interestingly, when the masked values were mixed with, rather than replaced by, the noise, the framework in the blind setup (*FC-additive*) yielded the best overall results for larger intrusions of all the considered shapes. These results suggest that the framework for blind speech inpainting takes advantage of the access to the original information underlying the noisy disruptions, even when the transient SNR is very low.
\[tab:full-conv\]
Discussion {#sec:conclusion}
==========
In this work, we introduced a novel framework for the retrieval of missing or degraded parts of the time-frequency representation of speech, based on the U-Net architecture trained via deep speech feature losses. In particular, the proposed framework allowed us to recover missing or degraded information, leading to substantial improvements in STOI and PESQ scores, for intrusions as large as 400 ms or 3.2 kHz in bandwidth. We showed that the use of a deep speech feature extractor to train the framework improves the system's performance, as compared to the typical per-pixel $L_1$ loss. We confirmed our initial hypothesis that *sVGG*, but not *imgVGG*, applied to obtain the deep feature losses leads to the best outcomes.
Our results suggest that the proposed framework is capable of both identifying degraded information in speech STFTs and recovering it. In particular, the framework for blind speech inpainting yielded substantial improvements in both STOI and PESQ objective evaluations when the information was missing or when it was distorted by additive noise. It is important to note that in our experiments employing noisy intrusions, the noise was applied directly to the input speech STFT. To address this limitation, a broader range of ecologically valid noises should be considered in future experiments applying the framework to joint speech denoising and inpainting.
Although our experiments considering additive noise were simplified, our findings provide a promising outlook for applying the proposed blind speech inpainting as an extension of the current SE framework. We believe that the integration of non-causal, end-to-end approaches, such as this one, with the existing causal methods can bridge the current methodological gap and lead to the next generation of general-purpose speech enhancement systems.
---
abstract: 'We report fabrication of graphene devices in a Corbino geometry consisting of concentric circular electrodes with no physical edge connecting the inner and outer electrodes. High device mobility is realized using boron nitride encapsulation together with a dual-graphite gate structure. Bulk conductance measurement in the quantum Hall effect (QHE) regime outperforms previously reported Hall bar measurements, with improved resolution observed for both the integer and fractional QHE states. We identify apparent phase transitions in the fractional sequence in both the lowest and first excited Landau levels (LLs) and observed features consistent with electron solid phases in higher LLs.'
author:
- 'Y. Zeng$^{1}$'
- 'J.I.A. Li$^{1}$,$^{*}$'
- 'S.A. Dietrich$^{1}$'
- 'O.M. Ghosh$^{1}$'
- 'K. Watanabe$^{2}$'
- 'T. Taniguchi$^{2}$'
- 'J. Hone$^{3}$'
- 'C.R. Dean$^{1}$'
title: 'High quality magnetotransport in graphene using the edge-free Corbino geometry'
---
The quantum Hall effect (QHE), characterized by vanishing longitudinal resistance simultaneous with quantized transverse Hall resistance [@Klitzing1980; @Cui_FQHE], represents one of the most robust examples of 2D topological phenomena, in which an insulating bulk state with non-trivial topological order is separated from the surrounding vacuum by conducting edge modes [@TKNN1982]. The edge modes associated with the QHE are chiral and therefore dissipationless at all length scales. Moreover, the transverse Hall resistance, quantized in units of $h/e^{2}$, provides a direct measure of the topological order and is insensitive to details of the sample geometry. In samples with very low disorder, new correlated phases, resulting from strong electron interactions, can be observed outside of the IQHE sequence. These include the fractional quantum Hall effect (FQHE) liquid states [@Cui_FQHE; @Laughlin1983], appearing at fractional Landau filling and with fractionally valued Hall resistance plateaus, and interaction-driven electron solid phases, appearing at fractional filling but with re-entrant integer-valued Hall quantization [@Eisenstein2002bubble; @Xia2004bubble; @Deng2012].
Monolayer graphene has emerged as a versatile platform to study the QHE, showing many of the same phenomena that for a long time were limited to very high mobility GaAs heterostructures, while also introducing new opportunities for manipulating these phases owing to the unique combination of a non-trivial $\pi$ Berry phase, a four-fold degeneracy arising from the spin and valley iso-spin degrees of freedom, and the ability to fabricate devices in a wide variety of architectures [@Novoselov2005; @Yuanbo2005; @Dean2011; @Young2012; @Feldman2012; @Amet2015; @Zibrov2017; @Hunt.17; @Li.17b]. Recent improvements in device fabrication designed to eliminate impurity scattering in the sample bulk, such as the use of boron nitride as an improved substrate dielectric [@Dean.10] and fully encapsulated geometries [@Lei.13; @Zibrov2017; @Li.17b], have enabled the observation of some of the most fragile ground states in the QHE regime [@Klitzing1980; @Cui_FQHE; @Du:2009; @Dean2011], including the even-denominator fractional quantum Hall effect (FQHE) states [@Zibrov2017; @Li.17b; @Ki:2014] as well as various electron solid phases [@Chen2018RIQHE; @Smet2018even]. Despite these advancements, the resolution of transport measurements in conventional Hall bar geometries is often overshadowed by measurements that probe the bulk compressibility [@Feldman2012; @Zibrov2017; @Hunt.17]. This result is puzzling, as it suggests that, contrary to conventional expectation, a well developed bulk gap alone is not a sufficient condition to guarantee a well resolved transport measurement of the corresponding edge modes.
In this work we investigate a less explored aspect of the QHE by studying the bulk properties of graphene heterostructures using a Corbino geometry [@Gervais2015Corbino; @Yan2010corbino; @Zhao2012corbino; @Peters2014corbino; @Geim2017corbino; @Kumar2018corbino]. We demonstrate a novel fabrication method that allows us to realize concentric contacts in a dual-gated geometry. The successful fabrication of high quality graphene Corbino discs allows us to resolve FQHE states over larger ranges of filling fraction and down to lower magnetic fields than previously demonstrated in transport measurements of conventional Hall bar geometries. Using this technique we identify apparent phase transitions in the FQHE sequence, providing new insight into their ground state order in both the lowest and first excited Landau levels (LLs), and demonstrate features consistent with various electron solid phases in higher LLs. Our capability to detect QHE signatures with higher resolution using the Corbino geometry, where the bulk response dominates, compared to Hall bar geometries, where edge transport dominates, suggests that details of the sample edge play a significant role in the Hall bar response. This result has implications for all transport measurements of 2D topological systems, and suggests that our understanding of how to probe edge modes in 2D materials may need to be revisited.
Fig. 1a illustrates the device fabrication process. The heterostructure is assembled using the previously described dry transfer technique [@Lei.13] to ensure clean interfaces between component materials and includes both top and bottom graphite gates to screen remote impurities and maximize channel mobility [@Zibrov2017; @Li.17b]. The challenge of making electrical contact to the inner and outer edges of the Corbino geometry is addressed by using a process we refer to as a flip-stack technique (Fig. 1a) (see supplementary material for more details). After the heterostructure is fully assembled the exposed graphite gate is etched into an annulus using standard lithography, the structure is then flipped over and the second graphite gate is etched so as to be aligned to the first. The entire structure is then covered with an additional BN layer and a final lithography step is used to realize edge contacts [@Lei.13] to the inner and outer rings of the graphene channel as well as the two graphite gates. In the final device structure the aligned graphite gates define the carrier density in the active region of the Corbino geometry whereas the densities in the contact regions are tuned by biasing the Si gate.
Fig. 1b shows resistance versus channel density acquired at $T\sim2$ K and $B=0$ T. The width of the CNP resistance peak provides an estimate of the charge inhomogeneity [@Lei.13], and is found to be $\sim 6\times10^{9}$ cm$^{-2}$ (Fig. 1b). This is an order of magnitude lower than previously reported in graphene devices without graphite gates [@Lei.13], but similar to what we measure in Hall bar devices that include both top and bottom graphite gates (see supplementary material).
Fig. 1c shows the low magnetic field Shubnikov-de Haas (SdH) oscillations for three representative densities. Extraction of the quantum scattering time $\tau_q$ from the corresponding Dingle plots (see supplementary material) shows a relatively density-independent value of $\tau_{q}\sim0.3$ ps, except at very low densities (Fig. 1c inset), where it falls off. This value of $\tau_{q}$ is among the largest reported for graphene, further confirming the low bulk disorder in our sample. An independent estimate of the quantum lifetime can be made by assuming that the SdH oscillations onset when the field-dependent cyclotron gap, $\Delta_c$, exceeds the LL disorder broadening, $\Gamma$, where $\Gamma=\hbar/(2\tau_{q})$ and, for graphene, $\Delta_c\sim 400\sqrt{B}\sqrt{N}$ K (with $B$ in tesla), where $B$ is the magnetic field and $N$ is the LL orbital index. This estimate also gives a mostly density-independent value of $\Gamma\sim15$ K, which agrees well with the saturated value of 12 K obtained from the measured $\tau_q$ (a full density-dependent comparison is shown in the supplementary material).
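As a numerical check, the broadening implied by the quoted $\tau_q$ can be evaluated directly. The sketch below uses only values stated above (with standard CODATA constants) and the quoted empirical cyclotron-gap form; it is a back-of-the-envelope calculation, not the authors' analysis code.

```python
hbar = 1.054571817e-34   # reduced Planck constant, J s (CODATA 2018)
k_B = 1.380649e-23       # Boltzmann constant, J/K (exact, SI 2019)

tau_q = 0.3e-12          # quantum scattering time from the Dingle analysis, s
Gamma_K = hbar / (2 * tau_q) / k_B   # LL disorder broadening in kelvin

def delta_c(B, N):
    """Graphene cyclotron gap in kelvin for field B (tesla), LL index N."""
    return 400.0 * (B * N) ** 0.5

# Gamma_K evaluates to ~12.7 K, consistent with the saturated 12 K value
# quoted in the text, and is far below delta_c at tesla-scale fields.
```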
Fig. 1d shows a Landau fan diagram in the low density and low magnetic field regime. In a Hall bar geometry, two- and four-terminal measurements probe the resistance associated with both the bulk and the dissipationless edge modes. By contrast, a Corbino geometry, where there are no physical edges, probes only the bulk conductance. In this case fully developed, incompressible QHE ground states manifest as zero conductance (infinite resistance). The fan diagram in Fig. 1d shows well developed QHE states (nearly zero conductance) at filling fraction $\nu=\pm 2$ emerging at fields as low as $B \sim 50$ mT. Fig. 2a shows a similar Landau fan diagram but measured over a larger density and field range. Several distinguishing features are evident: the plot shows excellent ambipolar response, with electron and hole features equally resolved; the symmetry-broken IQHE states emerge below $B=1$ T; and the FQHE is resolvable by $B=5$ T (Fig. 2b). This quality of QHE transport has been difficult to achieve in Hall bar geometries, even when the sample disorder is similar, as measured by zero-field transport and SdH characteristics (see supplementary material).
The origin of the improved resolution obtained in our Corbino geometry may be two-fold. First, the Hall bar measurement requires good electrical contact [@Klitzing2011], since the leads should be well equilibrated to the edge modes in order to measure zero longitudinal resistance and accurate Hall plateaus. This is a less stringent requirement in the Corbino geometry, where a QHE ground state appears as an insulating feature in the bulk conductance, even for highly resistive contacts. Second, transport measurements of the edge states may be complicated by details of the potential profile near the graphene boundary [@Cui2016; @Geim2017corbino], edge disorder [@Neto2006] and edge mode reconstruction [@Sabo2017].
The improved performance of the Corbino geometry allows us to resolve the FQHE states in graphene to an unprecedented degree, particularly in the high field/low density limit. In Fig. 2c the bulk conductance, $G_{xx}$, is plotted versus density at $B=36$ T. In both the $N=0$ and $N=1$ LLs, standard composite fermion (CF) sequences are observed [@Jain.89], including both even and odd numerator FQHE states, indicating that all symmetries have been lifted [@Dean.10; @Feldman.13]. Fig. 2d shows an expanded view of the $N=0$ LL between $\nu=0$ and $\nu=1$. Two-flux CF states (centered around $\nu=1/2$) and four-flux CF states (centered around $\nu=1/4$) up to denominator 15 are observed. We note that, based on the depth of the conductance minima, the overall hierarchy appears remarkably electron-hole symmetric, further indicating that all symmetries are lifted within the CF levels (this is confirmed by activation gap measurements, which show a similar hierarchy; see supplementary material). A different symmetry is observed in the $N=1$ LL, suggesting that the spin and valley degeneracy is only partially lifted, and an approximate SU(2) or SU(4) symmetry is preserved for the composite fermion ground states. The persistence of the strongest FQHE states to low magnetic fields allows us to measure how their gaps evolve over a wide range of $B$. Fig. 3a shows a plot of the activation energy gap, $\Delta$, versus $B$, for the $\nu=1/3$ state. A clear kink in the trend is observed at $B \sim 8$ T, below which the gap is best fit by a linear $B$ dependence (blue dashed line) and above which the gap transitions to a $\sqrt{B}$ dependence (blue solid curve). Notably, both the linear and square-root fits extrapolate to $\Delta \sim -10$ K at $B = 0$, similar to the value of disorder broadening estimated from the SdH behavior (Fig. 1c), providing a self-consistency check of the fits.
The transition in the $B$ dependence of the gap resembles similar behavior of the $1/3$ FQHE state in GaAs quantum wells, which was interpreted in the context of CF Landau levels with spin degrees of freedom [@Dethlefsen2006; @Haug2004]. In the CF picture, the effective cyclotron gap that separates spin-degenerate CF LLs results from the Coulomb interaction and is given by [@Haug2004] $\Delta_{CF}^{cyclotron} = \dfrac{{\hbar}e\textit{B}^{\ast}}{m^{\ast}}$, where $B^{*}=B-B_{\nu=1/2}$ is the effective magnetic field for CFs, $m^{\ast} = \alpha m_e \sqrt{B}$ is the CF mass, $m_{e}$ is the free electron mass and $\alpha$ depends on details of the quantum well. Allowing for the spin degree of freedom, the CF LLs can split into spin branches separated by the Zeeman energy $E_{CF}^{Zeeman}= \frac{1}{2}\mu_{B}gB$, where $\mu_{B}$ is the Bohr magneton and $g$ is the Landé $g$-factor. The transition results from a CF LL crossing when the CF Zeeman energy (linear in $B$) exceeds the CF cyclotron energy (square root in $B$), as illustrated in Fig. 3b. This model fits our data in the lowest LL well. If we assume that the linear trend corresponds to a real spin gap, the slope gives an estimate for the $g$-factor of $8.5$. This is approximately 4 times larger than the bare electron $g$-factor ($g=2$), and is indicative of strong exchange interaction and the existence of skyrmion spin textures for composite fermions [@Young2014]. In this picture we imagine that the valley degree of freedom is frozen out [@Feldman.13], such that the square-root region corresponds to the CF cyclotron gap. Fitting the above expression to this region gives a CF mass term of $\alpha=0.054\pm 0.004$. Including the projected disorder broadening of $\sim10$ K, this gives a measure of the intrinsic gap of $\Delta_{1/3} = (8.3 \pm 0.6)\sqrt{B}$ K, or $(0.084 \pm 0.004)$ $e^{2}/\epsilon l_{B}$ in Coulomb energy units, where we use $\epsilon=6.6$ for BN-encapsulated graphene [@Hunt.17].
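The self-consistency of these numbers can be checked in a few lines. The sketch below takes the fitted values $g=8.5$ and $\alpha=0.054$ quoted above and assumes $B^{*}=B/3$ at $\nu=1/3$ at fixed density (standard CF counting, our assumption); the activation gap is set by the smaller of the two energy scales, so the kink should sit at their crossing:

```python
import numpy as np

# SI constants (CODATA)
hbar, e = 1.054571817e-34, 1.602176634e-19
m_e, mu_B, k_B = 9.1093837015e-31, 9.2740100783e-24, 1.380649e-23

g, alpha = 8.5, 0.054  # fitted values quoted in the text

def zeeman_K(B):
    """CF Zeeman gap (1/2) g mu_B B, in kelvin."""
    return 0.5 * g * mu_B * B / k_B

def cyclotron_K(B):
    """CF cyclotron gap hbar e B* / m* in kelvin, assuming B* = B/3 at
    nu = 1/3 and m* = alpha * m_e * sqrt(B) (B in tesla)."""
    return hbar * e * (B / 3.0) / (alpha * m_e * np.sqrt(B)) / k_B

# cyclotron_K(B) reproduces the quoted 8.3*sqrt(B) K coefficient, and the
# Zeeman/cyclotron crossing lands near the observed ~8 T kink.
B_cross = (cyclotron_K(1.0) / zeeman_K(1.0)) ** 2
print(round(cyclotron_K(1.0), 1), round(B_cross, 1))
```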
We note that this result is remarkably close to the theoretical value of 0.1 $e^{2}/\epsilon l_{B}$ calculated by exact diagonalization [@Morf2002] without including any additional corrections [@Balram2015] (see supplementary material for detailed comparison).
![\[fig4\] [**[RIQHE in N = 2 and 3 LL.]{}**]{} Bulk conductance as a function of filling fraction measured at different temperatures in (a) the N = 2 LL and (b) the N = 3 LL, respectively. ](fig4v2.pdf){width="1\linewidth"}
Fig. 3c shows the $B$ dependence of the $\nu=2/5$ gap. In this case the gap follows a $\sqrt{B}$ dependence over the entire accessible field range, projecting to $\sim -10$ K at $B = 0$. The disorder broadening is consistent with the measurement of the $1/3$ gap and the SdH analysis. The square-root dependence is qualitatively consistent with the same CF picture as above, in which the $\nu=2/5$ state represents a cyclotron gap of CF LLs. In this view, however, the lack of a transition is surprising (we would expect the CF cyclotron gap to show evidence of the same CF LL crossing that gives rise to the kink in the 1/3 gap; see supplementary material), and may suggest that the exchange interaction for CFs is highly sensitive to the composite fermion filling fraction [@Hunt.17].
In the $N = 1$ LL, a phase transition is observed for $\nu = 8/3$, where the energy gap vanishes at $B \sim 6$ T and then reemerges at higher field (Fig. 3d). A similar transition with a vanishing energy gap was also observed in local electron compressibility measurements of suspended graphene [@Feldman.13]. Such behavior cannot be understood within the schematic energy diagram shown in Fig. 3b and is likely related to a transition between different iso-spin polarizations. A complete understanding of this phase transition will require definitive identification of the iso-spin order associated with the CF states [@Young2012].
Fig. 4 plots the bulk conductance measured at higher Landau levels. In the $N=2$ LL, we observe features corresponding to 4-flux CF ground states at $\nu=6+\frac{1}{5}$ and $6+\frac{4}{5}$ and electron solid states at $\nu=6+\frac{1}{3}$ and $6+\frac{2}{3}$. The electron solid state is characterized by a non-monotonic temperature dependence of the bulk conductance, with a peak at the melting transition $T_c$ which diminishes to zero at low temperature [@Eisenstein2002bubble; @Deng2012; @Chen2018RIQHE]. In the $N=3$ LL, the bulk conductance displays a broad minimum around $\nu=10+\frac{1}{4}$ and $10+\frac{3}{4}$, as shown in Fig. 4b, where the temperature evolution resembles the bubble phase of the $N=3$ LL observed in GaAs samples with a Corbino geometry [@Gervais2015Corbino]. The deep conductance minima observed in the Corbino geometry and the high transition temperature of $\sim 1.1$ K are both indicative of a robust electron solid state, qualitatively similar to recent measurements in MLG samples with a Hall bar geometry [@Smet2018even]. Interestingly, the bulk conductance reveals no obvious feature at half filling down to $T=0.3$ K at $B = 25$ T (see supplementary material), in contrast to the even-denominator state recently reported in MLG samples with a Hall bar geometry [@Smet2018even]. Given the high resolution and large energy gaps of the correlated states observed in the Corbino geometry, a potential electron liquid phase such as the Pfaffian would be expected to show up as a sharp minimum in the bulk conductance.
In summary, we have established a process for realizing very high quality Corbino devices in a dual-gated geometry. The ability to directly probe the bulk conductance in the QHE regime, independently of the edge states, provides new access to various electron liquid and solid states in graphene beyond what has previously been possible in transport studies. Additionally, the superior quality compared to similarly constructed Hall bar devices suggests that transport measurements in the conventional Hall geometry are limited by difficulties related to probing the edge channels rather than by bulk disorder. This might be due to details of the edge mode structure [@Sabo2017] or difficulties in designing contacts that equilibrate well to the edge channels [@Klitzing2011].
acknowledgments {#acknowledgments .unnumbered}
===============
We thank Andrea Young for helpful discussions and sharing unpublished results. This work was supported by the ARO under MURI W911NF-17-1-0323. A portion of this work was performed at the National High Magnetic Field Laboratory, which is supported by National Science Foundation Cooperative Agreement No. DMR-1644779 and the State of Florida. CRD acknowledges partial support by the David and Lucille Packard Foundation.
Competing financial interests {#competing-financial-interests .unnumbered}
=============================
The authors declare no competing financial interests.
doi:10.1103/PhysRevLett.45.494
doi:10.1103/PhysRevLett.48.1559
doi:10.1103/PhysRevLett.49.405
doi:10.1103/PhysRevLett.50.1395
doi:10.1103/PhysRevLett.88.076801
doi:10.1103/PhysRevLett.93.176809
doi:10.1038/nphys2007
doi:10.1038/nature23893
doi:10.1038/s41467-017-00824-w
doi:10.1126/science.aao2521
doi:10.1038/nnano.2010.172
doi:10.1016/j.ssc.2015.05.005
doi:10.1021/nl102459t
doi:10.1103/PhysRevLett.108.106804
doi:10.1098/rsta.2011.0198
doi:10.1103/PhysRevLett.117.186601
doi:10.1103/PhysRevLett.63.199
doi:10.1103/PhysRevLett.111.076802
doi:10.1103/PhysRevB.74.165325
doi:10.1103/PhysRevLett.92.156401
doi:10.1103/PhysRevB.66.075408
---
abstract: 'In this paper we investigate a model (based on the idea of the outflow dynamics) in which only conformity and anticonformity can lead to an opinion change. We show that for a low level of anticonformity consensus is still reachable, but spontaneous reorientations between the two types of consensus (’all say yes’ or ’all say no’) appear.'
address: 'Institute of Theoretical Physics, University of Wroc[ł]{}aw, Pl. Maxa Borna 9, 50-204 Wroclaw, Poland'
author:
- 'Grzegorz Kondrat and Katarzyna Sznajd-Weron'
title: Spontaneous reorientations in a model of opinion dynamics with anticonformists
---
[*Keywords*]{}: kinetic Ising model, opinion dynamics
Introduction
============
In the past decade many models of opinion dynamics have been studied by physicists (for a recent review see [@Castellano_2009]). Among them, several simple discrete models based on the famous Ising model, such as the Voter model [@Liggett_1999], majority models [@Galam_2002; @Krapivsky_2003] or the Sznajd model [@Sznajd_2000], have been proposed to describe consensus formation. The force which leads to consensus is conformity, one of the most commonly observed responses to social influence. In all three models mentioned above a kind of conformity has been introduced: in the Voter model a single person is able to convince others, within the majority rule individuals follow the majority opinion, and in the Sznajd model unanimity is needed to convince others. Although conformity is the major paradigm of social influence, it is known that other types of social response are also possible.
People feel uncomfortable when they appear too different from others, but they also feel uncomfortable when they appear like everyone else [@Myers_1996]. There is experimental evidence for asserting uniqueness: sometimes people, in order to assert their uniqueness, change their own opinion when they realize that it is shared by others [@Myers_1996]. Asserting uniqueness can therefore lead to so-called anticonformity. In 1963 Willis (reviewed recently in [@Nail_2000]) proposed a two-dimensional model of possible responses to social influence, in which conformers and anticonformers are similar in the sense that both acknowledge the group norm (the conformers agree with the norm, the anticonformers disagree).
Obviously, anticonformity is quite rare in comparison to conformity. The natural question is whether a very small probability of anticonformity can influence the opinion dynamics. Will consensus still be possible in a society with anticonformists? In this paper we introduce a probability of anticonformal behavior into one of the consensus models. Recently a generalized one-dimensional model based on the original Sznajd model has been proposed to incorporate some diversity or randomness in human activity [@Kondrat_2009]. We investigate a special case of this extended model, in which both conformity and anticonformity are possible, and check how a small probability of anticonformal behavior in the presence of strong conformity influences the opinion dynamics. It has long been known that conformity/anticonformity is to some extent a product of cultural conditions [@Bond_1996], and there are experimental motivations for this statement. For example, Frager in 1970 conducted experiments among Japanese students and found a lower level of conformity compared with the U.S. results, as well as some evidence for anticonformity [@Frager_1970]. From this point of view, the ratio between the probabilities of conformity and anticonformity could be related to cultural or political conditions.
The model
=========
We consider a chain of $L$ Ising spins $S_i=\pm 1, \; i=1,\ldots,L$ with periodic boundary conditions. At each step two consecutive spins are chosen at random, and they influence their outer neighbors. In the most popular version of the Sznajd model, inspired by the observation that an individual who breaks the unanimity principle reduces the social pressure of the group dramatically [@Myers_1996], only the unanimous majority influences the neighborhood. In the paper [@Kondrat_2009] all possible configurations of 4 consecutive spins have been considered. Two randomly selected middle spins decide the outcome of the update step (following [@Kondrat_2009] we write them in brackets). The action of a selected pair has been considered independently in each direction. Thus all different possible elementary cases make up the following list: ($[AA]A$, $[AA]B$, $[AB]A$ and $[AB]B$), where the symbols $A$ and $B$ stand for different opinions, i.e. $A=-B=\pm 1$. To determine the dynamics, the vector of probabilities ${\bf p}=(p_1,p_2,p_3,p_4)$ of changing the third spin (the one outside the brackets) has been introduced [@Kondrat_2009]: $$\begin{aligned}
p_1:[AA]A \rightarrow [AA]B,\\
p_2:[AA]B \rightarrow [AA]A,\\
p_3:[AB]A \rightarrow [AB]B,\\
p_4:[AB]B \rightarrow [AB]A.\end{aligned}$$ The first parameter, $p_1$, describes the chance of the spontaneous appearance of an anticonformist opinion, and the complementary probability $p_1'=1-p_1$ describes the situation where, under the same conditions, the opinion is not changed. The second parameter, $p_2$, is the chance of convincing an individual of the opinion shared by his two consecutive neighbours, i.e. conformity. Again, $p_2'=1-p_2$ is the probability of one’s opinion remaining unaltered in the presence of conformity among his two consecutive neighbors. In this paper we investigate the special case in which only conformity and anticonformity can lead to an opinion change, thus $p_3=p_4=0$. The case $p_2=1$ and $p_1=p_3=p_4=0$ corresponds to the Sznajd model. Here we investigate the case in which $p_2=1$ and $p_1 \in (0,1)$ is the only parameter of the model. To investigate the model, we perform Monte Carlo simulations with random sequential updating, and thus the time $t$ is measured in Monte Carlo Steps (MCS), each consisting of $L$ elementary updates.
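The dynamics described above can be sketched in a few lines. The following is a minimal illustration of the update rule (parameter values are hypothetical), not the simulation code used for the figures:

```python
import numpy as np

def mcs(spins, p1, p2, rng):
    """One Monte Carlo step = L elementary updates of the outflow dynamics
    with conformity (p2), anticonformity (p1) and p3 = p4 = 0."""
    L = spins.size
    for _ in range(L):
        i = rng.integers(L)                      # random middle pair (i, i+1)
        j = (i + 1) % L
        if spins[i] != spins[j]:
            continue                             # [AB] pairs do nothing (p3 = p4 = 0)
        for k in ((i - 1) % L, (j + 1) % L):     # act on both outer neighbors
            if spins[k] == spins[i]:
                if rng.random() < p1:            # anticonformity: [AA]A -> [AA]B
                    spins[k] = -spins[k]
            elif rng.random() < p2:              # conformity: [AA]B -> [AA]A
                spins[k] = -spins[k]
    return spins

rng = np.random.default_rng(0)
L, p1, p2 = 100, 0.01, 1.0                       # illustrative parameters
spins = rng.choice([-1, 1], size=L)
m = [spins.mean()]                               # public opinion trace, one value per MCS
for _ in range(500):
    m.append(mcs(spins, p1, p2, rng).mean())
```

Note that for $p_1=0$ the all-up (or all-down) state is absorbing, as in the Sznajd limit.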
Results
=======
The quantity, which is usually measured in such models, is the public opinion $m$ as a function of time $t$. In this kind of model the public opinion is equivalent to the magnetization: $$m=\frac{1}{L} \sum_{i=1}^L S_i.$$ In the case of $p_1=0$, which corresponds to the deterministic rule of the Sznajd model, the system reaches the ferromagnetic steady state (consensus from the social point of view). Once $p_1>0$ the system never reaches any absorbing state and the opinion dynamics depends on the anticonformity probability $p_1$. The time evolution of the public opinion $m(t)$ is presented in Figs. 1-3. It can be seen that consensus ($m = \pm 1$) is reached only for small values of $p_1$ (Fig.1), while for larger values of anticonformity consensus is not reached and the public opinion fluctuates around its mean value $m=0$ (Figs.2-3). One can also notice that the amplitude of the fluctuations decreases with $p_1$, while the frequency of the fluctuations increases with $p_1$. This tendency is valid for all values of $p_1$, and thus the time spent in the consensus state (’all up’ or ’all down’) decreases with $p_1$. For very small values of $p_1$ the system spends most of the time in one of the extreme consensus states, and in the limiting case $p_1=0$ the consensus becomes the absorbing steady state.



To analyze more precisely the dependence between the consensus time and the level of anticonformity $p_1$, let us introduce the mean relative time of consensus $<\tau_c>$ as the mean number of MCS for which $|m|=1$ divided by the total number of steps in the simulation. The dependence between the mean relative time of consensus $<\tau_c>$ and $p_1$ is presented in Fig.4. For small values of $p_1$ this dependence is exponential, i.e. $<\tau_c> \sim \exp(-\alpha p_1)$, with $\alpha=\alpha(L) \sim \frac{3}{2}L$. This means that although the relative time of consensus decreases with $p_1$, consensus is still possible for larger values of $p_1$. No qualitative change of behavior is seen when looking at $<\tau_c>$ as a function of anticonformity. On the other hand, if we look at Figs. 1-3 it seems that there is some qualitative difference between the opinion dynamics presented in Fig.1 and Figs.2-3. In Fig. 1 the system is ferromagnetically ordered for most of the time and spontaneous transitions between two opposite ferromagnetic states are observed.

Therefore, let us now check the dependence between the control parameter $p_1$ and the mean reorganization time $<t_r>$, defined as the mean time between arriving at two consecutive opposite consensus states. More precisely, we record the times at which the system attains a given consensus ($m=\pm 1$) for the first time since it was in the last opposite state $m=\mp 1$. It turns out that there is an optimal value of $p_1$ for which the mean reorganization time $<t_r>$ is the shortest (see Fig.5). From the social point of view this means that there is a special level of anticonformity for which reorganizations (‘revolutions’) are the most frequent. The optimal value of $p_1$ is roughly inversely proportional to the system size $L$. Thus their product $p_1L$, describing the mean number of acts of anticonformity per one Monte Carlo step, remains constant independently of the system size.
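Both observables are simple functionals of the recorded opinion trace $m(t)$. A possible implementation (the function name is ours, not from the paper) could look like this:

```python
import numpy as np

def consensus_observables(m_trace):
    """Mean relative consensus time <tau_c> (fraction of recorded MCS with |m| = 1)
    and mean reorganization time <t_r> (mean time between first arrivals at
    opposite consensus states), from a magnetization series sampled once per MCS."""
    m = np.asarray(m_trace, dtype=float)
    tau_c = float(np.mean(np.isclose(np.abs(m), 1.0)))
    arrivals, last = [], 0.0
    for t, v in enumerate(m):
        if np.isclose(abs(v), 1.0) and np.sign(v) != last:
            arrivals.append(t)        # first visit to +/-1 since the opposite consensus
            last = np.sign(v)
    t_r = float(np.mean(np.diff(arrivals))) if len(arrivals) > 1 else float("inf")
    return tau_c, t_r
```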
Now we can show that indeed there is a qualitative change in the opinion dynamics for a certain value of $p_1$ and that this value corresponds to the optimal value of $p_1$, i.e. the value for which the mean reorganization time $<t_r>$ is the shortest. To do this, let us present the cumulative distribution function $CDF$ of the public opinion $m$. In Fig. 6 it can be seen that for $p_1 \le 0.03$ the curve is $\sim$ shaped and for a certain value $p_1=p^* \in (0.03,0.04)$ the shape of the $CDF$ changes qualitatively to the $\backsim$ shape (a change in convexity). While for $p_1 \le 0.03$ the system spends most of the time in the consensus state, for $p_1 \ge 0.04$ the consensus state is extremely improbable. One should notice (see Fig.5) that the optimal value of $p_1$ also lies in the interval $(0.03,0.04)$ and thus corresponds to $p^*$.
![The dependence between the mean reorganization time $<t_r>$ and the level of anticonformity $p_1$ for the lattice size $L=100$.[]{data-label="fig1"}](nFig5.eps)

Summary
=======
We have proposed a new model of opinion dynamics with anticonformists based on the general model proposed by Kondrat [@Kondrat_2009]. In our model only conformity (with probability $1$) and anticonformity (with probability $p_1$) can lead to an opinion change. According to Willis, conformers and anticonformers are similar in the sense that both acknowledge the group norm (the conformers agree with the norm, the anticonformers disagree). In our model a pair of neighboring individuals sharing the same opinion influences its neighborhood (so-called outflow dynamics, an idea taken from the Sznajd model). To investigate the model, we have performed Monte Carlo simulations with random sequential updating. It turns out that for small values of the anticonformity level consensus is still reached, but it is not an absorbing steady state as in the case of $p_1=0$. For small values of $p_1$ spontaneous reorientations occur, which can be understood, from the social point of view, as complete repolarizations (e.g. a spontaneous transition from dictatorship to democracy). We have shown that there is a special value of the anticonformity level $p_1=p^*$ below which the system stays most of the time in the consensus state and spontaneous reorientations occur. Above this value consensus is almost impossible and a qualitative change is visible in the cumulative distribution function of the public opinion $m$.
The main criticism connected with such simple social models usually concerns oversimplification of the assumptions. We do not want to convince anybody that there is no free will or that there are no external factors influencing individual choices. We have only shown that even in conformist societies with a very low (but nonzero) level of anticonformity, spontaneous reorientations of the public opinion are possible. There is no need to introduce any external field or strong leader to explain these social repolarizations. This seems to be quite an important result from the social perspective. Sociologists usually try to explain a posteriori such rapid and unexpected transitions (like protests, revolutions, etc.), and having known the history they are quite often able to do so. On the other hand, maybe from time to time there is no direct reason for such a reorientation; maybe it occurs spontaneously, because society is a complex dynamical system.
References {#references .unnumbered}
==========
[10]{}
Castellano C, Fortunato S, Loreto V, Reviews of Modern Physics **81**, 591 (2009)
Liggett T, *Stochastic Interacting Systems: Contact, Voter, and Exclusion Processes*, Springer-Verlag, New York (1999)
Galam S, Eur. Phys. J. B **25**, 403 (2002)
Krapivsky P L and Redner S, Phys. Rev. Lett. **90**, 238701 (2003)
Sznajd-Weron K and Sznajd J, Int. J. Mod. Phys. C **11**, 1157 (2000)
Myers D, *Social Psychology*, The McGraw-Hill Companies, Inc. (1996)
Nail P, MacDonald G, Levy D, Psychological Bulletin **126**, 454 (2000)
Bond R and Smith P, Psychological Bulletin **119**, 111 (1996)
Kondrat G, arXiv:0912.1466v1
Frager R, Journal of Personality and Social Psychology **15**, 203 (1970)
---
abstract: 'The interdependence between long range correlations and topological signatures in fermionic arrays is examined. End-to-end correlations, in particular classical correlations, maintain a characteristic pattern in the presence of delocalized excitations, and this behavior can be used as an operational criterion to identify Majorana fermions in one-dimensional systems. The study discusses how to obtain the chain eigenstates in tensor-state representation together with the proposed assessment of correlations. Remarkably, the final result can be written as a simple analytical expression that underlines the link with the system’s topological phases.'
author:
- Jose Reslen
title: 'End-to-end correlations in the Kitaev chain'
---
Introduction
============
Majorana fermions [@eliot; @alicea] are described by a highly versatile formalism that provides conceptual as well as technical tools, so much so that Majorana particles can be found in solid-state systems incarnated as collective phenomena. One particular scenario where this identification takes place is the Kitaev chain [@kitaev], a one-dimensional fermionic system that displays a fundamental relation between Majorana excitations and topology. Geometrical arguments establish a connection between the presence of Majorana fermions in the open chain and the parity of the periodic chain, providing simple and operational conditions for hosting Majorana particles: superconductivity, which ensures a gap in the bulk, and an odd number of Dirac points in the band structure of the non-interacting system. This approach has been rather successful in giving a qualitative characterization of the Majorana chain, which has been attracting a lot of attention over the past decades due to its potential applications in quantum information, prompting experimental verification in state-of-the-art setups, usually in the form of zero-bias conductance peaks on the edges of one-dimensional structures, as for instance in [@deng; @pawlak; @sun], to mention only the most recent studies.
Even though the Kitaev chain is well understood in terms of its topological structure, a more quantitative description in terms of the state’s mean values is clearly of interest. Such a description is necessary to thoroughly characterize the behavior of the state’s observables in the topological phase. This characterization can then be used on other systems where lack of integrability does not allow a direct identification of Majorana excitations [@gergs]. The fundamental observation is that since the excitations supported by Majorana fermions are highly delocalized, it is reasonable to expect that they enhance the correlations between the end sites of the chain. If that is indeed the case, experimental verification could be improved if it were possible to simultaneously test the electron density on both chain ends. The study of edge correlations in fermion chains has been addressed in references [@miao1; @wang] for specific cases that permit analytical progress [@miao2; @katsura]. Here the analysis covers the whole range of parameters and is conceptually exact, albeit with a numerical component. This approach allows us to find a generalized expression for the correlations that complements the results derived analytically.
The fact that the Kitaev chain is an integrable model does not preclude the need for numerical analysis. Complications arise because the chain’s Hilbert space grows as $2^N$, $N$ being the number of sites, and many features manifest exclusively in the thermodynamic limit. These complications can be circumvented by the use of tensor state techniques. Such techniques can be implemented in different ways. One such way is the Density Matrix Renormalization Group (DMRG) [@miao1; @gergs; @kitaev2], which minimizes the energy over the space of eigenstates of the chain’s local density matrices. Another approach is to use a tensor representation as a variational network in an abstract way. This is characteristic of the method known as Matrix Product States (MPS). The way tensor state techniques are applied here is different from these two, and is more in accordance with the updating protocols introduced in reference [@vidal]. A family of methods based on such protocols is known as Time Evolving Block Decimation (TEBD). However, the path followed in this report differs from this denomination. First, time evolution or step integration is not incorporated, and second, the implementation is exact, so that numerical approximations like splittings of operator exponentials, which are major error contributors in TEBD, are not employed whatsoever. The techniques applied here to fermion chains were first developed in the context of bosonic arrays in [@ReslenRMF; @ReslenIOP], although with some key differences, the most important one being the integral inclusion of pairing terms in the current formalism, which is possible thanks to the decomposition of fermionic operators in terms of Majorana operators.
Another aspect that contrasts with other works is that the tensor-state formulation is carried out completely in the fermionic Fock space, without the extra work of incorporating the so-called Jordan-Wigner transformation to reformulate the problem in terms of a spin chain, as seems to be frequent in DMRG applications to fermion systems.
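As a point of reference, the Jordan-Wigner route mentioned above is easy to spell out for small chains. The sketch below (a standard textbook construction, not the tensor formalism of this work) builds the Kitaev Hamiltonian introduced in the next section for $N=4$ and checks the two-fold spectral degeneracy expected at the point $w=|\Delta|$, $\mu=0$, where the end Majorana modes are unpaired:

```python
import numpy as np
from functools import reduce

I2 = np.eye(2)
Z = np.diag([1.0, -1.0])
a = np.array([[0.0, 1.0], [0.0, 0.0]])   # single-site annihilator |0><1|

def c_op(j, N):
    """Jordan-Wigner representation of the annihilation operator on site j (0-based)."""
    return reduce(np.kron, [Z] * j + [a] + [I2] * (N - 1 - j))

def kitaev_open(N, w, mu, Delta):
    """Open Kitaev chain as a 2^N x 2^N matrix (hopping w, chemical potential mu,
    pairing Delta; open boundary, c_{N+1} = 0)."""
    c = [c_op(j, N) for j in range(N)]
    H = np.zeros((2**N, 2**N), dtype=complex)
    for j in range(N):
        H -= mu * (c[j].conj().T @ c[j] - 0.5 * np.eye(2**N))
    for j in range(N - 1):
        H -= w * (c[j].conj().T @ c[j + 1] + c[j + 1].conj().T @ c[j])
        H += Delta * c[j] @ c[j + 1] + np.conj(Delta) * c[j + 1].conj().T @ c[j].conj().T
    return H

E = np.sort(np.linalg.eigvalsh(kitaev_open(4, w=1.0, mu=0.0, Delta=1.0)))
# At w = |Delta|, mu = 0 the zero mode built from the unpaired end Majoranas
# makes every many-body level two-fold degenerate.
assert np.allclose(E[0::2], E[1::2])
```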
The Kitaev chain, also known as the Majorana chain, is described by the following Hamiltonian [@kitaev] $$\begin{gathered}
\hat{H} = \sum_{j=1}^N -w ( \hat{c}^{\dagger}_j \hat{c}_{j+1} + \hat{c}^{\dagger}_{j+1} \hat{c}_j ) - \mu \left( \hat{c}^{\dagger}_j \hat{c}_{j} - \frac{1}{2} \right ) \nonumber \\
+ \Delta \hat{c}_j \hat{c}_{j+1} + \Delta^* \hat{c}^{\dagger}_{j+1} \hat{c}^{\dagger}_j.
\label{kita}\end{gathered}$$ The constant $w$ is the nearest-neighbor hopping amplitude, while $\mu$ is the chemical potential, which relates to the total number of fermions in the wire. The parameter $\Delta$ is the intensity of the pairing and is known as the superconducting gap. Creation and annihilation operators follow fermionic anticommutation rules $\{\hat{c}_j,\hat{c}_k\} = 0$ and $\{\hat{c}_j,\hat{c}_k^\dagger\} = \delta_j^k$. Open or periodic boundary conditions are enforced by taking $\hat{c}_{N+1}=0$ or $\hat{c}_{N+1}=\hat{c}_1$, respectively. The model describes a quantum wire in proximity to the surface of a p-wave superconductor [@kitaev; @eliot; @alicea], and also a spin-polarized 1-D superconductor, since only one spin component is considered and the pairing term mixes modes with opposite crystal momentum, as can be shown by switching to a momentum basis. Independently of boundary conditions, Hamiltonian (\[kita\]) commutes with the parity operator $$\begin{gathered}
\hat{\Pi} = e^{i \pi \sum_{j=1}^{N} \hat{c}_j^{\dagger} \hat{c}_j}.
\label{rainy}\end{gathered}$$ The symmetry associated to this operator is not spatial, instead, it is a parity associated to number of particles. Let us now introduce the Majorana operators (MOs) corresponding to site $j$: $$\begin{gathered}
\hat{\gamma}_{2 j - 1} = e^{\frac{i\phi}{2 }} \hat{c}_j + e^{-\frac{i\phi}{2 }} \hat{c}^{\dagger}_j, \label{eight} \\
\hat{\gamma}_{2 j} = - i e^{\frac{i\phi}{2 }} \hat{c}_j + i e^{-\frac{i\phi}{2 }} \hat{c}^{\dagger}_j. \label{nine}\end{gathered}$$ A key feature of these MOs is that they are hermitian, $\hat{\gamma}_k^\dagger = \hat{\gamma}_k$. It can be shown that the anticommutation relations are given by $\{ \hat{\gamma}_k, \hat{\gamma}_j \} = 2 \delta_k^j$, so that $\hat{\gamma}_k^2 = 1$. Since there are two Majorana modes for every site, the total number of modes doubles. Equations (\[eight\]) and (\[nine\]) can be inverted and the result can be used to write Hamiltonian (\[kita\]) as $$\begin{gathered}
\hat{H} = \frac{i}{2}\sum_{j=1}^N \left ( -\mu \hat{\gamma}_{2j-1} \hat{\gamma}_{2j} + (|\Delta|-w) \hat{\gamma}_{2j-1} \hat{\gamma}_{2j+2} \right . \nonumber \\
\left . + (|\Delta|+w) \hat{\gamma}_{2j} \hat{\gamma}_{2j+1} \right ).
\label{kike}\end{gathered}$$ When $w = \Delta = 0$, this Hamiltonian becomes diagonal in the Fock basis with a non-degenerate spectrum and a ground state that depends on the sign of $\mu$. From (\[kike\]) it can be seen that this particular case corresponds to a chain where MOs from the same site pair up. This behavior is the generic signature of the [*trivial phase*]{}. In contrast, when $w = |\Delta| > 0$ and $\mu =0$, the pairing takes place between Majorana operators from neighboring sites, as can be seen in the transformed Hamiltonian $$\begin{gathered}
\hat{H} = i w \sum_{j=1}^{N-1} \hat{\gamma}_{2j} \hat{\gamma}_{2j+1}.
\label{aqua}\end{gathered}$$ A key feature of this expression is that it lacks both $\hat{\gamma}_{1}$ and $\hat{\gamma}_{2N}$, which are [*unpaired*]{}. From these one can build an uncoupled mode, $$\begin{gathered}
\hat{f}_N = \frac{1}{2} \left( \hat{\gamma}_{1} + i \hat{\gamma}_{2 N} \right),\end{gathered}$$ satisfying $\{\hat{f}_N, \hat{f}_N^\dagger\} = 1$. This is a highly delocalized fermionic mode, with equal contributions from the two edges. The simplest physical operator that can be built in this way is $\hat{f}_N^\dagger \hat{f}_N$, which consequently commutes with the Hamiltonian. The Hilbert space associated with this mode contains two states, one occupied and one empty. As the Hamiltonian does not include terms that could operate on either of these, it is energetically equivalent to have many-body eigenstates with or without a particle in the aforementioned mode, i.e., the energy cost associated with this mode is zero, which is why $\hat{f}_N$ is known as a Zero Mode while $\hat{\gamma}_{1}$ and $\hat{\gamma}_{2N}$ are known as Majorana Zero Modes (MZMs) [@eliot] or Edge Modes. As a consequence, the whole spectrum of (\[aqua\]) becomes two-fold degenerate. The fact that neither $\hat{\gamma}_{1}$ nor $\hat{\gamma}_{2N}$ appears in the Hamiltonian implies that they do not pick up oscillatory phases in the Heisenberg picture, which makes them robust against this kind of decoherence mechanism. From the previous arguments it can be deduced that (\[aqua\]) commutes with the symmetry operator $$\begin{gathered}
\hat{Q}_R = i \hat{\gamma}_1 \hat{\gamma}_{2 N} = 2 \hat{f}_N^\dagger \hat{f}_N - 1.
\label{delmar}\end{gathered}$$ It can be noticed that $\hat{Q}_R$ is unitary and its eigenvalues are $1$ for a filled mode and $-1$ for an unfilled one. Unlike $\hat{\Pi}$, $\hat{Q}_R$ determines a symmetry only for a specific set of parameters. In spite of the spectrum being degenerate in this case, it is possible to build ground states $|\psi_G\rangle$ of (\[aqua\]) that are also eigenstates of (\[delmar\]), so that $$\begin{gathered}
|\langle \psi_G | \hat{Q}_R | \psi_G \rangle| = 1.
\label{crowded_house}\end{gathered}$$ Performing the same calculation with any normalized state that is not an eigenstate of $\hat{Q}_R$ would result in a lower value, so that the maximum is linked to eigenstates of Hamiltonians with unpaired MOs completely localized at the ends. A totally analogous case is obtained if one focuses on the point $w = -|\Delta| < 0$ and $\mu =0$, yielding $$\begin{gathered}
\hat{H} = - i w \sum_{j=1}^{N-1} \hat{\gamma}_{2j-1} \hat{\gamma}_{2j+2}.\end{gathered}$$ This time it is $\hat{\gamma}_{2}$ and $\hat{\gamma}_{2N-1}$ which do not appear in the Hamiltonian, thus giving rise to the symmetry operator $$\begin{gathered}
\hat{Q}_L = i \hat{\gamma}_2 \hat{\gamma}_{2 N-1},\end{gathered}$$ which follows a relation analogous to (\[crowded\_house\]). According to reference [@kitaev], there are unpaired MOs, or Majorana fermions, over the region in parameter space surrounding the particular cases studied above as long as the gap of the equivalent periodic chain does not vanish. Following this argument it can be shown that unpaired MOs prevail in the region defined by $|\mu| < 2 |w|$. In the general case such operators become completely unpaired only in the thermodynamic limit, although with exponential convergence, and they are not completely, but still highly, localized at the ends.
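These commutation properties are easy to check numerically for a small chain. The sketch below is not part of the original derivation: it builds the MOs of Eqs. (\[eight\]) and (\[nine\]) with $\phi=0$ as explicit Jordan-Wigner matrices (the construction and the function names are choices made here) and verifies that $\hat{Q}_R$ commutes with the Hamiltonian at the point $w=|\Delta|$, $\mu=0$, but not away from it.

```python
import numpy as np

def kron_all(ops):
    out = np.array([[1.0 + 0j]])
    for op in ops:
        out = np.kron(out, op)
    return out

def majoranas(N):
    """Jordan-Wigner matrices for gamma_1 ... gamma_{2N} (phi = 0)."""
    I2, Z = np.eye(2), np.diag([1.0, -1.0])
    a = np.array([[0.0, 1.0], [0.0, 0.0]])  # annihilation on one site
    g = []
    for j in range(N):
        c = kron_all([Z] * j + [a] + [I2] * (N - j - 1))
        g.append(c + c.conj().T)              # gamma_{2j+1} (1-based odd)
        g.append(-1j * c + 1j * c.conj().T)   # gamma_{2j+2} (1-based even)
    return g

def kitaev_H(N, w, delta, mu):
    """Open-chain Hamiltonian assembled from the Majorana form above."""
    g = majoranas(N)
    H = np.zeros((2 ** N, 2 ** N), dtype=complex)
    for j in range(N):
        H += 0.5j * (-mu) * g[2 * j] @ g[2 * j + 1]
    for j in range(N - 1):
        H += 0.5j * (delta - w) * g[2 * j] @ g[2 * j + 3]
        H += 0.5j * (delta + w) * g[2 * j + 1] @ g[2 * j + 2]
    return H

N, w = 3, 1.0
g = majoranas(N)
Q_R = 1j * g[0] @ g[2 * N - 1]                 # i gamma_1 gamma_{2N}
H_sweet = kitaev_H(N, w, w, 0.0)               # w = |Delta|, mu = 0
H_generic = kitaev_H(N, w, w, 0.5)             # mu != 0
print(np.linalg.norm(H_sweet @ Q_R - Q_R @ H_sweet))             # numerically zero
print(np.linalg.norm(H_generic @ Q_R - Q_R @ H_generic) > 1e-6)  # True
```

At the sweet spot the Hamiltonian contains neither $\hat{\gamma}_1$ nor $\hat{\gamma}_{2N}$, so the commutator vanishes identically; the $\mu$ term reintroduces $\hat{\gamma}_1$ and breaks the symmetry.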
The above observations suggest that in order to scan for unpaired MOs it is useful to exploit their relation with the state’s local symmetries. Let us therefore define the operator $$\begin{gathered}
\hat{Q} = \hat{Q}_L + \hat{Q}_R = 2 (\hat{c}_1 \hat{c}_N^\dagger + \hat{c}_N \hat{c}_1^\dagger). \end{gathered}$$ It is reasonable to expect that the mean values of $\hat{Q}$, which actually measure end-to-end single-particle hopping, determine the degree of localization of MOs at the chain ends, taking extreme values when they are completely localized and vanishing when there is none. In order to test this conjecture the following measure is proposed $$\begin{gathered}
Z = \lim_{N\rightarrow \infty} |\langle \hat{Q} \rangle|.
\label{terror}\end{gathered}$$ The numerical calculation of this expression is not always efficient because long-range correlations are involved, hence a practical approach is desirable. The next section presents a way in which the eigenstates of (\[kita\]), as well as $Z$, can be calculated effectively.
Reduction of the Kitaev chain by a series of unitary transformations {#aji}
====================================================================
Hamiltonian (\[kike\]) has the following general structure $$\begin{gathered}
\hat{H} = \frac{i}{4} \sum_{k=1}^{2N} \sum_{l=1}^{2N} A_{kl} \hat{\gamma}_k \hat{\gamma}_l,
\label{twelve}\end{gathered}$$ where the coefficients $A_{kl}$ form a real antisymmetric matrix, $A_{kl} = -A_{lk}$. Following Kitaev [@kitaev], $\hat{H}$ can be diagonalized by a unitary transformation that reduces it to [$$\begin{gathered}
\hat{H} = \frac{i}{4} \sum_{k=1}^{N} \epsilon_k \left( \hat{\zeta}_{2k-1} \hat{\zeta}_{2k} - \hat{\zeta}_{2k} \hat{\zeta}_{2k-1} \right) = \frac{i}{2} \sum_{k=1}^{N} \epsilon_k \hat{\zeta}_{2k-1} \hat{\zeta}_{2k}.
\label{thirdteen}\end{gathered}$$ ]{} The $\hat{\zeta}$’s are MOs that can be expressed as linear combinations of the $\hat{\gamma}$’s $$\begin{gathered}
\left (
\begin{array}{c}
\hat{\zeta}_1 \\
\hat{\zeta}_2 \\
\hat{\zeta}_3 \\
\hat{\zeta}_4 \\
\vdots
\end{array}
\right ) =
\hat{W}
\left (
\begin{array}{c}
\hat{\gamma}_1 \\
\hat{\gamma}_2 \\
\hat{\gamma}_3 \\
\hat{\gamma}_4 \\
\vdots
\end{array}
\right ).\end{gathered}$$ Matrix $\hat{W}$ is such that it transforms $\hat{A}$ in the following manner $$\begin{gathered}
\hat{W} \hat{A} \hat{W}^\dagger =
\left (
\begin{array}{ccccc}
0 & \epsilon_1 & 0 & 0 & \hdots \\
-\epsilon_1 & 0 & 0 & 0 & \hdots \\
0 & 0 & 0 & \epsilon_2 & \hdots \\
0 & 0 & -\epsilon_2 & 0 & \hdots \\
\vdots & \vdots & \vdots & \vdots & \ddots
\end{array}
\right ).
\label{sixteen}\end{gathered}$$ The single-body energies $\epsilon_k$ are real and non-negative, while $\hat{W}$ is a real orthogonal matrix satisfying $\hat{W} \hat{W}^\dagger = I$. This factorization is a particular case of a procedure known as the Schur decomposition [@SD]. The diagonalized Hamiltonian can be written in terms of standard fermionic modes $$\begin{gathered}
\hat{H} = \sum_{k=1}^N \epsilon_k \left( \hat{f}_k^\dagger \hat{f}_k - \frac{1}{2} \right), \text{ } \hat{f}_k = \frac{1}{2}(\hat{\zeta}_{2k-1} + i \hat{\zeta}_{2k}).
\label{fourteen}\end{gathered}$$ This can be checked by replacing $\hat{\zeta}_{2 k - 1} = \hat{f}_k + \hat{f}^{\dagger}_k$ and $\hat{\zeta}_{2 k} = - i \hat{f}_k + i \hat{f}^{\dagger}_k$ in (\[thirdteen\]). The system eigenenergies are given by $$\begin{gathered}
E_l = \sum_{k=1}^{N} \epsilon_k \left(n_k - \frac{1}{2} \right),
\label{pinky}\end{gathered}$$ where $n_k \in \{0,1\}$. The system’s ground state corresponds to the case where all $n_k=0$, therefore $E_G = -\sum_{k=1}^{N} \frac{\epsilon_k}{2}$. If one or more single-body energies vanish, $\epsilon_k = 0$, the spectrum becomes degenerate, because there is no energy difference between occupied and unoccupied zero modes.
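As a concrete illustration, the single-body energies can be extracted without an explicit Schur routine by noting that $i\hat{A}$ is Hermitian with spectrum $\{\pm\epsilon_k\}$. The following sketch (the numpy-based shortcut and the function names are choices made here, not taken from the text) assembles $A$ for the open chain from the Majorana form of the Hamiltonian and recovers the zero mode at the point $w=|\Delta|$, $\mu=0$:

```python
import numpy as np

def kitaev_A(N, w, delta, mu):
    """Antisymmetric coefficient matrix of Eq. (twelve) for the open chain,
    with entries read off from the Majorana form of the Hamiltonian."""
    A = np.zeros((2 * N, 2 * N))
    for j in range(N):
        A[2 * j, 2 * j + 1] = -mu              # gamma_{2j-1} gamma_{2j}
    for j in range(N - 1):
        A[2 * j, 2 * j + 3] = delta - w        # gamma_{2j-1} gamma_{2j+2}
        A[2 * j + 1, 2 * j + 2] = delta + w    # gamma_{2j} gamma_{2j+1}
    return A - A.T                             # antisymmetrize

def single_body_energies(A):
    # iA is Hermitian with spectrum {±eps_k}; keep the non-negative half.
    evals = np.linalg.eigvalsh(1j * A)         # sorted ascending
    return evals[len(evals) // 2:]

# Sweet spot w = |Delta|, mu = 0: one zero mode, all other eps_k = 2w.
N, w = 5, 1.0
eps = single_body_energies(kitaev_A(N, w, w, 0.0))
E_G = -eps.sum() / 2
print(eps)   # approximately [0, 2, 2, 2, 2]
print(E_G)   # approximately -4.0
```

The vanishing $\epsilon_k$ is the zero mode built from $\hat{\gamma}_1$ and $\hat{\gamma}_{2N}$, and the ground-state energy follows from $E_G = -\sum_k \epsilon_k/2$.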
In order to obtain the eigenstates, an approach similar to that in reference [@wagner] is adopted. First, Hamiltonian (\[fourteen\]) is written as [$$\begin{gathered}
\hat{H} = \sum_{k=1}^N \epsilon_k \left( \frac{1}{4} \left( \sum_{j=1}^{2N} W_{2k-1,j}\hat{\gamma}_{j} - i \sum_{j=1}^{2N} W_{2k,j} \hat{\gamma}_{j} \right) \right . \times \nonumber \\
\left . \left( \sum_{j=1}^{2N} W_{2k-1,j}\hat{\gamma}_{j} + i \sum_{j=1}^{2N} W_{2k,j} \hat{\gamma}_{j} \right) - \frac{1}{2} \right).\end{gathered}$$ ]{} The $W_{j,k}$’s are the components of matrix $\hat{W}$, as defined by (\[sixteen\]). A unitary transformation acting on two consecutive MOs can be defined as $$\begin{gathered}
\left . \hat{U}^{[j]} \right .^{-1} = e^{\frac{\theta}{2} \hat{\gamma}_j \hat{\gamma}_{j-1}}.\end{gathered}$$ The effect of this transformation on the Hamiltonian is calculated through $\hat {H} \rightarrow \left . \hat{U}^{[j]} \right .^{-1} \hat{H} \hat{U}^{[j]}$. This can be performed by applying the same transformation on every operator composing $\hat{H}$. The operation over a linear combination of consecutive MOs is written as [$$\begin{gathered}
\left . \hat{U}^{[j]} \right .^{-1} ( W_{j-1} \hat{\gamma}_{j-1} + W_{j} \hat{\gamma}_{j} ) \hat{U}^{[j]} = W_{j}' \hat{\gamma}_{j} + W_{j-1}' \hat{\gamma}_{j-1},\end{gathered}$$ ]{} where $W_{j}'= W_{j} \cos \theta - W_{j-1} \sin \theta$ and $W_{j-1}' = W_{j-1} \cos \theta + W_j \sin \theta$. The contribution of $\hat{\gamma}_j$ can be taken out by choosing $$\begin{gathered}
\tan \theta = \frac{W_j}{W_{j-1}},
\label{fragance}\end{gathered}$$ in such a way that $W_j'=0$. If $W_{j-1} = 0$ and $W_{j} \ne 0$, then $\theta=\frac{\pi}{2}$. If both $W_{j-1}$ and $W_{j}$ are zero, then $\theta=0$ is enforced. In any other case the angle is well defined because the $W$’s are real. It is practical to choose the angle $\theta$ so that $sign(\sin \theta) = sign(W_j)$ and $sign(\cos \theta) = sign(W_{j-1})$. In this way $W_{j-1}' = \sqrt{W_{j-1}^2 + W_{j}^2}$, leaving a positive coefficient. In order to highlight the dependence of $\theta$ with respect to $W_j$ and $W_{j-1}$, $\theta_j$ is used from now on. When this operation is applied on the whole Hamiltonian, the mechanism can be described as an overall action on the diagonal modes: [$$\begin{gathered}
\begin{array}{c|c|c}
& \left . \hat{U}_1^{[2N]} \right .^{-1} & \\ \cline{2-2}
\hat{\zeta}_1 = & W_{1,2N}\hat{\gamma}_{2N} + W_{1,2N-1} \hat{\gamma}_{2N-1} & + W_{1,2N-2} \hat{\gamma}_{2N-2} \dots + W_{1,1} \hat{\gamma}_{1} \\
\hat{\zeta}_2 = & W_{2,2N}\hat{\gamma}_{2N} + W_{2,2N-1} \hat{\gamma}_{2N-1} & + W_{2,2N-2} \hat{\gamma}_{2N-2} \dots + W_{2,1} \hat{\gamma}_{1} \\
\vdots & \vdots & \vdots \\
%\hat{\zeta}_{2N} = & W_{2 N,2N}\hat{\gamma}_{2N} + W_{2 N,2N-1} \hat{\gamma}_{2N-1} & + W_{2 N,2N-2} \hat{\gamma}_{2N-2} \dots + W_{2 N,1} \hat{\gamma}_{1} \\
\end{array} \nonumber\end{gathered}$$ ]{} As a result, $\hat{\gamma}_{2N}$ vanishes from $\hat{\zeta}_1$ and the vertically aligned coefficients are modified accordingly. The process continues by applying another transformation aimed at canceling the next component, which generates a similar effect on the stack of coefficients [$$\begin{gathered}
\begin{array}{c|c|c}
& \left . \hat{U}_1^{[2N-1]} \right .^{-1} & \\ \cline{2-2}
& W_{1,2N-1}'\hat{\gamma}_{2N-1} + W_{1,2N-2} \hat{\gamma}_{2N-2} & \dots + W_{1,1} \hat{\gamma}_1 \\
W_{2,2N}'\hat{\gamma}_{2N} + & W_{2,2N-1}'\hat{\gamma}_{2N-1} + W_{2,2N-2} \hat{\gamma}_{2N-2} & \dots + W_{2,1} \hat{\gamma}_1 \\
\vdots & \vdots & \vdots \\
W_{2N,2N}'\hat{\gamma}_{2N} + & W_{2N,2N-1}'\hat{\gamma}_{2N-1} + W_{2N,2N-2} \hat{\gamma}_{2N-2} & \dots + W_{2N,1} \hat{\gamma}_1 \\
\end{array} \nonumber \end{gathered}$$ ]{} The process is repeated, removing one component in each step and advancing toward $\hat{\gamma}_1$ [$$\begin{gathered}
\begin{array}{c|c|}
& \left . \hat{U}_1^{[2]} \right .^{-1} \\ \cline{2-2}
& W_{1,2}'\hat{\gamma}_{2} + W_{1,1} \hat{\gamma}_{1} \\
W_{2,2N}'\hat{\gamma}_{2N} + \dots + W_{2,3}''\hat{\gamma}_{3} + & W_{2,2}'\hat{\gamma}_{2} + W_{2,1} \hat{\gamma}_{1} \\
\vdots & \vdots \\
W_{2N,2N}'\hat{\gamma}_{2N} + \dots + W_{2N,3}''\hat{\gamma}_{3} + & W_{2N,2}'\hat{\gamma}_{2} + W_{2N,1} \hat{\gamma}_{1} \\
\end{array} \nonumber\end{gathered}$$ ]{} The last transformation eliminates $\hat{\gamma}_2$ and leaves only $\hat{\gamma}_1$ multiplied by $ W_{1,1}' = \sqrt{W_{1,1}^2 + W_{1,2}^2 + ... + W_{1,2N}^2} = 1$. Because all the transformations are unitary, the orthogonality of the coefficients must be preserved. Hence, if only $\hat{\gamma}_1$ remains in the top row, there cannot be $\hat{\gamma}_1$-terms in the rest of the stack. The last operation then leaves the following arrangement $$\begin{gathered}
\begin{array}{cc}
& \hat{\gamma}_{1} \\
W_{2,2N}'\hat{\gamma}_{2N} + \dots + W_{2,3}''\hat{\gamma}_{3} + W_{2,2}''\hat{\gamma}_{2} & \\
\vdots & \\
W_{2N,2N}'\hat{\gamma}_{2N} + \dots + W_{2N,3}''\hat{\gamma}_{3} + W_{2N,2}''\hat{\gamma}_{2} & \\
\end{array} \nonumber\end{gathered}$$ A similar series of operations can be devised to reduce the second row, this time avoiding any transformation involving $\hat{\gamma}_1$ in order to keep the first mode folded. The process can be repeated with the same intended effect in each step. However, the last transformation brings up an additional issue. Let us notice that before the last operation the stack of components looks like $$\begin{gathered}
\begin{array}{|c|cccc}
\left . \hat{U}_{2N-1}^{[N]} \right .^{-1} & & & & \\ \cline{1-1}
& & & & \hat{\gamma}_{1} \\
& & & \hat{\gamma}_{2} & \\
& \iddots & & & \\
W_{2N-1,2N}\hat{\gamma}_{2N} + W_{2N-1,2N-1}\hat{\gamma}_{2N-1} & & & & \\
W_{2N,2N}\hat{\gamma}_{2N} + W_{2N,2N-1}\hat{\gamma}_{2N-1} & & & & \\
\end{array} \nonumber\end{gathered}$$ Transformation $\left . \hat{U}_{2N-1}^{[N]} \right .^{-1}$ is aimed at folding the penultimate row; due to the orthonormality of the original modes, it folds the last row too. However, there is no guarantee that the resulting coefficient is positive, since the transformation only takes care of the sign of the coefficients of the row being folded. Consequently, after the folding is finished, there are two possible states of the stack $$\begin{gathered}
\begin{array}{|ccc|}
& & \hat{\gamma}_{1} \\
& \iddots & \\
\hat{\gamma}_{2N} & & \\
\end{array}
\hspace{1cm}
\text{ or }
\hspace{1cm}
\begin{array}{|ccc|}
& & \hat{\gamma}_{1} \\
& \iddots & \\
-\hat{\gamma}_{2N} & & \\
\end{array} \nonumber \end{gathered}$$ In the first case, when all the coefficients are positive, the transformed Hamiltonian becomes $$\begin{gathered}
\sum_{k=1}^N \epsilon_k \left( \frac{1}{4} \left( \gamma_{2k-1} - i \gamma_{2k} \right) \left( \gamma_{2k-1} + i \gamma_{2k} \right) - \frac{1}{2} \right) \nonumber \\
= \sum_{k=1}^N \epsilon_k \left( \hat{c}_k^\dagger \hat{c}_k - \frac{1}{2} \right).\end{gathered}$$ The eigenstates of this Hamiltonian can be identified as occupation states, $$\begin{gathered}
|\varphi_l \rangle = \prod_{k=1}^{N} \left( \hat{c}_k^\dagger \right )^{n_k} |0\rangle,
\label{seventeen}\end{gathered}$$ and the corresponding eigenenergies are given by (\[pinky\]). The vacuum $|0\rangle$, or state without fermions, is simultaneously the system’s ground state.
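The folding sweep described above acts on the coefficient matrix $\hat{W}$ through a sequence of two-column Givens-type rotations. A minimal sketch of this reduction (the function below is illustrative and assumes only that $W$ is real orthogonal) checks that the sweep turns $W$ into the identity, except possibly for the sign of the last diagonal entry:

```python
import numpy as np

def fold(W):
    """Reduce a real orthogonal W to the identity (up to the sign of the
    last diagonal entry) by the sweep of two-mode rotations described in
    the text. Returns the folded matrix and the recorded angles."""
    W = W.copy()
    n = W.shape[0]
    angles = []
    for k in range(n):                  # row being folded
        for j in range(n - 1, k, -1):   # zero W[k, j] against W[k, j-1]
            if W[k, j] == 0.0 and W[k, j - 1] == 0.0:
                theta = 0.0
            else:
                # sign(sin) = sign(W[k, j]), sign(cos) = sign(W[k, j-1])
                theta = np.arctan2(W[k, j], W[k, j - 1])
            c, s = np.cos(theta), np.sin(theta)
            col_prev = c * W[:, j - 1] + s * W[:, j]   # W'_{j-1}
            col_j = c * W[:, j] - s * W[:, j - 1]      # W'_j, zero in row k
            W[:, j - 1], W[:, j] = col_prev, col_j
            angles.append((j, k, theta))
    return W, angles

rng = np.random.default_rng(7)
Q, _ = np.linalg.qr(rng.standard_normal((6, 6)))   # random orthogonal matrix
F, angles = fold(Q)
target = np.eye(6)
target[-1, -1] = np.sign(F[-1, -1])   # last row may fold with a minus sign
print(np.allclose(F, target))          # True
```

Because each rotation is orthogonal, the rows stay orthonormal throughout, which is why the last row folds automatically and only its sign remains undetermined.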
With regard to the second case, let us first point out that fermionic operators are given in terms of MO by the relation $$\begin{gathered}
\hat{c}_{j} = \frac{e^{-\frac{i\phi}{2}}}{2} \left( \hat{\gamma}_{2j-1} + i \hat{\gamma}_{2 j} \right), \text{ }
\hat{c}_{j}^\dagger = \frac{e^{\frac{i\phi}{2}}}{2} \left( \hat{\gamma}_{2j-1} - i \hat{\gamma}_{2 j} \right). \nonumber\end{gathered}$$ It can be seen that a negative sign in $\hat{\gamma}_{2N}$ induces a particle-hole transformation, $\hat{c}_N^\dagger \Leftrightarrow \hat{c}_N$, in such a way that after passing to the fermionic basis the reduced Hamiltonian becomes $$\begin{gathered}
\sum_{k=1}^{N-1} \epsilon_k \left( \hat{c}_k^\dagger \hat{c}_k - \frac{1}{2} \right) - \epsilon_N \left( \hat{c}_N^\dagger \hat{c}_N - \frac{1}{2} \right).\end{gathered}$$ The eigenstates of this reduced Hamiltonian can be built as in (\[seventeen\]), but taking into account that the ground state is not the vacuum but the state with one fermion in the $N$-th site $$\begin{gathered}
|\varphi_l \rangle = \prod_{k=1}^{N-1} \left( \hat{c}_k^\dagger \right )^{n_k} \hat{c}_N^{n_N} |0...01\rangle.
\label{zeera}\end{gathered}$$ Likewise, the expression for the associated eigenenergy is (\[pinky\]). To obtain the eigenstates of the original Kitaev chain, $| \psi_l \rangle$, one applies the transformations in reverse order over the states (\[seventeen\]) or (\[zeera\]), depending on the result of the folding. The operation can be written as $$\begin{gathered}
| \psi_l \rangle = \prod_{k=2N-1}^1 \left ( \prod_{j=k+1}^{2N} \hat{U}_k^{[j]} \right ) | \varphi_l \rangle.
\label{twenty}\end{gathered}$$ Both $| \psi_l \rangle$ and $| \varphi_l \rangle$ are eigenstates corresponding to $E_l$, because unitary transformations do not change eigenvalues. Using (\[eight\]) and (\[nine\]) it can be shown that the transformations that appear in (\[twenty\]) are given by $$\begin{gathered}
\hat{U}_k^{[j]} = e^{-i\theta_{j,k} \left ( \hat{c}_{\frac{j}{2}}^\dagger \hat{c}_{\frac{j}{2}} - \frac{1}{2} \right )}, \text{if $j$ is even},\end{gathered}$$ and $$\begin{gathered}
\hat{U}_k^{[j]} = \exp \left [\frac{i \theta_{j,k}}{2} \left ( -e^{\frac{i\phi}{2}} \hat{c}_{\frac{j-1}{2}} + e^{\frac{-i\phi}{2}} \hat{c}_{\frac{j-1}{2}}^\dagger \right ) \right . \times \nonumber \\
\left . \left ( e^{\frac{i\phi}{2}} \hat{c}_{\frac{j+1}{2}} + e^{\frac{-i\phi}{2}} \hat{c}_{\frac{j+1}{2}}^\dagger \right ) \right ], \text{if $j$ is odd}.\end{gathered}$$ Transformations with $j$ even operate only on site $\frac{j}{2}$ and in matrix form they can be written as $$\begin{gathered}
\hat{U}_k^{[j]} =
\left (
\begin{array}{cc}
e^{\frac{i\theta_{j,k}}{2}} & 0 \\
0 & e^{-\frac{i\theta_{j,k}}{2}} \\
\end{array}
\right ).
\label{twenty-one}\end{gathered}$$ This matrix is written with respect to occupation states with the order $|0\rangle, |1\rangle$. Transformations with $j$ odd operate non-trivially on consecutive sites $\frac{j-1}{2}$ and $\frac{j+1}{2}$ through the following matrix representation $$\begin{gathered}
\hat{U}_k^{[j]} =
\left (
\begin{array}{cccc}
\cos{\frac{\theta_{j,k}}{2}} & 0 & 0 & i\sin{\frac{\theta_{j,k}}{2}} \\
0 & \cos{\frac{\theta_{j,k}}{2}} & i\sin{\frac{\theta_{j,k}}{2}} & 0 \\
0 & i\sin{\frac{\theta_{j,k}}{2}} & \cos{\frac{\theta_{j,k}}{2}} & 0 \\
i\sin{\frac{\theta_{j,k}}{2}} & 0 & 0 & \cos{\frac{\theta_{j,k}}{2}}
\end{array}
\right ).
\label{twenty-two}\end{gathered}$$ In this case the basis order is $|00\rangle, |01\rangle, |10\rangle, |11\rangle$ (the first position for site $\frac{j-1}{2}$ and the second for site $\frac{j+1}{2}$). The block structure of the matrix reflects the fact that the Hamiltonian commutes with the parity operator (\[rainy\]), so that the eigenvectors inhabit spaces with even or odd parity. Because these matrices only mix states with the same parity, $| \psi_l \rangle$ and $| \varphi_l \rangle$ have equal parity.
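The two matrix representations (\[twenty-one\]) and (\[twenty-two\]) can be checked directly. The short sketch below (a verification written for this discussion, not part of the original protocol) confirms that both matrices are unitary and that the two-site one commutes with the local parity operator:

```python
import numpy as np

def U_even(theta):
    # Eq. (twenty-one): single-site transformation, basis order |0>, |1>.
    return np.diag([np.exp(0.5j * theta), np.exp(-0.5j * theta)])

def U_odd(theta):
    # Eq. (twenty-two): two-site transformation,
    # basis order |00>, |01>, |10>, |11>.
    c, s = np.cos(theta / 2), 1j * np.sin(theta / 2)
    return np.array([[c, 0, 0, s],
                     [0, c, s, 0],
                     [0, s, c, 0],
                     [s, 0, 0, c]])

theta = 0.73
P = np.diag([1.0, -1.0, -1.0, 1.0])   # two-site fermion-number parity
print(np.allclose(U_even(theta) @ U_even(theta).conj().T, np.eye(2)))  # True
print(np.allclose(U_odd(theta) @ U_odd(theta).conj().T, np.eye(4)))    # True
print(np.allclose(U_odd(theta) @ P, P @ U_odd(theta)))                 # True
```

The last check makes the parity argument explicit: (\[twenty-two\]) connects only $|00\rangle$ with $|11\rangle$ and $|01\rangle$ with $|10\rangle$, i.e. states of equal parity.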
In order to obtain the eigenstates of the Kitaev chain, the matrix $\hat{W}$ is first obtained using standard numerical routines. The entries of this matrix are then used to get the folding angles $\theta_{j,k}$ from Eq. (\[fragance\]). These angles are then employed to build the transformations composing expression (\[twenty\]) in matrix form. Since such transformations involve neighboring sites only, expression (\[twenty\]) can be computed using the updating protocols described in [@vidal], so that the final result is expressed in tensorial representation. A detailed description of how to apply tensor-product techniques to this particular problem is given in appendix \[tensor\].
Results {#tado}
=======
Before addressing the study of correlations in the open chain, the numerical approach proposed in the previous section is tested using the spectrum of the periodic chain. The single-body energies are given by [$$\begin{gathered}
E_{k}^{\pm} = \pm \sqrt{ \left( 2 w \cos \left( \frac{2\pi k}{N} \right) + \mu \right)^2 + 4 |\Delta|^2 \sin^2 \left( \frac{2\pi k}{N} \right)},\end{gathered}$$ ]{} for $1 \le k < \frac{N}{2}$. Additionally, $E_{\frac{N}{2}}^{+} = 2 w - \mu$ and $E_{\frac{N}{2}}^{-} = -2 w - \mu$. Such energies are numerically calculated as a by-product of the Schur decomposition in (\[sixteen\]) and then compared against these analytical results. Next, Eq. (\[pinky\]) is used to find the ground state energy. The tensorial representation of the ground state is then obtained following the folding protocol discussed before. The energy of such a ground state is calculated [*from this tensorial representation*]{}. This can be done as $N$ times the energy of two consecutive sites, but only for eigenstates with translational symmetry, which is the case as long as such eigenstates are nondegenerate. This result is then compared with the quantity obtained before as the sum of single-body-energies. The top plot in figure \[fig6\] shows the absolute difference between the two estimates.
The bottom plot depicts the mean number of particles, showing a behavior consistent with the contribution of a zero mode at $|\mu| = 2 |w|$.
Having verified the technique, let us now study correlations in the open chain. If the ground state is nondegenerate, $Z$ in (\[terror\]) can be determined from $$\begin{gathered}
Z = \lim_{N\rightarrow \infty}| Tr(\hat{\rho}_{1N} \hat{Q})|,
\label{ecopetrol}\end{gathered}$$ where $\hat{\rho}_{1N}$ is the reduced density matrix of the chain ends. The computation of such a matrix from a state written in tensorial representation is described in appendix \[rdm\]. Matrix $\hat{Q}$ is given by $$\begin{gathered}
\hat{Q} =
\left (
\begin{array}{cccc}
0 & 0 & 0 & 0 \\
0 & 0 & 2 (-1)^P & 0 \\
0 & 2 (-1)^P & 0 & 0 \\
0 & 0 & 0 & 0
\end{array}
\right),\end{gathered}$$ where $P$ is the ground state’s parity. $Z$ is found as the saturation value of (\[ecopetrol\]) with respect to $N$. The results are shown in table \[cervanto\]. The signs of $w$ and $\mu$ do not seem to influence the outcome. The observed behavior is compatible with the notion that $Z$ is correlated with the contribution of unpaired MOs. The maxima are located at points with total localization of MOs, and vanishing values characterize the trivial phase. One would expect the nonvanishing values of $Z$ to provide an assessment of the degree of localization of Majorana fermions at the edges.
-4 -3 -2 -1 0 1 2 3 4
---- ------- ------- ------- ------- ------- ------- ------- ------- -------
4 - 0.000 0.000 0.000 0.000 0.000 0.000 0.000 -
3 0.388 - 0.000 0.000 0.000 0.000 0.000 - 0.388
2 0.666 0.533 - 0.000 0.000 0.000 - 0.533 0.666
1 0.833 0.853 0.750 - 0.000 - 0.750 0.853 0.833
0 0.888 0.960 1.000 0.888 - 0.888 1.000 0.960 0.888
-1 0.833 0.853 0.750 - 0.000 - 0.750 0.853 0.833
-2 0.666 0.533 - 0.000 0.000 0.000 - 0.533 0.666
-3 0.388 - 0.000 0.000 0.000 0.000 0.000 - 0.388
-4 - 0.000 0.000 0.000 0.000 0.000 0.000 0.000 -
: Numerical estimation of $Z$ in Eq. (\[ecopetrol\]) for the ground state of a Kitaev chain with $\Delta=1$. The data strongly suggests that $Z$ is rational as long as $w$ and $\mu$ are rational as well. The hyphen indicates the parameters for which $Z$ decreases monotonically and slowly as $N$ grows. Otherwise $Z$ converges for values of $N$ in the range of the tens.[]{data-label="cervanto"}
Interestingly, the numerical values taken by $Z$ in table \[cervanto\] correspond to rational numbers, which allows the data to be fitted to the following analytical function $$\begin{gathered}
Z = \max \left ( \frac{4 |w \Delta| }{(|\Delta| + |w|)^2} \left( 1 - \left( \frac{\mu}{2w} \right)^2 \right),0 \right ).
\label{malta}\end{gathered}$$ The fact that correlations between the edge sites are conditioned by the existence of unpaired MOs is readily noticeable in this elementary formula. As can be appreciated, the relation is given in terms of simple algebraic functions, so that the power-law coefficients are rational. The calculation of $Z$ for the first excited state yields the same values as for the ground state, while for the second excited state it seems to give slightly smaller values. It remains to be seen whether equation (\[malta\]) can be derived entirely by analytical means, as can be expected due to the integrability of the problem. Analytical results available so far correspond to the cases [*i*]{}: $\Delta=w$ and [*ii*]{}: $\mu=0$ [@miao1]. Both instances display structural agreement with $Z$ in spite of differences in the correlation measure. This is because terms such as $\hat{c}_1 \hat{c}_N$ and its conjugate do not contribute to the expression $\langle i\hat{\gamma}_1 \hat{\gamma}_{2 N} \rangle$ in the thermodynamic limit.
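For reference, a direct evaluation of Eq. (\[malta\]) — a plain transcription of the fitted formula, with the function name chosen here — reproduces the limiting behaviors discussed above:

```python
import numpy as np

def Z_fit(w, mu, delta=1.0):
    """Fitted end-to-end correlation, Eq. (malta)."""
    if w == 0.0:
        return 0.0
    val = 4 * abs(w * delta) / (abs(delta) + abs(w)) ** 2 \
          * (1 - (mu / (2 * w)) ** 2)
    return max(val, 0.0)

# Total localization of the Majorana operators: Z = 1 at w = |Delta|, mu = 0.
print(Z_fit(1.0, 0.0))    # 1.0
# Trivial phase, |mu| >= 2|w|: Z vanishes.
print(Z_fit(1.0, 2.5))    # 0.0
# A generic topological point gives a rational value.
print(Z_fit(1.0, 1.0))    # 0.75
```

The `max(..., 0)` clamp encodes the boundary of the topological region, $|\mu| = 2|w|$, beyond which the correlation is zero.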
The presented evidence hints that end-to-end correlations are good indicators of the effects generated by edge modes in one dimension. Due to the topological features of these systems, it is reasonable to assume that this result is robust in the presence of disorder or interaction. As a consequence, correlations constitute a useful inspection mechanism whenever a decomposition in terms of diagonal modes is not feasible or a topological analysis of the system’s band structure is not practical. Notably, the actual correlation measure seems to be relevant: entanglement, which accounts for [*quantum correlations*]{}, vanishes exponentially as $N$ grows due to the mixedness developed by $\hat{\rho}_{1N}$.
Conclusions {#concl}
===========
Arguments supporting the suitability of end-to-end correlations as indicators of unpaired Majorana operators in the Kitaev chain are discussed. The proposal is verified by implementing a folding protocol, in combination with a tensor-state representation, to numerically evaluate a given correlation criterion. The results can be condensed into a consistent analytical expression that evidences the connection with Majorana fermions. These findings support the hypothesis that the same approach can be used in systems with additional elements like disorder or interaction. Given the characteristics of the Kitaev chain, it would be interesting to apply similar methodologies to study Berry phases around the degeneracy points where Majorana fermions are completely localized.
It is quite likely that the folding mechanism employed here has potential applications beyond the Kitaev chain. First, with some modifications it can be adapted to calculate time evolution or thermodynamic states. Second, it can also be applied to chains with long-range hopping or long-range pairing. However, in its current form the method can be used only for integrable models, because it is the diagonal modes that are actually folded. It is therefore desirable to develop a more versatile technique with a broader field of application. Nonetheless, the protocol can still be useful if interaction terms are reduced in a mean-field fashion, although it is not known how reliable such an approach is. Similarly, it is possible that the method has applications in the study of open quantum systems and the numerical solution of the Lindblad equation.
Acknowledgments
===============
This research has been funded by Vicerrectoría de Investigaciones, Extensión y Proyección Social from Universidad del Atlántico under the project “Simulación numérica de sistemas cuánticos altamente correlacionados”.
Elliott S.R. and Franz M. [*Colloquium: Majorana fermions in nuclear, particle, and solid-state physics*]{} Reviews of Modern Physics [**87**]{}:137 (2015).

Alicea J. [*New directions in the pursuit of Majorana fermions in solid state systems*]{} Reports on Progress in Physics [**75**]{}:076501 (2012).

Kitaev A. [*Unpaired Majorana fermions in quantum wires*]{} Physics-Uspekhi [**44**]{}:131 (2001).

Deng M., Vaitiekenas S., Hansen E., Danon J., Leijnse M., Flensberg K., Nygard J., Krogstrup P., Marcus C. [*Majorana bound state in a coupled quantum-dot hybrid-nanowire system*]{} Science [**354**]{}:1557 (2016).

Pawlak R., Kisiel M., Klinovaja J., Meier T., Kawai S., Glatzel T., Loss D., Meyer E. [*Probing atomic structure and Majorana wavefunctions in mono-atomic Fe chains on superconducting Pb surface*]{} Quantum Information [**2**]{}:16035 (2016).

Sun H., Zhang K., Hu L., Li C., Wang G., Ma H., Xu Z., Gao C., Guan D., Li Y., Liu C., Qian D., Zhou Y., Fu L., Li S., Zhang F., Jia J. [*Majorana Zero Mode Detected with Spin Selective Andreev Reflection in the Vortex of a Topological Superconductor*]{} Physical Review Letters [**116**]{}:257003 (2016).

Gergs N., Fritz L. and Schuricht D. [*Topological order in the Kitaev/Majorana chain in the presence of disorder and interactions*]{} Physical Review B [**93**]{}:075129 (2016).

Miao J., Jin H., Zhang F., Zhou Y. [*Majorana zero modes and long range edge correlation in interacting Kitaev chains: analytic solutions and density-matrix-renormalization-group study*]{} Scientific Reports [**8**]{}:488 (2018).

Wang Y., Miao J., Jin H., Chen S. [*Characterization of topological phases of dimerized Kitaev chain via edge correlation functions*]{} Physical Review B [**96**]{}:205428 (2017).

Miao J., Jin H., Zhang F., Zhou Y. [*Exact solution for the interacting Kitaev chain at the symmetric point*]{} Physical Review Letters [**118**]{}:267701 (2017).

Katsura H., Schuricht D., Takahashi M. [*Exact ground states and topological order in interacting Kitaev/Majorana chains*]{} Physical Review B [**92**]{}:115137 (2015).

Fidkowski L. and Kitaev A. [*Topological phases of fermions in one dimension*]{} Physical Review B [**83**]{}:075103 (2011).

Vidal G. [*Efficient classical simulation of slightly entangled quantum computations*]{} Physical Review Letters [**91**]{}:147901 (2003).

Reslen J. [*Operator folding and matrix product states in linearly-coupled bosonic arrays*]{} Revista Mexicana de Fisica [**59**]{}:482 (2013).

Reslen J. [*Mode folding in systems with local interaction: unitary and non-unitary transformations using tensor states*]{} Journal of Physics A: Mathematical and Theoretical [**48**]{}:175301 (2015).

Horn R. and Johnson C. [*Matrix analysis*]{} Cambridge: Cambridge University Press (1985).

Wagner M. [*Unitary transformations in solid state physics*]{} Amsterdam: North-Holland Physics Publishing (1986).

Banerjee S. and Roy A. [*Linear algebra and matrix analysis for statistics*]{} Boca Raton: Chapman and Hall (2014).
Tensorial representation of a fermion chain {#tensor}
===========================================
The reduction scheme presented in the main text can be used to write the state as a product of tensors [@vidal]. The basic principle behind such a representation is the use of the Schmidt vectors [@wiki] that emerge in one-dimensional systems to build basis states that support the global quantum state. Schematically, the state of a fermion chain can be represented as in figure \[fig2\], using empty circles for unoccupied sites and black circles for sites with one fermion. Let us initially focus on one site of the chain, arbitrarily chosen. The Hilbert space of that site can be expanded using a local basis $|k\rangle$. The elements of such a basis are $|0\rangle$, representing an empty site, and $|1\rangle$, representing an occupied site. To complement the Hilbert space of the chain, one can consider the Schmidt vectors covering all the sites to the left of $|k\rangle$, plus the Schmidt vectors to the right, as shown in the upper drawing of figure \[fig2\]. As can be seen, such vectors are represented as $|\mu_{\vdash}\rangle$ and $|\nu_{\dashv}\rangle$ respectively. As these vectors are taken as a basis, the total quantum state is given as a superposition of such vectors, as follows $$\begin{gathered}
| \psi \rangle = \sum_{\mu} \sum_{\nu} \sum_{k=0}^1 \lambda_{\mu} \Gamma_{\mu \nu}^{k} \lambda_{\nu} |\mu_{\vdash} \rangle |k \rangle |\nu_{\dashv} \rangle.
\label{kirko}\end{gathered}$$ The variables $\lambda_\mu$ and $\lambda_\nu$ are Schmidt coefficients and as such are real and positive. Although these coefficients can in principle be absorbed in the definition of the $\Gamma$’s, their inclusion is an integral part of the protocol. The superposition coefficients are stored in the components of tensor $\Gamma_{\mu \nu}^{k}$. As a result this tensor is in general complex. Notice that the Schmidt vectors are orthogonal $$\begin{gathered}
\langle \mu_{\vdash} | \mu_{\vdash}' \rangle = \delta_{\mu}^{\mu'} \text{ and } \langle \nu_{\dashv} | \nu_{\dashv}' \rangle = \delta_{\nu}^{\nu'}.\end{gathered}$$ If the same expansion is done for each place of the chain, a set of tensors with no apparent connection among them is created and can serve as a representation of $|\psi \rangle$. One positive aspect of this representation is that a local unitary operation like (\[twenty-one\]) acting on site $l$ has a simple implementation [$$\begin{gathered}
\hat{U}^{[2l,2l-1]} | \psi \rangle = \sum_{\mu} \sum_{\nu}
\sum_{k=0}^1 \lambda_{\mu} e^{i \theta_{2l} \left( \frac{1}{2} - k
\right) } \Gamma_{\mu \nu}^{k} \lambda_{\nu} |\mu_{\vdash} \rangle |k \rangle |\nu_{\dashv} \rangle \nonumber \\
= \sum_{\mu} \sum_{\nu} \sum_{k=0}^1 \lambda_{\mu} {\Gamma_{\mu \nu}^{k}}' \lambda_{\nu} |\mu_{\vdash} \rangle |k \rangle |\nu_{\dashv} \rangle.\end{gathered}$$ ]{} The change involves redefining the tensors as follows $$\begin{gathered}
{\Gamma_{\mu \nu}^{k}}' = e^{i \theta_{2l} \left( \frac{1}{2} - k \right) } \Gamma_{\mu \nu}^{k}.\end{gathered}$$ To see how coefficients from different sites relate, let us take vector $|\mu_{\vdash}\rangle$ and write it as a product of the local basis, $|j\rangle$, and the Schmidt vectors to the left, $|\xi_{\vdash} \rangle$, as indicated in the middle sketch of figure \[fig2\]. In addition, let us represent this expansion in the following manner $$\begin{gathered}
| \mu_{\vdash} \rangle = \sum_{\xi} \sum_{j=0}^1 \lambda_\xi \Gamma_{\xi \mu}^j |\xi_{\vdash} \rangle |j \rangle.\end{gathered}$$ Substituting this expression into (\[kirko\]) yields [$$\begin{gathered}
| \psi \rangle = \sum_{\xi} \sum_{\nu} \sum_{j=0}^1 \sum_{k=0}^1
\lambda_\xi \left ( \sum_{\mu} \Gamma_{\xi \mu}^j \lambda_{\mu}
\Gamma_{\mu \nu}^{k} \right ) \lambda_{\nu} |\xi_{\vdash} \rangle |j \rangle |k \rangle |\nu_{\dashv} \rangle = \nonumber \\
\sum_{\mu} \lambda_{\mu} \left( \sum_{\xi} \sum_{j=0}^1 \lambda_\xi \Gamma_{\xi \mu}^j |\xi_{\vdash} \rangle |j \rangle \right) \left( \sum_{\nu} \sum_{k=0}^1 \Gamma_{\mu \nu}^{k} \lambda_{\nu} |k \rangle |\nu_{\dashv} \rangle \right).
\label{twenty-six}\end{gathered}$$ ]{} In the last expression the chain has been divided as a Schmidt decomposition with Schmidt coefficients $\lambda_\mu$. This implies that the set of vectors $$\begin{gathered}
| \mu_{\dashv} \rangle = \sum_{\nu} \sum_{k=0}^1 \Gamma_{\mu \nu}^k \lambda_\nu |k \rangle |\nu_{\dashv} \rangle,\end{gathered}$$ must be a set of Schmidt vectors to the right, making $\langle \mu_{\dashv} | \mu_{\dashv}' \rangle = \delta_{\mu}^{\mu'}$. The Schmidt decomposition of the state can then be written in the familiar form $$\begin{gathered}
| \psi \rangle = \sum_{\mu} \lambda_\mu | \mu_{\vdash} \rangle |\mu_{\dashv} \rangle. \end{gathered}$$ Let us now consider a unitary operation acting on consecutive sites $l$ and $l+1$, as for instance transformation (\[twenty-two\]). The operation can be represented as [$$\begin{gathered}
\hat {U}^{[2l+1,2l]} | \psi \rangle = \sum_{\xi} \sum_{\nu} \sum_{J=0}^1
\sum_{K=0}^1 \nonumber \\
\left ( \lambda_\xi \lambda_{\nu} \sum_{j=0}^1 \sum_{k=0}^1 U_{JK,jk} \sum_{\mu} \Gamma_{\xi \mu}^j \lambda_{\mu} \Gamma_{\mu \nu}^{k} \right ) |\xi_{\vdash} \rangle |J \rangle |K \rangle |\nu_{\dashv} \rangle.
\label{twenty-five}\end{gathered}$$ ]{} The resulting expression is no longer an evident Schmidt decomposition, but it can be rearranged into one as follows. Let us write the expression in parentheses as $$\begin{gathered}
\lambda_\xi \lambda_\nu \sum_{j=0}^1 \sum_{k=0}^1 \sum_{\mu} U_{JK,jk}
\Gamma_{\xi \mu}^j \lambda_{\mu} \Gamma_{\mu \nu}^{k} = M_{\xi J, K
\nu} = M_{\alpha,\beta}. \nonumber\end{gathered}$$ In the last step the pairs of indices $(\xi, J)$ and $(K, \nu)$ have been replaced by single indices $\alpha$ and $\beta$. Notice that grouping indices is essentially a notation change. It relies on the possibility of joining Hilbert spaces of adjacent sections of the chain. Matrix $M_{\alpha,\beta}$ has no restrictions apart from normalization. It is in general complex and is not necessarily square. Such a matrix can be written as a product of (less arbitrary) matrices applying a singular value decomposition (SVD) [@wiki] $$\begin{gathered}
M_{\alpha,\beta} = \sum_{\alpha'} \sum_{\beta'} T_{\alpha,\alpha'} \Lambda_{\alpha',\beta'} T_{\beta',\beta}.
\label{twenty-four}\end{gathered}$$ Both $T_{\alpha,\alpha'}$ and $T_{\beta',\beta}$ (different matrices) are complex and unitary, their rows (or columns) being orthogonal vectors. They are also square matrices. Matrix $\Lambda_{\alpha',\beta'}$ is real and diagonal. $$\begin{gathered}
\Lambda_{\alpha',\beta'} =
\left(
\begin{array}{cccc}
\lambda_1 & 0 & 0 & ... \\
0 & \lambda_2 & 0 & ... \\
0 & 0 & \lambda_3 & ... \\
\vdots & \vdots & \vdots & \ddots
\end{array}
\right)\end{gathered}$$ Normalization requires $\lambda_1^2 + \lambda_2^2 + \lambda_3^2 +... = 1$. All the $\lambda$’s are positive. In many numerical applications the number of $\lambda$’s is artificially fixed. Here the number of coefficients is handled dynamically and only those below numerical precision are discarded.
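The decomposition (\[twenty-four\]) and the dynamical handling of the $\lambda$’s can be illustrated with a short sketch (hypothetical code, not taken from the paper; the variable names are ours). NumPy’s SVD plays the role of the decomposition, and singular values below numerical precision are discarded:

```python
import numpy as np

# Illustration only: decompose a normalized complex matrix M and keep
# the Schmidt coefficients above numerical precision.
rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
M /= np.linalg.norm(M)                 # normalization: sum of lambda^2 is 1

T1, lam, T2 = np.linalg.svd(M)         # M = T1 @ diag(lam) @ T2
keep = lam > 1e-14                     # dynamical truncation of the lambdas
T1, lam, T2 = T1[:, keep], lam[keep], T2[keep, :]

assert np.allclose(M, T1 @ np.diag(lam) @ T2)   # decomposition reproduces M
assert np.isclose(np.sum(lam**2), 1.0)          # lambda_1^2 + lambda_2^2 + ... = 1
```

Here `T1` and `T2` correspond to $T_{\alpha,\alpha'}$ and $T_{\beta',\beta}$, and `lam` holds the diagonal of $\Lambda_{\alpha',\beta'}$.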
The double sum in (\[twenty-four\]) can be reduced to a single sum $$\begin{gathered}
M_{\alpha,\beta} = \sum_{\mu'} T_{\alpha,\mu'} \lambda_{\mu'} T_{\mu',\beta}.\end{gathered}$$ One can in addition write the labels $\alpha$ and $\beta$ in terms of the original labels $$\begin{gathered}
M_{\alpha,\beta} = \sum_{\mu'} T_{\xi J,\mu'} \lambda_{\mu'} T_{\mu',K \nu} = \sum_{\mu'} \lambda_\xi \Gamma_{\xi \mu'}^J \lambda_{\mu'} \Gamma_{\mu' \nu}^K \lambda_\nu.\end{gathered}$$ In the last step the components of the $T$’s have been renamed. Notice that the $\Gamma$’s in the last sum are different from the ones appearing in the initial state. No emphasized distinction is made in order not to overload the notation, but tensors with $\mu'$ are all new. Also notice that neither $\lambda_\xi$ nor $\lambda_\nu$ have changed. As $J$ and $K$ are integer labels without explicit meaning, it is valid to rename them with their lower-case equivalents $j$ and $k$. Introducing the final expression in (\[twenty-five\]) gives [$$\begin{gathered}
\hat {U}^{[2l+1,2l]} | \psi \rangle = \sum_{\xi} \sum_{\nu} \sum_{j=0}^1
\sum_{k=0}^1 \lambda_\xi \left ( \sum_{\mu} \Gamma_{\xi \mu'}^j
\lambda_{\mu'} \Gamma_{\mu' \nu}^{k} \right ) \lambda_{\nu}
|\xi_{\vdash} \rangle |j \rangle |k \rangle |\nu_{\dashv} \rangle.
\nonumber \end{gathered}$$ ]{} The state is in “canonical form”, i.e., written with respect to the (new) Schmidt vectors of the chain, since the states formed as $$\begin{gathered}
| \mu_{\vdash}' \rangle = \sum_{\xi} \sum_{j=0}^1 \lambda_\xi \Gamma_{\xi \mu'}^j |\xi_{\vdash} \rangle |j \rangle,\end{gathered}$$ and $$\begin{gathered}
| \mu_{\dashv}' \rangle = \sum_{\nu} \sum_{k=0}^1 \Gamma_{\mu' \nu}^k \lambda_\nu |k \rangle |\nu_{\dashv} \rangle,\end{gathered}$$ are orthogonal and normalized because they are the entries of matrices $T_{\alpha,\alpha'}$ and $T_{\beta',\beta}$ respectively.
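The whole two-site update, from building $M_{\alpha,\beta}$ to recovering the new $\Gamma$’s, can be sketched as follows (a hypothetical implementation under our own index conventions, not code from the paper; the division by the boundary $\lambda$’s assumes they are nonzero):

```python
import numpy as np

def two_site_update(lam_l, G_a, lam_m, G_b, lam_r, U, tol=1e-14):
    """Apply a two-site unitary U (4x4, acting on the physical pair (j,k))
    and restore the canonical form by an SVD.
    G_a[j, xi, mu] = Gamma^j_{xi mu}, shape (2, chiL, chiM);
    G_b[k, mu, nu] = Gamma^k_{mu nu}, shape (2, chiM, chiR)."""
    chi_l, chi_r = len(lam_l), len(lam_r)
    # theta_{xi j k nu} = lam_xi Gamma^j_{xi mu} lam_mu Gamma^k_{mu nu} lam_nu
    theta = np.einsum('x,jxm,m,kmn,n->xjkn', lam_l, G_a, lam_m, G_b, lam_r)
    # act with U on the physical indices (j,k)
    theta = np.einsum('JKjk,xjkn->xJKn', U.reshape(2, 2, 2, 2), theta)
    # group (xi,J) and (K,nu) into single indices alpha and beta, then decompose
    M = theta.reshape(chi_l * 2, 2 * chi_r)
    T1, lam_new, T2 = np.linalg.svd(M, full_matrices=False)
    keep = lam_new > tol                        # dynamical truncation
    T1, lam_new, T2 = T1[:, keep], lam_new[keep], T2[keep, :]
    # strip the boundary lambdas to recover the new Gamma tensors
    G_a_new = (T1.reshape(chi_l, 2, -1) / lam_l[:, None, None]).transpose(1, 0, 2)
    G_b_new = (T2.reshape(-1, 2, chi_r) / lam_r[None, None, :]).transpose(1, 0, 2)
    return G_a_new, lam_new, G_b_new

# Vacuum on two sites (Gamma^0 = 1, Gamma^1 = 0, all bond dimensions 1)
G0 = np.zeros((2, 1, 1), dtype=complex); G0[0, 0, 0] = 1.0
one = np.ones(1)
U = np.array([[1, 0, 0, 1j], [0, 1, 1j, 0], [0, 1j, 1, 0], [1j, 0, 0, 1]]) / np.sqrt(2)

Ga, lam, Gb = two_site_update(one, G0, one, G0, one, U)
# (|00> + i|11>)/sqrt(2) has two equal Schmidt coefficients
assert np.allclose(lam, [1 / np.sqrt(2), 1 / np.sqrt(2)])
```

The final block reproduces the result of the Example below: acting with the given $4\times 4$ unitary on the two-site vacuum produces two Schmidt coefficients equal to $1/\sqrt{2}$.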
This representation is very convenient for calculating local mean values. Using (\[kirko\]) it can be shown that the reduced density matrix of a given site is $$\begin{gathered}
\hat{\rho}_{k , k'} = \sum_{k=0}^1 \sum_{k'=0}^1 \sum_\mu \sum_\nu \lambda_\mu^2 \lambda_\nu^2 \Gamma_{\mu \nu}^k \Gamma_{\mu \nu}^{k' *}| k \rangle \langle k' |.\end{gathered}$$ A mean value corresponding to a matrix $\hat{\tau}$ that operates exclusively on that site can be found as $$\begin{gathered}
\langle \hat{\tau} \rangle = Tr(\hat{\rho} \hat{\tau}).\end{gathered}$$ One can work in an analogous way in the space of two consecutive positions using the corresponding reduced density matrix. It can be shown that this matrix can be written as $$\begin{gathered}
\hat{\rho}_{ j k, j' k' } = \sum_{j k} \sum_{j' k'} \sum_{\xi} \sum_{\nu} {}_{\xi}^{j}Y_{\nu}^{k} \text{ } {}_{\xi}^{j'}Y_{\nu}^{ k'\text{} *} |j k\rangle \langle j' k'|,\end{gathered}$$ such that $$\begin{gathered}
{}_{\xi}^{j} Y_{\nu}^{k} = \lambda_{\xi} \lambda_{\nu} \sum_{\mu} \Gamma_{\xi \mu}^{j} \lambda_\mu \Gamma_{\mu \nu}^k.\end{gathered}$$ Sometimes it is also useful to know how to obtain the state coefficients in the Fock basis in terms of this tensor representation. Such a relation can be derived following the arguments in reference [@vidal], thus yielding $$\begin{gathered}
c_{k_1 k_2 ... k_N} = \sum_{\mu} \sum_{\nu} ... \sum_{\xi} \Gamma_{1 \mu }^{k_1} \lambda_\mu \Gamma_{\mu \nu }^{k_2} \lambda_\nu ... \lambda_\xi \Gamma_{\xi 1 }^{k_N}.\end{gathered}$$ These operations can be efficiently computed if the number of Schmidt coefficients involved is not too large.
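The contraction giving $c_{k_1 k_2 ... k_N}$ is a simple chain of matrix–vector products. The following sketch (a hypothetical helper; the shape conventions are ours, not the paper’s) makes this explicit:

```python
import numpy as np

def fock_coefficient(ks, Gammas, lams):
    """Contract Gamma^{k_1} lambda Gamma^{k_2} lambda ... Gamma^{k_N}.
    Gammas[n] has shape (2, chi_n, chi_{n+1}), boundary bond dimensions 1;
    lams[n] holds the Schmidt coefficients on the bond right of site n+1."""
    v = np.ones(1, dtype=complex)          # trivial left boundary index
    for n, k in enumerate(ks):
        v = v @ Gammas[n][k]               # multiply by Gamma^{k_n}
        if n < len(ks) - 1:
            v = v * lams[n]                # insert the bond Schmidt coefficients
    return v[0]

# Canonical form of (|00> + i|11>)/sqrt(2), as derived in the Example below
G1 = np.zeros((2, 1, 2), dtype=complex); G1[0, 0, 0] = 1.0; G1[1, 0, 1] = 1j
G2 = np.zeros((2, 2, 1), dtype=complex); G2[0, 0, 0] = 1.0; G2[1, 1, 0] = 1.0
lam = np.array([1, 1]) / np.sqrt(2)

assert np.isclose(fock_coefficient((0, 0), [G1, G2], [lam]), 1 / np.sqrt(2))
assert np.isclose(fock_coefficient((1, 1), [G1, G2], [lam]), 1j / np.sqrt(2))
```

The cost of each coefficient is linear in $N$ and quadratic in the number of retained Schmidt coefficients, consistent with the remark above.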
Example
-------
Let us initially consider a chain with no fermions. In the Fock basis the state is given by $$\begin{gathered}
|\psi \rangle = |000...0\rangle.\end{gathered}$$ When this state is split in adjacent parts, the corresponding Schmidt decomposition is trivial: There is one vector to the left, one vector to the right and the only Schmidt coefficient is $1$. From this observation the canonical decomposition can be built directly $$\begin{gathered}
\lambda = 1, \text{ } \Gamma_{1 1}^0 = 1, \text{ } \Gamma_{1 1}^1 = 0.\end{gathered}$$ The same pattern repeats for every place of the chain. Now let us consider the following unitary operation $$\begin{gathered}
\hat{U} =
\frac{1}{\sqrt{2}}
\left (
\begin{array}{cccc}
1 & 0 & 0 & i \\
0 & 1 & i & 0 \\
0 & i & 1 & 0 \\
i & 0 & 0 & 1
\end{array}
\right ).\end{gathered}$$ For simplicity let us suppose that $\hat{U}$ acts on the first two places. The action of this operator on the state is $$\begin{gathered}
\hat{U}^{[3]} |\psi \rangle = \frac{1}{\sqrt{2}} \left( |00 \rangle + i|11\rangle \right) |0...0\rangle.\end{gathered}$$ To build a canonical decomposition (the canonical decomposition is not unique), one sees the state as a composition of a local basis plus the Schmidt vectors to the right and left. Taking the local basis of the first site, the state can be written as $$\begin{gathered}
\hat{U}^{[3]} |\psi \rangle = \frac{1}{\sqrt{2}} \left( |0 \rangle |00...0\rangle + i|1 \rangle |10...0\rangle \right).\end{gathered}$$ Vectors $|\nu_1\rangle = |00...0\rangle$ and $|\nu_2\rangle = |10...0\rangle$ are normalized and orthogonal, therefore, they are valid Schmidt vectors. In this form it is possible to read out the canonical coefficients, finding $$\begin{gathered}
\lambda_1^{[1]} = \frac{1}{\sqrt{2}}, \text{ } \lambda_2^{[1]} = \frac{1}{\sqrt{2}}, \\
\Gamma_{1 1}^{0[1]} = 1, \text{ } \Gamma_{1 1}^{1[1]} = 0, \\
\Gamma_{1 2}^{0[1]} = 0, \text{ } \Gamma_{1 2}^{1[1]} = i.\end{gathered}$$ The superscript $[1]$ is added to emphasize that these coefficients correspond to the first site. With respect to this decomposition, the state can be visualized in the following manner (with the superscript omitted) $$\begin{gathered}
\hat{U}^{[3]} |\psi \rangle = \lambda_1 \Gamma_{1 1}^0 |0 \rangle |\nu_1 \rangle + \lambda_2 \Gamma_{1 2}^1 |1 \rangle |\nu_2\rangle.\end{gathered}$$ This case is simple enough to allow a direct determination of the canonical representation. In other circumstances the protocol presented in the previous section can be used to build the representation systematically, in accordance with the original proposal.
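The Schmidt coefficients read off above can also be checked directly (a sketch, not code from the paper): the singular values of the two-site coefficient matrix $c_{jk}$ are precisely $\lambda_1 = \lambda_2 = 1/\sqrt{2}$.

```python
import numpy as np

# Coefficient matrix of (|00> + i|11>)/sqrt(2): c_{jk}
C = np.array([[1, 0], [0, 1j]], dtype=complex) / np.sqrt(2)

lam = np.linalg.svd(C, compute_uv=False)   # singular values = Schmidt coefficients
assert np.allclose(lam, [1 / np.sqrt(2), 1 / np.sqrt(2)])
assert np.isclose(np.sum(lam**2), 1.0)     # normalization
```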
Reduced density matrix of the chain ends {#rdm}
========================================
To find the reduced density matrix the state is written as a tensor product making explicit reference to the components of each site $$\begin{gathered}
|\psi \rangle = \sum_{\alpha_1,\alpha_2,...,\alpha_{N-1}} | \alpha_0 \alpha_1 \rangle^{[1]} | \alpha_1 \alpha_2 \rangle^{[2]}... \nonumber \\
|\alpha_{N-2} \alpha_{N-1}\rangle^{[N-1]} |\alpha_{N-1} \alpha_{N}\rangle^{[N]},\end{gathered}$$ where $$\begin{gathered}
|\alpha_k \alpha_l \rangle^{[n]} = \sum_{j} \Gamma_{\alpha_k \alpha_l}^{j[n]} \lambda_{\alpha_l}^{[n]} |j\rangle.
\label{mist}\end{gathered}$$ The superscript in square brackets is used to specify the position in the chain associated to the corresponding tensor. Such a superscript is dropped in the subsequent development to simplify the notation. The reduction can be effectuated by bracketing corresponding spaces [$$\begin{gathered}
\hat{\rho}_{1N} = Tr_{\{2,3...,N-1\}}( |\psi \rangle \langle \psi | ) = \nonumber \\
\sum_{ \binom{\alpha_1,\alpha_2,...,\alpha_{N-1}}{{\alpha'}_1,{\alpha'}_2,...,{\alpha'}_{N-1}}
} |\alpha_0 \alpha_1 \rangle \langle {\alpha_0' \alpha'}_1 | \langle \alpha_1' \alpha_2' |{\alpha}_1 {\alpha}_2 \rangle
\langle \alpha_2' \alpha_3' |{\alpha}_2 {\alpha}_3 \rangle \nonumber \\
...\langle \alpha_{N-2}' \alpha_{N-1}' |{\alpha}_{N-2} {\alpha}_{N-1} \rangle |\alpha_{N-1} \alpha_{N} \rangle \langle {\alpha'}_{N-1} \alpha_{N}' |. \nonumber \end{gathered}$$ ]{} The above expression can also be written as a concatenation of index contractions [$$\begin{gathered}
\sum_{\alpha_1,{\alpha'}_1,\alpha_{N-1},{\alpha'}_{N-1}} |\alpha_0 \alpha_1 \rangle \langle {\alpha_0' \alpha'}_1 | M_{ \{ \alpha_1 {\alpha'}_1 \} \{ \alpha_2 {\alpha'}_2 \}} M_{ \{ \alpha_2 {\alpha'}_2 \} \{ \alpha_3 {\alpha'}_3 \} } \nonumber \\
... M_{ \{ \alpha_{N-2} {\alpha'}_{N-2} \} \{ \alpha_{N-1} {\alpha'}_{N-1} \} } |\alpha_{N-1} \alpha_{N} \rangle \langle {\alpha'}_{N-1} \alpha_{N}' |. \nonumber \end{gathered}$$ ]{} Thus, it all can be written as a single connecting matrix [$$\begin{gathered}
\hat{\rho}_{1N} = \sum_{\alpha_1,{\alpha'}_1,\alpha_{N-1},{\alpha'}_{N-1}} |\alpha_0 \alpha_1 \rangle \langle { \alpha_0' \alpha'}_1 | M_{ \{ \alpha_1 {\alpha'}_1 \} \{ \alpha_{N-1} {\alpha'}_{N-1} \} } \nonumber \\
...|\alpha_{N-1} {\alpha}_{N} \rangle \langle {\alpha'}_{N-1} {\alpha'}_{N} |. \nonumber\end{gathered}$$ ]{} The calculation of $M_{ \{ \alpha_1 {\alpha'}_1 \} \{ \alpha_{N-1} {\alpha'}_{N-1} \} }$ unavoidably involves all the tensors in the bulk of the representation and is the numerically heaviest task of the procedure. The resulting expression is a $4\times4$ matrix in the Fock basis of the chain edges.
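The contraction described above can be sketched in a few lines (a hypothetical implementation; the function name and index conventions are ours, not the paper’s). The bulk loop below is the transfer-matrix product that makes up $M_{ \{ \alpha_1 {\alpha'}_1 \} \{ \alpha_{N-1} {\alpha'}_{N-1} \} }$, the numerically heaviest step:

```python
import numpy as np

def rho_ends(Gammas, lams):
    """Reduced density matrix of the first and last sites, as a 4x4 matrix.
    Gammas[n] has shape (2, chi_n, chi_{n+1}) (boundary bonds of dimension 1);
    lams[n], shape (chi_{n+1},), sits on the bond right of site n+1 (the last
    one is the trivial [1.0]), matching the definition of |alpha_k alpha_l>."""
    # dress each tensor with the Schmidt coefficients on its right bond
    A = [np.einsum('jab,b->jab', G, l) for G, l in zip(Gammas, lams)]
    # keep the physical index of the first site open in ket and bra
    L = np.einsum('jab,JaB->jJbB', A[0], np.conj(A[0]))
    for An in A[1:-1]:
        T = np.einsum('jbc,jBC->bBcC', An, np.conj(An))  # bulk transfer matrix
        L = np.einsum('jJbB,bBcC->jJcC', L, T)
    # close with the last site, keeping its physical index open
    rho = np.einsum('jJbB,kbc,KBc->jkJK', L, A[-1], np.conj(A[-1]))
    return rho.reshape(4, 4)   # rows |jk>, columns <j'k'|

# Check on the vacuum chain: the ends are in |00><00|
G0 = np.zeros((2, 1, 1), dtype=complex); G0[0, 0, 0] = 1.0
rho = rho_ends([G0] * 4, [np.ones(1)] * 4)
expected = np.zeros((4, 4)); expected[0, 0] = 1.0
assert np.allclose(rho, expected)
```

The cost is dominated by the bulk loop, whose operation count grows linearly with $N$ and polynomially with the bond dimension.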
|
---
abstract: '5G wireless networks are expected to support Ultra Reliable Low Latency Communications (URLLC) traffic which requires very low packet delays (< 1 msec) and extremely high reliability ($\sim$99.999%). In this paper we focus on the design of a wireless system supporting downlink URLLC traffic. Using a queuing network based model for the wireless system we characterize the effect of various design choices on the maximum URLLC load it can support, including: 1) system parameters such as the bandwidth, link SINR, and QoS requirements; 2) resource allocation schemes in Orthogonal Frequency Division Multiple Access (OFDMA) based systems; and 3) Hybrid Automatic Repeat Request (HARQ) schemes. Key contributions of this paper which are of practical interest are: 1) study of how the minimum required system bandwidth to support a given URLLC load scales with associated QoS constraints; 2) characterization of optimal OFDMA resource allocation schemes which maximize the admissible URLLC load; and 3) optimization of a repetition code based HARQ scheme which approximates Chase HARQ combining.'
author:
- 'Arjun Anand, Gustavo de Veciana, [^1]'
bibliography:
- 'bibJournalList.bib'
- 'arjun.bib'
- 'ss-3.bib'
title: Resource Allocation and HARQ Optimization for URLLC Traffic in 5G Wireless Networks
---
URLLC, resource allocation, OFDMA, HARQ, wireless networks.
[^1]: This work is supported by Futurewei Technologies.
|
---
abstract: 'The cross-site linking function is widely adopted by online social networks (OSNs). This function allows a user to link her account on one OSN to her accounts on other OSNs. Thus, users are able to sign in with the linked accounts, share contents among these accounts and import friends from them. It leads to the service integration of different OSNs. This integration not only provides convenience for users to manage accounts of different OSNs, but also introduces usefulness to OSNs that adopt the cross-site linking function. In this paper, we investigate this usefulness based on users’ data collected from a popular OSN called Medium. We conduct a thorough analysis on its social graph, and find that the service integration brought by the cross-site linking function is able to change Medium’s social graph structure and attract a large number of new users. However, almost none of the new users would become high PageRank users (PageRank is used to measure a user’s influence in an OSN). To solve this problem, we build a machine-learning-based model to predict high PageRank users in Medium based on their Twitter data only. This model achieves a high F1-score of 0.942 and a high area under the curve (AUC) of 0.986. Based on it, we design a system to assist new OSNs to identify and attract high PageRank users from other well-established OSNs through the cross-site linking function.'
author:
- '[Email: {feili14, chenyang, xieronglucy}@fudan.edu.cn, {fehmi.ben.abdesslem, anders.lindgren}@ri.se]{}'
bibliography:
- 'ms.bib'
title: |
Understanding Service Integration of Online\
Social Networks: A Data-Driven Study
---
Service Integration, Online Social Networks, Cross-site Linking, High PageRank Users, Prediction, Medium.
Acknowledgement {#acknowledgement .unnumbered}
===============
This work is sponsored by National Natural Science Foundation of China (No. 61602122, No. 71731004), Natural Science Foundation of Shanghai (No. 16ZR1402200), Shanghai Pujiang Program (No. 16PJ1400700). Yang Chen is the corresponding author.
|
---
abstract: 'We establish a criterion for when an abelian extension of infinite-dimensional Lie algebras $\mathfrak{\hat{g}} = \mathfrak{g} \oplus_\omega \mathfrak{a}$ integrates to a corresponding Lie group extension $A \hookrightarrow \widehat{G} \twoheadrightarrow G$, where $G$ is connected, simply connected and $A \cong \mathfrak{a} \slash \Gamma$ for some discrete subgroup $\Gamma \subseteq \mathfrak{a}$. When $\pi_1(G)\neq 0$, the kernel $A$ is replaced by a central extension $\widehat{A}$ of $\pi_1(G)$ by $A$.'
address: 'Mathematical Physics, Royal Institute of Technology, SE-106 91 Stockholm, Sweden'
author:
- Pedram Hekmati
date: 'November 25, 2006'
title: Integrability Criterion for Abelian Extensions of Lie Groups
---
Introduction
============
Given a group $G$ with a normal subgroup $N$, one can construct the quotient group $H = G/N$. The theory of group extensions addresses the converse problem. Starting with $H$ and $N$, what different groups $G$ can arise containing $N$ as a normal subgroup such that $H \cong G/N$? The problem can be formulated for infinite-dimensional Lie groups, but the situation is more delicate. Many familiar theorems break down and one must take into account topological obstructions. In particular, Lie’s third theorem no longer holds and the question of integrability, i.e. whether a Lie algebra corresponds to a Lie group, becomes relevant.\
The aim of this paper is to establish an integrability criterion for abelian extensions of infinite-dimensional Lie groups by generalizing a geometric construction for gauge groups, [@M2; @M3; @M4]. In sections 2 and 3 we review the basic definitions of infinite-dimensional Lie groups and their abelian extensions. Section 4 gives a detailed account of the construction which leads up to the integrability criterion.
Infinite-dimensional lie groups
===============================
We define infinite-dimensional Lie groups along the lines of [@M5], which should be consulted for further details and for concrete examples. The first step is to define the concept of an infinite-dimensional smooth manifold. Here the bottom line is to replace $\mathbb{R}^n$ (or $\mathbb{C}^n$) by a more general model space on which a meaningful differential calculus can be developed. Essentially all familiar constructions in finite dimensions then carry over to the infinite-dimensional setting. We consider sequentially complete locally convex (s.c.l.c) topological vector spaces. These spaces have the property that every continuous path has a Riemann integral. We adopt the following notion of smoothness.
Let $E, F$ be s.c.l.c. topological vector spaces over $\mathbb{R}$ (or $\mathbb{C}$) and $f:U \to F$ a continuous map on an open subset $U\subseteq E$. Then $f$ is said to be differentiable at $x \in U$ if the directional derivative $$df(x)(v) = \lim_{t \to 0} \frac{1}{t}(f(x+tv)-f(x))$$ exists for all $v \in E$. It is of class $C^{1}$ if it is differentiable at all points of $U$ and $$df: U\times E \to F, \; (x,v) \mapsto df(x)(v)$$ is a continuous map on $U \times E$. Inductively we say that $f$ is of class $C^{n}$ if $df$ is a map of class $C^{n-1}$ and of class $C^{\infty}$ or smooth if it is of class $C^{n}$ for all $ n
\geq 1$.
This definition coincides with the alternative notion of *convenient* smoothness [@K2] on Fréchet manifolds. A smooth manifold modeled on a s.c.l.c. topological vector space $E$ is a Hausdorff topological space $M$ with an atlas of local charts $\{(U_i, \phi_i)\}$ such that the transition functions are smooth on overlaps $U_i\cap U_j$. A Lie group $G$ is a smooth manifold endowed with a group structure such that the operations of multiplication and inversion are smooth. The Lie algebra $\mathfrak{g}$ is defined as the space of left-invariant vector fields. A vector field $X:G \to TG$ is left-invariant if $$(L_{g*}X)(h) = X(gh), \; \; \forall g, h \in G $$ where $L_{g*}$ denotes the pushforward map of the diffeomorphism $L_g:G \to G, \; h \mapsto gh$. By definition $X$ is completely determined by its value at the identity and $\mathfrak{g}$ is therefore identified with $T_{\mathbf{1}}G$ as topological vector spaces, endowed with the continuous Lie bracket of vector fields. The most striking feature of infinite-dimensional Lie theory is that results on existence and uniqueness of ordinary differential equations and the implicit function theorem do not hold beyond Banach Lie groups. Therefore a priori there is no exponential map and even if it exists, it does not have to be locally bijective. The existence and smoothness of the exponential function hinges on the notion of regularity.
A Lie group $G$ is called regular if for each $X \in C^{\infty}([0,1],\mathfrak{g})$, there exists $\gamma \in C^{\infty}([0,1],G)$ such that $$\gamma ' (t) = L_{\gamma(t)*}(\mathbf{1}).X(t), \ \ \ \ \ \gamma(0)=\mathbf{1}$$ and the evolution map $${\rm evol}_G:C^{\infty}([0,1],\mathfrak{g}) \to G, \ \ \ \ \ X \mapsto \gamma(1)$$ is smooth.
In other words every smooth curve in the Lie algebra should arise, in a smooth way, as the left logarithmic derivative of a smooth curve in the Lie group. Note that regularity is a sharper condition than the requirement that the exponential map should be defined and smooth. Indeed if $\gamma(t)$ is the curve corresponding to the constant path $X(t) = X_0$ for some $X_0\in \mathfrak{g}$, then $\gamma(1) = {\rm exp}(X_0)$. All known Lie groups modeled on s.c.l.c. topological vector spaces are regular [@M5]. In the convenient setting for calculus, it has been shown [@M1] that all connected regular abelian Lie groups are of the form $\mathfrak{a}/\Gamma$, for some discrete subgroup $\Gamma \subseteq \mathfrak{a}$ of an abelian Lie algebra $\mathfrak{a}$. Moreover, parallel transport exists for connections on principal bundles with regular structure group [@K1]. Important examples of regular Lie groups include gauge groups $C^\infty(M,G)$ and diffeomorphism groups ${\rm Diff}(M)$, where $M$ is a smooth compact manifold and $G$ is a finite-dimensional Lie group. From this point on we assume that the Lie groups are regular unless stated otherwise.\
We digress to say a few words about the cohomology of Lie groups and Lie algebras.
Lie group cohomology
--------------------
Let $G$ be a Lie group. An abelian Lie group $A$ is called a smooth $G$-module if there is a smooth $G$-action on $A$ by automorphisms $G\times A \to A, (g,a) \mapsto g.a$. The set of smooth maps $f:G^n \to A$ such that $f(g_1,\dots,g_{n})=0$ whenever $g_j = \textbf{1}$ for some $j$, are called *n-cochains* and form an abelian group $C^n(G,A)$ under pointwise addition. A cochain complex $$\dots \to C^{n-1}(G,A) \xrightarrow{\delta_{n-1}} C^n(G,A) \xrightarrow{\delta_n} C^{n+1}(G,A) \to \dots$$ is generated by the nilpotent homomorphisms $\delta_n:C^n(G,A) \to C^{n+1}(G,A)$ defined by $$(\delta_n f)(g_1,\dots,g_{n+1}) =$$$$= g_1 . f(g_2,\dots,g_{n+1}) +
\sum_{i=1}^n(-1)^if(g_1,\dots,g_ig_{i+1},\dots,g_{n+1})+ (-1)^{n+1}f(g_1,\dots,g_n) \ .$$ Let $Z^n(G,A)={\rm ker } \ \delta_n$ and $B^n(G,A)={\rm im \; }\delta_{n-1}$ denote the subgroups of *n-cocycles* and *n-coboundaries* respectively. The $n$-th Lie cohomology group is given by $$H^n(G,A) = \frac{Z^n(G,A)}{B^n(G,A)}\ .$$
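For $n = 2$, the case relevant to the extensions considered below, the cocycle condition $\delta_2 f = 0$ is worth spelling out explicitly (it follows directly from the general formula above): $$\begin{gathered}
(\delta_2 f)(g_1,g_2,g_3) = g_1 . f(g_2,g_3) - f(g_1 g_2, g_3) + f(g_1, g_2 g_3) - f(g_1,g_2) = 0 \ .\end{gathered}$$ This is the identity that will guarantee associativity in the cocycle construction of group extensions.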
Lie algebra cohomology
----------------------
Let $\mathfrak{g}$ and $\mathfrak{a}$ be topological Lie algebras. Then $\mathfrak{a}$ is a continuous $\mathfrak{g}$-module if there is a continuous $\mathfrak{g}$-action, $\mathfrak{g} \times \mathfrak{a} \to \mathfrak{a}, (X,v) \mapsto X.v$. Denote by $C^n(\mathfrak{g},\mathfrak{a})$ the vector space of continuous alternating multilinear maps $\omega:\mathfrak{g}^n \to \mathfrak{a}$. A cochain complex $$\dots\to C^{n-1}(\mathfrak{g},\mathfrak{a}) \xrightarrow{d_{n-1}} C^n(\mathfrak{g},\mathfrak{a})
\xrightarrow{d_n} C^{n+1}(\mathfrak{g},\mathfrak{a}) \to \dots$$ is obtained by the nilpotent linear maps $d_n:C^n(\mathfrak{g},\mathfrak{a}) \to C^{n+1}(\mathfrak{g},\mathfrak{a})$ given by Palais’ formula $$(d_n \omega)(X_1,\dots,X_{n+1}) \equiv \sum_{i=1}^{n+1} (-1)^{i+1}X_i .
\omega(X_1,\dots,\hat{X}_i,\dots,X_{n+1}) +$$ $$+ \sum_{i<j}(-1)^{i+j}\omega([X_i,X_j],X_1,\dots,\hat{X}_i,\dots,\hat{X}_j,\dots,X_{n+1})$$ where $\hat{X}_i$ means that $X_i$ is omitted. Let $Z^n(\mathfrak{g},\mathfrak{a})={\rm ker } \ d_n$ and $B^n(\mathfrak{g},\mathfrak{a})={\rm im \; }d_{n-1}$ denote the subspaces of *n-cocycles* and *n-coboundaries* respectively. The $n$-th Lie cohomology group is the quotient space $$H^n(\mathfrak{g},\mathfrak{a}) = \frac{Z^n(\mathfrak{g},\mathfrak{a})}{B^n(\mathfrak{g},\mathfrak{a})}\
.$$
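Again the case $n = 2$ deserves to be written out, since it is the one used below. Specializing Palais’ formula, the condition $d_2 \omega = 0$ for $\omega \in Z^2(\mathfrak{g},\mathfrak{a})$ reads $$\begin{gathered}
(d_2 \omega)(X,Y,Z) = X.\omega(Y,Z) - Y.\omega(X,Z) + Z.\omega(X,Y) \\
- \omega([X,Y],Z) + \omega([X,Z],Y) - \omega([Y,Z],X) = 0 \ ,\end{gathered}$$ which is exactly the condition ensuring that the bracket of the extended Lie algebra $\mathfrak{g}\oplus_\omega \mathfrak{a}$ satisfies the Jacobi identity.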
In the context of Lie group and Lie algebra extensions, the second cohomology group $H^2$ is important in classifying topologically trivial abelian extensions as we will see. For $n\geq 2$ there is a derivation map $D_n: H^n(G,A) \to H^n(\mathfrak{g},\mathfrak{a})$ given by [@N] $$(D_n f)(X_1,\dots,X_n) = \frac{\partial^n}
{\partial t_1 \dots \partial t_n} \sum_{\sigma \in S_n} {\rm sgn}(\sigma)
f(\gamma_{\sigma(1)}(t_{\sigma(1)}), \dots, \gamma_{\sigma(n)}(t_{\sigma(n)}))\Big|_{t_i = 0}$$ where $\gamma_1(t_1),\dots,\gamma_n(t_n)$ is any set of smooth curves in $G$ satisfying $\gamma_i(0) = \mathbf{1}$ and $\gamma_i'(0) = X_i \in \mathfrak{g}$.
Abelian extensions
==================
An extension of Lie groups is a short exact sequence with smooth homomorphisms $$\mathbf{1} \to A \xrightarrow{i} \widehat{G} \xrightarrow{p} G \to \mathbf{1}$$ such that $p$ admits a smooth local section $s: U \to \widehat{G}$, $p \circ s = {\rm
id}_U$, where $U \subset G$ is a local identity neighborhood.
The existence of a smooth local section means that $\widehat{G}$ is a principal $A$-bundle over $G$. The extension is called *abelian* if $A$ is abelian and *central* if $i(A)$ lies in the center $Z(\widehat{G})$. Two extensions $\widehat{G}_1$ and $\widehat{G}_2$ are *equivalent* if there exists a smooth homomorphism $\phi:\widehat{G}_1\to\widehat{G}_2$ which makes the following diagram commute: $$\begin{CD}
A @>i_1>> & \widehat{G}_1 & @>p_1>> G \\
@Vid_AVV & @V\phi VV & @Vid_GVV\\
A @>i_2>> & \widehat{G}_2 & @>p_2>> G
\end{CD}$$ It is easy to verify by diagram chasing that $\phi$ must be an isomorphism. The definition for Lie algebras is analogous.
An extension of topological Lie algebras is a short exact sequence with continuous homomorphisms $$\mathbf{0} \to \mathfrak{a} \xrightarrow{i} \mathfrak{\hat{g}} \xrightarrow{p} \mathfrak{g} \to \mathbf{0}$$
Two extensions $\mathfrak{\hat{g}}_1$ and $\mathfrak{\hat{g}}_2$ are said to be *equivalent* if there is an isomorphism of topological Lie algebras $\phi: \mathfrak{\hat{g}}_1 \to
\mathfrak{\hat{g}}_2$ such that the following diagram commutes: $$\begin{CD}
\mathfrak{a} @>i_1>> & \hat{\mathfrak{g}}_1 & @>p_1>> \mathfrak{g} \\
@Vid_\mathfrak{a}VV & @V\phi VV & @Vid_\mathfrak{g}VV\\
\mathfrak{a} @>i_2>> & \hat{\mathfrak{g}}_2 & @>p_2>> \mathfrak{g}
\end{CD}$$ Next we show how abelian extensions can be constructed explicitly. We must, however, make one assumption: viewed as a principal bundle, $\widehat{G}$ is topologically trivial.
Let $G$ be a Lie group, $A$ a smooth $G$-module and $f \in
Z^2(G,A)$ a 2-cocycle. The smooth manifold $G \times A$ endowed with the multiplication $$(g_1,a_1)(g_2,a_2) = (g_1g_2, a_1 +g_1.a_2 + f(g_1,g_2))$$ defines an abelian extension $\widehat{G}= G \times_f A$ of $G$ by $A$.
Associativity of the group law follows by the 2-cocycle property. The unit element is $(\mathbf{1},0)$ and $(g,a)^{-1} = (g^{-1},-g^{-1}.(a+f(g,g^{-1})))$. The extension is topologically trivial since $s:G \to \widehat{G}, g \mapsto (g, f(g,g))$ defines a smooth global section. Moreover, the conjugation action of $\widehat{G}$ on $A$ coincides with the smooth $G$-action. There is a similar cocycle construction for Lie algebras.
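The associativity check is a short computation that may be instructive to record. Using that $G$ acts by automorphisms, so $(g_1 g_2).a_3 = g_1.(g_2.a_3)$, one finds $$\begin{gathered}
\big((g_1,a_1)(g_2,a_2)\big)(g_3,a_3) = \big(g_1 g_2 g_3,\; a_1 + g_1.a_2 + (g_1 g_2).a_3 + f(g_1,g_2) + f(g_1 g_2, g_3)\big) \ , \\
(g_1,a_1)\big((g_2,a_2)(g_3,a_3)\big) = \big(g_1 g_2 g_3,\; a_1 + g_1.a_2 + g_1.(g_2.a_3) + g_1.f(g_2,g_3) + f(g_1, g_2 g_3)\big) \ ,\end{gathered}$$ and the two expressions agree precisely when $g_1.f(g_2,g_3) - f(g_1 g_2,g_3) + f(g_1,g_2 g_3) - f(g_1,g_2) = 0$, i.e. when $\delta_2 f = 0$.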
Let $\mathfrak{g}$ be a topological Lie algebra, $\mathfrak{a}$ a continuous $\mathfrak{g}$-module and $\omega \in Z^2(\mathfrak{g},\mathfrak{a})$ a 2-cocycle. The topological vector space $\mathfrak{g}\oplus \mathfrak{a}$ endowed with the continuous Lie bracket $$[(X_1,v_1),(X_2,v_2)] = ([X_1,X_2], X_1.v_2 - X_2.v_1 + \omega(X_1,X_2))$$ defines a topologically split abelian extension $\hat{\mathfrak{g}}= \mathfrak{g}\oplus_\omega
\mathfrak{a}$ of $\mathfrak{g}$ by $\mathfrak{a}$. A continuous global section is given by $s: \mathfrak{g} \to \hat{\mathfrak{g}}, X \mapsto (X, 0)$.
It turns out that all topologically trivial abelian extensions, where the conjugation action of $\widehat{G}$ on $A$ induces the smooth $G$-action, arise in this way. Furthermore, two such extensions are equivalent if and only if the 2-cocycles differ by a 2-coboundary [@N]. The second cohomology group $H^2$ therefore parametrizes the set of equivalence classes of these extensions. The Lie algebra of $\widehat{G}= G \times_f A$ is, as one would expect, $\hat{\mathfrak{g}}= \mathfrak{g}\oplus_{D_2 f}\mathfrak{a}$.
Integrability Criterion
=======================
In this section we elucidate when an abelian extension of Lie algebras $\mathfrak{\hat{g}} = \mathfrak{g} \oplus_\omega \mathfrak{a}$ corresponds to a Lie group extension $\widehat{G}$. If $\omega = D_2 f$ for some $f \in Z^2(G,A)$, then by the previous section the corresponding Lie group extension is $\widehat{G}= G \times_f A$. In the general case, $\omega$ must satisfy a certain condition that will become apparent by the following construction. The basic idea is to construct $\widehat{G}$ as the quotient of a larger group $\mathcal{P}G \times_\gamma A$. This means that in general the extension will be topologically twisted and therefore the group multiplication cannot be described by a smooth global 2-cocycle.\
Let $G$ be the connected Lie group of $\mathfrak{g}$ and $A$ a smooth $G$-module of the form $\mathfrak{a}/\Gamma$ for some discrete subgroup $\Gamma \subseteq \mathfrak{a}$. We write $e:\mathfrak{a} \to A$ for the exponential (quotient) map and employ multiplicative notation. The Lie algebra cocycle $\omega \in Z^2(\mathfrak{g},\mathfrak{a})$ defines a closed $G$-equivariant 2-form $\omega^{eq} \in \Omega^2(G,\mathfrak{a})$ by $$\omega^{eq}(g)(L_{g*}X, L_{g*}Y) = (L_{g}^*\omega^{eq})(\mathbf{1})(X,Y) = g.\omega(X,Y) \ \ \ \ \ \forall X,Y \in
\mathfrak{g} \ .$$ For central extensions, the $G$-action on $A$ is trivial and this is simply the associated left-invariant 2-form. Let $\mathcal{P}G$ denote the space of smooth based paths $\hat{g}:[0,1]\to G$, originating at the identity $\hat{g}(0) = \mathbf{1}$ and ending at $\hat{g}(1) = g$. It is given the $C^\infty$-topology of uniform convergence of the paths and all their derivatives. We require further that the left logarithmic derivative of the paths should be periodic, i.e. $\hat{g}^{-1}d\hat{g}(0)=\hat{g}^{-1}d\hat{g}(1)$. Consider $\mathcal{P}G\times A$ and introduce an equivalence relation $$(\hat{g}_1, a_1) \sim (\hat{g}_2, a_1e^{\int_{\pi[\hat{g}_1,\hat{g}_2]} \omega^{eq}})$$ whenever two paths end at the same point $\hat{g}_1(1) = \hat{g}_2(1)$ and form a smooth contractible loop, so that there is a well-defined 2-dimensional surface $\pi = \pi[\hat{g}_1,\hat{g}_2]$ in $G$ bounded by these paths. If $\Delta_2 = \{(t,s)\in \mathbb{R}^2 | 0\leq s \leq t \leq 1 \}$, then one could take $\pi$ as the smooth singular 2-chain $$\pi: (t,s) \mapsto \begin{cases} g & t=1, \ 0\leq s \leq1 \\ \hat{g}_1(\alpha(t(t-s)))\hat{g}_2(\alpha(ts)) & \textrm{otherwise} \end{cases}$$ where $\alpha:[0,1]\to[0,1]$ is any smooth map satisfying $\alpha(0) = 0, \alpha(1) = 1$ and $\frac{d\alpha}{du}(0)=\frac{d\alpha}{du}(1)=0$. The surface $\pi$ is not unique however. If $\pi'$ is another smooth 2-chain with the same boundary, then $$e^{\int_{\pi'} \omega^{eq}} = e^{\int_{\pi'} \omega^{eq}+ \int_{\pi+ \pi^{-}} \omega^{eq}} = e^{\int_{\pi' + \pi^{-}} \omega^{eq}}e^{\int_\pi \omega^{eq}}$$ where $\pi^{-}$ denotes $\pi$ with the opposite orientation. Here $\pi' + \pi^{-}$ is the smooth 2-cycle corresponding to the closed surface obtained by gluing together $\pi'$ and $\pi^{-}$ along their common boundary. 
For the equivalence relation to be well-defined we require that $e^{\int_{\pi'+ \pi^-}\omega^{eq}}= \mathbf{1}$, or equivalently that $$\int_{c} \omega^{eq} \in \Gamma$$ for all smooth 2-cycles $c \in Z_2(G)$. If $c = \partial b \in B_2(G)$ is a 2-boundary, then this is automatically satisfied by Stokes' theorem $$\int_{\partial b} \omega^{eq} = \int_b d\omega^{eq} = 0$$ and therefore the condition factors through to homology.\
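As an illustration of the integrality condition (a sketch of ours, not from the text): for $G = T^2$, $A = \mathbb{R}/2\pi\mathbb{Z}$ and the invariant 2-form $\omega^{eq} = \lambda\, dx\wedge dy$, $H_2(T^2)$ is generated by the fundamental cycle, and the criterion $\int_c \omega^{eq}\in\Gamma$ quantizes $\lambda$:

```python
import numpy as np

GAMMA = 2 * np.pi  # Gamma = 2*pi*Z, so A = R/Gamma is a circle

def period(lam, n=200):
    # integrate omega_eq = lam dx^dy over the fundamental 2-cycle [0,2pi]^2
    xs = (np.arange(n) + 0.5) * 2 * np.pi / n
    h = 2 * np.pi / n
    X, Y = np.meshgrid(xs, xs)
    return np.sum(lam * np.ones_like(X)) * h * h

def satisfies_criterion(lam, tol=1e-9):
    # the integrality condition: the period must lie in Gamma = 2*pi*Z
    frac = (period(lam) / GAMMA) % 1.0
    return min(frac, 1.0 - frac) < tol

# lam = k/(2*pi) passes for integer k; a generic lam does not
assert satisfies_criterion(3 / (2 * np.pi))
assert not satisfies_criterion(0.11)
```

Only the quantized values of $\lambda$ give a well-defined equivalence relation, mirroring the prequantization condition familiar from geometric quantization.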
We can now proceed to construct the Lie group extension $\hat{G}$ corresponding to $\mathfrak{\hat{g}} = \mathfrak{g} \oplus_\omega \mathfrak{a}$ by defining a multiplication on $\mathcal{P}G\times A / \sim $ $$[(\hat{g}_1,a_1)][(\hat{g}_2,a_2)] = [(\hat{g}_1\hat{g}_2,a_1(\hat{g}_1.a_2)e^{\gamma(\hat{g}_1,\hat{g}_2)})]$$ where $\hat{g}.a := \hat{g}(1).a = g.a$ is the given $G$-action on $A$ and $\gamma : \mathcal{P}G \times \mathcal{P}G \to \mathfrak{a}$ is a smooth 2-cocycle. The latter must be defined in such a way that it yields the correct Lie algebra cocycle $\omega$ and is compatible with the equivalence relation. This is accomplished by choosing $$\gamma(\hat{g}_1,\hat{g}_2) = \int_\sigma \omega^{eq}$$ where $\sigma: \Delta_2 \to G, (t,s) \mapsto \hat{g}_1(t)\hat{g}_2(s)$ is the smooth singular 2-chain with vertices in $1$, $g_1$ and $g_1g_2$ and bounded by the paths $\hat{g}_1$, $\hat{g}_2$ and $g_1\hat{g}_2$. The 2-cocycle identity $$(\delta_2\gamma)(\hat{g}_1, \hat{g}_2,\hat{g}_3) = \hat{g}_1.\gamma(\hat{g}_2,\hat{g}_3) - \gamma(\hat{g}_1\hat{g}_2,\hat{g}_3 )+ \gamma(\hat{g}_1, \hat{g}_2\hat{g}_3) -\gamma(\hat{g}_1,\hat{g}_2)= 0 \ \ {\rm mod} \ \Gamma$$ is satisfied by $(1)$ since the regions of integration form a smooth 2-cycle, see Figure 1.
![[]{data-label="Figure1"}](Figure1.eps "fig:"){width="2in"}\
The face not joining to $\mathbf{1}$ is the left translation by $g_1$ of the domain of integration $$\hat{g}_1.\gamma(\hat{g}_2,\hat{g}_3) = \int_\sigma g_1.\omega^{eq} = \int_\sigma L_{g_1}^* \omega^{eq} = \int_{L_{g_1} \sigma} \omega^{eq}$$ where we have used the $G$-equivariance of $\omega^{eq}$. Thus, we conclude that the multiplication is associative. To see that it is well-defined, i.e. independent of the representatives, a straightforward calculation leads to $$\int_{\pi[\hat{g}_1\hat{g}_2,\hat{g}'_1\hat{g}_2]}\omega^{eq}- \int_{\pi[\hat{g}_1,\hat{g}'_1]}\omega^{eq} - \gamma(\hat{g}'_1,\hat{g}_2)+\gamma(\hat{g}_1,\hat{g}_2) = 0 \ \ {\rm mod} \ \Gamma$$ $$\int_{\pi[\hat{g}_1\hat{g}_2,\hat{g}_1\hat{g}'_2]}\omega^{eq}- \int_{\pi[\hat{g}_2,\hat{g}'_2]}g_1.\omega^{eq} - \gamma(\hat{g}_1,\hat{g}'_2)+\gamma(\hat{g}_1,\hat{g}_2) = 0 \ \ {\rm mod} \ \Gamma$$ $$\int_{\pi[\hat{g}_1\hat{g}_2,\hat{g}'_1\hat{g}'_2]}\omega^{eq} - \int_{\pi[\hat{g}_1,\hat{g}'_1]}\omega^{eq} - \int_{\pi[\hat{g}_2,\hat{g}'_2]}g_1.\omega^{eq} - \gamma(\hat{g}'_1,\hat{g}'_2)+\gamma(\hat{g}_1,\hat{g}_2) = 0 \ \ {\rm mod} \ \Gamma \ .$$\
Again the regions of integration form closed 2-dimensional surfaces in $G$, depicted in Figure 2. The label on each face refers to the corresponding term in the expressions above, numbered from left to right.
![[]{data-label="Figure2"}](Figure2.eps "fig:"){width="5in"}\
\
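To see the cocycle identity at work in the simplest situation (again a toy sketch of ours): for $G = \mathbb{R}^2$ abelian with straight-line paths $\hat g(t) = t\,g$ and a constant 2-form $\omega^{eq}=\omega$, the chain $\sigma(t,s) = \hat g_1(t)\hat g_2(s)$ pulls $\omega^{eq}$ back to the constant $\omega(g_1,g_2)$ on $\Delta_2$, and $(\delta_2\gamma) = 0$ holds exactly:

```python
import numpy as np

def omega(X, Y):
    # constant symplectic 2-form on R^2
    return X[0] * Y[1] - X[1] * Y[0]

def gamma(g1, g2, n=60):
    # gamma(g1,g2) = int_sigma omega_eq with sigma(t,s) = t*g1 + s*g2 on the
    # simplex Delta_2 = {0 <= s <= t <= 1}; the pullback is omega(g1,g2) dt ds
    h = 1.0 / n
    total = 0.0
    for i in range(n):
        for j in range(n):
            t, s = (i + 0.5) * h, (j + 0.5) * h
            if s <= t:
                total += omega(g1, g2) * h * h
    return total

rng = np.random.default_rng(1)
g1, g2, g3 = rng.standard_normal((3, 2))
# 2-cocycle identity (trivial action, Gamma = {0}):
delta = gamma(g2, g3) - gamma(g1 + g2, g3) + gamma(g1, g2 + g3) - gamma(g1, g2)
assert abs(delta) < 1e-9
```

The cancellation is exact by bilinearity of $\omega$, which is the algebraic shadow of the closed-surface argument depicted in Figure 1.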
Next let us calculate the Lie algebra cocycle. We use the equivariance property to evaluate $\omega^{eq}$ only at the identity $$\begin{aligned}
\int_\sigma \omega^{eq} &=& \int_{\Delta_2}\omega^{eq}(\sigma(t,s))\left(\sigma_*\frac{\partial}{\partial t}, \sigma_*\frac{\partial}{\partial s}\right)dt\wedge ds \\
&=& \int_{\Delta_2}\omega^{eq}(\hat{g}_1(t)\hat{g}_2(s))\left(\frac{d\hat{g}_1}{dt}\hat{g}_2(s), \hat{g}_1(t)\frac{d\hat{g}_2}{ds}\right)dtds \\ &=& (g_1 g_2).\int_{\Delta_2}\omega^{eq}(\mathbf{1})\left(\hat{g}^{-1}_2(s)\hat{g}^{-1}_1(t)\frac{d\hat{g}_1}{dt}\hat{g}_2(s), \hat{g}^{-1}_2(s)\frac{d\hat{g}_2}{ds}\right)dtds \ .\end{aligned}$$ Near the identity in $\mathcal{P}G$ we can write $\hat{g}_1(t) = \exp(\tau X(t))$ and $\hat{g}_2(s) = \exp(\sigma Y(s))$ for $X,Y \in \mathcal{P}\mathfrak{g}$, where $\exp $ is defined pointwise by the exponential map of $G$. The Lie algebra cocycle is given by $$\begin{aligned}
(D_2e^\gamma)(X,Y) &=& \frac{d^2}{d\sigma d\tau}(e^{\gamma(\hat{g}_1,\hat{g}_2)}e^{-\gamma(\hat{g}_2, \hat{g}_1)})|_{\tau=\sigma=0} \\
&=& \int_{0\leq s \leq t \leq 1}
\omega^{eq}(\mathbf{1})\left(\frac{dX}{dt}, \frac{dY}{ds}\right)ds dt +
\int_{0\leq t \leq s \leq 1} \omega^{eq}(\mathbf{1})\left(\frac{dX}{dt},
\frac{dY}{ds}\right)ds dt \\ &=& \int_{0\leq s, t \leq 1}
\omega^{eq}(\mathbf{1})\left(\frac{dX}{dt}, \frac{dY}{ds}\right)ds dt =
\omega^{eq}(\mathbf{1})(X(1),Y(1)) = \omega(X,Y)\end{aligned}$$ where we have used the antisymmetry of $\omega^{eq}$ and the fact that $\omega^{eq}(\mathbf{1})(X(0),Y(0)) = 0$. With that, we have verified that $\gamma$ induces a well-defined group multiplication and the correct cocycle at the Lie algebra level. The Lie group extension corresponding to $\mathfrak{\hat{g}}=\mathfrak{g}\oplus_\omega
\mathfrak{a}$ is the principal $\widehat{A}$-bundle $ \widehat{G} =
\mathcal{P}G\times_\gamma A / \sim \ \to G$ with the projection $[(\hat{g},a)] \mapsto \hat{g}(1)$. The fiber $\widehat{A} = \pi_1(G)\times_\gamma A$ is a central extension of $\pi_1(G)$ by $A$. This is easily seen as follows. If $\pi_1(G)\hookrightarrow \tilde{G}\twoheadrightarrow G$ denotes the universal covering, then the same construction for $\tilde{G}$ gives $$\begin{CD}
A @>i_1>> \mathcal{P}\tilde{G}\times_\gamma A / \sim @>p_1>> \tilde{G} \\
@VVV @VVV @VqVV\\
\widehat{A} @>i_2>> \mathcal{P}G\times_\gamma A / \sim @>p_2>> G
\end{CD}$$ Restriction to the subgroup $\pi_1(G) \subset \tilde{G}$ induces a central extension $A \xrightarrow{i_1} \pi_1(G)\times_\gamma A \xrightarrow{p_1} \pi_1(G)$, since $\pi_1(G)$ is discrete and acts trivially on $A$. Finally we have $\widehat{A} = {\rm ker \ } p_2 = {\rm ker \ } (q \circ p_1) = \pi_1(G)\times_\gamma A$. The right action of the structure group $\widehat{A}$ is $[(\hat{g},a)].[(\eta, a')] = [(\hat{g}\eta, a(\hat{g}.a')e^{\gamma(\hat{g}, \eta)})]$.\
To see that condition $(1)$ is not only sufficient but also necessary, let $A\hookrightarrow\widehat{G}\twoheadrightarrow G$ be a Lie group extension corresponding to $\mathfrak{\hat{g}} = \mathfrak{g} \oplus_\omega \mathfrak{a}$. We claim that $\omega^{eq}$ is the curvature form of a connection on this principal bundle. Indeed, if $pr: \mathfrak{g}\oplus_\omega \mathfrak{a} \to
\mathfrak{a}$ denotes the projection onto the ideal $\mathfrak{a}$, then there is a canonical connection 1-form on $\widehat{G}$ given by $$\alpha := -pr(g^{-1}dg): \widehat{G} \to T\widehat{G}^* \otimes \mathfrak{a}$$ with the curvature $\Omega = d\alpha + \frac{1}{2}[\alpha,\alpha] =
d\alpha$, where the last equality holds since $\mathfrak{a}$ is abelian. By the Maurer-Cartan equation $$\Omega =-pr d(g^{-1}dg) = \frac{1}{2} pr [g^{-1}dg, g^{-1}dg] \ .$$ We are interested in the pullback of the curvature 2-form onto the base space. Let $U \subset G$ be an identity neighborhood and define a smooth local section $s:U\to\widehat{G}$ by $g \mapsto (g,0)$. Evaluating the pullback on $X,Y \in \mathfrak{g}$ at the identity, we get precisely $$\begin{aligned}
s^*(\Omega)(\mathbf{1})(X,Y) &=& d\alpha((\mathbf{1},0))((X,0),(Y,0)) \\
&=& \frac{1}{2} pr [(X,0),(Y,0)] \\ &=& \frac{1}{2} \omega(X,Y) - \frac{1}{2} \omega(Y,X) = \omega(X,Y) \ .\end{aligned}$$ Equipped with a connection, one can define the horizontal lift of a curve on the base space and subsequently the notion of parallel transport. In particular, the holonomy around a loop $\eta:S^1 \to
G$ is given by $${\rm hol(\alpha, \eta)} = e^{ \int_\eta \alpha} = e^{ \int_\pi \omega^{eq} }$$ where $\pi$ is a surface enclosed by $\eta$. The arbitrariness in the choice of this surface leads as before to the requirement $\int_{[c]} \omega^{eq} \in \Gamma$ for all smooth 2-cycles $[c] \in H_2(G)$. Thus, we are led to the result:
Let $G$ be a connected regular Lie group and $A$ a smooth regular $G$-module of the form $\mathfrak{a}/\Gamma$ for some discrete subgroup $\Gamma \subseteq \mathfrak{a}$. The abelian Lie algebra extension $\hat{\mathfrak{g}} =
\mathfrak{g}\oplus_\omega \mathfrak{a}$ integrates to a Lie group extension $\mathbf{1} \to \widehat{A}\to \widehat{G}\to G \to \mathbf{1}$ if and only if $$\int_{[c]} \omega^{eq} \in \Gamma$$ for all $[c] \in H_2(G)$, where $\widehat{A} = \pi_1(G)\times_\gamma A$ is a central extension of $\pi_1(G)$ by $A$.
When $G$ is simply connected, then $\widehat{A} = A$ and by the Hurewicz theorem $\pi_2(G)= H_2(G)$, so the condition coincides with that found in [@N].
acknowledgements {#acknowledgements .unnumbered}
================
I would like to thank Jouko Mickelsson and Karl-Hermann Neeb for helpful comments.
[10]{}
H. Glöckner, *Fundamental Problems in the Theory of Infinite-Dimensional Lie Groups.* J. Geom. Symm. Phys. **5** (2006), pp. 24-35.
A. Kriegl, P. W. Michor, *Regular Infinite Dimensional Lie Groups.* J. Lie Theory **7** (1997), pp. 61-99.
A. Kriegl, P. W. Michor, *The Convenient Setting of Global Analysis.* Amer. Math. Soc., Providence (1997).
P. W. Michor, J. Teichmann, *Description of infinite dimensional abelian regular Lie groups.* J. Lie Theory **9** (1999) pp. 487-489.
J. Mickelsson, *Current Algebras and Groups.* Plenum press, 233 Spring Street, New York (1989).
J. Mickelsson, R. Percacci, *Global Aspects of p-Branes.* J. Geom. Phys. **15** (1995), pp. 369-380.
M. K. Murray, *Another Construction of the Central Extension of the Loop Group.* Commun. Math. Phys. **116** (1988), pp. 73-80.
J. Milnor, *Remarks on Infinite-Dimensional Lie Groups.* “Relativité, Groupes et Topologie II” B. DeWitt and R. Stora (Eds), North-Holland, Amsterdam (1983), pp. 1007-1057.
K. H. Neeb, *Abelian extensions of infinite-dimensional Lie groups.* Mathematical works. Part XV. Luxembourg: Université du Luxembourg, Séminaire de Mathématique (2004), pp. 69-194.
---
abstract: |
In noncommutative QED photons present self-interactions in the form of triple and quartic interactions. The triple interaction implies that, even though the photon is electrically neutral, it will deflect when in the presence of an electromagnetic field. If detected, such deflection would be an undoubted signal of noncommutative space-time. In this work we derive the general expression of the deflection of a photon by any electromagnetic field. As an application we consider the case of the deflection of a photon by an external static Coulomb field.\
PACS numbers: 12.90.+b; 13.40.-f.
author:
- 'C. A. de S. Pires [^1]'
title: Photon deflection by a Coulomb field in noncommutative QED
---
Introduction {#sec1}
============
It is very well known that when an electrically charged particle passes through an electromagnetic field it suffers a deflection, while electrically neutral particles do not. This is particularly true for the photon, since in ordinary QED a photon is never deflected when passing through an electromagnetic field.
Things change considerably for photons when we formulate QED in the framework of noncommutative space-time (NCST). In noncommutative QED photons develop self-interactions in the form of triple and quartic interactions [@ahr][^2] $$\begin{aligned}
{\cal L}_{photonic}&=&-\frac{1}{4}F^{\mu \nu}F_{\mu \nu}
- e\sin(\frac{p_1 C p_2}{2\Lambda^2})
(\partial_\mu A_\nu -\partial_\nu A_\mu)A^\mu A^\nu\nonumber \\
&&-e^2\sin^2(\frac{p_1 C p_2}{2\Lambda^2})A^4.
\label{photonic}\end{aligned}$$ These self-interactions imply that photons (even though electrically neutral) will undergo a deflection when passing through an external electromagnetic field $A_\mu(x)$ $$\begin{aligned}
\gamma(p)+ A(q) \rightarrow \gamma(p^{\prime}).
\label{process}\end{aligned}$$ This striking prediction, if detected, would be a clear and unambiguous signal of NCST.
The purpose of this brief report is to study the photon scattering by an electromagnetic field in the context of noncommutative QED. We first obtain the general form of the differential cross section for the scattering of a photon by an arbitrary external electromagnetic field, and then apply it to the simplest case: the scattering of a photon by an external static Coulomb field.
Case of a general electromagnetic field
=======================================
In lowest order the scattering arises from the first-order term of the S-matrix expansion $$\begin{aligned}
S^{(1)}_\gamma =-2e\sin(\frac{p_1 C p_2}{2\Lambda^2})\int d^4x T\{N[(\partial_\mu A_\nu -\partial_\nu A_\mu)A^\mu A^\nu]\},
\label{int}\end{aligned}$$ where $T$ and $N$ denote the [*time-ordered*]{} and normal products, respectively.
We consider the following Fourier expansion for the quantum field of the photon $$\begin{aligned}
A_\mu(x) = \sum\left( \frac{1}{2Vw_p} \right)^{1/2}\left[ \epsilon_\mu ({\bf p}) a({\bf p}) e^{-ip.x} + \epsilon^*_\mu ({\bf p})a^{\dagger}({\bf p}) e^{ip.x} \right].
\label{photon}\end{aligned}$$ With this we obtain the following expression for the first-order term for the transition of a photon from a state $|i \rangle $, with momentum $p=(E,p)$ and polarization vector $\epsilon_\mu({\bf p})$, to a state $|f \rangle $, with momentum $p^{\prime}=(E^{\prime},p^{\prime})$ and polarization vector $\epsilon_\mu({\bf p^{\prime}})$ caused by the scattering with an electromagnetic field $A_\mu(x)$ $$\begin{aligned}
\langle f|S^{(1)}_\gamma |i \rangle =\left[ (2 \pi)\delta (E-E^{\prime})\left( \frac{1}{2V w_{p^{\prime}}} \right)^{1/2} \left( \frac{1}{2V w_{p}} \right)^{1/2} \right] {\cal M},
\nonumber\end{aligned}$$ with
$$\begin{aligned}
{\cal M}=2i e \sin(\frac{pCq}{2 \Lambda^2}) [-(p-q)^\rho g^{\mu \nu} -
(q+p^{\prime})^\mu g^{\nu \rho} + (p+p^{\prime})^\nu g^{\mu \rho} ]\epsilon_\mu({\bf p})\epsilon_\rho({\bf p}^{\prime})A_\nu({\bf q}),
\label{ampinvariant}\end{aligned}$$
being the Feynman amplitude for the scattering depicted in FIG. 1. In this amplitude $A_{\mu}({\bf q})$ is the Fourier transform of $A_{\mu}({\bf x})$.
The S-matrix element $ \langle f|S^{(1)}_\gamma |i\rangle$ above leads to the following transition probability per unit time
$$\begin{aligned}
\omega =\frac{1}{T}|\langle f|S^{(1)}_\gamma |i \rangle |^2 =\left[ (2 \pi)\delta (E-E^{\prime})\left( \frac{1}{2V w_{p^{\prime}}} \right) \left( \frac{1}{2V w_{p}} \right) \right] |{\cal M}|^2
\label{transprobability}\end{aligned}$$
In order to obtain the differential cross section for this kind of scattering, we have to multiply the transition probability by the density of final states, $$\begin{aligned}
\frac{V d^3{\bf p}^{\prime}}{(2 \pi)^3}=\frac{VE^{\prime 2}dE^{\prime} d\Omega}{(2 \pi)^3},
\label{density}\end{aligned}$$ and divide by the incident photon flux $1/V$. After all this we obtain $$\begin{aligned}
\frac{d\sigma}{d\Omega}=\frac{1}{16\pi^2}|{\cal M}|^2,
\label{difcross}\end{aligned}$$ for the differential cross section for the deflection of a photon by an external electromagnetic field as represented in (\[process\]). Since ${\cal M}$ in (\[ampinvariant\]) depends on the type of electromagnetic field, the next step is to specify the field that scatters the incoming photon.
Case of an external static Coulomb field
========================================
As a first approach to the subject, it is sufficient to consider the case of a static Coulomb field whose source is a massive center of charge $Ze$, as depicted in FIG. 1. This case is particularly interesting because it is similar to the deflection of an electron by an external Coulomb field, whose nonrelativistic limit is the well-known Rutherford scattering, while its relativistic version is the Mott scattering [@mott].
In the Coulomb gauge the Coulomb field has the form[@coulombfield] $$\begin{aligned}
A_\mu({\bf x})=\left( \frac{Ze}{4\pi|{\bf x}|},0,0,0 \right).
\label{coulombpotential}\end{aligned}$$
Let us establish the kinematics of the process. The scattering is elastic, so the incoming photon only changes direction and there is no azimuthal dependence. The momenta involved are $$\begin{aligned}
&&p=E(1,0,0,1)\,\,,\,\, p^{\prime}=E(1,\sin \theta,0,\cos \theta),\nonumber \\
&& q=p^{\prime}-p=E(0,\sin \theta,0,\cos \theta-1).
\label{momenta}\end{aligned}$$
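A quick numerical check of this kinematics (a sketch of ours, with $E$ set to 1): the spatial momentum transfer built from these momenta satisfies ${\bf q}^2 = 4E^2\sin^2(\theta/2)$ for all scattering angles:

```python
import numpy as np

E = 1.0  # the overall energy scale drops out of the check
for theta in np.linspace(0.05, np.pi, 50):
    p = E * np.array([1.0, 0.0, 0.0, 1.0])
    pp = E * np.array([1.0, np.sin(theta), 0.0, np.cos(theta)])
    q = pp - p
    # |q|^2 = E^2 (sin^2 th + (cos th - 1)^2) = 2 E^2 (1 - cos th)
    assert np.isclose(np.dot(q[1:], q[1:]), 4 * E**2 * np.sin(theta / 2)**2)
```

This is the identity used below to express the Coulomb propagator factor ${\bf q}^{-4}$ through $\sin^4(\theta/2)$.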
With this at hand, and summing over the polarization, we obtain $$\begin{aligned}
|{\cal M}|^2=\frac{4Z^2e^4 E^2}{{\bf q}^4}\sin^2(\frac{pCq}{2 \Lambda^2})(1+2\sin^2(\theta/2)).
\label{feynmamamplitude}\end{aligned}$$
The set of momenta in (\[momenta\]) yields ${\bf q}^2=4E^2\sin^2(\theta/2)$, which along with (\[feynmamamplitude\]) leads to $$\begin{aligned}
\frac{d\sigma}{d\Omega}=\frac{Z^2}{4}\frac{\alpha^2(1+2\sin^2(\theta/2))}{E^2\sin^4(\theta/2)}\sin^2(\frac{pCq}{2 \Lambda^2}).
\label{difform}\end{aligned}$$
Let us now write out explicitly $\frac{p^\mu C_{\mu \nu}q^\nu }{2 \Lambda^2}$. With the set of momenta in (\[momenta\]), we obtain $$\begin{aligned}
\frac{p^\mu C_{\mu \nu}q^\nu }{2 \Lambda^2}=\frac{E^2}{2\Lambda^2}(\sin\theta(C_{01}-C_{13})+(\cos\theta -2)C_{03}).
\label{development} \end{aligned}$$
On substituting (\[development\]) in (\[difform\]), we then obtain the following general expression for the scattering of a photon by a Coulomb field in noncommutative QED $$\begin{aligned}
\frac{d\sigma}{d\Omega}=\frac{Z^2}{4}\frac{\alpha^2(1+2\sin^2(\theta/2))}{E^2\sin^4(\theta/2)}\sin^2(\frac{E^2}{2\Lambda^2}(\sin\theta(C_{01}-C_{13})+(\cos\theta -2)C_{03})).
\label{dsigma} \end{aligned}$$ As we can see from the expression above, the differential cross section receives contributions from space-time as well as from space-space noncommutativity. We can also see that, for large $\Lambda$, a sizable deflection requires an intense field (large $Z$) in conjunction with an energetic incoming photon. A similar behaviour should hold for other types of electromagnetic fields.
Given this behaviour of $\frac{d\sigma}{d\Omega}$ with $\Lambda$, what motivates us to pursue this proposal is the fact that recent trends in this area expect space-time noncommutativity to manifest itself at the TeV scale. At such relatively low energies, according to the differential cross section above, an energetic photon will suffer a considerable deflection in the presence of a Coulomb field.
Another reason to go further is that future linear colliders are expected to operate in a photon-photon collision mode. The energetic photons could then be used to study photon scattering by an electromagnetic field. This would be interesting because any deflection would be an unambiguous signal of NCST.
In view of this, let us analyze the differential cross section above in the regime $E\leq\Lambda$. We also restrict our analysis to space-space noncommutativity only, taking $C_{01}=C_{03}=0$ and normalizing $C_{13}=1$. With these considerations we obtain $$\begin{aligned}
\frac{d\sigma}{d\Omega}=\frac{ Z^2 \alpha^2 E^2}{4\Lambda^4}\frac{(1+2\sin^2(\theta/2))(1-\sin^2(\theta/2))}{\sin^2(\theta/2)}.
\nonumber\end{aligned}$$ This expression for the differential cross section shows that, for $\Lambda$ around a few TeV and an energetic incoming photon, photon deflection by a Coulomb field provides an interesting alternative way to probe NCST.
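The small-$E/\Lambda$ reduction can be verified numerically (a sketch of ours): for $E\ll\Lambda$, the exact expression (\[dsigma\]) with $C_{01}=C_{03}=0$, $C_{13}=1$ approaches the approximate formula above, since $\sin^2 x \approx x^2$ and $\sin^2\theta = 4\sin^2(\theta/2)(1-\sin^2(\theta/2))$:

```python
import numpy as np

alpha, Z = 1 / 137.0, 1.0

def dsigma_full(E, Lam, theta):
    # eq. (dsigma) with C01 = C03 = 0 and C13 normalized to 1
    s2 = np.sin(theta / 2)**2
    arg = (E**2 / (2 * Lam**2)) * (-np.sin(theta))
    return (Z**2 / 4) * alpha**2 * (1 + 2 * s2) / (E**2 * s2**2) * np.sin(arg)**2

def dsigma_small(E, Lam, theta):
    # leading term for E << Lambda quoted in the text
    s2 = np.sin(theta / 2)**2
    return (Z**2 * alpha**2 * E**2 / (4 * Lam**4)) * (1 + 2 * s2) * (1 - s2) / s2

E, Lam = 0.05, 1.0  # photon energy well below the noncommutativity scale
for theta in np.linspace(0.1, 3.0, 30):
    assert np.isclose(dsigma_full(E, Lam, theta),
                      dsigma_small(E, Lam, theta), rtol=1e-3)
```

The agreement degrades only when $E^2\sin\theta/2\Lambda^2$ ceases to be small, i.e. when the oscillatory structure of the $\sin^2$ factor becomes visible.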
In general NCST leads to deviations from the basic processes of QED. NCQED was recently probed at LEP through the process $e^+ e^- \rightarrow \gamma \gamma$ [@lep]; no significant deviation from the standard prediction was found. At the present stage of development of NCST, we can conclude that none of those basic processes is good enough to probe NCST, since no agreement has yet been reached on a phenomenological noncommutative standard model. The phenomenology of NCQED has nevertheless been intensively investigated. As the main novelty in NCQED is the triple and quartic couplings, they have been investigated through Compton scattering [@mathews], the pair annihilation process $e^+ e^- \rightarrow \gamma
\gamma$, and the $\gamma \gamma \rightarrow \gamma \gamma$ process [@rizzo; @list]. All these processes are sensitive to NCST manifesting at the TeV scale. The problem is then the background from the standard model and from ordinary QED itself. Of those processes, the only one to which ordinary QED does not contribute at tree level is $\gamma \gamma \rightarrow \gamma \gamma$, which makes it the natural process to probe NCQED. In this work we proposed an alternative check of NCQED. One reason our proposal is interesting is that it has no analogue in ordinary QED, nor in the standard model: it is a purely noncommutative effect. This makes photon deflection a prime signal of NCST.
We should mention that, from the experimental side, the Coulomb field of a center of charge $Ze$ may not be the most appropriate field to probe NCST; the electric field of a capacitor, or the magnetic field of some specific apparatus, might do the job better. However, whatever the field may be, the calculation done here is easily extended to any electromagnetic source.
Finally, we stress that the deflection of an energetic photon by an intense electromagnetic field would provide an unambiguous test of the recent idea of NCST at relatively low energies.
[*Acknowledgments.*]{} This work was supported by Conselho Nacional de Pesquisa e Desenvolvimento - CNPq.
[99]{} A. Armoni, Nucl. Phys. B [**593**]{}, 229 (2001); M. Hayakawa, Phys. Lett. B [**478**]{}, 394 (2000); I. F. Riad and M. M. Sheikh-Jabbari, J. High Energy Phys. [**08**]{}, 045 (2000).
The present interest in NCST arose in string theory: N. Seiberg and E. Witten, J. High Energy Phys. [**09**]{}, 032 (1999); however, the original idea dates back to 1947 in the work of Snyder: H. S. Snyder, Phys. Rev. [**71**]{}, 38 (1947).
N. F. Mott, Proc. R. Soc. London Ser. A [**124**]{}, 425 (1929); [**135**]{}, 429 (1932). For the derivation of the Mott formula in the framework of ordinary QED see: C. Itzykson and J.-B. Zuber, [*Quantum Field Theory*]{}, McGraw-Hill (1985).
In the context of NCST the Coulomb field receives a correction proportional to $\frac{1}{\Lambda^2}$: M. Chaichian, M. M. Sheikh-Jabbari and A. Tureanu, Phys. Rev. Lett. [**86**]{}, 2716 (2001). In this work we discarded such a correction since it leads to higher order corrections in $\frac{1}{\Lambda}$.
G. Abbiendi et al (OPAL Collaboration), Phys. Lett. [**B**]{}568, 181 (2003).
P. Mathews, Phys. Rev. D [**63**]{}, 075007 (2001).
J. L. Hewett, F. J. Petriello and T. G. Rizzo, Phys. Rev. D [**64**]{}, 075012 (2001).
For other papers concerning NCQED phenomenology see: H. Grosse and Y. Liao, Phys. Rev. D [**64**]{}, 115007 (2001); S. Godfrey and M. A. Doncheski, Phys. Rev. D [**65**]{}, 015005 (2002); I. Hinchliffe and N. Kersting, hep-ph/0205040; C. A. de S. Pires and S. M. Lietti, Eur. Phys. J. C [**35**]{}, 137 (2004).
[^1]: E-mail: cpires@fisica.ufpb.br
[^2]: In the expressions in (\[photonic\]) the parameter $\Lambda$ is the scale of energy where NCST is expected to manifest, while $C$ is an antisymmetric matrix that appears in the commutator $[\hat{x}_\mu ,\hat{ x}_\nu ] = i\frac{C_{\mu \nu}}{\Lambda^2}$ [@NCST].
---
abstract: 'A new relaxation mechanism is shown to arise from overdamped two-level systems above a critical temperature $T^*\approx 5$ K, thus yielding an explanation for experimental observations in dielectric glasses in the temperature range between $T^*$ and the relaxation peak at 50 K. Using the distribution function of the tunnelling model for the parameters of the two-level systems, both the linear decrease of the sound velocity and the linear increase of the absorption up to the relaxation maximum, are quantitatively accounted for by our theory.'
author:
- |
[*Peter Neu*]{}$\quad$ and $\quad$[*Alois Würger*]{}\
[*Institut für*]{}\
[*Theoretische Physik*]{}\
[*Universität Heidelberg*]{}\
[*Philosophenweg 19*]{}\
[*69120 Heidelberg, Germany*]{}
title: RELAXATION DUE TO INCOHERENT TUNNELLING IN DIELECTRIC GLASSES
---
PACS. 61.40– amorphous and polymeric materials $\\$ PACS. 63.50 – disordered solids, vibrational states $\\$ PACS. 77.22G – relaxation phenomena, dielectrics
Low temperature properties of glasses below 1 Kelvin [@HA] are satisfactorily explained by the assumption of localised tunnelling states (TS) with a wide distribution of energies and relaxation times; these TS are commonly described by a mapping onto two-level systems (TLS) [@AHV]. Usually only the direct (one-phonon) relaxation mechanism of TS with phonons is considered [@Jae]. For this reason there is poor agreement with experiment at temperatures above a few Kelvin [@Krau; @Ant; @Do]; in particular, the linear increase of the absorption up to the relaxation peak at about 50 K and the linear decrease with temperature of the sound velocity seem to be a universal characteristic of amorphous substances [@Krau; @Ant; @Do]. There are attempts to explain this linear temperature variation by thermally activated processes [@merz; @KKI], by elastic anharmonicity of the lattice [@Do], or by a modification of the standard distribution function for the tunnelling parameters [@Ant].
In this letter we provide an explanation for the temperature variation above a few Kelvin which relies on the two-level description and the standard distribution function, thus avoiding introduction of new parameters. We find in this temperature regime incoherent tunnelling rather than coherent oscillations; as a result, all TLS, even the symmetric ones, contribute to relaxation and therefore yield a more pronounced temperature dependence of sound and microwave propagation. In this paper we extend our previous treatment of symmetric TS [@NW] to the biased case.
The model is described by the spin-boson Hamiltonian [@Legg] $$H = -\frac{\hbar\dn}{2}\,\s_x + \frac{\hbar\dd}{2}\,\s_z + \gam\, e\,\s_z + H_B\ , \label{sphoas}$$ where $\dn$ denotes the tunnelling amplitude, $\dd$ the bias, $\gam$ the deformation potential, and $$e = i\sum_{k} c_k\, (b_k - b_k^{\dag})\ ,\qquad c_k\propto\sqrt{k}\ ,$$ the distortion of the lattice. We consider the coupling of the TS to three-dimensional acoustic phonons ($\om_k = k v$) described by $H_B = \sum_k\hbar\om_k b_k^{\dag} b_k$ and bosonic operators fulfilling $[b_k,b_{k'}^{\dag}] = \delta_{k,k'}$. The model is specified by the spectral density which in Debye approximation is given by $$J(\om) = \Bigl(\frac{\gam}{\hbar}\Bigr)^{2}\sum_{k} c_k^2\,\delta(\om-\om_k) = \gamt^2\,\om^3\,\theta(1-\om/\om_D)\ , \label{spdichas}$$ where $\gamt^2 = \gam^2/(\pi^2\varrho v^5\hbar)$, and $\om_D$ is the Debye frequency, $\varrho$ the mass density and $v$ the sound velocity. In the tunnelling model the parameters $\dn$ and $\dd$ are assumed to be distributed according to $P(\dn,\dd)=\bar{P}/\dn$, which is equivalent to $$P(\eps,r)\,d\eps\,dr = \frac{\bar{P}}{2r\sqrt{1-r}}\,d\eps\,dr \label{DF}$$ with a constant $\bar{P}$ and new parameters $r = \dn^2/\eps^2$ and $\eps = \sqrt{\dn^2 \,+\, \dd^2}$, where $r_{\rm min} \le r\le 1$.
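The change of variables from $(\dn,\dd)$ to $(\eps, r)$ can be checked symbolically (a sketch of ours): with $\dn = \eps\sqrt{r}$ and $\dd = \eps\sqrt{1-r}$, the standard density $\bar P/\dn$ transforms with the Jacobian into $\bar P/(2r\sqrt{1-r})$:

```python
import sympy as sp

eps, r, Pbar = sp.symbols('epsilon r Pbar', positive=True)

# Delta0 = eps*sqrt(r), Delta = eps*sqrt(1-r), so Delta0^2 + Delta^2 = eps^2
Delta0 = eps * sp.sqrt(r)
Delta = eps * sp.sqrt(1 - r)

J = sp.Matrix([[sp.diff(Delta0, eps), sp.diff(Delta0, r)],
               [sp.diff(Delta, eps), sp.diff(Delta, r)]])
# the determinant is negative on 0 < r < 1; take the absolute value by hand
jac = sp.simplify(-J.det())

# transform P(Delta0, Delta) = Pbar/Delta0 to the variables (eps, r)
P_eps_r = sp.simplify((Pbar / Delta0) * jac)
target = Pbar / (2 * r * sp.sqrt(1 - r))
diff = float((P_eps_r - target).subs({eps: 0.7, r: 0.3, Pbar: 1.2}))
assert abs(diff) < 1e-12
```

Note that the resulting density is flat in $\eps$, which is what produces the broad distribution of relaxation rates central to the argument below.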
All dynamical information is contained in the symmetrized two-time correlation function $C_{zz}(t)$, which is calculated in the framework of the Mori-Zwanzig projection formalism [@Mori] using a mode-coupling approximation [@NW; @Beck]. With the projector $\P=\sum_{\al} |\s_{\al})(\s_{\al}|
=\I - \Q$ and the scalar product $(A|B) = \tr [\rho_{eq} (1/2) (AB + BA)]$, the equilibrium density matrix $\rho_{eq} = \exp(-\beta H)/\tr(\exp(-\beta H))$, the resolvent matrix $C_{\al\beta}(z) =
(\s_{\al}|[\L-z]^{-1}|\s_{\beta})$ ($\al = x,y,z$) of the Liouvillian $\L\ast = [H,\ast]/\hbar$ can be written as $$C_{\al\bet}(z) = -\left[\frac{1}{z-\Om+M(z)}\right]_{\al\bet}\ . \label{Mo22}$$ Here $\Om_{\al\bet}=(\s_{\al}|\L|\s_{\bet})$ is the frequency matrix and $$M_{\al\bet}(z)=(\L\s_{\al}|\,\Q\,[\Q\L\Q-z]^{-1}\,\Q\,|\L\s_{\bet}) = \begin{pmatrix}
\si_{yy}(z) & -\si_{yx}(z) & 0\\
-\si_{xy}(z) & \si_{xx}(z) & 0\\
0 & 0 & 0
\end{pmatrix} \label{Mo23}$$ is the damping matrix with the spin-phonon resolvent $\si_{\al\bet}(z) = \left(e\s_{\al}|\,[\QQ-z]^{-1}\,|e\s_{\bet}\right)$. In mode-coupling approximation the memory functions are decoupled according to [@NW; @Beck] $$\si_{\al}''(\om) = C_{\al}''(\om)\ast\tilde{J}(\om)\ , \label{mod1}$$ for $\al = x,$ $y,$ $z, $ $a,$ $s$, where we have defined the bath spectral function $\tilde{J}(\om)=J(\om)$ $\,\coth(\beta\hbar\om/2)$, the convolution integral $$g(\om)\ast h(\om) = \int_{-\infty}^{\infty}d\om'\; g(\om- \om')\,h(\om')$$ and the resolvent functions $C_\al(z) = C_{\al\al}$ for $\al = x,y,z$, and $C_a(z) = -i\, ( C_{xy}(z) - C_{yx}(z))$ and $C_s(z) = C_{xy}(z) + C_{yx}(z)$. Here the imaginary parts of the resolvent functions, i.e. the spectral functions, are indicated by a double prime; the real parts are obtained from these via a Kramers-Kronig relation. By noting $C_y''(\om) \ = \ (\om\,/\,\dn)^2\, C_z''(\om)$, eqs. (\[Mo22\] - \[mod1\]) become closed and can be solved numerically by iteration. They show a transition from coherent tunnelling, where $C_z''(\om)$ has resonances at $\om\approx\pm\eps$ and $\om = 0$, to incoherent tunnelling motion, where the three resonances have merged into a single resonance at $\om = 0$ whose width narrows with further rising temperature. In both asymptotic regimes an analytic solution of eqs. (\[Mo22\] - \[mod1\]) is possible.
$(i)$ In the [*coherent*]{} or weak-coupling regime first Born approximation is reliable; after replacing in (\[mod1\]) $C_\al''(\om)$ by the free spin spectral function, and also discarding the spin-phonon interaction in $\rho_{eq}$, one easily derives the well-known results (cf. [@PiGo]) $$\begin{aligned}
C_z''(\om) &=& \frac{\dd^2}{\eps^2}\,{\rm sech}^2(\beta\hbar\eps/2)\,\frac{\Gam_1}{\om^2+\Gam_1^2}\nonumber\\
&&+\ \frac{\dn^2}{\eps^2}\left[\frac{\Gam_2}{(\om-\eps)^2+\Gam_2^2} + \frac{\Gam_2}{(\om+\eps)^2+\Gam_2^2}\right] \label{aspol1}\end{aligned}$$ with the usual one-phonon rate $$\Gam_1 \equiv 2\Gam_2 = \frac{\pi}{2}\; r\,\gamt^2\,\eps^3\coth(\beta\hbar\eps/2)\ . \label{coh1}$$
$(ii)$ In the [*incoherent*]{} or strong-coupling regime the full dynamics in $C_z^{\prime\prime}(\om)$ and $\rho_{eq}$ is kept and eqs. (\[Mo22\] - \[mod1\]) are treated self-consistently. Off-diagonal correlations like $\si_{xy}(z)$ and spin-polarisations $\EW$, $\EV$ are now negligible. The relevant singularities of $C_z(z)$ are at $z_\pm = \pm \Om - i\Gam_2$, $z_0 = -i\Gam_1$ where $\Om \approx \dd $, $\Gam_2 \approx \Gamt$, $\Gam_1 \approx \dn^2/\Gamt$ asymptotically. Here we have identified $\Sigma_{x}(z)$ with $\Sigma_{y}(z)$ and have defined $\Gamt$ by $i\,\Gamt = \Sigma_{x}(z=0)$. For $\Gamt\gg\eps$ the oscillating poles have zero residue, so that the asymptotic form of the relevant spectral function reads $$C_z''(\om) = \frac{\Gam_1}{\om^2 + \Gam_1^2}\ . \label{qw}$$ Inserting this in the mode-coupling integral (\[mod1\]) yields $$\Gamt = \gamt^2\,\eps\left(\frac{2\pi}{\beta\hbar}\right)^2\ . \label{inc1}$$
For low-frequency acoustic experiments on glasses, only the relaxational pole at $\om = 0$ is relevant. For this pole the following formulae reasonably interpolate between the behaviour in the coherent (\[aspol1\] - \[coh1\]) and the incoherent regime (\[qw\] - \[inc1\]): $$C_{\rm rel}''(\om) = \frac{\dd^2}{\eps^2}\,\frac{2\Gam_1}{\om^2+\Gam_1^2}$$ with the relaxation rate $$\Gam_1 \;=\; \frac{\dn^2\,\Gamt}{\eps^2+\Gamt^2} \;=:\; \frac{1}{\tau}\,, \label{g1ipp}$$ $\tau_{\rm min}$ being independent of $r$, and $$\Gamt = \left\{\begin{array}{ll}
r\,\eps^3\coth(\beta\hbar\eps/2) \;=:\; \tilde{\Gam}_{\rm 1ph}\,, & T<T^\ast\\[1ex]
\gamt\left(\dfrac{k_BT}{\hbar}\right)^{2} \;=:\; \tilde{\Gam}_{\rm MC}\,, & T\ge T^\ast\,.
\end{array}\right.$$ Here $T^\ast$ is the temperature where all thermal TLS ($\hbar\eps \le k_BT$) are overdamped. The condition $\tilde{\Gam}_{\rm MC} (T^\ast) \equiv k_BT^\ast/\hbar$ yields the transition temperature $$T^\ast = \frac{\hbar}{k_B\,\gamt}\,. \label{tst}$$
Internal friction and variation of sound velocity are given by [@HA] $$\begin{aligned}
Q^{-1}&=&\frac{\gam^2}{\varrho v^2}\;\overline{\chi''(\om)}\,,\nonumber\\
\frac{\delta v}{v}&=&-\,\frac{\gam^2}{2\varrho v^2}\;\overline{\chi'(\om)}\,;\end{aligned}$$ the absorptive part $\chi''(\om)$ of the dynamical susceptibility is related to the fluctuating part of the spectral function by the fluctuation-dissipation theorem. The bar denotes the average over tunnelling systems with respect to (\[DF\]). By cutting the $\eps$-integration at $\eps = \max(k_BT/\hbar,\Gamt)$ one finds for the internal friction $$Q^{-1} = \left\{\begin{array}{ll}
\pi\,C\,\tanh\!\left(\dfrac{\hbar\om}{2k_BT}\right), & T<T^\ast_{\rm res}\\[2ex]
\dfrac{\pi}{2}\,C\left(1-\dfrac{2}{\pi}\arctan\dfrac{\hbar\Gamt}{k_BT}\right)
+\dfrac{\pi}{2}\,C\,\dfrac{\hbar\Gamt}{k_BT}\,, & T>T^\ast_{\rm res}
\end{array}\right. \label{exp1}$$ and, by adding the contribution of the resonant part $\delta v/v|_{\rm res} = C\ln(T/T_0)$, for the change of the sound velocity $$\frac{\delta v}{v} = \left\{\begin{array}{ll}
C\,\ln(T/T_0)\,, & T<T^\ast_{\rm res}\\[1ex]
-\,\dfrac{1}{2}\,C\,\ln(T/T_0)\,, & T^\ast_{\rm res}<T<T^\ast\\[1ex]
-\,\dfrac{1}{2}\,C\;\dfrac{T}{T^\ast}\,, & T>T^\ast
\end{array}\right.$$ with $C = \bar{P}\gam^2/\varrho v^2$. $T^\ast_{\rm res}$ separates the regimes where the resonant $(T<T^\ast_{\rm res})$ and the relaxational process $(T>T^\ast_{\rm res})$ prevails. Below $T^\ast$ one finds the well-known logarithmic temperature dependence of the sound velocity and the constant internal friction. At $T = T^\ast$ the temperature dependence changes to a linear increase in the absorption and a linear decrease in the sound velocity.
From a recent experiment on Suprasil W [@Hannes] one finds a transition from the plateau to the linear increase at about $T^\ast\approx 6$ K, which corresponds according to (\[tst\]) to a deformation potential of about $\gam \approx 2$ eV. Here we have used $C = 2.8\,\times\,10^{-4}$ [@Hannes], $\varrho = 2.2$ g/$\mbox{cm}^3$, $v_\ell= 5.8\,\times\,10^5$ cm/sec, $v_t = 3.75\,\times\,10^5$ cm/sec [@Jae] and $\gam_\ell^2 \approx 2\gam_t^2$ with $\gam^2/v^5 =
\gam_\ell^2/v_\ell^5 + 2\gam_t^2/v_t^5$. With these values we calculate the slope with respect to temperature; for a comparison with experiment see Table 1. Both the prefactor and the logarithmic variation of the sound velocity with frequency show full agreement.
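The mode average of the deformation potential quoted above is a one-line computation; here is a small sketch with the quoted values ($\gam_\ell = 2.2$ eV is taken from the caption of Fig. 1; the variable names are ours, and the mixed units of eV and cm/s only enter as a fixed combination).

```python
import math

# Material parameters for Suprasil W as quoted in the text.
v_l = 5.8e5      # longitudinal sound velocity (cm/s)
v_t = 3.75e5     # transverse sound velocity (cm/s)
gamma_l = 2.2    # longitudinal deformation potential (eV), cf. Fig. 1

# gamma_l^2 ~ 2 gamma_t^2 fixes the transverse deformation potential.
gamma_t = gamma_l / math.sqrt(2.0)

# Mode average: gamma^2/v^5 = gamma_l^2/v_l^5 + 2 gamma_t^2/v_t^5.
g2_over_v5 = gamma_l**2 / v_l**5 + 2.0 * gamma_t**2 / v_t**5
```

The transverse branches dominate the average because of the $v^{-5}$ weighting.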
In Figs. 1 and 2 we have plotted our theoretical results together with the experimental data for Suprasil W [@Hannes]. At temperatures between 100 mK and 15 K there is full agreement between experiment and theory; for both absorption and sound velocity the measured data are reproduced with the same numerical values for $\gamt$ and $C$. A similarly good agreement has been found at other frequencies and in recent experiments on GeO$_2$ [@Sonja].
Finally, we comment on the absorption peak at about 30 K. According to (19), our theory yields $T_{\rm max}\propto\sqrt{\om}$. Experimentally, however, the frequency dependence of the relaxation maximum is found to be much weaker [@Hannes]. This indicates the onset of thermally activated processes at some temperature below $T_{\rm max}$, which would yield a logarithmic variation with frequency rather than the square-root dependence. In that case the quoted value of $r_{\rm min}$ in Fig. 1 has no physical relevance. We stress that this does not affect the temperature variation below $T_{\rm max}$.
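The square-root law can be made explicit in one line, assuming the relaxation maximum sits where the mode-coupling rate, which grows as $T^2$, matches the measuring frequency (a sketch, not a full derivation):

```latex
% Position of the relaxation maximum, assuming
% \tilde{\Gamma}_{\rm MC}(T) \propto T^2:
\omega \;\simeq\; \tilde{\Gamma}_{\rm MC}(T_{\rm max}) \;\propto\; T_{\rm max}^{2}
\quad \Longrightarrow \quad
T_{\rm max} \;\propto\; \sqrt{\omega}\,.
```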
In summary, we have shown that the experimental data in the absorption and the sound velocity up to the relaxation peak can be explained in the framework of the tunnelling model. The novel features arise from the incoherent dynamics of the tunnelling motion at temperatures above $T^*$. In particular, this provides a new relaxation mechanism which accounts well for the experimental findings in glasses above 5 K.
Acknowledgement {#acknowledgement .unnumbered}
===============
We are grateful to Johannes Classen, Christian Enss and Sonja Rau for helpful discussions and for kindly communicating experimental data prior to publication.
[12]{}
Hunklinger, S., Arnold, W.: In: Physical acoustics, vol. 12, Thurston, R. N., Mason, W. P. (eds.). New York: Academic Press (1976); Hunklinger, S., Raychaudhuri, A. K.: In: Progress in low temperature physics, vol. IX, Brewer, D. F. (ed.). Amsterdam: Elsevier (1986)

Anderson, P. W., Halperin, B. I., Varma, C.: Philos. Mag. [**25**]{}, 1 (1972); Phillips, W. A.: J. Low Temp. Phys. [**7**]{}, 351 (1972)

Jäckle, J.: Z. Phys. [**257**]{}, 212 (1972)

Krause, J. T.: J. Appl. Phys. [**42**]{}, 3035 (1971); Bellessa, G., Lemercier, C., Caldemaison, D.: Phys. Lett. [**62A**]{}, 127 (1977); Bellessa, G.: Phys. Rev. Lett. [**40**]{}, 1456 (1978)

Anthony, P. J., Anderson, A. C.: Phys. Rev. [**B 20**]{}, 763 (1979)

Doussineau, P., Frenois, C., Leisure, R. G., Levelut, A., Prieur, J.-Y.: J. Physique [**41**]{}, 1193 (1980)

Tielburger, D., Merz, R., Ehrenfels, R., Hunklinger, S.: Phys. Rev. [**B 45**]{}, 2750 (1992)

Buchenau, U., Galperin, Yu. M., Gurevich, V. L., Parshin, D. A., Ramos, M. A., Schober, H. R.: Phys. Rev. [**B 46**]{}, 2798 (1992)

Neu, P., Würger, A.: to appear in Z. Phys. [**B**]{}

Leggett, A. J., Chakravarty, S., Dorsey, A. T., Fisher, M. P. A., Garg, A., Zwerger, W.: Rev. Mod. Phys. [**59**]{}, 1 (1987); Weiss, U.: Quantum Dissipative Dynamics, Series in Modern Condensed Matter Physics, vol. 2. Singapore: World Scientific (1993)

Mori, H.: Progr. Theor. Phys. [**33**]{}, 127 (1965); Zwanzig, R.: J. Chem. Phys. [**33**]{}, 1338 (1960)

Beck, R., Götze, W., Prelovsek, P.: Phys. Rev. [**A 20**]{}, 1140 (1979); Zwerger, W.: Z. Phys. [**B 53**]{}, 53 (1983); ibid. [**54**]{}, 87 (1983)

Pirc, R., Gosar, P.: Phys. kondens. Materie [**9**]{}, 377 (1969)

Classen, J., Enss, C., Bechinger, C., Weiss, G., Hunklinger, S.: to appear in Annalen der Physik (1994)

Rau, S.: Private communication (1994)
[Fig. 1: plot of the internal friction versus temperature (K, logarithmic scale) for Suprasil W: data ($\diamond$) and theory curve; labels in plot: $\tilde{\gamma} = 2.85 \times 10^{-13}$ s, $f = 11.4$ kHz, $r_{\rm min} = 1.55\times 10^{-8}$.]

[Fig. 2: plot of the relative variation of the sound velocity versus temperature (K) for Suprasil W: data ($\diamond$) and theory curve; labels in plot: $\tilde{\gamma} = 2.85\times 10^{-13}$ s, $f = 11.4$ kHz.]
Captions {#captions .unnumbered}
========
Fig. 1: Internal friction for Suprasil W at 11.4 kHz. The data ($\diamond$) are from Classen et al. [@Hannes]. In our theory (–) we have used $C =
2.8\times 10^{-4}$ and $\gam_\ell = 2.2$ eV
Fig. 2: Relative variation of the sound velocity for Suprasil W at 11.4 kHz. The data ($\diamond$) are from Classen et al. [@Hannes]. In our theory (–) we have used the same numerical value for $C$ and $\gam_\ell$ as in Fig. 1
[Table 1: Comparison of theoretical and experimental results for Suprasil W in the incoherent regime $T>T^\ast$. The experimental values are taken from [@Hannes]]{}\

|              | experiment                             | theory                            |
|--------------|----------------------------------------|-----------------------------------|
| $Q^{-1}$     | $(7\pm 2)\times 10^{-5}\,T/\mathrm{K}$ | $6.1\times 10^{-5}\,T/\mathrm{K}$ |
| $\delta v/v$ | $2\times 10^{-5}\,T/\mathrm{K}$        | $2.2\times 10^{-5}\,T/\mathrm{K}$ |
---
abstract: 'Inelastic neutron scattering was used to measure the magnetic field dependence of spin excitations in the antiferromagnetic $S$=$\frac{1}{2}$ chain ${\rm CuCl_2 \cdot 2(dimethylsulfoxide)}$ (CDC) in the presence of uniform and staggered fields. Dispersive bound states emerge from a zero-field two-spinon continuum with different finite energy minima at wave numbers $q$=$\pi$ and $q_i\approx\pi(1-2\langle S_z \rangle)$. The ratios of the field dependent excitation energies are in excellent agreement with predictions for breather and soliton solutions to the quantum sine-Gordon model, the proposed low-energy theory for $S$=$\frac{1}{2}$ chains in a staggered field. The data are also consistent with the predicted soliton and $n$=$1,2$ breather polarizations and scattering cross sections.'
author:
- 'M. Kenzelmann$^{1,2}$, Y. Chen$^{1,3}$, C. Broholm$^{1,2}$, D. H. Reich$^{1}$, and Y. Qiu$^{2,4}$'
title: 'Bound spinons in an antiferromagnetic ${\bf S}$=$\frac{1}{2}$ chain with a staggered field'
---
Shortly after the advent of quantum mechanics, Hans Bethe introduced a model antiferromagnet that continues to play a central role in quantum many-body physics [@bethe]. The isotropic antiferromagnetic (AF) spin-1/2 chain has a simple spin Hamiltonian, ${\cal H}=J\sum_n{\bf S}_n\cdot{\bf S}_{n+1}$, is integrable through Bethe’s Ansatz, and is realized with high fidelity in a number of magnetically anisotropic Cu$^{2+}$-based materials. Because it sits at the boundary between quantum order and spin order at $T=0$, Bethe’s model is ideally suited for exploring quantum critical phenomena and the qualitatively different phases that border the critical point [@sachdev; @chainreview]. This Letter presents an experimental study of the profound effects of a symmetry-breaking staggered field on excitations in the spin-1/2 chain.
In zero field, the fundamental excitations of the spin-1/2 chain are not spin waves but domain wall like quasi-particles called spinons that separate reversed AF domains [@Faddeev_Takhtajan; @Muller; @Karbach_Bougourzi]. The ground state is a Luttinger liquid and the spinons are non-local spin-1/2 objects with short range interactions. Thus, spinons can only be excited in pairs and produce a gapless continuum. Such a spectrum has been observed in several quasi-one-dimensional spin-1/2 chain systems [@Tennant; @DenderPRB; @Stone] and is now understood to be a distinguishing attribute of quantum-critical systems. The dramatic effect of a staggered field was discovered through a high field neutron scattering experiment on the quasi-one-dimensional spin-1/2 antiferromagnet copper benzoate [@DenderPRL]. Designed to verify theoretical predictions of a field driven gapless incommensurate mode [@Muller], this experiment instead revealed a field induced gap in the excitation spectrum. The critical exponent of $\sim 2/3$ describing the field dependence of the gap in copper benzoate, $\rm
[PM\cdot Cu(NO_3)_2\cdot(H_2O)_2]_n$ [@Feyerherm], and $\rm
Yb_4As_3$ [@Kohgi], identified the source of this gap as the staggered field that accompanies a uniform field in materials with alternating Cu-coordination. The staggered field yields an energetic distinction between reversed domains, which confines spinons in multi-particle bound states [@Oshikawa_Affleck].
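For orientation, the quoted exponent follows from sine-Gordon scaling: the staggered field is a relevant perturbation, and up to logarithmic corrections the induced gap obeys (the Oshikawa-Affleck result, quoted from the literature rather than derived here):

```latex
% Gap scaling with applied field, up to logarithmic corrections;
% h \propto H is the induced staggered field:
\Delta \;\sim\; J \left( \frac{h}{J} \right)^{2/3} \;\propto\; H^{2/3} .
```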
A quantitative theory for this effect was developed by Affleck and Oshikawa starting from the following extension of Bethe’s model Hamiltonian [@Oshikawa_Affleck; @Affleck_Oshikawa], $$\begin{aligned}
\mathcal{H} = J\sum_i {\bf S}_i\cdot {\bf S}_{i+1}
+\sum_j (-1)^j {\bf D}\cdot ({\bf S}_{j-1}\times{\bf S}_j)&\nonumber \\
- \sum_{j,\alpha,\beta} H^\alpha [g^u_{\alpha\beta} + (-1)^j
g^s_{\alpha\beta}]S^\beta_j \, . \label{Hamiltonian}\end{aligned}$$ The alternating spin environment is represented by the staggered Dzyaloshinskii-Moriya (DM) interaction and Zeeman terms. Through an alternating coordinate transformation, the model can be mapped to a spin-1/2 chain in a transverse staggered field that is proportional to the uniform field $H$. While the zero field properties of Eq.(1) are indistinguishable from Bethe’s model, an applied field induces transverse AF Ising spin order and a gap. Using bosonization techniques to represent the low energy spin degrees of freedom, Affleck and Oshikawa showed that their dynamics is governed by the quantum sine-Gordon model (QSG) with Lagrangian density $$\begin{aligned}
{\cal L}=\frac{1}{2}[(\partial_t\overline{\phi})^2-(\partial_x\overline{\phi})^2 ]+hC\cos(\beta\overline{\phi}).\end{aligned}$$ Here $h\propto H$ is the effective staggered field. $C(H)$ and $\beta(H)$ (which goes to $\sqrt{2\pi}$ for $H\rightarrow 0$) vary smoothly with the applied field and can be determined numerically through the Bethe Ansatz for $h\ll H$ [@Affleck_Oshikawa; @Essler98].
With applications from classical to particle physics, the sine-Gordon (SG) model plays an important role in the theory of non-linear dynamical systems [@Tsvelik_book]. Excitations are composed of topological objects called solitons that encompass a localized $\pm 2\pi/\beta$ shift in $\overline{\phi}$ for a soliton and an anti-soliton, respectively [@Dashen_Hasslacher]. In addition there are soliton-antisoliton bound states called breathers, which drop below the soliton-antisoliton continuum as the non-linear term is increased. The excited state wave functions are known for both solitons and breathers, which enables exact calculation of the inelastic scattering cross sections. In this Letter we use neutron scattering from a magnetized spin-1/2 chain with two spins per unit cell to test these results and more generally to explore the dynamics of spinons with long range interactions.
Based on the temperature dependence of the susceptibility and specific heat, ${\rm CuCl_2 \cdot 2((CD_3)_2SO)}$ (CDC) was identified as an AF $S$=$\frac{1}{2}$ chain system with $J$=$1.5\;\mathrm{meV}$, a staggered $g$-tensor and/or DM interactions [@Landee; @Chen_CDC]. The spin chains run along the ${\bf a}$-axis of the orthorhombic crystal structure ([*Pnma*]{}) [@Willett_Chang], with the ${\rm Cu^{2+}}$ ions separated by $0.5{\bf a} \pm 0.22{\bf c}$. Wave vector transfer is indexed in the corresponding reciprocal lattice ${\bf Q}(hkl)=h{\bf
a}^*+k{\bf b}^*+l{\bf c}^*$, and we define $q={\bf Q}\cdot {\bf
a}$. Due to weak inter-chain interactions, CDC has long-range AF order in zero field below $T_N$=$0.93\;\mathrm{K}$ with an AF wave-vector ${\bf Q}_m={\bf a}^*$. An applied field strongly suppresses the ordered phase [@Chen_CDC], indicating that inter-chain interactions favor correlations that are incompatible with the field-induced staggered magnetization [@Oshikawanew]. Above the critical field for Néel order, $H_c = 3.9\;\mathrm{T}$, we find that CDC is an excellent model system for our purpose.
Deuterated single crystals were grown through slow cooling of saturated methanol solutions of anhydrous copper chloride and deuterated dimethyl sulfoxide ($\rm (CD_3)_2SO$) in a 1:2 molar ratio [@Chen_CDC]. The sample studied consisted of four crystals with a total mass $7.76\;\mathrm{g}$. The experiments were performed using the disk chopper time-of-flight spectrometer (DCS) at the NIST Center for Neutron Research with the ${\bf
c}$-axis and the magnetic field vertical. In configuration A the incident energy was $E_i$=$3.03\;\mathrm{meV}$ and the ${\bf
a}$-axis was parallel to the incident neutron beam direction ${\bf
k}_i$. Configuration B had $E_i$=$4.64\;\mathrm{meV}$ and $\angle({\bf k_i},{\bf a})$=$60^{\circ}$. The counting time was 18 hrs at 11 T and an average of 5 hrs for each measurement between 0 and 8 T. The raw scattering data were corrected for a time-independent background measured at negative energy transfer, for monitor efficiency, and for the ${\rm Cu^{2+}}$ magnetic form factor, folded into the first Brillouin zone, and put onto an absolute scale using the elastic incoherent scattering from CDC. For the normalization, the H/D ratio (=0.02) was measured independently through prompt-$\gamma$ neutron activation analysis.
Figures \[Fig1-colorplot\](a) and \[Fig1-colorplot\](b) show that for $T \ll J/k_{\rm B}$, the zero-field excitation spectrum of CDC consists of continuum scattering above a low-energy threshold that varies as $\hbar\omega$=$\frac{\pi}{2}J |\sin(q)|$ through the zone [@Muller]. An exact analytical expression for the two-spinon contribution to the scattering cross section, which accounts for 72.89% of the total spectral weight, was recently obtained by Bougourzi [*et al.*]{} [@bougourzi1996; @fledderjohann1996; @Karbach_Bougourzi]. Figures \[Fig2-piscans\] and \[Fig3-incompeak\] show a quantitative comparison of this result (blue line), convolved with the experimental resolution, to the experimental data. The excellent quantitative agreement between model and data provides compelling evidence for spinons in the zero field state of CDC. Note that the Goldstone modes that are expected due to Néel order for $\hbar\omega < k_{\rm B}T_N\approx 0.1\;\mathrm{meV}$ are not resolved in this experiment.
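The continuum boundaries quoted above are easy to evaluate numerically. The following sketch (illustrative; function and variable names are ours) computes the measured lower threshold $\frac{\pi}{2}J|\sin q|$ together with the standard two-spinon upper boundary $\pi J|\sin(q/2)|$ known from the literature, for the CDC exchange constant $J=1.5\;\mathrm{meV}$:

```python
import math

J = 1.5  # CDC exchange constant in meV

def two_spinon_bounds(q):
    """Lower and upper boundaries (in meV) of the two-spinon continuum
    at wave vector q along the chain (zone boundary at q = pi)."""
    lower = 0.5 * math.pi * J * abs(math.sin(q))
    upper = math.pi * J * abs(math.sin(q / 2))
    return lower, upper

# The continuum is gapless at q = 0 and q = pi, and at the zone
# boundary it extends up to pi*J ~ 4.7 meV.
```

The vanishing of the lower bound at $q=0,\pi$ reflects the gapless spinon spectrum of the zero-field chain.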
Figures \[Fig1-colorplot\](c) and \[Fig1-colorplot\](d) show that the magnetic excitations in CDC change dramatically with field and are dominated by resolution-limited modes for $H=11$ T. Figures \[Fig2-piscans\] and \[Fig3-incompeak\](b) show spectra at the wave vectors corresponding to the minima in the dispersion relations, which occur at $q = \pi$, and $q_i =0.77
\pi$, as determined from the constant-$\hbar\omega$ cut in Fig. \[Fig3-incompeak\](a). These data graphically illustrate the field-induced transfer of spectral weight from the two-spinon continuum into single-particle excitations. A phenomenological cross-section of long-lived dispersive excitations was fit to the data near $q=\pi$ and $q_i$ to take into account the experimental resolution and thereby accurately locate the excitation energies. These fits are shown as red lines in Figs. \[Fig2-piscans\] and \[Fig3-incompeak\], and the inferred parameters characterizing the dispersion relations are displayed for a series of fields in Fig. \[Fig4-fitresults\].
We now examine whether our high field observations are consistent with the QSG model for spin-1/2 chains in a staggered field [@Affleck_Oshikawa]. First, the model predicts single soliton excitations at $q_i$=$\pi(1-2\langle S_z \rangle)$. This is qualitatively consistent with the raw data in Figs. \[Fig1-colorplot\](c) and \[Fig1-colorplot\](d), and with Fig. \[Fig4-fitresults\](d), which shows how $q_i$ moves across the zone with $H$. Quantitative agreement is also apparent from the solid line in Fig. \[Fig4-fitresults\](d), which is the predicted field dependence as calculated from the magnetization curve for a spin-1/2 chain [@Muller].
The soliton and antisoliton mass is related to the exchange interaction of the original spin chain and the uniform and staggered fields, $H$ and $h$, as follows [@Dashen_Hasslacher; @Affleck_Oshikawa] $$\begin{aligned}
&M\approx J (\frac{g \mu_B h}{J})^{(1+\xi)/2} ~\times&\nonumber \\
&\{B(\frac{J}{g \mu_BH})^{(2\pi-\beta^2)/4\pi}
(2-\frac{\beta^2}{\pi})^{1/4}\}^{-(1+\xi)/2}. &
\label{Eq2}\end{aligned}$$ Here $\xi=\beta^2/(8\pi-\beta^2)\rightarrow 1/3$ for $H\rightarrow 0$ and $B=0.422169$. Assuming $h\propto H$, the soliton energy versus field is shown as a solid red line in Fig. \[Fig4-fitresults\](a). While Eq. (\[Eq2\]) is at the limit of validity for CDC at $H=11$ T, we attribute the discrepancy with the mode energy at $q_i$ (red triangles) to inter-chain interactions that suppress the effective staggered field close to $H_c$. The more general expression, $h \propto (H-H_c)^{\alpha}$, yields a good fit for $\alpha=0.68(5)$ (dashed red line).
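The weak-field gap exponent follows directly from these definitions: with $\xi=\beta^2/(8\pi-\beta^2)$ and $\beta^2\rightarrow 2\pi$, the soliton mass scales as $h^{(1+\xi)/2}=h^{2/3}$, the critical exponent of $\sim 2/3$ observed in copper benzoate. A minimal numerical check (function names are ours):

```python
import math

def qsg_xi(beta_sq):
    # QSG parameter xi = beta^2 / (8*pi - beta^2)
    return beta_sq / (8 * math.pi - beta_sq)

beta_sq = 2 * math.pi                     # H -> 0 limit of the coupling
gap_exponent = (1 + qsg_xi(beta_sq)) / 2  # gap ~ h^((1+xi)/2)
# qsg_xi -> 1/3, so the gap scales as h^(2/3) for H -> 0
```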
The QSG model predicts that breather bound states of $2n$-solitons should be accessible at $q$=$\pi$ with masses $$M_n=2M\sin(n\pi\xi/2)\, .$$ Sharp modes are indeed observed in CDC at $q=\pi$. Their energies are compared to the breather masses predicted for $h \propto H$ (solid lines) and $h \propto (H-H_c)^{0.68}$ (dashed lines) in Fig. \[Fig4-fitresults\](a). The ratios of the commensurate and putative breather mode energies to the lowest energy incommensurate and putative soliton mode energy, shown in Fig. \[Fig4-fitresults\](b), are in excellent agreement with the normalized breather energies $M_n/M$ for $n$=$1$ and $n$=$2$. This comparison, which is insensitive to the origin of the staggered field, suggests that breathers indeed exist in CDC.
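In the weak-field limit $\xi=1/3$, the breather mass formula yields simple closed-form ratios $M_n/M$; the short sketch below (illustrative names) evaluates them:

```python
import math

def breather_ratio(n, xi):
    """Ratio M_n / M of the n-th breather mass to the soliton mass,
    from M_n = 2 M sin(n * pi * xi / 2)."""
    return 2 * math.sin(n * math.pi * xi / 2)

xi_weak = 1 / 3
# n = 1: 2 sin(pi/6) = 1           (first breather degenerate with the soliton)
# n = 2: 2 sin(pi/3) = sqrt(3) ~ 1.73
```

These are the normalized breather energies against which the measured mode-energy ratios are compared; at finite field $\xi$ decreases below $1/3$ and the ratios shift accordingly.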
The evidence for breathers is strengthened as we examine the polarization of the scattering at $q=\pi$. According to the QSG model, $n$=odd (even) breathers are polarized in the plane normal to ${\bf H}$ and perpendicular (parallel) to ${\bf
h}$[@Essler98]. Neutron scattering probes the projection of spin fluctuations on the plane normal to the scattering vector ${\bf Q}$. Figure \[Fig2-piscans\] shows that the $\hbar\omega$=$1\;\mathrm{meV}$ peak seen for ${\bf Q}_{\rm
A}\approx(1,1.64,0)$ in configuration A is absent for $\hbar\omega$=$1\;\mathrm{meV}$ and ${\bf Q}_{\rm
B}\approx(1,0,0)$ in configuration B. In a quasi-one-dimensional system the only explanation for this is that the excitation is polarized along ${\bf Q}_{\rm B} ||{\bf a} || {\bf h}$ as expected for an even numbered breather, and hence is extinguished by the polarization factor in the neutron scattering cross section for configuration B. The predicted ${\bf b}$ and ${\bf c}$ axis polarizations respectively of the $n$=$1$ breather and the soliton are confirmed by the consistent polarization factor corrected intensities from configurations A and B for $H=11\;\mathrm{T}$ in Fig. \[Fig4-fitresults\](c).
One of the unusual aspects of the QSG model is that complex features such as the breather and soliton structure factors can be calculated exactly [@Essler98]. The solid lines in Fig. \[Fig4-fitresults\](c) show that these exact results are consistent with the field dependent intensities of the commensurate and incommensurate low energy modes in CDC. For $H=11$ T the third breather is expected at about $1.4\;\mathrm{meV}$, close to the energy of the soliton mode at $q_i$. The peak close to $1.4\;\mathrm{meV}$ has intensity $I=0.14(3)$ in configuration A and $I=0.26(2)$ in configuration B, and this is consistent with a third breather contribution polarized along ${\bf b}$. The inferred ${\bf b}$-polarized intensity of $I_b^{\rm exp}=0.23(7)$ is however much greater than the intensity predicted for the $n=3$ breather ($I_3^{\rm QSG}=0.026(7)$), which indicates additional anisotropic contributions to the inelastic scattering there.
In addition to the field-induced resonant modes, the experiment shows that a high-energy continuum persists for $H$=$11\;\mathrm{T}$. Fig. \[Fig1-colorplot\](d) for example clearly shows a broad maximum in the $q$-dependence of neutron scattering for energies $\hbar\omega>1.6$ meV and $q\approx\pi$. Firm evidence for continuum scattering comes from the field dependence of the first moment $\langle\hbar\omega\rangle_{\bf Q}$=$\hbar^2\int S({\bf Q},\omega)\omega\, d\omega$, which is proportional to the ground state energy $\langle{\mathcal H}\rangle$ with a negative $q$-dependent prefactor [@Hohenberg_Brinkman]. At zero field the experimental value of $\langle\hbar\omega\rangle_{\bf Q}$ corresponds to $\langle{\mathcal H}\rangle=-0.4(1)J$, in agreement with Bethe’s result of $\langle{\mathcal H}\rangle=(\frac{1}{4}-\ln 2)J\approx -0.44J$. At $11\;\mathrm{T}$, however, the estimate $\widetilde{\langle{\mathcal H}\rangle}$ derived solely from the resonant modes is $-0.25(6)J$, whereas $\langle{\mathcal H}\rangle$ is expected to be $-0.34J$ [@Chen_CDC]. The discrepancy is an independent indication of spectral weight beyond the resonant modes. For $q=\pi$, the transverse contribution to the continuum scattering predicted by the QSG model [@Essler98] and shown as a solid line in the inset of Fig. \[Fig2-piscans\] has a maximum close to a weak peak in the measured scattering intensity. The shortfall of the theoretical result suggests that there are additional longitudinal contributions to the continuum scattering.
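The zero-field benchmark used in this comparison is a one-line calculation; as a sketch:

```python
import math

# Bethe ground-state energy per bond of the isotropic spin-1/2 chain,
# in units of J: e0 = 1/4 - ln 2 ~ -0.443
e0 = 0.25 - math.log(2)

# the measured zero-field first-moment estimate of -0.4(1) J
# is consistent with e0 within its quoted uncertainty
assert abs(e0 - (-0.4)) < 0.1
```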
In summary, staggered field induced spinon binding in spin-1/2 chains provides an experimental window on the unique non-linear dynamics of the quantum sine-Gordon model. Our neutron scattering experiment on quasi-one-dimensional CDC in a high magnetic field yields clear evidence for soliton/antisoliton creation at wave vector transfer $q=\pi(1-2\langle S^z\rangle)$, as well as $n=1$ and $n=2$ breather bound states at $q=\pi$. Interpretation of the data throughout the Brillouin zone will require exact diagonalization studies and a better understanding of lattice effects than provided by the continuum field theory reviewed in this paper. Other results that call for further experimental and theoretical work are the observation of high energy continuum scattering in the gapped phase and indications that inter-chain interactions can renormalize the soliton mass.
We thank C. P. Landee, J. Copley, C. Batista, I. Affleck, and F. Essler for helpful discussions and R. Paul for prompt gamma analysis. Work at JHU was supported by the NSF through DMR-0306940. DCS and the high-field magnet at NIST were supported in part by the NSF through DMR-0086210 and DMR-9704257.
[10]{}
H. Bethe, Z. Phys. **71**, 205 (1931).
S. Sachdev, *Quantum Phase Transitions*, Cambridge University Press (2000).
C. Broholm, *et al.*, in *High Magnetic Fields: Applications in condensed matter physics and spectroscopy*, C. Berthier *et al.*, Eds. Springer Verlag (2002).
L. D. Faddeev *et al.*, Phys. Lett. A [**85**]{}, 375 (1981).
G. Müller *et al.*, Phys. Rev. B [**24**]{}, 1429 (1981).
M. Karbach *et al.*, Phys. Rev. B [**55**]{}, 12510 (1997).
D. A. Tennant *et al.*, Phys. Rev. B [**52**]{}, 13368 (1995).
D. Dender *et al.*, Phys. Rev. B [**53**]{}, 2583 (1996).
M. B. Stone *et al.*, Phys. Rev. Lett. [**91**]{}, 037205 (2003).
D. C. Dender *et al.*, Phys. Rev. Lett [**79**]{}, 1750 (1997).
R. Feyerherm *et al.*, J. Phys. Cond. Mat. [**12**]{}, 8495 (2000).
M. Kohgi *et al.*, Phys. Rev. Lett. [**86**]{}, 2439 (2001).
M. Oshikawa *et al.*, Phys. Rev. Lett. [**79**]{}, 2883 (1997).
I. Affleck *et al.*, Phys. Rev. B [**60**]{}, 1038 (1999).
F. H. L. Essler *et al.*, Phys. Rev. B [**57**]{}, 10592 (1998) and [*ibid*]{} [**68**]{}, 064410 (2003).
A. M. Tsvelik, *Quantum Field Theory in Condensed Matter Physics*, Cambridge University Press (1995).
R. F. Dashen *et al.*, Phys. Rev. D [**11**]{}, 3424 (1975).
C. P. Landee *et al.*, Phys. Rev. B [**35**]{}, 228 (1987).
Y. Chen *et al.*, unpublished.
R. D. Willett *et al.*, Inorg. Chem. Acta [**4**]{}, 447 (1970).
M. Sato and M. Oshikawa, Phys. Rev. B in press (2004).
A. H. Bougourzi *et al.*, Phys. Rev. B [**54**]{}, R12669 (1996).
A. Fledderjohann *et al.*, Phys. Rev. B [**53**]{}, 11543 (1996).
P. C. Hohenberg *et al.*, Phys. Rev. B [**10**]{}, 128 (1974).
---
abstract: 'Dilaton stabilization may occur in a theory based on a single asymptotically free gauge group with matter due to an interplay between quantum modification of the moduli space and a tree-level superpotential. We present a toy model where such a mechanism is realized. Dilaton stabilization in this mechanism tends to occur at strong coupling values unless some unnatural adjustment of parameters is involved.'
address: |
$^1$Theory Division, CERN, CH-1211, Geneva 22, Switzerland\
$^2$Lyman Laboratory of Physics, Harvard University, Cambridge, MA 02138\
$^3$Department of Physics, Northeastern University, Boston, MA 02115
author:
- 'Gia Dvali$^{1}$[^1] and Zurab Kakushadze$^{2,3}$[^2]'
title: A Remark on Dilaton Stabilization
---
The gauge and gravitational couplings in the effective field theory of any string derived model are determined by vevs of certain moduli. Thus, for example, in the perturbative heterotic superstring the gauge couplings $g_a$ and the gravitational (that is, string) coupling $g$ at the string scale are related to each other via $K_a g^2_a=g^2$, where $K_a$ are the current algebra levels of the corresponding gauge subgroups $G_a$. The string coupling is determined by the vev of the dilaton field $S$: $\langle S\rangle=1/g^2+i\theta/8\pi^2$ (where $\theta$ is the vacuum angle). Perturbatively $S$ is a modulus field, and its expectation value is undetermined. Dilaton stabilization must therefore have a non-perturbative origin.
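The relations just quoted can be summarized in a small numerical sketch (function names are ours), mapping a dilaton vev to the string and gauge couplings via $\langle S\rangle=1/g^2+i\theta/8\pi^2$ and $K_a g_a^2=g^2$:

```python
import math

def dilaton_vev(g, theta):
    """S = 1/g^2 + i*theta/(8*pi^2) for string coupling g, vacuum angle theta."""
    return complex(1.0 / g**2, theta / (8 * math.pi**2))

def gauge_coupling(g, K_a):
    """g_a obtained from K_a * g_a^2 = g^2 at current algebra level K_a."""
    return g / math.sqrt(K_a)
```

Weak string coupling, $g\lesssim 1$, thus corresponds to ${\rm Re}\,S\gtrsim 1$.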
One possibility is to consider the standard “race-track” scenario [@kras], where a non-perturbative superpotential (exponential in $S$) is generated by gaugino condensation. Dilaton stabilization then requires the presence of at least two gauge groups giving rise to different exponentials[^3] in the superpotential[^4].
In this note we argue that dilaton stabilization may occur in a theory based on a [*single*]{} asymptotically free gauge group with matter[^5]. As we discuss below, the dilaton is stabilized here due to an interplay between [*quantum modification*]{} of the moduli space and [*tree-level*]{} couplings of the gauge invariants with additional gauge singlets. Here we present a toy model where such a mechanism is realized.
Thus, consider a theory with $SU(N)$ gauge group and $N$ flavors $Q^i,{\tilde Q}_{\bar j}$ ($i,{\bar j}=1,\dots,N$). The gauge invariant degrees of freedom are mesons $M^i_{\bar j}\equiv Q^i {\tilde Q}_{\bar j}$, and baryons $B\equiv\epsilon_{{i_1}\dots {i_{N_c}}}Q^{i_1}\cdots Q^{i_{N_c}}$ and ${\tilde B}\equiv\epsilon^{{{\bar j}_1}\dots {{\bar j}_{N_c}}}
{\tilde Q}_{{\bar j}_1}\cdots {\tilde Q}_{{\bar j}_{N_c}}$. The classical moduli space in this theory receives quantum corrections which can be accounted for via the following superpotential [@Seiberg] $$\begin{aligned}
\label{non-pert}
{\cal W}_{non-pert}=A\left(\det(M)-
B{\tilde B}-\Lambda^{2N}\right)~,\end{aligned}$$ where $A$ is the Lagrange multiplier ($A\Lambda^{2N}=W_aW_a$ is the “glue-ball” field), and $\Lambda\equiv\exp(-4\pi^2 S/N)$ is the dynamically generated scale of the theory. (Here for simplicity we take the $SU(N)$ current algebra level to be 1.) The quantum constraint then follows from the $F$-flatness condition for the field $A$ and reads: $$\begin{aligned}
\label{quantum}
\det(M)-B{\tilde B}-\Lambda^{2N}=0~.\end{aligned}$$ Note that with just this constraint the dilaton is not stabilized. If, however, $\det(M)$, $B$ and ${\tilde B}$ are fixed via some other dynamics, then the quantum constraint (\[quantum\]) will fix the dilaton vev (provided that $0<\vert\det(M)-B{\tilde B}\vert<1$).
The simplest possibility here is to require the presence of tree-level contributions to the superpotential (which could be both renormalizable and non-renormalizable couplings). Note that [*a priori*]{} they need not even respect any of the global symmetries of the above quantum moduli space. Upon inclusion of such couplings into the superpotential, the dilaton may be stabilized (without breaking supersymmetry).
As a simple toy example consider the following tree-level superpotential: $$\label{tree}
{\cal W}_{tree} = YB+{\tilde Y}{\tilde B}+(\lambda-X)\det(M) + {\rho \over {n+1}}X^{n+1}~,$$ where $X,Y,{\tilde Y}$ are additional chiral superfields (which are singlets of $SU(N)$), and $\lambda,\rho$ are some couplings. The superpotential is given by ${\cal W}={\cal W}_{non-pert}+{\cal W}_{tree}$. The $F$-flatness conditions for the singlets $Y,{\tilde Y}$ imply that $B={\tilde B}=0$. The $F$-flatness condition for the singlet $X$ implies that $\det(M)=\rho X^n$. Note that if $X=0$ then the quantum constraint (\[quantum\]) cannot be satisfied for any finite values of the dilaton vev $S$. Thus, $X=0$ lies on a non-supersymmetric [*runaway*]{} branch. There is, however, a family of supersymmetric vacua in the moduli space. First note that the dilaton $F$-flatness condition implies $A=0$. Next, if $X\not=0$, then it follows that $\det(M)\not=0$. On the other hand, the $F$-flatness conditions for the mesons $M^i_{\bar j}$ imply that $(\lambda-X){\cal M}_i^{\bar j}=0$, where ${\cal M}_i^{\bar j}\equiv \partial\det(M)/\partial M^i_{\bar j}$. This implies that ${\cal M}_i^{\bar j}\equiv 0$ unless $X=\lambda$. Since $\det(M)=M^i_{\bar j} {\cal M}_i^{\bar j}/N$, it follows from the above $F$-flatness conditions that if $X\not=0$ then $X=\lambda$. Finally, the quantum constraint (\[quantum\]) along with the rest of the $F$-flatness conditions we have just discussed implies that $\Lambda^{2N}=\det(M)=
\rho X^n=\rho\lambda^n$ provided that $X=\lambda$. Thus, the above superpotential has a family of supersymmetric vacua with $$\label{vacuum}
A=B={\tilde B}=0~,~~~X=\lambda~,~~~
S={n\over 8\pi^2}\log(\tau)~,~~~\det(M)={1\over \tau^n}$$ provided that $\vert\tau\vert>1$. Here $1/\tau\equiv\rho^{1/n}\lambda$. Note that this family of supersymmetric vacua is parametrized by the meson vevs $M^i_{\bar j}$ subject to the constraint $\det(M)=1/\tau^n$. Thus, there are $N^2-1$ left-over flat directions. The other vevs, including the dilaton, however, are fixed. Note that this family of supersymmetric vacua is separated from the runaway branches by potential barriers.
We see that in this example the dilaton is stabilized at strong coupling values unless $n\sim 8\pi^2$ (assuming that $\log(\vert\tau\vert)\sim1$), which looks unnatural. One may attempt to find an “improvement” for the above toy model (at the expense of introducing additional singlet fields and tree-level couplings) which would allow weak coupling stabilization. However, all the models we have constructed so far look rather contrived. There appears to be a simple reason for this. The entire idea of dilaton stabilization described in this note is based on the quantum constraint (\[quantum\]). To stabilize the dilaton one requires that the quantity ${\cal C}\equiv\det(M)-B{\tilde B}$ is stabilized at a non-zero value via some additional dynamics. The dilaton enters Eq. (\[quantum\]) in the combination $\Lambda^{2N}=\exp(-8\pi^2 S)$, and the stabilized value of $S$ is given by $S=-{1\over 8\pi^2} \log({\cal C})$. Unless ${\cal C}$ is an exponentially small number, the stabilized value of $S$ will always be at strong coupling.
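The obstruction is easy to make quantitative. Since $S=-\frac{1}{8\pi^2}\log({\cal C})$, weak coupling $S\gtrsim 1$ requires ${\cal C}\lesssim e^{-8\pi^2}\approx 5\times 10^{-35}$; a short numerical sketch (illustrative names):

```python
import math

def stabilized_dilaton(C):
    """S fixed by Lambda^{2N} = exp(-8*pi^2*S) = C, i.e. S = -log(C)/(8*pi^2)."""
    return -math.log(C) / (8 * math.pi**2)

# C = 0.1 gives S ~ 0.03, i.e. g^2 = 1/S >> 1 (strong coupling);
# S = 1 would require the exponentially small C = exp(-8*pi^2) ~ 5e-35.
```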
The above toy model defined by Eq (\[tree\]) is not generic. However, generic models can also be constructed. For instance, consider the following tree-level superpotential: $${\cal W}_{tree}=Xf(\det(M),B{\tilde B})+Yg(\det(M),B{\tilde B})~,$$ where $X,Y$ are singlet superfields both with $R$-charge 2, and $f,g$ are arbitrary polynomials of their arguments $\det(M)$ and $B{\tilde B}$. This superpotential respects all the symmetries of Eq (\[non-pert\]). The dilaton is stabilized without breaking supersymmetry provided that the equation $f=g=0$ has isolated solutions with $0<\vert\det(M)-B{\tilde B}\vert<1$.
We would like to thank Luis [Á]{}lvarez-Gaum[é]{}, Ignatios Antoniadis, Ram Brustein, Savas Dimopoulos, Emilian Dudas, Andrei Johansen, Vadim Kaplunovsky, Alex Pomarol, Lisa Randall, Riccardo Rattazzi, Tom Taylor, and Henry Tye for discussions. The work of Z.K. was supported in part by the grant NSF PHY-96-02074, and the DOE 1994 OJI award. Z.K. would like to thank CERN Theory Division for their kind hospitality while parts of this work were completed. Z.K. would also like to thank Albert and Ribena Yu for financial support.
See, [*e.g.*]{},\
N.V. Krasnikov, Phys. Lett. [**B193**]{} (1987) 37;\
L. Dixon, V. Kaplunovsky, J. Louis and M. Peskin, SLAC-PUB-5229 (1990);\
J.A. Casas, Z. Lalak, C. Mu[ñ]{}oz and G.G. Ross, Nucl. Phys. [**B347**]{} (1990) 243;\
T.R. Taylor, Phys. Lett. [**B252**]{} (1990) 59.
C.P. Burgess, A. de la Macorra, I. Maksymyk and F. Quevedo, hep-th/9707062.
V. Kaplunovsky and J. Louis, hep-th/9708049.
See, e.g.,\
T. Banks and M. Dine, Phys. Rev. [**D50**]{} (1994) 7454;\
P. Bin[é]{}truy, M.K. Gaillard and Y.-Y. Wu, Nucl. Phys. [**B481**]{} (1996) 109;\
J.A. Casas, Phys. Lett. [**B384**]{} (1996) 103.
N. Seiberg, Phys. Rev. [**D49**]{} (1994) 6857.
[^1]: E-mail: georgi.dvali@cern.ch
[^2]: E-mail: zurab@string.harvard.edu
[^3]: In Ref [@que] exponential contributions to the superpotential were argued to also arise in [*non*]{}-asymptotically-free gauge theories.
[^4]: This mechanism requires rather large gauge groups to achieve weak coupling stabilization. Such large gauge groups can [*a priori*]{} appear in non-perturbative string vacua. This idea was recently discussed in the context of $F$-theory in Ref [@KL].
[^5]: Dilaton stabilization might be possible in a theory with a single gaugino condensate [@BD] if the K[ä]{}hler potential receives large non-perturbative corrections.
---
abstract: 'We present an algorithm to simulate the many-body depletion interaction between anisotropic colloids in an implicit way, integrating out the degrees of freedom of the depletants, which we treat as an ideal gas. Because the depletant particles are statistically independent and the depletion interaction is short-ranged, depletants are randomly inserted in parallel into the excluded volume surrounding a single translated and/or rotated colloid. A configurational bias scheme is used to enhance the acceptance rate. The method is validated and benchmarked both on multi-core CPUs and graphics processing units (GPUs) for the case of hard spheres, hemispheres and discoids. With depletants, we report novel cluster phases, in which hemispheres first assemble into spheres, which then form ordered hcp/fcc lattices. The method is significantly faster than any method that tracks depletants explicitly and does not employ cluster moves, for systems of colloid packing fraction $\phi_c<0.50$, and it additionally enables simulation of the fluid-solid transition.'
author:
- Jens Glaser
- 'Andrew S. Karas'
- 'Sharon C. Glotzer'
title: A parallel algorithm for implicit depletant simulations
---
Introduction
============
The self-assembly of anisotropic particles into complex structures has emerged as a promising strategy towards the fabrication of materials with novel properties [@Glotzer2007]. Methods for the synthesis of anisotropic nano- and colloidal particles [@Sacanna2013; @Xia2015] are becoming available, and enable experiments that study their phase behavior [@Sacanna2010; @Henzie2012; @Ye2013]. Anisotropic particles, such as proteins, are also emerging building blocks for biomaterials [@Liljestrom2014; @Park2014]. Simulations predict a wealth of different crystal structures that hard shapes form through maximization of entropy [@Damasceno2012; @Agarwal2011]. In addition to particle shape, attractive interactions between patchy particles can be important in achieving desired target structures [@Tang2006; @Ye2013; @Ye2013c]. Towards that end, the main routes that are actively being explored include surface functionalization of nanoparticles using short DNA molecules [@Jones2010; @Auyeung2014], and exploiting the depletion interaction between colloids in the presence of small polymer chains [@Sacanna2010; @Rossi2011; @Rossi2015]. Here we focus on the depletion interaction, since it is of entropic origin and arises without the need for engineering particle surface chemistry, emerging in mixtures of colloids with non-adsorbing polymer.
Depletion[@Asakura1954a] describes the emergent attraction between colloids in solution that maximize the free volume available to a small-particle cosolute via overlap of their excluded volume shells. It has been demonstrated that depletion enhances the directional entropic forces [@Damasceno2012; @Young2013a; @Anders2014; @Anders2014a] resulting from anisotropic particle shape, and that it promotes the contact between large facets. The depletion interaction can promote binding between lock and key colloids [@Sacanna2010; @Colon-Melendez2015] and lead to the formation of porous phases [@Ashton2015]. Because depletion mediates an additional attraction of entropic origin, this interaction can be thought of as competing with contact (excluded volume) interactions resulting from particle shape. Depletion thus enables novel phase behavior through the additional parameters of depletant shape and density [@Rossi2015; @Karas2015a]. Therefore, it is desirable to have a method to investigate the self-assembly of anisotropic shapes in the presence of depletants. Results for the phase behavior of binary hard sphere mixtures have been reported[@Dijkstra1998] using thermodynamic integration. In general, however, such results are challenging to obtain because of the size disparity between the colloid and the depletant. If one is interested in the phase behavior of the colloids, a customary approximation treats the depletant particles as an ideal gas[@Asakura1954a; @Widom1970]. This approximation would, in principle, allow integrating out the depletant to arrive at an effective colloid-colloid interaction; however, the resulting interaction is a many-body interaction and we are not aware of any prior implementation that treats many-body effects exactly. Here, we propose a novel, parallel Monte Carlo algorithm to simulate the depletion interaction between arbitrarily shaped colloids in an efficient manner that includes many-body effects.
![Explicit ([*left*]{}) vs. implicit ([*right*]{}) treatment of depletion interactions. Hard tetrahedra in solution with small, penetrable hard spheres aggregate face to face, to maximize the free volume available to the depletants.[]{data-label="fig:explicit_implicit"}](tetrahedra){width="\columnwidth"}
Figure \[fig:explicit\_implicit\] shows the effect of depletion interactions between two hard tetrahedra in solution with small penetrable hard spheres. The small spheres mediate an attractive interaction between the colloids that drives them to aggregate face to face. For two particles only, the depletion interaction can be easily simulated explicitly (left panel) or implicitly (right panel). However, implicit simulation of depletion interactions allows for a tremendous performance benefit, particularly for dilute systems of colloids and high densities of depletants, as we demonstrate below.
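For the special case of two hard spheres in an ideal depletant gas, the implicit interaction reduces at the pair level to the classic Asakura-Oosawa potential, $U(d)=-\rho_p k_{\rm B}T\,V_{\rm ov}(d)$, where $V_{\rm ov}$ is the lens-shaped overlap of the two excluded-volume shells of radius $R+r$. The following sketch evaluates this pairwise limit (not the many-body algorithm of this paper; names are illustrative):

```python
import math

def ao_overlap_volume(d, R, r):
    """Overlap of two spherical excluded-volume shells of radius R + r
    whose centers are separated by d (zero once the shells detach)."""
    s = R + r
    if d >= 2.0 * s:
        return 0.0
    return (4.0 * math.pi / 3.0) * s**3 * (
        1.0 - 3.0 * d / (4.0 * s) + d**3 / (16.0 * s**3))

def ao_potential(d, R, r, rho_p, kT=1.0):
    # attraction ~ (depletant pressure) x (freed overlap volume)
    return -rho_p * kT * ao_overlap_volume(d, R, r)
```

The potential vanishes continuously at $d=2(R+r)$ and is most attractive at contact, $d=2R$; for more than two colloids, overlaps of three or more shells make the full interaction a many-body one.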
This paper is organized as follows. In section \[sec:background\], we discuss previous numerical methods for the simulation of depletion interactions. We describe our algorithm in section \[sec:algorithm\], and validate it against published data for hard spheres in the following section \[sec:validation\]. Section \[sec:results\] contains new results for hemispheres and discoids[@Hsiao2015a] in the presence of depletants, obtained with the new algorithm. Finally, in Sec. \[sec:conclusion\] we summarize and give an outlook on future applications of the method.
Background {#sec:background}
==========
Previous numerical treatments of depletion interactions employ cluster moves. Biben, Bolhuis and Frenkel proposed a configurational bias approach [@Bolhuis1994; @Biben1996], where depletants overlapping with a moved colloid are reinserted to enhance the acceptance probability of colloid moves. A geometric cluster algorithm has also been proposed by Dress and Krauth [@Dress1995], which is rejection-free and can therefore greatly enhance the equilibration of dilute systems of colloids. However, when the system is dense in colloids, clusters can span the system and the algorithm ceases to be efficient [@Ashton2013c]. To explore the phase behavior of a system of hard spheres in penetrable hard-sphere depletants, Vink and Horbach proposed grand-canonical simulation of both the colloids and the depletants, and they could efficiently sample the gas-liquid coexistence curve [@Vink2004]. However, their scheme does not generalize well to the fluid-solid transition, because it is based on particle insertion.
All these methods have in common that they track the small depletant particles explicitly, storing them in memory. An interesting alternative was proposed by Dijkstra et al. [@Dijkstra2006], who performed a Monte Carlo integration of the free volume around every single moved colloid. However, their scheme does not obey detailed balance, and achieving sufficient accuracy comes at the expense of computation time, as we discuss in more detail below. Another implicit implementation of the depletion interaction between octahedra was proposed by Henzie et al. [@Henzie2012], where the generally anisotropic many-body interaction is reduced to an isotropic pair potential. We note that such a drastic simplification, while rendering the problem computationally tractable, is insufficient to allow the study of arbitrary shapes.
The scheme we describe in the following section is a completely general treatment of depletion interactions between anisotropic particles due to an ideal gas of depletants, and works well both for dilute and dense systems. In the ideal gas treatment, depletants interact with colloids but not with each other. The algorithm is rigorous, i.e. it obeys detailed balance, and it can be efficiently implemented on multi-core processors and graphics processing units (GPUs).
Description of the algorithm {#sec:algorithm}
============================
Semigrand N$\mu_p$VT ensemble
-----------------------------
We simulate a semigrand ensemble of $N$ colloids in a grand-canonical bath of penetrable depletants of chemical potential $\mu_p$. The partition sum for the depletants is $$\begin{aligned}
e^{-\beta\Xi{\{\vec r_{c,i}\}}}
&=& \sum\limits_{N_p = 0}^\infty \frac{e^{\beta \mu_p N_p}}{N_p! \lambda_p^{3 N_p}} \int d\vec r^{N_p}_{p,i}
e^{-\beta H_{cc}-\beta H_{cp}}\\
&=&\sum\limits_{N_p = 0}^\infty \frac{e^{\beta \mu_p N_p}}{N_p! \lambda_p^{3 N_p}} \int d\vec r^{N_p}_{p,i}
e^{-\beta H_{cc}} V_f^{N_p}\end{aligned}$$ where $V_f = V_f[\vec r_{c,i}]$ is the free volume available to depletants and $\lambda_p$ the thermal de Broglie wavelength associated with the depletants. We denote the colloid-colloid contribution to the Hamiltonian as $H_{cc} =
\sum_{i,j \in \mathrm{colloids}} U_{ij}$, where $U_{ij} = \infty$ for two colloids that overlap, and $U_{ij} = 0$ otherwise. The colloid-polymer contribution to the Hamiltonian $H_{cp}$ is defined analogously. Summation over the number $N_p$ of depletants in the system results in $$e^{-\beta\Xi{\{\vec r_{c,i}\}}}=e^{z_p V_f - \beta H_{cc}},
\label{eq:gc_ensemble}$$ where $z_p \equiv \frac{e^{\beta \mu_p}}{\lambda_p^3}$ is the depletant fugacity.
Basic idea {#sec:basic}
----------
Our central algorithmic result is the following Monte Carlo scheme to integrate the colloids under the action of the effective potential $H_{\mathrm{eff}}\equiv - \beta^{-1} z_p V_f
[\vec r_{c,i}]$ occurring in Eq. . The basic idea of the algorithm, which we present here, is very simple, and we describe optimized versions of it in ensuing sections.
\[sec:integration\_scheme\]
1. Propose a trial move for the colloids $M\to M'$.
2. Generate $N_p$ random depletant positions $\vec r_i^{(p)}$ uniformly in the free volume of the old configuration $M$, with $N_p$ drawn according to $P_{z_p V_f}(N_p) \sim \mbox{Poisson}(V_f z_p)$, where $\mbox{Poisson}(\lambda)$ is the Poisson distribution of mean and variance $\lambda$. One possibility is to use rejection sampling in a larger volume $V_0 \supset V_f$.
3. Reject the trial move if any depletant overlaps with the new colloid configuration $M'$, otherwise accept.
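The three steps above can be sketched for 2D hard disks as follows (a minimal illustration with helper names of our own; no periodic boundaries, and the whole box plays the role of the larger sampling volume $V_0$):

```python
import math
import random

def poisson_sample(lam, rng):
    """Draw N_p ~ Poisson(lam) via Knuth's multiplicative method."""
    L = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def overlaps(p, q, d):
    """True if two points are closer than the contact distance d."""
    return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2 < d * d

def trial_move(colloids, i, z_p, box, delta, d_cc, d_cd, rng):
    """Steps 1-3 above for colloid i: propose, insert depletants, accept/reject."""
    old = colloids[i]
    new = (old[0] + rng.uniform(-delta, delta),
           old[1] + rng.uniform(-delta, delta))
    # Hard-core colloid-colloid check (the Metropolis factor is 0 or 1).
    if any(j != i and overlaps(new, c, d_cc) for j, c in enumerate(colloids)):
        return False
    # N_p ~ Poisson(z_p * V_box) uniform points; thinning to the old free
    # volume yields exactly Poisson(z_p * V_f) depletants inside V_f.
    for _ in range(poisson_sample(z_p * box * box, rng)):
        r = (rng.uniform(0, box), rng.uniform(0, box))
        if any(overlaps(r, c, d_cd) for c in colloids):
            continue  # not in the old free volume: discard
        if overlaps(r, new, d_cd):
            return False  # overlaps only the new configuration: reject
    colloids[i] = new
    return True
```

Here `d_cc` and `d_cd` are the colloid-colloid and colloid-depletant contact distances; the thinning step realizes the rejection sampling in $V_0 \supset V_f$ mentioned in step 2.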
In other words, we have an [*a priori*]{} move generation probability $$\begin{aligned}
\label{eq:move_gen}P^{(N_p)}_{\mathrm{\tiny trial}}(M\to M') &=&P^{\mbox{\tiny coll}}_{\mbox{\tiny trial}}(M\to M') P_{z_p V_f}(N_p)\\
\nonumber &=&P^{\mbox{\tiny coll}}_{\mbox{\tiny trial}}(M\to M') \frac{(z_p V_f)^{N_p}}{N_p!} e^{-z_p V_f},\end{aligned}$$ where $P^{\mbox{\tiny coll}}_{\mbox{\tiny trial}}(M\to M')$ is symmetric in $\Delta \vec r_{c,i} \leftrightarrow -\Delta \vec r_{c,i}$. In Eq. , we have used the definition of the Poisson distribution $P_{z_p V_f}(N_p)$ with mean $z_p V_f$, the average number of depletants in the free volume. We impose the following acceptance probability $$P^{(N_p)}_{\mathrm{acc}} (M\to M') = \mathrm{min}(1,e^{-\beta\Delta H_{cc}}) e^{-\beta H_{cp}^{'(N_p)}}.
\label{eq:pacc}$$
Figure \[fig:algorithm\] contains a graphical summary of the algorithm. Here, a square colloid is moved from configuration $M$ to configuration $M'$, by some translation and/or rotation, and depletants are placed in the free volume. As we detail below in Sec. \[sec:optimized\_algorithm\], the sampling can be restricted to the circle (or sphere, in three dimensions) containing the colloid in the new colloid position. By using rejection sampling, any depletants falling into the excluded volume at the old position are ignored. Depletants that overlap [*only*]{} in the new configuration lead to a rejection of the colloid move.
![Depletant positions (disks) considered for rejection of a colloid move (shaded squares). The difference between configurations $M$ and $M'$ is the position of the dark shaded colloid. When moving the colloid to the new position, depletants are randomly inserted into the circumsphere of the excluded volume, and depletants that only overlap with the shape in the new configuration $M'$ lead to rejection. Depletants that overlap with the colloid in the old position or with surrounding colloids (light shaded square) are not considered.[]{data-label="fig:algorithm"}](algorithm){width="\columnwidth"}
Next, we show that the above scheme obeys detailed balance, which is required for correctly sampling the ensemble defined by Eq. in the statistical sense. The transition probability $\pi$ from the old configuration $M$ to the new configuration $M'$ obeys $$\begin{aligned}
\label{eq:transition_probability}
\pi_{M\to M'}&=&e^{-\beta \Xi\{\vec r_{c,i}\}} P^{(N_p)}_{\mbox{\tiny trial}}(M\to M') P_{\mathrm{acc}}^{(N_p)}(M\to M')\nonumber\\
\nonumber&=&e^{-\beta H_{cc}+z_p V_f} P^{(N_p)}_{\mbox{\tiny trial}}(M\to M') \frac{(z_p V_f)^{N_p}}{N_p!} \\
&&\times e^{-z_p V_f}\,\mathrm{min}(1,e^{-\beta \Delta H_{cc}}) e^{-\beta H_{cp}^{'(N_p)}}\end{aligned}$$
For detailed balance, we require that $\pi_{M\to M'}=\pi_{M'\to M}$, and we average over all realizations $\left(N_p, \{\vec r_{p,i}^{N_p}\}\right)$ of depletants in the free volume $V_f$:
$$\begin{aligned}
\label{eq:avg_pi}\sum\limits_{N_p=0}^\infty \int_{V_f} \frac{d\vec r^{N}_{p,i}}{V_f^{N_p}} \pi_{M\to M'}
&=&e^{-\beta H_{cc}} P^{\mbox{\tiny coll}}_{\mbox{\tiny trial}}(M\to M')\\
\nonumber&&\times \mathrm{min}(1,e^{-\beta \Delta H_{cc}})\\
&&\times \sum\limits_{N_p=0}^{\infty}\, \frac{(z_p V_f)^{N_p}}{N_p!} \int_{V_f}\frac{d\vec r^{N_p}_{p,i}}{V_f^{N_p}}
e^{-\beta H_{cp}^{'(N_p)}}.\nonumber\end{aligned}$$
Note that in order to obtain Eq. , we observe that the Poisson distribution is normalized so as to cancel the depletant contribution $e^{V_f z_p}$ to the ensemble weight. The integrand in the last line of Eq. is non-zero exactly for $\vec r_{p,i} \in V_f'$; hence, after performing the summation over $N_p$, the transition probability becomes $$\begin{aligned}
\label{eq:detailed_bal_product}
\pi_{M \to M'} &=& e^{-\beta H_{cc}} P^{\mbox{\tiny coll}}_{\mbox{\tiny trial}}(M\to M')\, \mathrm{min}(1,e^{-\beta \Delta H_{cc}})\nonumber
\\
&& e^{z_p \mu(V_f \cap V_f')},\end{aligned}$$ where $\mu(V_f \cap V_f')$ is the volume of the intersection of the free volume $V_f$ in the old configuration and the free volume $V_f'$ in the new configuration. This term arises because of the integration domain in Eq. . Because of the symmetry of the Metropolis criterion, $$e^{-\beta H_{cc}} \mathrm{min}(1,e^{-\beta \Delta H_{cc}}) = e^{-\beta H_{cc}'} \mathrm{min}(e^{-\beta\Delta H'_{cc}},1)$$ and the symmetry property of the set intersection, the product in Eq. is symmetric under the exchange $M\leftrightarrow M'$. Consequently, our integration scheme obeys detailed balance.
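The averaged acceptance can also be checked numerically in a toy setting. For a single 1D hard rod with ideal point depletants, the probability that the depletant test passes is $e^{-z_p\,\mu(V_{\mathrm{excl}}'\backslash V_{\mathrm{excl}})}$; the following sketch (parameters of our own choosing) reproduces this value:

```python
import math
import random

def poisson_sample(lam, rng):
    """Knuth's multiplicative Poisson sampler."""
    L = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def acceptance_estimate(x_old, x_new, w, z, box, trials=100_000, seed=7):
    """Fraction of moves of a 1D rod [x, x+w] surviving the depletant test:
    Poisson(z * box) uniform points, thinned to the old free volume, must
    all miss the rod at its new position."""
    rng = random.Random(seed)
    accepted = 0
    for _ in range(trials):
        ok = True
        for _ in range(poisson_sample(z * box, rng)):
            r = rng.uniform(0.0, box)
            if x_old <= r <= x_old + w:
                continue  # inside the old exclusion: not a real depletant
            if x_new <= r <= x_new + w:
                ok = False
                break
        accepted += ok
    return accepted / trials

# Predicted acceptance: exp(-z * |new exclusion \ old exclusion|) = exp(-z * 0.4)
est = acceptance_estimate(x_old=4.0, x_new=4.4, w=1.0, z=1.0, box=10.0)
assert abs(est - math.exp(-0.4)) < 0.01
```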
Improved formulation {#sec:optimized_algorithm}
--------------------
The above integration scheme conveys the general idea of the algorithm. However, it is impractical to implement as is in an actual program, because it would require computing the free volume $V_f$ in the entire simulation box for every single colloid move. Without loss of generality, we can restrict the sampling volume $V_f$ for depletants to a smaller volume $V_0 \supseteq V_{\mathrm{excl}}' \backslash V_{\mathrm{excl}}$, i.e., a volume containing the excluded volume $V_{\mathrm{excl}}'$ of the colloids in the new configuration minus the excluded volume $V_{\mathrm{excl}}$ in the old configuration. The improved scheme is the same as the basic scheme (Sec. \[sec:basic\]), including the move generation and acceptance probabilities, except that $V_f$ is replaced by $V_f \cap V_0$. The proof of detailed balance is only slightly more complicated for this algorithm.
We rewrite the ensemble weight $$\begin{aligned}
\nonumber\Pi_{\{\vec r_{c,i}\}}&=& e^{-\beta H_{cc}-\beta H_{\mathrm{eff}}}\\
\nonumber&=& e^{-\beta H_{cc}+z_p V_f}\\
&=&e^{-\beta H_{cc} + z_p \left[\mu(V_f \cap V_0) + \mu (V_f \cap \overline{V_0})\right]},
\label{eq:ensemble_weight_V0}\end{aligned}$$ where $\overline{V_0}$ denotes the complement $V\backslash V_0$ with respect to the simulation volume $V$. Using Eq. , integrating over $V_0\cap V_f$ and using transformations analogous to Eqs. -, the transition probability $M\to M'$ averaged over the number of test depletants and their positions becomes $$\begin{aligned}
\nonumber
\pi_{M\to M'}&=&e^{-\beta H_{cc}}\mathrm{min}\left(1,e^{-\beta \Delta H_{cc}}\right) P^{\mbox{\tiny coll}}_{\mbox{\tiny trial}}(M\to M')\\
&&e^{z_p \left[\mu(V_f\cap V_0 \cap V_f')+\mu(V_f\cap\overline{V_0})\right]}.
\label{eq:transition_prob_V0}\end{aligned}$$
It is straightforward to show that this transition probability is symmetric for forward and reverse moves. Since $V_0 \supseteq V_{\mathrm{excl}}'\backslash V_{\mathrm{excl}}$, it follows that $$\overline{V_0} \subseteq \overline{V_{\mathrm{excl}}'\backslash V_{\mathrm{excl}}} \subseteq \overline{V_{\mathrm{excl}}'} \cup V_{\mathrm{excl}} = V_f' \cup V_{\mathrm{excl}}$$ and therefore $\overline{V_0} = \overline{V_0} \cap (V_f' \cup V_{\mathrm{excl}})$. Hence, applying the distributive law, $$V_f \cap \overline{V_0} = V_f \cap \overline{V_0} \cap (V_f' \cup V_{\mathrm{excl}}) = V_f \cap \overline{V_0} \cap V_f',
\label{eq:VfinvV0}$$ because $V_f \cap V_{\mathrm{excl}} = \emptyset$.
Using Eq. we rewrite the transition probability Eq. as $$\begin{aligned}
\nonumber
\pi_{M\to M'} &=& e^{-\beta H_{cc}}\mathrm{min}\left(1,e^{-\beta \Delta H_{cc}}\right)
P^{\mbox{\tiny coll}}_{\mbox{\tiny trial}}(M\to M')\\
&&e^{z_p \left[\mu(V_f \cap V_f' \cap V_0) + \mu(V_f \cap V_f' \cap \overline{V_0})\right]},\end{aligned}$$ and because the measures in the exponent are taken from disjoint sets we can simplify this equation as $$\begin{aligned}
\pi_{M\to M'} &=& e^{-\beta H_{cc}}\mathrm{min}\left(1,e^{-\beta \Delta H_{cc}}\right)
P^{\mbox{\tiny coll}}_{\mbox{\tiny trial}}(M\to M')\nonumber\\
&&e^{z_p \mu(V_f \cap V_f')}
\label{eq:transition_prob_final}\end{aligned}$$ This is the same transition rate as Eq. ; consequently, our restricted sampling algorithm obeys detailed balance.
We may choose $V_0$ as the smallest region with $V_0 \supseteq
V_{\mathrm{excl}}'\backslash V_{\mathrm{excl}}$ that is convenient to sample from. E.g., we can sample in the excluded volume $V'_{\mathrm{excl},i}$ of the single moved colloid $i$ at the position of the new configuration $M'$ [*only*]{}, ignoring depletants that overlap with the colloid in the old configuration $M$. For anisotropic colloids, we will choose the circumsphere of diameter $d_{\mathrm{colloid}}+d_{\mathrm{depletant}}$ around the colloid in the new configuration, as done in Fig. \[fig:algorithm\].
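This restricted sampling can be sketched for spherical colloids as follows (helper names are our own; `d_cd` denotes the colloid-depletant contact distance, so the sphere of radius `d_cd` around the new position plays the role of $V_0$):

```python
import math
import random

def poisson_sample(lam, rng):
    """Knuth's multiplicative Poisson sampler."""
    L = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def dist2(p, q):
    return sum((p[k] - q[k]) ** 2 for k in range(3))

def point_in_sphere(center, radius, rng):
    """Uniform point in a sphere, by rejection from the bounding cube."""
    while True:
        r = tuple(center[k] + radius * rng.uniform(-1.0, 1.0) for k in range(3))
        if dist2(r, center) < radius * radius:
            return r

def depletant_test_local(old_pos, new_pos, others, z_p, d_cd, rng):
    """Depletant check restricted to V_0 = sphere of radius d_cd around the
    moved sphere's NEW position; every sampled point overlaps the new colloid,
    so only points lying in the OLD free volume cause a rejection."""
    v0 = 4.0 / 3.0 * math.pi * d_cd ** 3
    for _ in range(poisson_sample(z_p * v0, rng)):
        r = point_in_sphere(new_pos, d_cd, rng)
        if dist2(r, old_pos) < d_cd ** 2 or \
           any(dist2(r, c) < d_cd ** 2 for c in others):
            continue  # overlaps the old configuration: ignore
        return False  # overlaps only in the new configuration: reject move
    return True
```

A null move can never be rejected, since every inserted depletant also overlaps the old position; this mirrors the fact that the free volumes $V_f$ and $V_f'$ then coincide.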
We remark that a further possible optimization consists in restricting the sampling to the excluded volume [*shell*]{} of the moved colloid $V_{\mathrm{excl},i}\backslash V'_{\mathrm{core},i}$, and it can be shown, using steps analogous to above, that such a choice also fulfills detailed balance.
Configurational bias moves {#sec:configurational_bias}
--------------------------
The algorithm described above gives finite acceptance rates for translation step sizes $\delta \lesssim z_p^{-1} R^{-2}$, where $R$ is the size of the colloid, which is in general anisotropic. However, when there is more than one depletant in the excluded volume shell around the colloid particle on average, moves will be rejected most of the time. Equilibration of colloids in very dense depletant systems is therefore difficult.
![Computation of the configurational bias weight for the forward move. When a single moved colloid overlaps with a randomly inserted depletant in the new configuration $M'$, we attempt to reinsert it $n_{\mathrm{trial}}$ times such that it overlaps with the shape in the old configuration $M$. Valid insertion attempts are those where the depletant neither overlaps with a surrounding colloid nor with the colloid in the new position. The configurational bias weight is computed from the number of successful reinsertions, cf. Eq. .[]{data-label="fig:configurational_bias"}](configurational_bias){width="\columnwidth"}
To ameliorate this situation, we apply the configurational bias move of Biben, Bolhuis and Frenkel [@Biben1996; @Bolhuis1994] to implicit depletants; we briefly summarize the idea. Figure \[fig:configurational\_bias\] depicts the basic idea. For every depletant overlapping in the new configuration $M'$, we attempt to reinsert it $n_{\mathrm{trial}}$ times such that it overlaps with the shape in the old configuration $M$, but does not overlap with any other colloid. Such a cluster move obeys detailed balance because when performing the reverse move from $M'$ to $M$, the reinserted depletant will overlap in the old configuration. To correct for the configurational bias generated in this way [@Siepmann1992], we modify the acceptance probability $$P_{\mathrm{acc}} = \min\left(1,
\prod\limits^{N_{\mathrm{overlap}}}_{i=1}\frac{N'_{\mathrm{insert},i}
(N_i+1)}{(N_{\mathrm{insert},i}+1) N_i} \right),
\label{eq:pacc_cb}$$ in which $N_{\mathrm{insert},i}$ and $N'_{\mathrm{insert},i}$ are the numbers of times the overlapping depletant $i$ could be reinserted without overlap into the old and new configuration, respectively. The numbers $N_i,
N_i' \le n_{\mathrm{trial}}$ count the valid insertion attempts in which the depletant overlaps with the moved shape in the old (new) configuration, without overlapping in the other. All other insertion attempts are ignored. The increment of one ($N_{\mathrm{insert}}+1$) is necessary because the depletant the colloid originally overlapped with counts as a successful reinsertion attempt for the reverse move.
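Transcribed literally, the acceptance factor of Eq.  might be evaluated as follows (a sketch assuming all counts are positive; each tuple collects $N'_{\mathrm{insert},i}$, $N_{\mathrm{insert},i}$ and $N_i$ for one overlapping depletant):

```python
def p_acc_cb(overlap_counts):
    """Acceptance probability of Eq. (pacc_cb): min(1, product over all
    depletants overlapping the new configuration). Each tuple holds
    (n_insert_new, n_insert_old, n_i), mirroring N'_insert,i, N_insert,i, N_i."""
    w = 1.0
    for n_insert_new, n_insert_old, n_i in overlap_counts:
        w *= n_insert_new * (n_i + 1) / ((n_insert_old + 1) * n_i)
    return min(1.0, w)
```

An empty list (no overlapping depletants) gives acceptance 1, recovering the plain implicit-depletant move.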
Parallel implementation {#sec:parallel}
-----------------------
An important feature of our algorithm is that the depletant insertions are independent and can be performed in parallel. We exploit this feature to implement the algorithm on the GPU. Some details of the GPU implementation are described in App. \[app:gpu\].
In addition, depletants are inserted only in a local neighborhood of the particle, reflecting the short-ranged nature of the depletion interaction. This means the parallelization scheme for particle-based Monte Carlo that has recently been introduced within the Hard Particle Monte Carlo (HPMC) framework [@Anderson2013; @Anderson2015] in HOOMD-blue [@Anderson2008; @Glaser2015a; @hoomd-url] can be generalized to our implicit depletion algorithm. HPMC uses a checkerboard decomposition to allow parallelization of the MC simulation on the GPU. The checkerboard is colored in such a way that simultaneously active cells are separated by a layer of inactive cells of width $d_{\mathrm{colloid}}+d_{\mathrm{depletant}}$, which allows the active cells to be updated independently. Particles are not allowed to move outside their cells. The checkerboard coloring is permuted randomly. In order to maintain ergodicity, the grid lines are randomly shifted. HPMC also runs on the CPU, using an efficient tree-based particle data storage for overlap checks in combination with a sequential algorithm. Both the CPU and the GPU code path can be combined with spatial domain decomposition [@Glaser2015a], using the same concept of an inactive layer for parallel execution. A reference implementation of the algorithm described in this paper will be released open-source as part of HOOMD-blue [@hoomd-url].
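The checkerboard scheme can be sketched in two dimensions as follows (an illustrative version with four colors; names are our own, and HPMC's actual implementation differs):

```python
import random

def checkerboard_sweep(n_cells, cell_width, interaction_range, rng):
    """One sweep of a 2D checkerboard update: a random grid shift for
    ergodicity, then the four colors in random order; cells of one color are
    separated by an inactive layer and can be updated concurrently."""
    assert cell_width >= interaction_range  # layer must screen active cells
    shift = (rng.uniform(0.0, cell_width), rng.uniform(0.0, cell_width))
    colors = [(0, 0), (0, 1), (1, 0), (1, 1)]
    rng.shuffle(colors)
    sets = []
    for cx, cy in colors:
        sets.append([(i, j)
                     for i in range(n_cells) for j in range(n_cells)
                     if i % 2 == cx and j % 2 == cy])
    return shift, sets
```

For the implicit-depletant algorithm, the screening width corresponds to $d_{\mathrm{colloid}}+d_{\mathrm{depletant}}$, as described above.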
Validation {#sec:validation}
==========
Equation of state of the penetrable hard sphere model
-----------------------------------------------------
To validate our method, we compare results for hard spheres with those previously obtained by Dijkstra et al. [@Dijkstra2006]. We note that their implicit algorithm for depletion does not obey detailed balance; errors from this violation are minimized by refining the discretization of the MC integration, a trade-off between accuracy and performance. In order to obtain an accurate equation of state, Dijkstra et al. had to restrict themselves to fairly small systems of $N=128$ spheres. Fig. \[fig:validation\_small\] compares results obtained with our algorithm (filled symbols) to those from Fig. 2 of Ref.  (stars). We show the measured free volume fraction $\phi_p$ available to penetrable hard spheres of the same size, as a function of the reservoir volume fraction $\phi_p^r$ for different colloid volume fractions $\phi_c$ at constant simulation volume. For a system size of $N=128$ colloids, our results and Dijkstra's are in essentially perfect agreement, mutually validating both algorithms (top panel). However, with our new algorithm we can easily perform simulations for a larger system of $N=1000$ spheres. We do see slight deviations from the results for the $N=128$ system (lower panel), particularly at high depletant reservoir densities $\phi^r_p$, indicating the presence of finite-size effects for this system size.
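The free-volume observable compared here can be estimated with simple random test insertions of a depletant; a minimal sketch for spherical colloids (hypothetical names; no periodic boundaries):

```python
import random

def free_volume_fraction(colloids, d_cd, box, n_test=100_000, seed=11):
    """Fraction of the box available to a depletant center, estimated by
    counting random test insertions that overlap no colloid (colloid-depletant
    contact distance d_cd)."""
    rng = random.Random(seed)
    free = 0
    for _ in range(n_test):
        r = tuple(rng.uniform(0.0, box) for _ in range(3))
        if all(sum((r[k] - c[k]) ** 2 for k in range(3)) >= d_cd * d_cd
               for c in colloids):
            free += 1
    return free / n_test
```

For a single sphere well inside the box, the estimate converges to $1 - \frac{4}{3}\pi d_{cd}^3 / V$, the analytic free-volume fraction.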
![Equation of state of spheres in penetrable hard sphere depletants. Plotted is the measured free volume $\phi_p$ available to the penetrable hard spheres of size ratio $q = d_{\mathrm{dep}}/ d_{\mathrm{colloid}} = 1$ vs. the reservoir volume fraction $\phi_p^r$ of the depletants, for different hard sphere volume fractions $\phi_c = 0.01\dots 0.3$ (filled symbols). Data by Dijkstra et al. [@Dijkstra2006] for $N=128$ is shown as asterisks. [*Upper panel*]{}: equation of state for $N=128$ colloids, [*lower panel*]{}: $N=1000$. The shown data includes error bars taking into account only statistically independent samples [@Flyvbjerg1989a].[]{data-label="fig:validation_small"}](validation_small "fig:"){width="\columnwidth"} ![Equation of state of spheres in penetrable hard sphere depletants. Plotted is the measured free volume $\phi_p$ available to the penetrable hard spheres of size ratio $q = d_{\mathrm{dep}}/ d_{\mathrm{colloid}} = 1$ vs. the reservoir volume fraction $\phi_p^r$ of the depletants, for different hard sphere volume fractions $\phi_c = 0.01\dots 0.3$ (filled symbols). Data by Dijkstra et al. [@Dijkstra2006] for $N=128$ is shown as asterisks. [*Upper panel*]{}: equation of state for $N=128$ colloids, [*lower panel*]{}: $N=1000$. The shown data includes error bars taking into account only statistically independent samples [@Flyvbjerg1989a].[]{data-label="fig:validation_small"}](validation "fig:"){width="\columnwidth"}
Coexistence curve of the penetrable hard sphere model
-----------------------------------------------------
We also tested the capability of our algorithm to equilibrate hard sphere systems at gas-liquid coexistence, especially near the critical point. We carried out Gibbs ensemble simulations of hard spheres in penetrable hard sphere depletants [@Panagiotopoulos1988]. These types of simulations require insertion of colloids at random positions in the simulation box, which is nearly impossible at high depletant fugacities. To overcome this difficulty, we resort to the configurational bias scheme discussed in Sec. \[sec:configurational\_bias\] and originally introduced in the context of the Gibbs ensemble of hard spheres with depleting rods in Ref. . For every exchange of a colloid between boxes, depletants are randomly inserted at the new position, and we attempt to reinsert overlapping depletants in the old box. The move is accepted with a probability that accounts for the configurational bias weight.
In Fig. \[fig:gibbs\_ensemble\] we compare the coexistence curve thus obtained to published data by Vink and Horbach [@Vink2004]. Those authors did not use the Gibbs ensemble, but performed direct simulation in the grand-canonical ensemble of the colloids and depletants in a single box. Their method is advantageous for sampling the gas-liquid separation, which takes place at intermediate densities $\phi_c \lesssim 0.4$, because it relies exclusively on particle insertion and deletion at random positions in the simulation box. Thus, in this regime their scheme can be at least as efficient as single particle moves, if the particle deletions are combined with depletant insertions, and vice versa. However, the grand-canonical method is not easily applicable to solid phases, for which particle insertion into a crystal lattice is nearly impossible. Our method, in contrast, computes depletion interactions for single-particle translations and rotations.
As shown in Fig. \[fig:gibbs\_ensemble\], our data for the total system size $N=256$, corresponding to the larger of the two system sizes studied by Vink and Horbach, generally reproduces their data for a depletant-colloid size ratio of $q=0.8$, at which many-body effects are important. However, we see some scatter in our data, which is likely a consequence of surface effects that make it notoriously hard to study coexistence near the critical point in Gibbs ensemble simulations [@Smit1989a; @Frenkel2001]. Vink and Horbach improved their sampling using the umbrella method and thermodynamic integration. Overall, however, our data obtained without using advanced free energy techniques is in agreement with the published data, validating the method.
![Coexistence curve for phase-separating hard spheres in the presence of penetrable hard sphere depletants. Spheres ($N=512$) of initial packing fraction $\phi_c=0.12$ are simulated in the semigrand Gibbs ensemble at constant normalized depletant reservoir density $\phi_p^r \equiv (\pi/6)
d_{\mathrm{dep}}^3 z_p$ using implicit depletants ($n_{\mathrm{trial}}=100$), and the coexisting colloid volume fractions (squares) are obtained by fitting the peaks of the two-dimensional $N-V$ histogram [@Smit1989a]. Asterisks denote data from Ref.  measured in the grand-canonical ensemble.[]{data-label="fig:gibbs_ensemble"}](gibbs_ensemble){width="\columnwidth"}
Results {#sec:results}
=======
Aggregation of hemispheres into superlattices
---------------------------------------------
Equilibrium data of anisotropic particles aggregating into crystals with depletants is scarce [@Rossi2015]. Here, we present new results on the hierarchical assembly of hemispheres into FCC/HCP-cluster phases. Hard hemispheres for self-assembly have been the subject of previous investigation. Marechal and Dijkstra predicted the stability of a cluster-FCC (fcc$^2$) phase for hemispheres, but they were unable to find it in self-assembly simulations of sufficient size[@Marechal2010a]. Cinacchi presented the phase diagram of hard spherical caps, which does not include an fcc$^2$ phase[@Cinacchi2013a]. Neither study involved depletants.
We analyze the phase behavior of hemispheres in the presence of penetrable hard sphere depletants. Figure \[fig:hemispheres\_q015\] shows the kinetic phase diagram as a function of depletant reservoir density $\phi_p^r$ and colloid density $\phi_c$, for a depletant-hemisphere diameter ratio of $q=0.15$. Remarkably, we observe the formation of the fcc$^2$ and hcp$^2$ phases at finite depletant densities $\phi_p^r \ge 0.30$, and the inset shows a snapshot of such a configuration of hemispheres. However, at zero depletant fugacity, which corresponds to the case studied previously, we did not observe any ordered phase, even after $6\times10^8$ MC sweeps. Instead, we find a cluster fluid. In the phase diagram, we find close-packed crystals with both HCP and FCC stacking, and we suspect the fact that both occur indicates that the free energy difference is small [@Frenkel1984a].
![Self-assembly of hemispheres into crystalline phases. Shown is the kinetic phase diagram for $N=512$ hemispheres obtained with implicit simulation of depletants as function of the depletant reservoir density $\phi_p^r$ and the colloid density $\phi_c$, at depletant-hemisphere diameter ratio $q=0.15$. [*Inset:*]{} Snapshot of the hcp$^2$ phase found for $\phi_c=0.575$ and $\phi_p^r=0.4$. Similar phase diagrams were obtained for $q=0.175$ and $q=0.125$ (not shown).[]{data-label="fig:hemispheres_q015"}](hemispheres_q015){width="\columnwidth"}
We compare the implicit method against two other schemes: an explicit grand-canonical ensemble for the depletants [@Frenkel2001] and a canonical ensemble with a fixed concentration of depletants. Figure \[fig:aggregation\_hemispheres\] shows the number of hemisphere pairs that have formed after time $t$. Because Monte Carlo simulations do not have an intrinsic time scale, we choose the wall-clock time of the simulation as an ad-hoc measure of time. By analyzing bond order, we found that the time scale of crystallization corresponds to the time when all 512 hemispheres in the simulation box have paired up. This event occurs earliest for the implicit depletion algorithm. The simulation with explicit grand-canonical depletants also orders, but at a later time. However, the simulation with a fixed number of depletants does not equilibrate into an ordered phase within the wall-clock time limit of 48h or $7.8\times10^7$ sweeps. Our findings show that the implicit algorithm leads to the fastest assembly of hemispheres into cluster crystal phases.
![Aggregation kinetics of hemispheres. Shown is the number of spheres formed after simulation time $t$ (in hours), for a simulation with implicit depletants (diamonds), explicit grand-canonical depletants (squares) and canonical depletants (circles). Simulations were performed at colloid volume fraction $\phi_c=0.575$ and depletant reservoir density $\phi_p^r=0.40$ for a depletant-colloid diameter ratio of $q=0.175$, on eight cores of an Intel Xeon E5-2680 processor with spatial domain decomposition via MPI (single precision). A sphere is defined as two hemispheres with their face centers being closer than $0.2 d$ apart, where $d$ is the diameter of the (hemi-)sphere. In the canonical case, the constant number $N_p=4884$ of explicit depletant particles has been chosen to be the average number of depletants in the free volume of the grand-canonical simulations, after phase transformation.[]{data-label="fig:aggregation_hemispheres"}](hemispheres_kinetics){width="\columnwidth"}
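The pair criterion in this caption can be implemented directly; a sketch (greedy one-to-one matching of face centers is our assumption):

```python
def count_spheres(face_centers, d=1.0, cutoff=0.2):
    """Count hemisphere pairs ('spheres'): two face centers closer than
    cutoff*d, with each hemisphere joining at most one pair."""
    used = [False] * len(face_centers)
    pairs = 0
    for i in range(len(face_centers)):
        if used[i]:
            continue
        for j in range(i + 1, len(face_centers)):
            if used[j]:
                continue
            if sum((face_centers[i][k] - face_centers[j][k]) ** 2
                   for k in range(3)) < (cutoff * d) ** 2:
                used[i] = used[j] = True
                pairs += 1
                break
    return pairs
```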
Diffusivity of discoids with depletants
---------------------------------------
Ellipsoids are simple examples of anisotropic particles. Recently, discoids have been demonstrated to arrange into metastable strand structures at sufficiently high density of polymeric depletants [@Hsiao2015a]. Here, we investigate the diffusivity of discoids at depletant densities that do not lead to ordering. For Monte Carlo simulations with single particle moves, the diffusivity of the colloids in terms of mean square displacement per wall clock time is an effective measure of the speed of equilibration of the simulation. In our simulations, we tune the single particle step size for translation and rotation so as to yield an average acceptance rate of $20\%$.
The upper panel of Figure \[fig:bench\_ellipsoids\] shows the effect of $n_{\mathrm{trial}}$ on the diffusivity $D$ of discoids. The colloid particles are uniaxial ellipsoids with semi-axes $a=b=0.5$ and $c=0.25$, the depletants are of radius $r=0.25$, and the simulations are performed in a dilute system at colloid density $\phi_c=0.01$ and depletant reservoir density $\phi_p^r=0.40$, below the coexistence density for metastable clusters [@Hsiao2015a]. The graph clearly shows that configurational bias moves with a modest value of $n_{\mathrm{trial}} \gtrsim 10$ speed up the equilibration by almost three orders of magnitude compared to not using them. The effect is dramatic and similar in magnitude whether the simulation runs on the CPU or the GPU. At peak diffusivity, there is a slight advantage to using the GPU, compared to CPU socket performance. For higher values of $n_{\mathrm{trial}}$, the performance drops off slowly, as a result of the increased computational effort to carry out the depletant reinsertions, while the benefit of the larger step size permitted by a higher acceptance ratio weakens. We note that we also carried out simulations with finite values of $n_{\mathrm{trial}}$ at higher colloid densities (data not shown) and found the effect to be less pronounced there.
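The diffusivity used in this benchmark follows from a linear fit of the mean square displacement against wall-clock time; a minimal sketch (forcing the fit through the origin is our simplification):

```python
def diffusivity_from_msd(times, msd, dim=3):
    """Least-squares slope of MSD(t) through the origin, with D = slope/(2*dim).
    Here times are wall-clock seconds, as in the benchmark described above."""
    num = sum(t * m for t, m in zip(times, msd))
    den = sum(t * t for t in times)
    return num / den / (2 * dim)
```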
We further measure the performance at different colloid densities $\phi_c$ between the dilute regime and the regime of a dense liquid, for the same parameters as above, with $n_{\mathrm{trial}}=0$ (Fig. \[fig:bench\_ellipsoids\], lower panel). For simulations with implicit depletants, either using the CPU or the GPU, the performance depends only slightly on the colloid volume fraction, directly confirming the beneficial effect of implicit calculation of the interaction in the dilute system, where the number of depletants would be very high with an explicit treatment. Indeed, the performance of the explicit depletant simulations in the grand-canonical ensemble drops noticeably when going from $\phi_c=0.50$ towards lower densities, and the system becomes practically impossible to equilibrate when $\phi_c < 0.30$. Looking at GPU vs. CPU performance, we note that GPUs are advantageous for very dilute systems, but do not provide better performance when the system is dense in colloids. This is because the checkerboard parallelization scheme implemented for performing the colloid moves on the GPU (Sec. \[sec:parallel\] and Ref. ) requires a large simulation box to operate efficiently.
![Diffusivity of discoids in penetrable hard sphere depletants. [*Upper panel:*]{} Diffusion coefficient vs. the number $n_{\mathrm{trial}}$ of configurational bias swaps, for a simulation of $N=500$ discoids on 12 CPU cores (Intel Xeon E5-2680v3) using MPI (squares) and a single NVIDIA K20X GPU (circles). For simulation parameters, see main text. The diffusivity is obtained from fitting the linear mean square displacement MSD as function of the wall-clock time $t$ (in seconds). [*Lower panel:*]{} Diffusion coefficient vs. colloid density $\phi_c$ ($n_{\mathrm{trial}} = 0$), for a simulation on a NVIDIA K80 GPU (squares), on 8 cores of an Intel Xeon E5-2680v3 (circles), and for a simulation of explicit depletants in the grand-canonical ensemble (diamonds), on the same hardware.[]{data-label="fig:bench_ellipsoids"}](D_ntrial "fig:"){width="\columnwidth"} ![Diffusivity of discoids in penetrable hard sphere depletants. [*Upper panel:*]{} Diffusion coefficient vs. the number $n_{\mathrm{trial}}$ of configurational bias swaps, for a simulation of $N=500$ discoids on 12 CPU cores (Intel Xeon E5-2680v3) using MPI (squares) and a single NVIDIA K20X GPU (circles). For simulation parameters, see main text. The diffusivity is obtained from fitting the linear mean square displacement MSD as function of the wall-clock time $t$ (in seconds). [*Lower panel:*]{} Diffusion coefficient vs. colloid density $\phi_c$ ($n_{\mathrm{trial}} = 0$), for a simulation on a NVIDIA K80 GPU (squares), on 8 cores of an Intel Xeon E5-2680v3 (circles), and for a simulation of explicit depletants in the grand-canonical ensemble (diamonds), on the same hardware.[]{data-label="fig:bench_ellipsoids"}](bench_phic_expl "fig:"){width="\columnwidth"}
Conclusion {#sec:conclusion}
==========
We have presented an efficient algorithm to implicitly simulate depletion interactions between anisotropic colloids. The algorithm is implemented on parallel multi-core processors and graphics processing units. Combined with a parallel Monte Carlo scheme [@Anderson2013; @Anderson2015], the algorithm offers a way to tackle large-scale simulations of hard shapes with depletants. The scheme may be readily generalized to soft interactions between the colloid and the depletant, such as the Hertz potential [@Rovigatti2015]. We stress that even though the algorithm is parallel, its serial implementation already offers significant speed-ups over algorithms that do not use cluster moves for dilute systems of colloids, because only depletants in the neighborhood of each particle are considered. Nevertheless, the method also performs well at higher densities, up to the fluid-solid transition.
We see applications for our method in the simulation of anisotropic colloid phase behavior. Even without depletants, polyhedra have been shown to order into a multitude of different structures [@Damasceno2012]. With depletion interactions, additional phases can be stabilized [@Henzie2012; @Young2013a; @Rossi2015; @Karas2015a]. The algorithm can also be used to study the aggregation of entropically patchy colloids into colloidal polymer chains, held together by strong depletion bonds [@Ashton2015]. In this context, it would be interesting to study solutions as well as melts of such colloidal polymers. An interesting open question concerns whether depletant entropy can stabilize not only close-packed but also open ordered structures [@Mao2013]. In protein crystallization, depletant polymers are commonly used as precipitants. An important limitation of our algorithm is that it treats only non-interacting depletants, and the validity of that approximation remains to be investigated for specific systems. In contrast to enthalpically patchy models, our algorithm does not require implementation of shape-specific attractive patches to study aggregation of colloids, and the algorithm is therefore highly robust and generic.
We are thankful to Werner Krauth for a discussion that led to the development of this algorithm. We also thank Michael Engel for fruitful discussions and careful reading of the manuscript.
This material is based upon work supported in part by the U.S. Army Research Office under Grant Award No. W911NF-10-1-0518 and by a Simons Investigator award from the Simons Foundation to Sharon Glotzer. This research used the Extreme Science and Engineering Discovery Environment [@Towns2014] (XSEDE), which is supported by National Science Foundation grant number ACI-1053575; XSEDE award DMR 140129. The Glotzer Group at the University of Michigan is an NVIDIA GPU Research Center. Hardware support by NVIDIA Corp. is gratefully acknowledged.
GPU implementation {#app:gpu}
==================
In the GPU implementation, we perform the colloid trial moves in the active cells [@Anderson2015] and the depletant insertions in different kernels. To insert depletants, we draw a random number of depletants for every moved colloid, as described in Sec. \[sec:basic\]. We use a one-to-one mapping between depletants and thread groups of size $n\le n_{\mathrm{max}}$. Here, $n_{\mathrm{max}}=32$ is the maximum number of threads that can perform overlap checks synchronously, and we tune $n$ at run-time. When any thread detects an overlap between the depletant and any particle in the old configuration, the depletant is ignored. Otherwise, if the depletant overlaps with the moved colloid, that colloid move is flagged for rejection.
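For hard spheres, the per-depletant test described above reduces to a few distance checks. The following scalar sketch is our illustration only — the actual kernels operate on anisotropic shapes with one thread group per depletant — but it captures the accept/reject logic:

```python
import numpy as np

def any_overlap(point, centers, dist):
    """True if `point` lies within `dist` of any of the given centers."""
    return bool((np.linalg.norm(centers - point, axis=1) < dist).any())

def move_rejected(old_pos, new_pos, others, depletants, R_colloid, R_depletant):
    """Per-depletant test for one colloid trial move (hard-sphere sketch).

    A trial depletant overlapping the OLD configuration is ignored (it could
    not have been present there); otherwise, if it overlaps the moved colloid
    at its NEW position, the colloid move is flagged for rejection.
    """
    sigma = R_colloid + R_depletant               # colloid-depletant contact distance
    old_centers = np.vstack([old_pos[None, :], others])
    for d in depletants:
        if any_overlap(d, old_centers, sigma):
            continue                              # ignored: overlaps old configuration
        if np.linalg.norm(d - new_pos) < sigma:
            return True                           # depletant sits in newly freed volume
    return False

# A depletant next to the colloid's new position rejects the move...
old, new = np.zeros(3), np.array([1.0, 0.0, 0.0])
others = np.empty((0, 3))
assert move_rejected(old, new, others, np.array([[1.5, 0.0, 0.0]]), 1.0, 0.1)
# ...while one overlapping the old configuration is simply ignored.
assert not move_rejected(old, new, others, np.array([[0.5, 0.0, 0.0]]), 1.0, 0.1)
```

On the GPU, the loop over depletants is replaced by the one-to-one depletant/thread-group mapping described above.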
When the configurational bias scheme is used ($n_{\mathrm{trial}} > 0$), a second kernel with a similar thread mapping is launched; here, however, depletants are assigned to whole thread blocks of size $s \le 1024$, which is an auto-tuned parameter, so that the bias weights of different reinsertions belonging to the same depletant can be summed in shared memory.
<https://doi.org/10.1038/nmat1949>
<https://doi.org/10.1038/ncomms2694>
<https://doi.org/10.1021/jacs.5b04641>
<https://doi.org/10.1038/nature08906>
<https://doi.org/10.1038/nmat3178>
<https://doi.org/10.1038/nchem.1651>
<https://doi.org/10.1038/ncomms5445>
<https://doi.org/10.1038/ncomms4593>
<https://doi.org/10.1126/science.1220869>
<https://doi.org/10.1038/nmat2959>
<https://doi.org/10.1126/science.1128045>
<https://doi.org/10.1021/nl403149u>
<https://doi.org/10.1038/nmat2870>
<https://doi.org/10.1038/nature12739>
<https://doi.org/10.1039/c0sm01246g>
<https://doi.org/10.1073/pnas.1415467112>
<https://doi.org/10.1063/1.1740347>
<https://doi.org/10.1002/anie.201306009>
<https://doi.org/10.1021/nn4057353>
<https://doi.org/10.1073/pnas.1418159111>
<https://doi.org/10.1063/1.4919299>
<https://doi.org/10.1039/C5SM01043H>
<https://doi.org/10.1103/PhysRevLett.81.2268>
<https://doi.org/10.1063/1.1673203>
<https://doi.org/10.1063/1.467953>
<https://doi.org/10.1088/0953-8984/8/50/008>
<https://doi.org/10.1088/0305-4470/28/23/001>
<https://doi.org/10.1063/1.4824137>
<https://doi.org/10.1063/1.1773771>
<https://doi.org/10.1103/PhysRevE.73.041404>
<https://doi.org/10.1016/j.jcp.2013.07.023>
<https://doi.org/10.1016/j.jcp.2008.01.047>
<https://doi.org/10.1016/j.cpc.2015.02.028>
<https://doi.org/10.1063/1.457480>
<https://doi.org/10.1080/00268978800100361>
<https://doi.org/10.1080/00268978900102641>
<https://doi.org/10.1103/PhysRevE.82.031405>
<https://doi.org/10.1063/1.4822038>
<https://doi.org/10.1063/1.448024>
<https://doi.org/10.1039/c4sm02218a>
<https://doi.org/10.1038/nmat3496>
<https://doi.org/10.1109/MCSE.2014.80>
---
abstract: 'We compute the quark–gluon vertex in quenched lattice QCD, in the Landau gauge using an off-shell mean-field ${\mathcal{O}}(a)$-improved fermion action. The complete vertex is computed in two specific kinematical limits, while the Dirac-vector part is computed for arbitrary kinematics. We find a nontrivial and rich tensor structure, including a substantial infrared enhancement of the interaction strength regardless of kinematics.'
address:
- 'School of Mathematics, Trinity College, Dublin 2, Ireland'
- 'Nuclear Theory Center, Indiana University, Bloomington IN 47408, USA'
- 'Centre for the Subatomic Structure of Matter, Adelaide University, Adelaide, SA 5005, Australia'
author:
- 'Jon-Ivar Skullerud, Patrick O. Bowman, Ay[ş]{}e K[i]{}z[i]{}lersü, Derek B. Leinweber, Anthony G. Williams'
title: 'Quark–gluon vertex in arbitrary kinematics[^1]'
---
INTRODUCTION
============
Over the past few years, substantial progress has been made in our understanding of the nonperturbative correlation functions (propagators and vertices) of the fundamental fields of QCD and their relation to the phenomena of colour confinement and dynamical chiral symmetry breaking. It has recently become evident that at least in Landau gauge, a detailed knowledge of the structure of the quark–gluon vertex is essential for an understanding of the dynamics of quark confinement and chiral symmetry breaking, which is encoded in the quark Dyson–Schwinger equation (DSE), relating the quark propagator $S(p)$ to the gluon propagator and the quark–gluon vertex $\Gamma_\mu(p,q)$, where $p$ and $q$ are quark and gluon momenta respectively. The overall shape of the gluon propagator is now quite well known, both from lattice QCD [[@Bonnet:2001uh; @Bowman:2004jm]]{} and from studies of the coupled ghost–gluon Dyson–Schwinger equations. It is now clear that if this is fed into the quark DSE together with a bare or QED-like vertex, the resulting quark propagator will not exhibit a sufficient degree of chiral symmetry breaking. Several of the contributions at this conference have addressed this issue, in various ways [@Holl:2004qn; @Iida:2003xe; @Fischer:2004ym; @Tandy:2004rk].
The quark–gluon vertex is related to the ghost sector through the Slavnov–Taylor identity (STI), $$\begin{split}
q^\mu\Gamma_\mu(p,q) &= G(q^2)
\Bigl[(1-B(q,p+q))S^{-1}(p)\\
& - S^{-1}(p+q)(1-B(q,p+q))\Bigr] \, ,
\end{split}
\label{eq:sti}$$ where $G(q^2)$ is the ghost renormalisation function and $B^a(q,k)$ is the ghost–quark scattering kernel. In particular, if the ghost propagator is infrared enhanced, as both lattice [@Bloch:2003sk] and DSE studies [@Fischer:2002hn] indicate, the vertex will also be so. This provides for a consistent picture of confinement and chiral symmetry breaking at the level of the Green’s functions of Landau-gauge QCD, where the same infrared enhancement that is responsible for confinement of gluons, provides the necessary interaction strength to give rise to dynamical chiral symmetry breaking in the quark sector.
Confinement of quarks is still not understood in this picture, however. If the effective interaction between a quark and an antiquark by way of exchange of a nonperturbative gluon is to give rise to a linearly confining potential, the quark–gluon vertex must contain an infrared enhancement over and above that contained in the ghost self-energy. In the STI, this would be encoded in a non-trivial ghost–quark scattering kernel. Confirming or refuting this picture is a major challenge for current lattice and DSE studies of Landau-gauge QCD.
Another area where the quark–gluon vertex may be of interest is that of effective charges. Although ‘the running coupling’ is not a meaningful concept beyond perturbation theory, since there is no known way of nonperturbatively connecting two different ‘schemes’, process-dependent effective charges may be defined non-perturbatively and be phenomenologically useful. The interaction between quarks and gluons may be a natural starting point for many of the physically interesting processes.
Here we will present results of a lattice investigation into the quark–gluon vertex [[@Skullerud:2002ge]]{}. This consists of a determination of the full structure of the vertex at two particular kinematical points (the [*soft gluon point*]{} where the gluon has zero momentum, and the [*quark reflection point*]{} where the incoming and outgoing quark momenta are equal and opposite), as well as a determination of the dominant, vector part of the vertex in arbitrary kinematics.
FORMALISM
=========
We denote the outgoing quark momentum $p$ and the outgoing gluon momentum $q$. The incoming quark momentum is $k=p+q$. In the continuum, the quark–gluon vertex can be decomposed into four components $L_i$ contributing to the Slavnov–Taylor identity and eight purely transverse components $T_i$: $$\begin{split}
\Gamma_\mu(p,q) & =
\sum_{i=1}^{4}\lambda_i(p^2,q^2,k^2)L_{i,\mu}(p,q) \\
&\phantom{=} + \sum_{i=1}^{8}\tau_i(p^2,q^2,k^2)T_{i,\mu}(p,q) \, .
\end{split}
\label{eq:decomp}$$ In euclidean space, the components $L_i$ and $T_i$ are given by [[@Skullerud:2002ge]]{}$$\begin{aligned}
{2}
L_{1,\mu} =& \gamma_\mu &\qquad
L_{2,\mu} =& -{\not\!P}P_\mu \\
L_{3,\mu} =& -iP_\mu &\qquad
L_{4,\mu} =& -i\sigma_{\mu\nu}P_\nu \notag \\
T_{1,\mu} =& -i\ell_\mu &\qquad
T_{2,\mu} =& -{\not\!P}\ell_\mu \notag\\
T_{3,\mu} =& {\not\!q}q_\mu - q^2\gamma_\mu \notag \\
T_{4,\mu} =& -i\bigl[q^2{\sigma_{\mu\nu}}P_\nu& + 2q_\mu{\sigma_{\nu\lambda}}p_\nu &k_\lambda\bigr]
\notag\\
T_{5,\mu} =& -i\sigma_{\mu\nu}q_\nu &\qquad
T_{6,\mu} =& (qP)\gamma_\mu - {\not\!q}P_\mu\!\! \\
T_{7,\mu} =& -\frac{i}{2} (qP){\sigma_{\mu\nu}}& P_\nu
- iP_\mu{\sigma_{\nu\lambda}}& p_\nu k_\lambda \notag\\
T_{8,\mu} =& -\gamma_\mu\sigma_{\nu\lambda}p_\nu k_\lambda&
- {\not\!p}k_\mu + {\not\!k}p_\mu& \notag\end{aligned}$$ where $P_\mu\equiv p_\mu+k_\mu$, $\ell_\mu\equiv
(pq)k_\mu-(kq) p_\mu$. In Landau gauge, for $q\neq0$, we are only able to compute the transverse projection of the vertex, $\Gamma^P_\mu(p,q) \equiv P_{\mu\nu}(q)\Gamma_\nu(p,q)$, where $P_{\mu\nu}(q) \equiv \delta_{\mu\nu}-q_{\mu}q_{\nu}/q^2$ is the transverse projector. Since the vertex will always be coupled to a gluon propagator which contains the same projector, this is also the only combination that appears in any application. The four functions $L_{i,\mu}$ are projected onto the transverse $T_{i,\mu}$, giving rise to modified form factors $$\begin{aligned}
{2}
\lambda'_1 &= \lambda_1 - q^2\tau_3 &\, ; \qquad
\lambda'_2 &= \lambda_2 - \frac{q^2}{2}\tau_2 \, ;\\
\lambda'_3 &= \lambda_3 - \frac{q^2}{2}\tau_1 & \, ;\qquad
\lambda'_4 &= \lambda_4 + q^2\tau_4 \, .\notag\end{aligned}$$ The lattice tensor structure is more complex, and (\[eq:decomp\]) is only recovered in the continuum. The form factors also receive large contributions from lattice artefacts at tree level; the procedure we apply in correcting for these is described in [[@Skullerud:2003qu]]{}.
In QED, the four form factors $\lambda_i$ are completely determined by the fermion propagator $S^{-1}(p) = i{\not\!p}A(p^2) + B(p^2)$: $$\begin{aligned}
\lambda_1(p^2,q^2,k^2) &= {\frac{1}{2}}\left(A(p^2)+A(k^2)\right)\, ;
\label{eq:bc1} \\
\lambda_2(p^2,q^2,k^2) &= -{\frac{1}{2}}\frac{A(p^2)-A(k^2)}{p^2-k^2} \, ;
\label{eq:bc2} \\
\lambda_3(p^2,q^2,k^2) &= \frac{B(p^2)-B(k^2)}{p^2-k^2} \, ;
\label{eq:bc3} \\
\lambda_4(p^2,q^2,k^2) &= 0 \, .
\label{eq:bc4}\end{aligned}$$ By comparing our results with these forms we can get an idea of the importance of the nonabelian contributions to the STI, and in particular the quark–ghost scattering kernel.
RESULTS
=======
We have analysed 495 configurations on a $16^3\times48$ lattice at $\beta=6.0$, using a mean-field improved SW action with a quark mass $m\approx115$ MeV. This is part of the UKQCD data set described in [@Bowler:1999ae]; further details can also be found in [@Skullerud:2002ge]. A second quark mass $m\approx60$ MeV has also been studied, but as the mass dependence was found to be almost negligible [[@Skullerud:2003qu]]{}, we do not show those results here.
Soft gluon point $q=0$
----------------------
At $q=0$ the vertex reduces to $$\begin{split}
\Gamma_\mu(p,0) =& \lambda_1(p^2){\gamma_\mu}\\
& - 4\lambda_2(p^2){\not\!p}p_\mu
- 2i\lambda_3(p^2)p_\mu \, ,
\end{split}
\label{eq:decomp-asym}$$ where for brevity we write $\lambda_i(p^2,0,p^2)=\lambda_i(p^2)$. In this specific kinematics, the QED expressions (\[eq:bc1\])–(\[eq:bc3\]) become[^2] $$\begin{gathered}
\lambda_1^{\text{QED}}(p^2) = A(p^2) \, ; \label{eq:bc01} \\
\lambda_2^{\text{QED}}(p^2) =
-{\frac{1}{2}}\frac{\mathrm{d}}{\mathrm{d}p^2}A(p^2) \, ; \label{eq:bc02} \\
\lambda_3^{\text{QED}}(p^2) =
\frac{\mathrm{d}}{\mathrm{d}p^2}B(p^2) \, .\label{eq:bc03}\end{gathered}$$
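With toy choices for $A$ and $B$ (illustrative smooth functions only — not the lattice fits used below), one can check numerically that the general abelian forms (\[eq:bc1\])–(\[eq:bc3\]) reduce to these derivative forms as $k^2 \to p^2$:

```python
import numpy as np

# Toy propagator functions (arbitrary smooth forms, for illustration only)
A = lambda p2: 1.0 + 0.5 / (p2 + 0.2)
B = lambda p2: 0.4 / (p2 + 0.3)

lam1 = lambda p2, k2: 0.5 * (A(p2) + A(k2))
lam2 = lambda p2, k2: -0.5 * (A(p2) - A(k2)) / (p2 - k2)
lam3 = lambda p2, k2: (B(p2) - B(k2)) / (p2 - k2)

p2, eps = 1.0, 1e-6
dA = (A(p2 + eps) - A(p2 - eps)) / (2 * eps)   # dA/dp^2 by central difference
dB = (B(p2 + eps) - B(p2 - eps)) / (2 * eps)   # dB/dp^2

assert np.isclose(lam1(p2, p2), A(p2))                       # matches (bc01)
assert np.isclose(lam2(p2, p2 + eps), -0.5 * dA, rtol=1e-4)  # matches (bc02)
assert np.isclose(lam3(p2, p2 + eps), dB, rtol=1e-4)         # matches (bc03)
```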
In Fig. \[fig:q0\] we show the dimensionless quantities $\lambda_1,
4p^2\lambda_2$ and $2p\lambda_3$ as a function of momentum $p$. We also show the abelian forms (\[eq:bc01\])–(\[eq:bc03\]) which have been obtained from fitting lattice data for the quark propagator [@Skullerud:2003qu; @Bowman:2002kn]. All these form factors have been renormalised at 2 GeV, requiring $\lambda_1(4\ \mathrm{GeV}^2,0,4\ \mathrm{GeV}^2)=1$.
![The renormalised, dimensionless form factors $\lambda_1,
4p^2\lambda_2$ and $2p\lambda_3$ at the soft gluon point, as a function of $p$, for $m=115$ MeV. Also shown are the corresponding abelian (Ball–Chiu) forms (\[eq:bc01\])–(\[eq:bc03\]), derived from the quark propagator.[]{data-label="fig:q0"}](q0_all.eps){width="\colw"}
We find that while $\lambda_3$ is quite close to its abelian form, both $\lambda_1$ and $\lambda_2$ are significantly enhanced. Since the ghost self-energy would contribute the same prefactor (in this kinematics, a constant) to all three form factors compared to the abelian form, this points to a nontrivial structure in the quark–ghost scattering kernel. However, the singular nature of the soft gluon point along with our small lattice volume make it difficult to draw firmer conclusions.
Quark reflection point $p=-k$
-----------------------------
At the quark reflection point $p=-k, q=-2p$ only $\lambda'_1$ and $\tau_5$ survive, and the projected vertex is $$\begin{split}
\Gamma^P_\mu(-q/2,q) =&
\lambda'_1(q^2)\bigl({\gamma_\mu}-{\not\!q}q_\mu/q^2\bigr) \\
& - i\tau_5(q^2){\sigma_{\mu\nu}}q_\nu \, ,
\end{split}$$ where in this section we write $\{\lambda'_1,\tau_5\}(q^2/4,q^2,q^2/4)
= \{\lambda'_1,\tau_5\}(q^2)$. The dimensionless quantities $\lambda'_1(q^2), q\tau_5(q^2)$ are shown in Fig. \[fig:refl\]. These form factors have been renormalised requiring $\lambda'_1(1\mathrm{GeV}^2,4\mathrm{GeV}^2,1\mathrm{GeV}^2)=1$.
![The renormalised, dimensionless form factors $\lambda_1'$ and $q\tau_5$ at the quark reflection point, as a function of the gluon momentum $q$, for $m=115$ MeV.[]{data-label="fig:refl"}](refl_all.eps){width="\colw"}
$\lambda'_1$ shows the same qualitative behaviour as $\lambda_1$ at the soft gluon point, with a quite strong infrared enhancement. We see that $\tau_5$, which has rarely, if ever, been included in vertex models used in DSE studies, is quite sizeable, indeed comparable in magnitude to the dominant component $\lambda'_1$. It will be interesting to study the effect of including this part of the vertex in future DSE studies.
General kinematics
------------------
The general lattice tensor structure, even for the Dirac-vector part of the vertex alone, is very complicated and makes a full determination of the vertex very difficult with this lattice action. However, in the special case where we choose both the quark and gluon momentum vectors to be ‘perpendicular’ to the vertex component, i.e. if we compute $\Gamma_\mu(p,q)$ with $p_\mu=q_\mu=0$, this structure simplifies considerably. There is no loss of generality provided rotational symmetry is restored in the continuum. Here, we will only study the leading, vector part of the vertex, but the other components may also be determined in principle. In continuum notation, we compute $$\begin{aligned}
\begin{split}
\frac{1}{4}{\operatorname{tr}}{\gamma_\mu}&\Gamma^P_\mu(p,q) =
\Bigl(1-\frac{q_\mu^2}{q^2}\Bigr)\lambda'_1 \\
& + \frac{2}{q^2}\Bigl[(pq)k_\mu-(kq)p_\mu\Bigr](p_\mu+k_\mu)\lambda'_2 \\
& - [k^2-p^2-(k_\mu^2-p_\mu^2)]\tau_6
\end{split} \\
=\,& \lambda'_1 - (k^2-p^2)\tau_6 \equiv \lambda''_1 \, .\end{aligned}$$
![The unrenormalised form factor $\lambda'_1$ in the quark-symmetric kinematics $p^2=k^2$ (upper surface), as a function of quark momentum $p$ (long axis) and gluon momentum $q$ (short axis) in units of GeV. Statistical uncertainties are illustrated by the lower surface.[]{data-label="fig:qsym"}](Lambda1_symm.ps){width="\colw"}
Of particular interest is the quark-symmetric limit, where the two quark momenta are equal in magnitude, $p^2=k^2$. In this case, $\tau_6$ is also eliminated, i.e. $\lambda''_1(p^2,q^2,p^2) =
\lambda'_1(p^2,q^2,p^2)$. Note that both the soft gluon and the quark reflection kinematics discussed previously are specific instances of this more general case. The details of the lattice implementation of this, including the tree-level correction, will be described elsewhere [@Skullerud:2004xx]. In Fig. \[fig:qsym\] we show $\lambda'_1$ as a function of the two remaining independent momentum invariants. The data become quite noisy as $q$ is increased, and also exhibit some ‘spikes’ and ‘troughs’ which at present we assume to be numerical noise and lattice artefacts.
By interpolating the points in Fig. \[fig:qsym\], we may reach the totally symmetric point where $p^2=k^2=q^2$. This kinematics has a history of being used to define a momentum subtraction (MOM) scheme [@Celmaster:1979km]. We show our results in Fig. \[fig:symmetric\]. Again we find a strong infrared enhancement.
![The unrenormalised form factor $\lambda'_1$ at the totally symmetric kinematics $p^2=k^2=q^2$, as a function of the momentum $p$.[]{data-label="fig:symmetric"}](lambda1_sym.eps){width="\colw"}
![The unrenormalised form factor $\lambda''_1$ for gluon momentum $q=0.555$ GeV (top) and $q=0.873$ GeV (bottom), as a function of quark momenta $p$ and $k$. The lower surfaces denote the statistical uncertainties.[]{data-label="fig:asym1"}](Lambda1_asym_qave3_02.ps "fig:"){width="\colw"} ![](Lambda1_asym_qave3_04.ps "fig:"){width="\colw"}
![As Fig. \[fig:asym1\], but for gluon momentum $q=1.193$ GeV and 2.321 GeV.[]{data-label="fig:asym3"}](Lambda1_asym_qave3_07.ps "fig:"){width="\colw"} ![](Lambda1_asym_qave3_16.ps "fig:"){width="\colw"}
Finally, figs. \[fig:asym1\] and \[fig:asym3\] show $\lambda_1''$ in general kinematics, for four different fixed values of $q$, as a function of the two quark momenta $p$ and $k$. We expect all form factors to be symmetric in $p^2$ and $k^2$ ($\tau_6$ on its own is antisymmetric, but is multiplied by $p^2-k^2$), and this is also what the figures show, within errors. The broadening of the data surface as $q$ grows is simply a reflection of the increase in available phase space.
The same qualitative features found in the more specific kinematics are reproduced here. At low $q$, we see a clear infrared enhancement, which disappears as $q$ grows, reflecting the fact that at high momentum scales, only the logarithmic behaviour (which is too weak to be seen in these data) remains. At the same time, the level of the surface sinks, which reflects the infrared enhancement of $\lambda_1''$ also as a function of gluon momentum.
DISCUSSION AND OUTLOOK
======================
We have determined the complete tensor structure of the quark–gluon vertex at two kinematical points, as well as the leading component in arbitrary kinematics. At the soft gluon point, we have observed significant and non-uniform deviations from the abelian form which has previously been the basis for DSE studies. At the quark reflection point, we find a significant contribution from the ‘chromomagnetic’ form factor $\tau_5$, which has previously been ignored. In general kinematics, we find an infrared enhancement in all momentum directions; we hope to be able to quantify this enhancement more clearly by fitting the data in the infrared region to functional forms in the three momentum variables $(p^2,k^2,q^2)$.
It is interesting to compare these results with recent calculations based on nonperturbative extensions of the one-loop vertex [@Bhagwat:2004kj; @Fischer:2004ym]. Both these studies agree very well with our results for $\lambda_1$ and $\lambda_3$, while finding substantially lower values for $\lambda_2$. Since all these calculations must be considered preliminary, too much emphasis should not be placed on this. It is worth noting that $\lambda_2$ is inherently noisy, as it mixes with $\lambda_1$, which must be subtracted in order to obtain the data shown in Fig. \[fig:q0\].
These results have been obtained on a rather small lattice, and with a discretisation that gives rise to quite large tree-level lattice artefacts which must be corrected for. We therefore expect systematic errors to be large for large momenta. To obtain more reliable results, and to extend this study to the full vertex structure at all kinematics, it would be desirable to employ an action which is known to have smaller and more tractable tree-level artefacts. The Asqtad action has been employed successfully in computing the quark propagator [@Bowman:2002bm], and unlike the SW action, only $\lambda_1$ and possibly $\lambda_2$ are non-zero at tree level, so tree-level correction will not be needed for the remaining form factors. This action is also computationally cheap, making large lattice volumes feasible. Another possibility is to use overlap fermions, which have the advantage of retaining an exact chiral symmetry, which protects all the odd Dirac components of the vertex at tree level.
Acknowledgments {#acknowledgments .unnumbered}
===============
This work has been supported by the Australian Research Council and the Irish Research Council for Science, Engineering and Technology. JIS is grateful for the hospitality of the Centre for the Subatomic Structure of Matter, where part of this work was carried out. We thank Reinhard Alkofer, Christian Fischer and Craig Roberts for stimulating discussions.
[10]{}
F. D. R. Bonnet, P. O. Bowman, D. B. Leinweber, A. G. Williams and J. M. Zanotti, Phys. Rev. [**D64**]{}, 034501 (2001) \[hep-lat/0101013\]. P. O. Bowman, U. M. Heller, D. B. Leinweber, M. B. Parappilly and A. G. Williams, hep-lat/0402032. A. H[ö]{}ll, A. Krassnigg and C. D. Roberts, nucl-th/0408015; M. S. Bhagwat, A. H[ö]{}ll, A. Krassnigg, C. D. Roberts and P. C. Tandy, nucl-th/0403012. H. Iida, M. Oka and H. Suganuma, hep-ph/0312328; H. Iida, these proceedings. C. S. Fischer, F. Llanes-Estrada and R. Alkofer, hep-ph/0407294. P. C. Tandy, nucl-th/0408037. J. C. R. Bloch, A. Cucchieri, K. Langfeld and T. Mendes, Nucl. Phys. [**B687**]{}, 76 (2004) \[hep-lat/0312036\]. C. S. Fischer and R. Alkofer, Phys. Lett. [**B536**]{}, 177 (2002) \[hep-ph/0202202\]. J. Skullerud and A. K[i]{}z[i]{}lers[ü]{}, JHEP [**09**]{}, 013 (2002) \[hep-ph/0205318\]. J. I. Skullerud, P. O. Bowman, A. K[i]{}z[i]{}lers[ü]{}, D. B. Leinweber and A. G. Williams, JHEP [**04**]{}, 047 (2003) \[hep-ph/0303176\]. UKQCD, K. C. Bowler [*et al.*]{}, Phys. Rev. [**D62**]{}, 054506 (2000) \[hep-lat/9910022\]. P. O. Bowman, U. M. Heller, D. B. Leinweber and A. G. Williams, Nucl. Phys. Proc. Suppl. [**119**]{}, 323 (2003) \[hep-lat/0209129\]. J.-I. Skullerud [*et al.*]{}, Quark-gluon vertex in general kinematics, in preparation.
W. Celmaster and R. J. Gonsalves, Phys. Rev. [**D20**]{}, 1420 (1979). M. S. Bhagwat and P. C. Tandy, hep-ph/0407163. P. O. Bowman, U. M. Heller and A. G. Williams, Phys. Rev. [**D66**]{}, 014505 (2002) \[hep-lat/0203001\].
[^1]: Talk by JIS at QCD Down Under, Adelaide, Australia, 10–19 March 2004.
[^2]: In [[@Skullerud:2003qu]]{} the expressions for $\lambda_2$ and $\lambda_3$ had the wrong sign. We are grateful to Craig Roberts for bringing these errors to our attention. In the same paper, the lattice data for $\lambda_3$ and for $\tau_5$ at the quark reflection point also had the wrong sign.
---
abstract: 'In this paper we investigate how observational effects could possibly bias cosmological inferences from peculiar velocity measurements. Specifically, we look at how bulk flow measurements are compared with theoretical predictions. Usually bulk flow calculations try to approximate the flow that would occur in a sphere around the observer. Using the Horizon Run 2 simulation we show that the traditional methods for bulk flow estimation can overestimate the magnitude of the bulk flow for two reasons: when the survey geometry is not spherical (the data do not cover the whole sky), and when the observations undersample the velocity distributions. Our results may explain why several bulk flow measurements found bulk flow velocities that *seem* larger than those expected in standard $\Lambda$CDM cosmologies. We recommend a different approach when comparing bulk flows to cosmological models, in which the theoretical prediction for each bulk flow measurement is calculated specifically for the geometry and sampling rate of that survey. This means that bulk flow values will not be comparable between surveys, but instead they are comparable with cosmological models, which is the more important measure.'
author:
- |
P. Andersen$^{1,2}$[^1], T. M. Davis$^{2,3}$, C. Howlett$^{2,4}$\
$^{1}$Dark Cosmology Centre, University of Copenhagen, Copenhagen, Denmark.\
$^{2}$ARC Centre of Excellence for All-sky Astrophysics (CAASTRO)\
$^{3}$School of Mathematics & Physics, The University of Queensland, St. Lucia, Brisbane, 4072, Australia.\
$^{4}$International Centre for Radio Astronomy Research, The University of Western Australia, Crawley, WA 6009, Australia.
bibliography:
- 'BulkFlow.bib'
title: 'Cosmology with Peculiar Velocities: Observational Effects'
---
cosmology: large-scale structure of Universe – cosmology: observations – cosmology: theory – cosmology : dark energy
Introduction
============
The term bulk flow in the context of cosmology refers to the average motion of matter in a particular region of space relative to the dipole-subtracted cosmic microwave background (CMB) rest frame. One reason why bulk flows are interesting to cosmologists is that by measuring them we can learn more about the composition of the universe, the laws of gravity, and whether our current cosmological model is a good representation of the actual underlying dynamics.\
A bulk flow is induced by density fluctuations, and thus the bulk motion we observe should match what we expect from the density distribution. The density distribution is in turn determined by cosmological parameters such as the strength of clustering, through $\sigma_8$, and the matter density, $\Omega_{\rm M}$. The magnitude of bulk flows can be predicted from theory given a model and set of cosmological parameters (e.g. $\sigma_8$ and $\Omega_{\rm M}$), some initial conditions (such as a fluctuation amplitude at the end of inflation), and a law of gravity (such as general relativity). If the observed bulk flow was to deviate from that predicted by theory, that would indicate that one or more of the given inputs is incorrect.\
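In linear theory, the expected rms bulk flow in a sphere of radius $R$ follows from integrating the matter power spectrum against a window function, $\sigma_V^2(R) = \frac{H_0^2 f^2}{2\pi^2}\int \mathrm{d}k\, P(k)\, W^2(kR)$. The sketch below uses a toy power spectrum with arbitrary normalisation and growth rate (a real application would use a $P(k)$ from a Boltzmann code), but it illustrates the standard prediction that larger volumes have smaller expected flows:

```python
import numpy as np

def W_tophat(x):
    """Fourier transform of a spherical top-hat window."""
    return 3.0 * (np.sin(x) - x * np.cos(x)) / x**3

def sigma_V(R, k, P, H0=100.0, f=0.5):
    """rms bulk flow [km/s] in a top-hat sphere of radius R [Mpc/h]:
    sigma_V^2 = (H0 f)^2 / (2 pi^2) * int dk P(k) W^2(kR)."""
    y = P * W_tophat(k * R) ** 2
    integral = np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(k))  # trapezoid rule
    return H0 * f * np.sqrt(integral / (2.0 * np.pi**2))

k = np.linspace(1e-4, 2.0, 20000)               # wavenumber grid, h/Mpc
P = 2.0e5 * k / (1.0 + (k / 0.02) ** 2) ** 2    # toy P(k) in (Mpc/h)^3, arbitrary amplitude

# Larger spheres average over more structure, so the expected flow shrinks:
assert sigma_V(20.0, k, P) > sigma_V(100.0, k, P) > sigma_V(500.0, k, P)
```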
Currently tension exists in measurements of the bulk flow, with some measurements in apparent agreement with that predicted by $\Lambda$CDM while others are not [@2008ApJ...686L..49K; @2009MNRAS.392..743W; @2010MNRAS.407.2328F; @2012MNRAS.419.3482A; @2015MNRAS.447..132W]. Relieving this tension is important if we are to gain physical insight into the nature of dark energy and dark matter.\
The field of using large scale bulk flows to constrain cosmology has historically been limited by systematics due to the limited quality and quantity of the data available. Modern datasets now include peculiar velocity measurements of thousands of galaxies with moderate precision and hundreds of type Ia supernovae (SNe) with excellent precision. These have inspired a new generation of bulk flow studies. As these new datasets become increasingly abundant and precise, it is prudent to investigate the observational effects that may bias a bulk flow measured from one of these datasets.\
One such effect is undersampling of the surveyed volume. Undersampling is especially relevant for estimates utilising a small number of distance indicators, like many recent estimates of the bulk flow done with observations of type Ia SNe. Attempts at addressing sampling issues have been made; see, e.g., [@2009MNRAS.392..743W], [@2012ApJ...761..151L] or [@2011ApJ...732...65W]. Another such effect is the geometry of a survey – namely whether the survey covers the whole sky or only a narrow cone. Methods such as the minimum variance method proposed by [@2009MNRAS.392..743W] attempt to weight arbitrarily shaped survey geometries so that the bulk flow they calculate approximates what would have been measured if the distribution of data were spherical. Effects other than observational ones might also play an important role; see, e.g., [@2015JCAP...12..033H], where the effects of velocity correlations between supernova magnitudes are included in the data covariance matrix and are found to have a significant impact on the constraints from a derived bulk flow estimate.\
The bias that might arise from estimating the bulk flow magnitude with a small number of peculiar velocities, effectively undersampling the surveyed volume, and with a non-spherical distribution of measurements, is the focus of this paper. We utilise data from the Horizon Run 2 [HR2; @2011JKAS...44..217K] simulation to investigate how strong a bias undersampling introduces for various survey volumes, from spherically symmetric surveys, to hemispherical and narrow cone surveys. We focus on the Maximum Likelihood (ML) estimator of the bulk flow, as it is computationally cheap to perform, easy to interpret and used widely in the literature. Additionally, for a limited test case, we investigate how successful the Minimum Variance (MV) [@2009MNRAS.392..743W] estimator is at alleviating the bias that comes from undersampling. The ML and MV estimators are described in Appendix \[app:mlemv\], where we take the opportunity to clarify some typographic errors and undefined terms in the original papers that can lead to confusion.\
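For orientation, in its simplest form the ML estimator fits a single bulk-flow vector $\mathbf{B}$ to line-of-sight velocities $S_n = \hat{r}_n \cdot \mathbf{B} + \text{noise}$ by weighted least squares. The following is a minimal sketch of that idea (our simplification — it omits, e.g., the 1D velocity-dispersion term included in the full estimator):

```python
import numpy as np

def ml_bulk_flow(rhat, S, sigma):
    """Maximum-likelihood bulk flow from radial peculiar velocities.

    Minimises chi^2 = sum_n (S_n - rhat_n . B)^2 / sigma_n^2, giving
    B = A^{-1} b with A_ij = sum_n w_n rhat_ni rhat_nj and
    b_i = sum_n w_n rhat_ni S_n, where w_n = 1/sigma_n^2.
    """
    w = 1.0 / sigma**2
    A = np.einsum('n,ni,nj->ij', w, rhat, rhat)
    b = np.einsum('n,ni->i', w * S, rhat)
    return np.linalg.solve(A, b)

# Noiseless full-sky check: the input bulk flow is recovered exactly.
rng = np.random.default_rng(1)
rhat = rng.normal(size=(500, 3))
rhat /= np.linalg.norm(rhat, axis=1, keepdims=True)   # unit lines of sight
B_true = np.array([300.0, -100.0, 50.0])              # km/s
S = rhat @ B_true                                     # radial projections
B_est = ml_bulk_flow(rhat, S, np.full(500, 250.0))
assert np.allclose(B_est, B_true)
```

The geometry dependence discussed in this paper enters through the matrix $A$: for a narrow cone the sight lines $\hat{r}_n$ are nearly parallel and $A$ becomes poorly conditioned.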
In section \[sec:hori2\] we introduce the HR2 simulation. Then in section \[sec:lintheory\] we summarise the theoretical footing of large scale bulk flows, and provide an expansion beyond the usual spherical assumptions so that the theory is also valid for non-spherical geometries. The theoretical estimate is established as the benchmark against which we test the effects of undersampling. Then in section \[sec:samplingeffects\] we analyse the effects of undersampling on the Maximum Likelihood estimator, for a spherical, hemispherical and narrow cone geometry. Finally in section \[sec:discussion\] we discuss our findings and the implications for future work using large scale bulk flows in cosmology.\
Throughout this paper when we refer to the theoretically most likely bulk flow magnitude it will be denoted the *most probable* bulk flow magnitude, $V_p$, to avoid confusion with bulk flows from the Maximum Likelihood estimator.
Simulation: Horizon Run 2 {#sec:hori2}
=========================
Throughout this paper we use the Horizon Run 2 (HR2) cosmological simulation [@2011JKAS...44..217K] to investigate how observational effects, in particular non-spherical survey geometries and undersampling, can influence bulk flow measurements in a $\Lambda$CDM universe. We choose this simulation for the following reason: the bulk motions of galaxies are primarily sensitive to large scale density perturbations, meaning that the bulk flows measured in apparently distinct patches drawn from a single simulation can remain significantly correlated. The HR2 simulation, containing 216 billion particles spanning a $(7.2h^{-1}\mathrm{Gpc})^3$ volume, is large enough that we can be confident our bulk flow measurements are effectively independent. These simulation parameters give a mass resolution of $1.25\times 10^{11}h^{-1}\mathrm{M}_{\odot}$ and a mean particle separation of $1.2h^{-1}\,\mathrm{Mpc}$, which allows us to recover galaxy-sized haloes. The power spectrum, correlation function, mass function and basic halo properties match those predicted by WMAP5 $\Lambda$CDM [@2009ApJS..180..330K] and linear theory to percent level accuracy.\
To generate our measurements we first draw spherical subsamples of radius $1h^{-1}\,\mathrm{Gpc}$ from the full HR2 dataset. The origin of each subset is chosen randomly, so that some will be chosen in higher than average density regions and some in lower than average density regions, incorporating the effects of cosmic variance. Knowledge of our local galactic surroundings could have been folded into the selection of origins, so that the subsets chosen would more closely represent the local environment that we find ourselves in. We have not done this, which means that the results of this work are the zero-knowledge results with no assumptions made about our position in the cosmological density field. In essence, we are comparing our one measurement of the bulk flow of our local universe to the distribution of bulk flows that $\Lambda$CDM would predict. It would also be enlightening to investigate whether there are any aspects of our local universe that would bias such a measurement, as @2015JCAP...07..025W did for supernova cosmology. However, that is beyond the scope of this paper.\
The HR2 subsets consist of approximately 3.1$\cdot 10^{6}$ dark matter haloes, each with six dimensional phase space information. Unfortunately a mock galaxy survey that fills the entire volume of the simulation does not exist, so in our analysis we assume that each DM halo corresponds to one galaxy. The smallest of the DM haloes are of a mass comparable to that of a galaxy, but the largest DM haloes of the HR2 simulation have a mass that would be equivalent to hundreds of galaxies. Effectively we are grouping galaxies in massive clusters into just one datapoint with the same probability of being subsampled as any other galaxy.\
Fortunately, a limited number of mock SDSS-III [@2011AJ....142...72E] galaxy catalogues have been produced for the HR2 simulation, which allow us to test how this assumption may affect our results. In Appendix \[app:mockvsdm\] we perform an analysis of the bulk flow magnitude distribution of galaxies from one such mock catalogue, and compare the distributions derived from the DM halo velocities. Our analysis shows that the distributions are similar, and, as such, treating each halo as an individual galaxy has minimal effect on our results.\
To look at the effect of undersampling and non-spherical geometries, we wish to compare the actual bulk flow magnitude of a given number of galaxies within some volume to the magnitude recovered using the ML and MV estimators. Although a real survey only has peculiar velocity information along the line of sight, both of these estimators attempt to recover the full 3D bulk flow vector from these radial components. In this sense a fair comparison is between the output of these estimators and the most probable bulk flow measured using the full 3D velocity vector for each galaxy. The method we use to determine the most probable bulk flow magnitude, as well as the upper and lower 1-$\sigma$ limits, for a particular subsample of the simulation is the following:
1. [Randomly place a geometry in the simulation.]{}
2. [Of total $N$ galaxies within the geometry, randomly draw $n$.]{}
3. [Derive the actual bulk flow vector of the $n$ galaxies, using the 3D velocity vector for each object.]{}
4. [Store the magnitude of the bulk flow vector.]{}
5. [Repeat the above process until the resulting distribution has converged.]{}
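The steps above can be sketched in a few lines. The catalogue below is a synthetic stand-in for an HR2 subsample (the 300 km s$^{-1}$ per-component velocity dispersion, sample sizes and trial count are illustrative assumptions), and simple percentiles are used as a proxy for the equal-likelihood 1-$\sigma$ bounds:

```python
import numpy as np

rng = np.random.default_rng(42)

# Stand-in for one randomly placed geometry drawn from HR2: N haloes with 3D
# peculiar velocities (the 300 km/s per-component dispersion is an assumption).
N = 20000
velocities = rng.normal(0.0, 300.0, size=(N, 3))   # km/s

def bulk_flow_magnitude(vel, n, rng):
    """Steps 2-4: randomly draw n of the N galaxies, average their full 3D
    velocity vectors, and return the magnitude of the resulting bulk flow."""
    idx = rng.choice(len(vel), size=n, replace=False)
    return np.linalg.norm(vel[idx].mean(axis=0))

# Step 5: repeat until the distribution of magnitudes converges
# (a fixed number of trials is used here for brevity).
n = 500
magnitudes = np.array([bulk_flow_magnitude(velocities, n, rng)
                       for _ in range(2000)])

# Percentile bounds stand in for the equal-likelihood 68.27% limits.
lo, hi = np.percentile(magnitudes, [15.865, 84.135])
print(f"most probable ~ {np.median(magnitudes):.0f} km/s, "
      f"1-sigma ~ [{lo:.0f}, {hi:.0f}] km/s")
```

In practice the loop in step 5 is run until the recovered distribution stops changing, rather than for a fixed number of trials.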
Analogous to the method above we can determine the most probable bulk flow magnitude and 1-$\sigma$ upper and lower bounds for a specific bulk flow estimator, e.g. the ML estimator applied in section \[sec:samplingeffects\]:
1. [Randomly place a geometry in the simulation.]{}
2. [Of total $N$ galaxies within the geometry, randomly draw $n$.]{}
3. [For the $n$ galaxies compute the line-of-sight velocities.]{}
4. [Apply the ML estimator to the line-of-sight velocities and derive the ML bulk flow vector.]{}
5. [Store the magnitude of the ML bulk flow vector.]{}
6. [Repeat the above process until the resulting distribution has converged.]{}
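Step 4 can be illustrated with a simplified, uniform-weight least-squares version of the ML idea: given unit position vectors $\hat{r}_i$ and line-of-sight velocities $u_i$, solve for the vector $\mathbf{U}$ minimising $\sum_i (u_i - \hat{r}_i\cdot\mathbf{U})^2/\sigma_i^2$. The full weighting scheme is described in Appendix \[app:mlemv\]; the geometry, input bulk flow and measurement errors below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def ml_bulk_flow(rhat, los_vel, sigma):
    """Least-squares bulk flow from line-of-sight velocities: minimise
    sum_i (u_i - rhat_i . U)^2 / sigma_i^2 for the 3-vector U."""
    w = 1.0 / sigma**2
    A = (w[:, None, None] * rhat[:, :, None] * rhat[:, None, :]).sum(axis=0)
    b = (rhat * (w * los_vel)[:, None]).sum(axis=0)
    return np.linalg.solve(A, b)

# Synthetic narrow cone of galaxies sharing a known bulk flow (assumed values).
n = 500
pos = rng.normal(size=(n, 3)) + np.array([5.0, 0.0, 0.0])   # off-centre cloud
rhat = pos / np.linalg.norm(pos, axis=1, keepdims=True)     # unit vectors
true_bulk = np.array([300.0, -100.0, 50.0])                 # km/s
sigma = np.full(n, 150.0)                                   # km/s errors
los_vel = rhat @ true_bulk + rng.normal(0.0, sigma)         # step 3

est = ml_bulk_flow(rhat, los_vel, sigma)                    # step 4
print("estimated bulk flow (km/s):", np.round(est))
```

Note that in the narrow-cone limit the transverse components of $\mathbf{U}$ are only weakly constrained, which is part of the geometric bias explored in section \[sec:samplingeffects\].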
The uncertainty associated with each peculiar velocity measurement is calculated as in Appendix A of [@2011ApJ...741...67D]; the implications of this are discussed in Appendix \[app:pecveluncer\]. When determining upper and lower 1-$\sigma$ bounds we apply an equal-likelihood algorithm, so that the 1-$\sigma$ limits are the equal-likelihood bounds that encapsulate 68.27% of the normalised distribution.
Linear Theory {#sec:lintheory}
=============
Under the assumption of the cosmological principle, that the universe is statistically isotropic and homogeneous, and assuming Gaussian density fluctuations, the velocity field at any given location can be treated as a Gaussian random variate with zero mean and variance given by the velocity power spectrum $P_{vv}(k)$. Hence the bulk flow vector measured within some volume can also be described as a Gaussian random variate with zero mean and variance $$\sigma^2_V(\boldsymbol{r}) = \int \frac{\mathrm{d}^{3}k}{(2 \pi)^3} P_{vv}(k) |\widetilde{W}(\boldsymbol{k};\boldsymbol{r})|^2.$$ Assuming isotropy, this becomes $$\begin{aligned}
\label{eq:rmsvar}
\sigma^2_V(\boldsymbol{r}) & = \frac{1}{2 \pi^2} \int_{k=0}^{\infty} \mathrm{d}k\,k^{2} P_{vv}(k) |\widetilde{W}(k;\boldsymbol{r})|^2, \notag \\
\implies \sigma^2_V(R) & = \frac{H_0^2 f^2}{2 \pi^2} \int_{k=0}^{\infty} \mathrm{d}k P(k) \widetilde{W}(k;R)^2,\end{aligned}$$ where the Hubble constant, $H_0$, growth rate, $f$, and velocity and matter power spectra $P_{vv}(k)$ and $P(k)$ define a particular cosmology. The second equality of Eq. \[eq:rmsvar\], which is commonly associated with the RMS velocity expected for a bulk flow vector [@2002coec.book.....C], follows from the assumption of a spherically symmetric window function and the linear approximation that $P_{vv} = H_0^2 f^2k^{-2}P_{\theta\theta}(k) \approx H_0^2 f^2k^{-2}P(k)$, where $P_{\theta\theta}$ is the power spectrum of the velocity divergence field (see Chapter 18 of [@2002coec.book.....C] for a review of the relationship between the density, velocity divergence and velocity fields, and [@2012MNRAS.427L..25J] for measurements of $P_{\theta\theta}$ from simulations). As can be seen in Fig. \[fig:windowvspower\], $P_{\theta\theta}(k) \approx P(k)$ is typically a good assumption on the large scales probed by bulk flow measurements.\
![Window functions for the geometries used in this paper plotted along with the matter-matter and velocity divergence power spectra from [copter]{}. $P(k)$ is the matter power spectrum, and $P_{\theta \theta}(k)$ the velocity divergence power spectrum. The geometries used are equal volume spherical cones with opening angles $\theta$, ranging from fully spherical $\theta=\pi$ to a very narrow cone with $\theta = \pi/8$[]{data-label="fig:windowvspower"}](Figures/window_vs_power.pdf){width="\linewidth"}
In Eq. \[eq:rmsvar\] $\widetilde{W}(k;\boldsymbol{r})$ is the Fourier transform of the window function, $W(\boldsymbol{r})$, for the geometry of the specific survey making that bulk flow measurement. The window function is a function of both $k$ and the volume in which the bulk flow is being measured. It measures how sensitive we are to measuring the statistical fluctuations at a particular scale. If the window function is large for a particular $k$ it means that we are highly sensitive to measuring fluctuations at the scale $k$ represents. The window function will be dependent on the geometry of the measurements taken to derive the bulk flow, and is therefore unique for each particular survey. For a fully spherical geometry of radius $R$ the window function takes the form $$\label{eq:windowspherical}
\widetilde{W}(k;R) = \frac{3(\sin{kR} - kR\cos{kR})}{(kR)^3}.$$ How strongly the window function of a particular survey will deviate from this spherical case will be determined by the geometry of the survey in question. Example window functions for conical geometries with a variety of opening angles are shown in Fig. \[fig:windowvspower\]. How these were calculated is detailed in section \[sec:nonspherical\].\
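As a concrete illustration, the integral in the second line of Eq. \[eq:rmsvar\] with the spherical window of Eq. \[eq:windowspherical\] can be evaluated numerically. The sketch below substitutes a BBKS-style transfer function for the [camb]{}/[copter]{} spectra actually used in this paper, so all cosmological numbers here are illustrative assumptions rather than the HR2/WMAP5 inputs:

```python
import numpy as np

def window_spherical(k, R):
    """Fourier transform of a spherical top hat of radius R, Eq. (windowspherical)."""
    x = k * R
    return 3.0 * (np.sin(x) - x * np.cos(x)) / x**3

def trapz(y, x):
    """Simple trapezoidal rule (avoids NumPy version differences)."""
    return 0.5 * np.sum((y[1:] + y[:-1]) * np.diff(x))

# Illustrative linear matter power spectrum: BBKS transfer function with a
# scale-invariant primordial slope, normalised to sigma_8 = 0.8.
def bbks_transfer(k, gamma=0.26 * 0.72):  # shape parameter Gamma ~ Omega_m h
    q = k / gamma                         # k in h/Mpc
    return (np.log(1.0 + 2.34 * q) / (2.34 * q)
            * (1.0 + 3.89 * q + (16.1 * q)**2
               + (5.46 * q)**3 + (6.71 * q)**4) ** -0.25)

k = np.logspace(-4, 1, 4000)              # h/Mpc
pk = k * bbks_transfer(k) ** 2            # unnormalised P(k) ~ k T(k)^2

# Fix the amplitude with sigma_8 (top hat of 8 Mpc/h).
s8sq = trapz(k**2 * pk * window_spherical(k, 8.0)**2, k) / (2.0 * np.pi**2)
pk *= 0.8**2 / s8sq

# sigma_V from the second line of Eq. (rmsvar), taking P_tt(k) ~ P(k).
H0 = 100.0                                # km/s per Mpc/h
f = 0.26 ** 0.55                          # growth rate ~ Omega_m^0.55
R = 100.0                                 # Mpc/h
sigma_v = np.sqrt(H0**2 * f**2 / (2.0 * np.pi**2)
                  * trapz(pk * window_spherical(k, R)**2, k))
print(f"sigma_V({R:.0f} Mpc/h) ~ {sigma_v:.0f} km/s")
```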
To calculate all the theoretical values of $\sigma_{V}$ in this paper we use a velocity divergence power spectrum, generated with the implementation of Renormalised Perturbation Theory [@2006PhRvD..73f3519C] in the [copter]{} code [@2009PhRvD..80d3531C]. A linear [camb]{}[^2] [@2000ApJ...538..473L; @2012JCAP...04..027H] matter transfer function with the same cosmological parameters as HR2 and WMAP5 was used as input. From this [copter]{} produces both a non-linear matter power spectrum as well as a non-linear velocity divergence power spectrum. We found that the difference between using the [copter]{} velocity divergence power spectrum, the non-linear matter power spectrum, or the linear power spectrum was negligible, except for very narrow or small geometries where effects at $k \gtrsim 0.05$ become important. In Fig. \[fig:windowvspower\] we can see that for the geometries used in our analysis the differences when using the three power spectra are small as the spectra only differ in the regime where the window function vanishes. Nonetheless, throughout this paper we use the [copter]{} velocity divergence power spectrum as that is most appropriate when working with bulk flows.\
To calculate the theoretical most probable bulk flow magnitude $V_{p}(R)$ we use the fact that the peculiar velocity distribution is Maxwellian [@2012ApJ...761..151L] with RMS velocity $\sigma_V$, which gives us a probability distribution for the bulk flow amplitude of the form [@2002coec.book.....C] $$p(V)\mathrm{d}V = \sqrt{\frac{2}{\pi}}\left(\frac{3}{\sigma_V^2}\right)^{3/2} V^2 \exp{\left(-\frac{3V^2}{2\sigma_V^2}\right)} \mathrm{d}V.
\label{eq:Maxwellian}$$ For this distribution the maximum probability value is then given by the relation $$\label{eq:vbulkml}
V_{p}(R) = \sqrt{2/3} \, \sigma_V(R).$$ When referring to the theoretical most probable bulk flow magnitude throughout this paper, it is this value based on a Maxwellian distribution of velocities that we are referencing. We confirmed that the velocities of haloes in the HR2 simulation do indeed follow a Maxwellian distribution.\
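The relation in Eq. \[eq:vbulkml\] can be verified numerically by locating the maximum of Eq. \[eq:Maxwellian\] on a fine grid (the value of $\sigma_V$ below is arbitrary):

```python
import numpy as np

# Locate the maximum of the Maxwellian bulk flow distribution, Eq. (Maxwellian),
# on a fine grid and compare with V_p = sqrt(2/3) sigma_V, Eq. (vbulkml).
sigma_v = 150.0  # km/s, arbitrary
v = np.linspace(1.0, 600.0, 200000)
p = (np.sqrt(2.0 / np.pi) * (3.0 / sigma_v**2)**1.5
     * v**2 * np.exp(-1.5 * v**2 / sigma_v**2))
v_peak = v[np.argmax(p)]
print(v_peak, np.sqrt(2.0 / 3.0) * sigma_v)  # both ~122.5 km/s
```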
It is important to note that while the most probable bulk flow magnitude is a discrete value, it is still a value from a distribution with a variance. Optimally the theoretical distribution should be compared to an observed distribution of bulk flow magnitudes, but this is not practical in most situations. The best we can do is to compare our measured bulk flow magnitude with the most probable bulk flow magnitude from theory, but importantly remember to account for the variance on our theoretical prediction in our statistics.
Non-Spherical Geometries {#sec:nonspherical}
------------------------
As well as investigating the effects of undersampling on a spherical geometry, we wish to additionally develop a theoretical estimate for non-spherical geometries, that is we wish to break the assumption of spherical symmetry used to derive Eq. \[eq:windowspherical\]. For uniformly distributed surveys the window function takes the form $$\label{eq:windowintegral}
\widetilde{W}(\boldsymbol{k};\mathbf{r}) = \frac{1}{V}\int_V \exp{(i\mathbf{k}\cdot \mathbf{r})}\mathrm{d}\mathbf{r},$$ where $\exp{(i\mathbf{k}\cdot \mathbf{r})}$ can be expanded to [@2002coec.book.....C], $$\exp{(i\mathbf{k}\cdot \mathbf{r})} = \sum_{l,m} j_l(kr)i^l(2l+1)\mathcal{P}_l^{|m|}(\cos{\theta})\exp{(im\phi)},$$ where $\mathcal{P}_l^{|m|}$ are the Associated Legendre Polynomials. The integral of Eq. \[eq:windowintegral\] then becomes $$\begin{aligned}
\begin{split}
\int_V & \exp{(i\mathbf{k}\cdot \mathbf{r})}\mathrm{d}\mathbf{r} \\&= \sum_{l,m}i^l(2l+1) \int_0^{\phi_{max}} \exp{(im\phi)}\mathrm{d}\phi \\ &\int_0^{\theta_{max}}\mathcal{P}_l^{|m|}(\cos{\theta})\sin{\theta}\mathrm{d}\theta \int_0^R j_l(kr)r^2\mathrm{d}r
\label{eq:sphersum}
\end{split}\end{aligned}$$ which for the spherical case where $(\theta_{max},\phi_{max})=(\pi,2\pi)$ reduces to Eq. \[eq:windowspherical\]. For a spherical cone geometry, with radius related to volume and opening angle by $$r = \left(\frac{3 V}{2 \pi (1-\cos{\theta})} \right)^{1/3},
\label{eq:spherconeradius}$$ we can set $\phi_{max}=2\pi$ but let $\theta_{max}$ vary in the interval $(0,\pi]$. With $\phi_{max}=2\pi$, the $\phi$ integral eliminates all terms except $m=0$, regardless of $l$. For $\theta_{max}\neq\pi$, however, the remaining Legendre integrals do not vanish for $l>0$, so for non-spherical geometries we have to sum over $l$ to infinity. Although this approach is theoretically correct, in practice one would sum over $l$ only until the function value has converged to within computational accuracy. This is, however, very impractical, since the complexity of the terms increases rapidly with $l$, making it difficult to include terms above $l\approx 20$. Unfortunately, we find that for very non-spherical geometries, using only terms $l\leq 20$ is not sufficient to guarantee convergence. Hence this approach is only practical for geometries close to a sphere.\
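Eq. \[eq:spherconeradius\] fixes the radial extent of each cone once the volume and opening angle are chosen. As a quick check, using the fixed survey volume of $40\cdot 10^6\,(h^{-1}\mathrm{Mpc})^3$ quoted in section \[sec:discussion\], the radii come out close to those used for the geometries in section \[sec:samplingeffects\]:

```python
import numpy as np

def cone_radius(volume, theta):
    """Radial extent of a spherical cone with opening angle theta enclosing
    a fixed volume, Eq. (spherconeradius)."""
    return (3.0 * volume / (2.0 * np.pi * (1.0 - np.cos(theta)))) ** (1.0 / 3.0)

V = 40e6  # (h^-1 Mpc)^3, the fixed survey volume used in this paper
for theta in (np.pi, np.pi / 2.0, np.pi / 8.0):
    r = cone_radius(V, theta)
    print(f"theta = {theta / np.pi:.3f} pi  ->  r = {r:.0f} h^-1 Mpc")
```

For $\theta=\pi$, $\pi/2$ and $\pi/8$ this gives roughly 212, 267 and 630 $h^{-1}$Mpc, close to the 210/267/631 $h^{-1}$Mpc quoted in Fig. \[fig:cosmicsamplingcombined\] (small differences come from rounding the volume).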
Another approach to solving the window function for a given $\boldsymbol{k}$ is to reformulate the volume integral in Cartesian coordinates $$\widetilde{W}(\boldsymbol{k};\mathbf{r}) = \frac{1}{V}\int_0^{X}\int_0^{Y}\int_0^{Z}w(x,y,z)e^{i(k_x x+k_y y+k_z z)}\mathrm{d}x\mathrm{d}y\mathrm{d}z.
\label{eq:windowcartesian}$$ The triple integral is over a cube at least large enough to contain the volume $V$ from Eq. \[eq:windowintegral\]. The function $w(x,y,z)$, defined to be one inside the volume and zero otherwise, ensures that the integration is restricted to the survey volume. The conversion to Cartesian coordinates makes it simpler to solve the integral numerically. It should be noted that even though we only consider rotationally symmetric windows with constant number density in this study, the above equation can be extended to surveys of arbitrary geometry and non-constant number density, simply by choosing a suitable function $w(x,y,z)$.\
Based on Eq. \[eq:windowcartesian\] we developed two pieces of code to solve the problem numerically, one calculating the integral using MCMC methods and the other applying a trapezoidal volume integral.[^3] The independence of the two codes is used to confirm the validity of the results; the outputs from the two codes are consistently within 3% of one another.\
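A minimal Monte-Carlo version of Eq. \[eq:windowcartesian\] for a spherical cone might look as follows; this is a sketch of the approach rather than the code used for this paper. For $\theta=\pi$ it can be checked against the analytic spherical window of Eq. \[eq:windowspherical\]:

```python
import numpy as np

rng = np.random.default_rng(0)

def window_cone_mc(kvec, theta, R, nsamp=200000):
    """Monte-Carlo estimate of the cone window function: sample a bounding
    box, keep points with w(x,y,z)=1 (inside a cone of opening angle theta
    and radius R, axis along +z), and average exp(i k.r) over them."""
    pts = rng.uniform(-R, R, size=(nsamp, 3))
    r = np.linalg.norm(pts, axis=1)
    inside = (r <= R) & (pts[:, 2] >= r * np.cos(theta))  # polar angle <= theta
    phases = np.exp(1j * pts[inside] @ kvec)
    return phases.mean()

# Sanity check: theta = pi recovers the analytic spherical top-hat window.
R, k = 210.0, 0.01
w_mc = abs(window_cone_mc(np.array([0.0, 0.0, k]), np.pi, R))
x = k * R
w_exact = 3.0 * (np.sin(x) - x * np.cos(x)) / x**3
print(w_mc, w_exact)
```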
To see how this theoretical prediction compares with the actual underlying bulk flow of the HR2 simulation, we plot the most probable bulk flow magnitude as well as the upper and lower 1-$\sigma$ limits as a function of geometry in Fig. \[fig:bulkangle\]. The geometry in this case is a spherical cone where the opening angle $\theta$ is varied. It is worth noting that the volume of the geometry is kept constant as the opening angle $\theta$ is varied. This is achieved by varying the radial extent of the geometry along with $\theta$ according to Eq. \[eq:spherconeradius\]. Keeping the volume constant helps keep the simulation and theoretical results almost constant as $\theta$ is varied. For all opening angles we see that our theoretical value matches that measured from the simulations extremely well.\
Geometry and Sampling Effects {#sec:samplingeffects}
=============================
In this section we present how non-spherical geometries and undersampling of the cosmological volume can impact the results of the ML and MV estimators. We use the theoretically predicted most probable bulk flow magnitude as a benchmark; the closer the estimator comes to replicating the theoretical distribution the better.\
![The most probable measured bulk flow magnitude as a function of opening angle for a spherical cone geometry. The tested geometries vary in opening angle from the fully spherical situation where $\theta=\pi$, over a hemisphere to the most narrow geometry tested being $\theta = \pi/16$. The distributions from the simulation, theory, MV estimator, and ML estimator are shown with the dashed line being the most probable bulk flow magnitude and the colored band showing the upper and lower 1-$\sigma$ limits. For both the ML and MV estimators the sampling was fixed at $n=500$. For the MV estimator the ideal radius $R_I$ was set to 50 Mpc $h^{-1}$.[]{data-label="fig:bulkangle"}](Figures/bulk_angle.pdf){width="\linewidth"}
We first investigate the scenario where we use a fixed number of objects ($n=500$) and compare the performance of the ML and MV estimators. The results can be seen in Fig. \[fig:bulkangle\]. Both the ML and MV estimators are biased towards measuring larger bulk flow magnitudes on average than the actual underlying bulk flows. This bias increases as the survey geometry becomes narrower, with the most narrow geometry showing the strongest bias. The behaviour of the ML and MV estimators is very similar.\
In the narrow cone regime, both the ML and MV estimators predict significantly larger most probable bulk flows than would be expected from theory. Hence, incorrectly accounting for non-spherical geometries in the ML and MV estimators could potentially lead one to conclude they had measured a larger bulk flow than would be expected in a $\Lambda$CDM universe.\
![Distributions of ML bulk flow magnitudes for various sampling rates, $n$. The top/middle/bottom distributions correspond to a geometry with opening angle $\frac{\pi}{8}$/$\frac{\pi}{2}$/$\pi$. The volume is kept constant as opening angle is varied, resulting in a radius of 631/267/210 Mpc $h^{-1}$. []{data-label="fig:cosmicsamplingcombined"}](Figures/cosmic_sampling_combined.pdf){width="\linewidth"}
Next we investigate how the sampling rate can bias the most probable bulk flow calculated with the ML estimator for a fixed geometry. For the values $n \in [50, 500, 1000, 2000, 4000]$ and opening angles $\theta \in [\pi/8, \pi/2, \pi]$, corresponding to a narrow spherical cone, a hemisphere, and a full sphere, we apply the ML estimator as described in section \[sec:hori2\]. The results can be seen in Fig. \[fig:cosmicsamplingcombined\]. There are two noteworthy trends in this plot. The first is that for all geometries the estimated most probable value is shifted roughly 1-$\sigma$ away from the actual most probable value once $n \lesssim 500$. The second is that this effect is stronger for narrow geometries; in our case the geometry with opening angle $\theta = \pi/8$ is much more adversely affected by undersampling than the hemispherical or spherical case. What this means in practice is that estimates of the bulk flow magnitude that utilise a small number of peculiar velocities are likely to be biased by undersampling in such a way that, on average, we would measure a larger bulk flow magnitude than the actual underlying bulk flow being probed. Of particular interest is the fact that this remains true even for spherical geometries if the number of objects is small.\
The most probable bulk flow velocities for the distributions in Fig. \[fig:cosmicsamplingcombined\], as well as a few additional configurations of sampling rate and opening angles, are listed in Table \[tab:samplinggeometryeffectscombined\]. The absolute differences between the most probable bulk flow values derived from simulation and theory are also listed. This absolute difference is an indicator of how strong a bias we might expect in the distribution of bulk flows derived for a particular sampling rate and survey geometry. A small absolute difference between most probable bulk flow velocities from simulation and theory indicates that the sampling rate is sufficient, and that minimal bias is to be expected for that particular survey geometry. In using Table \[tab:samplinggeometryeffectscombined\] it is important to note that not only the most probable bulk flow velocity, $V_p$, is shifted towards larger values. Rather, the entire distribution of bulk flow velocities is shifted, including the one and two sigma limits. Looking at, e.g., line seven of Table \[tab:samplinggeometryeffectscombined\] where $n=50$ and $\theta=0.125 \,\pi$ we see that even though the theory predicts something close to $\sim$100 km s$^{-1}$ a measured bulk flow value of $\sim$500 km s$^{-1}$ is still within the one sigma confidence limits, and hence is still well within the expectations of a $\Lambda$CDM cosmology.\
The cause of the bias from poor sampling is the increased variance of the bulk flow velocity components; in Fig. \[fig:cosmicvelocomponent\] the $x$-components of the bulk flow velocities from the top panel of Fig. \[fig:cosmicsamplingcombined\] are plotted for the various sampling rates. As sampling decreases, the variance increases, which in turn causes the most probable bulk flow value to shift according to Eq. \[eq:vbulkml\]. Note that $\sigma_{V}$ in Eq. \[eq:vbulkml\] denotes the RMS of the bulk flow *vector*, which coincides with its standard deviation because the distribution of bulk flow vectors is Gaussian with zero mean. The standard deviation of any one Cartesian component of the bulk flow vector is then $\sigma_{V}/\sqrt{3}$. For the bulk flow [*magnitude*]{}, which is Maxwellian distributed, $\sigma_{V}$ is only the RMS; the variance of the bulk flow magnitude is instead given by[^4] $\sigma^2 = \sigma_{\rm V}^2(1-\frac{8}{3\pi})$.\
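The footnoted relation between $\sigma_V$ and the variance of the magnitude can be confirmed with a direct Monte-Carlo draw (the value of $\sigma_V$ is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(3)

# If each Cartesian component of the bulk flow is Gaussian with standard
# deviation sigma_V / sqrt(3), the magnitude is Maxwellian with
# variance sigma_V^2 (1 - 8 / (3 pi)).
sigma_V = 200.0  # km/s, arbitrary
comps = rng.normal(0.0, sigma_V / np.sqrt(3.0), size=(1_000_000, 3))
mags = np.linalg.norm(comps, axis=1)
print(mags.var(), sigma_V**2 * (1.0 - 8.0 / (3.0 * np.pi)))
```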
Another way to illustrate this effect is to imagine a large volume in which the galaxies obey the cosmological principle, such that summing the velocities of all $N$ galaxies yields a bulk flow magnitude of exactly zero. This holds even if only the line-of-sight components of the peculiar velocities are observed. If only $n < N$ peculiar velocities are observed, however, a non-zero bulk flow magnitude will almost certainly be measured, and since a magnitude can only ever be positive, the result is some strictly positive number. Redrawing a new set of $n$ galaxies gives a different, but still strictly positive, magnitude. If $n \approx N$ then we are likely to measure a magnitude closer to zero than if we draw only $n \ll N$ galaxies. In other words, undersampling *always* increases the RMS velocity and skews the most probable measured magnitude towards larger values.
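The argument above is easy to demonstrate with a toy catalogue whose true bulk flow is exactly zero by construction; the velocity dispersion and sample sizes here are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(7)

# N galaxies whose velocities sum to exactly zero: the true bulk flow of the
# full volume vanishes by construction.
N = 100000
vel = rng.normal(0.0, 300.0, size=(N, 3))
vel -= vel.mean(axis=0)

def typical_magnitude(n, trials=500):
    """Average bulk flow magnitude measured from random subsamples of size n."""
    mags = [np.linalg.norm(vel[rng.choice(N, n, replace=False)].mean(axis=0))
            for _ in range(trials)]
    return float(np.mean(mags))

# Smaller subsamples yield systematically larger (always positive) magnitudes.
for n in (50, 500, 5000):
    print(f"n = {n:5d}:  typical |bulk flow| ~ {typical_magnitude(n):.1f} km/s")
```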
![The distribution of the $x$-components of the bulk flows from the top panel of Fig. \[fig:cosmicsamplingcombined\] where the opening angle is $\theta=\pi/8$. Poorer sampling leads to a larger variance in the Gaussian-distributed velocity components, which in turn causes the most probable bulk flow to shift to a larger value.[]{data-label="fig:cosmicvelocomponent"}](Figures/cosmic_velo_component.pdf){width="\linewidth"}
---------- ----------------- ---------------------- ------------- ------------------------------------------- -----------------------------
$n$ $\theta$ V$_p$ 68% Limits $\mid$V$_p$ - V$_{p,\mathrm{theory}}\mid$ Sample Density
$ $ $ $ km s$^{-1}$ km s$^{-1}$ km s$^{-1}$ $(h^{-1}\mathrm{Mpc})^{-3}$
n : 8000 0.125 $ \, \pi$ 132 $^{+73}_{-61}$ 71 - 205 26 200$ \times 10^{-6}$
n : 4000 0.125 $ \, \pi$ 136 $^{+75}_{-63}$ 73 - 210 29 100$ \times 10^{-6}$
n : 2000 0.125 $ \, \pi$ 142 $^{+79}_{-66}$ 75 - 221 35 50$ \times 10^{-6}$
n : 1000 0.125 $ \, \pi$ 153 $^{+86}_{-72}$ 80 - 238 46 25$ \times 10^{-6}$
n : 500 0.125 $ \, \pi$ 173 $^{+97}_{-81}$ 92 - 269 66 12$ \times 10^{-6}$
n : 100 0.125 $ \, \pi$ 257 $^{+143}_{-119}$ 137 - 399 150 2$ \times 10^{-6}$
n : 50 0.125 $ \, \pi$ 326 $^{+203}_{-166}$ 160 - 528 219 1$ \times 10^{-6}$
n : 8000 0.5 $ \, \pi$ 131 $^{+74}_{-62}$ 68 - 205 21 200$ \times 10^{-6}$
n : 4000 0.5 $ \, \pi$ 131 $^{+74}_{-61}$ 69 - 204 22 100$ \times 10^{-6}$
n : 2000 0.5 $ \, \pi$ 132 $^{+75}_{-63}$ 69 - 207 22 50$ \times 10^{-6}$
n : 1000 0.5 $ \, \pi$ 133 $^{+76}_{-63}$ 69 - 208 23 25$ \times 10^{-6}$
n : 500 0.5 $ \, \pi$ 136 $^{+77}_{-64}$ 72 - 213 27 12$ \times 10^{-6}$
n : 100 0.5 $ \, \pi$ 158 $^{+89}_{-74}$ 84 - 247 49 2$ \times 10^{-6}$
n : 50 0.5 $ \, \pi$ 180 $^{+96}_{-81}$ 99 - 275 70 1$ \times 10^{-6}$
n : 8000 1.0 $ \, \pi$ 110 $^{+58}_{-49}$ 61 - 168 3 200$ \times 10^{-6}$
n : 4000 1.0 $ \, \pi$ 110 $^{+58}_{-49}$ 61 - 167 3 100$ \times 10^{-6}$
n : 2000 1.0 $ \, \pi$ 111 $^{+59}_{-49}$ 61 - 169 2 50$ \times 10^{-6}$
n : 1000 1.0 $ \, \pi$ 113 $^{+59}_{-50}$ 62 - 171 0 25$ \times 10^{-6}$
n : 500 1.0 $ \, \pi$ 116 $^{+61}_{-51}$ 64 - 176 3 12$ \times 10^{-6}$
n : 100 1.0 $ \, \pi$ 138 $^{+72}_{-61}$ 77 - 210 25 2$ \times 10^{-6}$
n : 50 1.0 $ \, \pi$ 159 $^{+83}_{-70}$ 89 - 241 46 1$ \times 10^{-6}$
---------- ----------------- ---------------------- ------------- ------------------------------------------- -----------------------------
Discussion & Conclusion {#sec:discussion}
=======================
After reviewing linear theory we showed how it can be expanded to be valid for non-spherical geometries, developing code that numerically calculates the theoretical bulk flow magnitude for an arbitrary survey geometry. To test the validity of the developed code, the derived theoretical bulk flow magnitude was compared to that measured for a variety of spherical cone geometries in the Horizon Run 2 (HR2) cosmological simulation, and the two agreed to within 5% for all tested geometries.\
However, when simulating more realistic surveys and applying the Maximum Likelihood (ML) estimator we found that undersampling effects severely bias measurements of the bulk flow magnitude when a small number ($n \lesssim 500$) of peculiar velocities are used in the bulk flow estimate. On average, undersampling pushes the measured bulk flow to higher values, with the bias being amplified when narrower survey geometries are used.\
For our fixed volume of 40$\cdot 10^6 (h^{-1}\mathrm{Mpc})^3$, using 500 SNe corresponds to a sampling density of $\sim13 \, \mathrm{SNe}/10^6 (h^{-1}\mathrm{Mpc})^3$. Hence we expect undersampling could affect many recent measurements of the bulk flow magnitude utilising type Ia SNe as a distance indicator (i.e., ), where the number of supernovae is well below 300 and the sampling density is also well below $13\; \mathrm{SNe}/10^6 (h^{-1}\mathrm{Mpc})^3$.\
Without a detailed analysis of each of the previous bulk flow estimates, which is beyond the scope of this paper, it is hard to determine whether or not a particular result is affected by undersampling. However, some examples that might deserve attention include e.g. where the SNe are subdivided into four shells, and a bulk flow is estimated for the SNe in each shell. We would expect the bulk flow to converge to the CMB frame as we go to higher redshifts and larger volumes, yet in shells of both increasing redshift and increasing volume there is no clear trend in the magnitude of the bulk flow. Instead, the trend they see could potentially be explained by undersampling. Their bins contain varying numbers of supernovae, namely $n=[128, 36, 38, 77]$, in which they find bulk flows of $V_{\rm p}=[243, 452, 650, 105]$[kms$^{-1}$]{}. There is thus a trend whereby the bins with fewer supernovae yield larger bulk flows (e.g. compare the middle two bins with the outer two bins).\
Similarly, @2012MNRAS.420..447T provide two measurements of the ML bulk flow: one with all 245 SNe from the First Amendment compilation, the other with a subset of 136 SNe that excludes the nearby ones ($z<0.02$). Naive expectations would suggest that the sample focussing on higher redshift SNe should be closer to converging on the CMB frame and thus have a lower bulk flow; however, they find the opposite. The higher-redshift-only sample has a higher bulk flow, but since it has fewer data points than the full sample this would be consistent with our finding that undersampling inflates the bulk flow estimate.\
Both and @2012MNRAS.420..447T found bulk flows that exceeded the predicted flow based on known density distributions in the nearby universe, so whether the estimates are inflated by undersampling is potentially an interesting question (although neither claimed significant deviation from $\Lambda$CDM). While we have selected these two as the most significant examples that could be affected by the sampling biases we discuss in this paper, we note that this trend is pervasive, as no other samples show significant opposing trends. Some show a slight reduction in bulk flow with smaller samples, but the effect is much less significant than in the positively correlated examples above (and much smaller than the uncertainties); e.g. in [@2011MNRAS.414..264C] increasing the sample from 61 to 109 SNe increases the estimated bulk flow from 250 [kms$^{-1}$]{} to 260 [kms$^{-1}$]{}, an effect of less than 5%.\
For bulk flow estimates where the typical number of observed peculiar velocities in a survey is $n \gtrsim 3000$, i.e., most estimates using the Tully-Fisher or Fundamental Plane relation [@2011ApJ...736...93N; @2014MNRAS.437.1996M; @2015MNRAS.447..132W; @2016MNRAS.455..386S], we found no bias from undersampling. It is however important to note that the analysis of this paper assumes type Ia SNe are used as distance indicators, and therefore the uncertainties in each distance measurement are small (Appendix \[app:pecveluncer\]). The typically larger uncertainties derived from Tully-Fisher or Fundamental Plane estimates would increase the variance in the individual bulk flow components, which in turn could mean we require larger numbers of objects to avoid biases than is found here.\
Effects from uneven sampling have previously been discussed in the literature. One example is Eq. 10 of [@2012ApJ...761..151L] where a method of dividing the measured peculiar velocities by their selection function is proposed. In [@1982ApJ...258...64A] and [@2007ApJ...661..650H] Monte Carlo simulations of observations are used to better understand systematic effects, including sampling effects. Other works [@2011ApJ...732...65W; @2009MNRAS.392..743W] develop new estimators such as the Weighted Least Squares (WLS), the Coefficient Unbiased (CU), or the Minimum Variance (MV) estimators, with the MV estimator being the most popular alternative to the ML estimator. The MV estimator is constructed in part to account for sampling bias (with the motivation to be able to compare measurements of bulk flow between surveys); in our work we found that the MV estimator suffered the same bias as the ML estimator, again with the bias increasing for narrower geometries.\
A number of recent papers compare a measured bulk flow directly to a $\Lambda$CDM prediction based on linear theory and an assumption of spherical symmetry. For example [@2011MNRAS.414..264C], [@2011JCAP...04..015D], and [@2016MNRAS.455..386S] plot bulk flow measurements as a function of redshift compared to a generic $\Lambda$CDM prediction. Our analysis suggests that such a comparison between bulk flows derived from different surveys, and therefore different survey geometries and sampling rates, is potentially problematic.\
In [@2012ApJ...759L...7P] the HR2 simulation was used to show that the size of the large scale structure known as the Sloan Great Wall (SGW) is in agreement with what we statistically expect from $\Lambda$CDM cosmology, something that had previously been disputed. Similarly, as early as [@1982ApJ...258...64A] simulations were being used to compare measured bulk flows to theoretical predictions. Analogous to their arguments, our study highlights the importance of considering the full distribution of bulk flow magnitudes from theory, including sampling effects, rather than focusing on only the most probable bulk flow magnitude. That is, we propose that bulk flows should not be compared to the prediction from linear theory, but with the bulk flow magnitude distribution derived from a cosmological simulation using the method described above, with the actual survey geometry given as input.
Acknowledgements {#acknowledgements .unnumbered}
================
Parts of this research were conducted by the Australian Research Council Centre of Excellence for All-sky Astrophysics (CAASTRO), through project number CE110001020. The Dark Cosmology Centre was funded by the DNRF.
ML and MV Bulk Flow Estimators {#app:mlemv}
==============================
To compare the measured bulk flow with theoretical predictions, it is necessary to have a method to turn the individually observed peculiar velocities into a bulk flow. In this paper we focus on two estimators, the Maximum Likelihood (ML) and the Minimum Variance (MV) estimators. In the original paper introducing the MV estimator [@2009MNRAS.392..743W] there were a few typographic errors and unexplained terms; for completeness, and to help others avoid confusion, the procedures used to carry out the ML and MV estimators in this work are explained in this appendix.
Maximum Likelihood
------------------
The ML estimator is by far the easiest of the two to implement and is computationally much cheaper than the MV estimator. The result of the ML estimator is a vector containing the velocity components corresponding to each of the three spatial dimensions. Each of the three components is given by a sum over the individual peculiar velocity components multiplied by some weight. The sum has the form $$u_i = \sum_{n} w_{i,n} S_n$$ where $i$ is the placeholder for either the $x$, $y$, or $z$ index and the sum goes over all $n$ peculiar velocities. $S_n$ is the $n$’th measured peculiar velocity, $w_{i,n}$ is the associated weight for that peculiar velocity and $u_i$ is the calculated bulk flow where again $i=(x,y,z)$. This equation holds true for both the ML and the MV estimators. Where they differ is how they go about calculating the $w_{i,n}$ weights.\
For the ML estimate the weights are given by $$w_{i,n} = \sum_j \frac{ \hat{x}_j \cdot \hat{r}_n}{(\sigma_n^2 + \sigma_\star^2)}A_{ij}^{-1}.$$ The sum is over the $j=(x,y,z)$ components, and $\hat{x}_j \cdot \hat{r}_n$ is the projection onto the $j$’th coordinate axis of the unit vector $\hat{r}_n$ pointing from the observer to the galaxy in question. $\sigma_n$ is the uncertainty on the velocity of the $n$’th measurement, and $\sigma_\star$ is a constant of order 250 km s$^{-1}$ meant to account for the non-linear flows on smaller scales. Finally $A_{ij}^{-1}$ is the inverse of the matrix $A_{ij}$ given by $$A_{ij} = \sum_n \frac{(\hat{x}_i \cdot \hat{r}_n)(\hat{x}_j \cdot \hat{r}_n)}{(\sigma_n^2 + \sigma_\star^2)}.$$ In practice, the first step in calculating the ML weights is to compute the $A_{ij}$ matrix, taking advantage of the symmetry $A_{ij} = A_{ji}$. The inverted matrix $A_{ij}^{-1}$ is then computed, and the weights $w_{i,n}$ are calculated. This is a fairly simple process and requires little computation time.
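The ML estimate described above is compact enough to sketch in full. The following is a minimal illustration in Python with our own function and variable names (not the code used in this work), assuming `rhat` holds unit vectors and velocities are in km s$^{-1}$:

```python
import numpy as np

def ml_bulk_flow(rhat, S, sigma, sigma_star=250.0):
    """Maximum Likelihood bulk flow u_i = sum_n w_{i,n} S_n.

    rhat  : (N, 3) unit vectors from the observer to each galaxy
    S     : (N,) measured line-of-sight peculiar velocities [km/s]
    sigma : (N,) per-object velocity uncertainties [km/s]
    """
    var = sigma**2 + sigma_star**2                # sigma_n^2 + sigma_*^2
    # A_ij = sum_n (x_i . r_n)(x_j . r_n) / var_n : symmetric 3x3 matrix
    A = np.einsum('ni,nj->ij', rhat / var[:, None], rhat)
    # w_{i,n} = sum_j A^{-1}_{ij} (x_j . r_n) / var_n
    w = np.linalg.inv(A) @ (rhat / var[:, None]).T
    return w @ S                                  # the 3-vector u_i
```

A quick sanity check: for a noiseless pure bulk flow, $S_n = \mathbf{u} \cdot \hat{\mathbf{r}}_n$, the estimator recovers $\mathbf{u}$ exactly.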
Minimum Variance
----------------
For the Minimum Variance estimator, an ideal survey is first constructed by drawing candidate $x$,$y$,$z$ coordinates uniformly in the range $[-4R_I, 4R_I]$ and accepting points such that the radial distribution follows $n(r) \propto r^2 \exp{(-r^2/2R_I^2)}$. This constructed ideal survey is spherically symmetric and isotropic. It is constructed such that the window function of the MV method is sensitive in the range where we wish to probe the bulk flow, namely on scales of $R_I$. For consistency, $R_I$ is set to 50 Mpc $h^{-1}$ in this work, unless otherwise stated. The number of points in the constructed ideal survey is set to 1200 throughout this work. It was found that increasing the number of points in the ideal survey beyond 1200 did not improve the stability of the MV method but only served to increase the already considerable computation time.\
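The construction of the ideal survey can be sketched as follows (an illustrative implementation with our own names; uniform candidate points already have $dN/dr \propto r^2$ inside the inscribed sphere, so thinning by the Gaussian factor yields the target radial profile):

```python
import numpy as np

def ideal_survey(n_points=1200, R_I=50.0, seed=1):
    """Spherically symmetric points with n(r) ~ r^2 exp(-r^2 / 2 R_I^2)."""
    rng = np.random.default_rng(seed)
    pts = []
    while len(pts) < n_points:
        xyz = rng.uniform(-4.0 * R_I, 4.0 * R_I, size=3)
        r = np.linalg.norm(xyz)
        # Keep only points inside the sphere r <= 4 R_I (where dN/dr ~ r^2
        # holds exactly), then accept with the Gaussian thinning probability.
        if r <= 4.0 * R_I and rng.random() < np.exp(-r**2 / (2.0 * R_I**2)):
            pts.append(xyz)
    return np.array(pts)
```

The resulting radial distribution has mean radius $2R_I\sqrt{2/\pi} \approx 1.6\,R_I$, which is a convenient check on the sampler.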
For readability, matrix notation is used so that $w_{i,n}$ becomes the column matrix $\mathbf{w}_i$, with one element per measurement. $\mathbf{w}_i$ is computed with $$\mathbf{w}_i = (\mathbf{G} + \lambda \mathbf{P})^{-1} \mathbf{Q}_i.$$ $\mathbf{G}$ is a symmetric square matrix with one row and one column per measured velocity, whose indices $n$ and $m$ run over the measurements. The matrix $\mathbf{G}$ is the covariance matrix of the individual velocities $S_n$ and $S_m$. In linear theory we can write the matrix elements $G_{nm}$ as a sum of two terms $$\begin{aligned}
G_{nm} &= \langle S_n S_m \rangle \\
&= \langle v_n v_m \rangle + \delta_{nm}(\sigma_\star^2 + \sigma_n^2).\end{aligned}$$ The second term is the noise term; the Kronecker delta $\delta_{nm}$ makes it 0 for $n \neq m$ and $\sigma_\star^2 + \sigma_n^2$ when $n=m$. The first term is the geometry term, given by $$\langle v_n v_m \rangle = \frac{\Omega_m^{1.1}H_0^2}{2\pi^2} \int \mathrm{d}k \, P(k) \, f_{mn}(k)$$ where $H_0$ is the Hubble constant in units[^5] of $h$ km s$^{-1}$ Mpc$^{-1}$, and $\Omega_m^{1.1}$ approximates the squared growth rate of structure, $f^2 \approx \Omega_m^{1.1}$. $P(k)$ is the matter power spectrum, which in this work is calculated using [copter]{} [@2009PhRvD..80d3531C; @2000ApJ...538..473L; @2012JCAP...04..027H]. The function $f_{mn}(k)$ is the angle-averaged window function, explicitly given as $$\label{eq:angavwinfuncnasty}
f_{mn}(k) = \int \frac{\mathrm{d}^2 \hat{k}}{4\pi} (\hat{\mathbf{r}}_n \cdot \hat{\mathbf{k}})(\hat{\mathbf{r}}_m \cdot \hat{\mathbf{k}}) \times \mathrm{exp}[ik\hat{\mathbf{k}} \cdot (\mathbf{r}_n - \mathbf{r}_m)].$$ Although Eq. \[eq:angavwinfuncnasty\] is often quoted in the literature as the function used to calculate $f_{mn}(k)$, it is far from being a practical expression; in practice the expression used is from [@2011PhRvD..83j3002M], who showed that the angle-averaged window function can be expressed as $$\label{eq:fnmk}
f_{mn}(k) = \frac{1}{3} \mathrm{cos}(\alpha)\left[j_0(kA) - 2j_2(kA)\right] + \frac{1}{A^2} j_2(kA)\,r_n r_m \,\mathrm{sin}^2(\alpha)$$ where $$A = (\,r_n^2 + r_m^2 - 2 r_n r_m \mathrm{cos}(\alpha) \, )^{0.5}$$ and $\alpha$ is the angle between the $n$’th and $m$’th galaxy, given by $$\alpha = \mathrm{arccos}(\hat{\mathbf{r}}_n \cdot \hat{\mathbf{r}}_m).$$ The $j_0(x)$ and $j_2(x)$ functions are spherical Bessel functions given by $$j_0(x) = \frac{\mathrm{sin}(x)}{x} \,\, , \, \, j_2(x) = \left( \frac{3}{x^2} - 1 \right) \frac{\mathrm{sin}(x)}{x} - \frac{3 \, \mathrm{cos}(x)}{x^2}.$$ Putting all this together gives us the $G_{nm}$ elements. Finding the $P_{nm}$ elements of $\mathbf{P}$ is then straightforward, as it is simply the $k=0$ limit of $f_{nm}$, which is $$P_{nm} = \frac{1}{3} \mathrm{cos}(\alpha).$$ The principal idea of the MV method is to minimise the variance between the bulk flow measured by the galaxy survey and the bulk flow that would be measured by an ideal survey. The $\mathbf{G}$ and $\mathbf{P}$ matrices are the components of the weight that take as input the measured data. The last component, the $\mathbf{Q}$ matrix, takes as input the positions and peculiar velocities of the galaxies in the constructed ideal survey. It is calculated in much the same way as the $G_{nm}$ elements, with the $Q_{i,n}$ elements given by $$Q_{i,n} = \sum_{n'=1}^{N'} w'_{i,n'} \langle v_{n'} v_n \rangle$$ and $$\langle v_{n'} v_n \rangle = \frac{\Omega_m^{1.1} H_0^2}{2\pi^2} \int \mathrm{d}k \, P(k) \, f_{n'n}(k),$$ where $f_{n'n}(k)$ is analogous to Eq. \[eq:fnmk\] but with the difference that $n'$ runs over the galaxies in the constructed ideal survey, in contrast to $n$ and $m$ that run over the actual observed galaxies of our survey. The ideal weights $w'_{i,n'}$ are given by $$w'_{i,n'} = 3 \frac{\hat{\mathbf{x}}_i \cdot \hat{\mathbf{r}}_{n'}}{N_{\rm ideal}}$$ where $N_{\rm ideal}$ is the total number of galaxies in the constructed ideal survey.\
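Equation \[eq:fnmk\] is straightforward to evaluate numerically. Below is a sketch of the window function using `scipy`'s spherical Bessel functions (our own helper, following the closed form of [@2011PhRvD..83j3002M]; the $\alpha = 0$, $r_n = r_m$ case gives $A = 0$ and must be handled by its limit separately):

```python
import numpy as np
from scipy.special import spherical_jn

def window_fmn(k, r_n, r_m, alpha):
    """Angle-averaged window function f_mn(k) for two lines of sight
    at distances r_n, r_m separated by the angle alpha."""
    A = np.sqrt(r_n**2 + r_m**2 - 2.0 * r_n * r_m * np.cos(alpha))
    j0 = spherical_jn(0, k * A)
    j2 = spherical_jn(2, k * A)
    return (np.cos(alpha) * (j0 - 2.0 * j2)) / 3.0 \
        + j2 * r_n * r_m * np.sin(alpha)**2 / A**2
```

In the $k \to 0$ limit, $j_0 \to 1$ and $j_2 \to 0$, so the function reduces to $\cos(\alpha)/3$, reproducing the $P_{nm}$ elements; this makes a convenient unit test.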
The final step is to solve for the value of $\lambda$, which is a Lagrange multiplier arising from the minimisation process. It enforces the normalisation constraint $$\sum_m \sum_n w_{i,n} w_{i,m} P_{nm} = \frac{1}{3}.$$ A simple method is to vary $\lambda$ and recalculate the above sum until a value of $\lambda$ is found that makes the equality hold.\
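In practice, this one-dimensional root finding is easy to automate. The sketch below is our own construction (illustrative names, not the original implementation), using Brent's method on the constraint residual:

```python
import numpy as np
from scipy.optimize import brentq

def mv_weights(G, P, Qi, bracket):
    """Weights w = (G + lam * P)^{-1} Q_i, with the Lagrange multiplier
    lam chosen so that w^T P w = 1/3 (the normalisation constraint).
    `bracket` must be a (lo, hi) pair where the residual changes sign."""
    def residual(lam):
        w = np.linalg.solve(G + lam * P, Qi)
        return w @ P @ w - 1.0 / 3.0
    lam = brentq(residual, *bracket)
    return np.linalg.solve(G + lam * P, Qi), lam
```

As a toy check, $\mathbf{G} = 2\mathbf{I}$, $\mathbf{P} = \mathbf{I}$, $\mathbf{Q}_i = (1,1,1)^T$ gives the constraint $3/(2+\lambda)^2 = 1/3$, whose positive root is $\lambda = 1$.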
Calculating the MV bulk flow vector is a rather involved process and is orders of magnitude more expensive computationally than the ML estimator. In this work the analysis is done using mainly the ML estimator, with the MV estimator only being tested in a more limited scenario. If computation time were no concern, the full analysis could be carried out for the MV estimator as well.\
The implementation of the MV estimator used in this work is based on that of Dr. Morag Scrimgeour which is available at https://github.com/mscrim/MVBulkFlow.
Mock Galaxy Surveys versus Dark Matter Halos {#app:mockvsdm}
============================================
As explained in section \[sec:hori2\] the full HR2 dataset consists of DM halos, not individual galaxies. To test that this does not affect our results, we use a mock SDSS-III galaxy catalogue produced from the HR2 cosmological DM halo simulation. This mock catalogue lies in a sphere with radius 1 Gpc $h^{-1}$ and origin at ($x,y,z$) = (1.8, 1.8, 1.8) Gpc $h^{-1}$. From the full HR2 DM halo simulation we slice a sphere with the same radius and origin. The distributions of bulk flow magnitudes using the ML estimator are then calculated for both the SDSS-III mock catalogue and the sliced sphere of DM halos. The distributions are shown in Figure \[fig:compvarcomp\] and the most probable and RMS values are given in Table \[tab:sampcompvarcomp\]. For the same number of galaxies per bulk flow, $n$, the distributions of bulk flow magnitudes, as well as their most probable and RMS values, are in good agreement. This shows that it is indeed possible to use the DM halos of the full HR2 simulation to perform our analysis, including investigating the effects of survey geometry on the measurements of bulk flow magnitudes.
$ $ SDSSIII Mock DM Halo
----- -------------- ---------
: Most probable bulk flow with upper and lower 1-$\sigma$ bounds for bulk flow magnitude distributions of the SDSS-III mock survey galaxy catalogue and the DM halo slice of the full HR2 simulation, for varying number of galaxies per bulk flow calculation, $n$. The numbers should be compared horizontally across each row. All the numbers are within 0.1 $\sigma$ of each other, which shows that using DM halos gives results comparable to using a mock galaxy catalogue.[]{data-label="tab:sampcompvarcomp"}
![ML bulk flow magnitude distributions for SDSS-III mock galaxy catalogue subsamples and DM halo subsamples, both taken from the same position in the full HR2 simulation. The bulk flow magnitude distributions for the DM halo subsamples are labelled ‘DM Halo’, with the distributions for the SDSS-III mock catalogue samples labelled ‘Mock’. The individual pairs of bulk flow magnitude distributions (e.g. $n=500$, $n=100$ and $n=50$) all show similar behaviour in their bulk flow velocity distributions.[]{data-label="fig:compvarcomp"}](Figures/compvarcomp.pdf){width="\linewidth" height="2.5in"}
Estimating Peculiar Velocity Measurement Uncertainty {#app:pecveluncer}
====================================================
To estimate the peculiar velocity measurement uncertainty, $\sigma_{v,Ia}$, as a function of redshift we follow the approach of [@2011ApJ...741...67D]. Using the terminology of [@2011ApJ...741...67D] the measurement uncertainty is $$\sigma_{v,Ia} = c \cdot \sigma_z = c \cdot \sigma_\mu \cdot \frac{\ln{(10)}}{5} \frac{\bar{z}(1 + \bar{z}/2)}{1 + \bar{z}}$$ where $c$ is the speed of light in vacuum, $\bar{z}$ is the recession redshift and $\sigma_\mu$ is the uncertainty on the distance modulus measurement. To obtain an estimate for the peculiar velocity measurement uncertainty one has to assume a value for $\sigma_\mu$. We have chosen to set $\sigma_\mu=0.1$ throughout this paper, as this is an optimistic value that modern type Ia SNe surveys can achieve, although it is somewhat lower than what was possible for legacy surveys, where a value of $\sigma_\mu = 0.15$ would be more appropriate. To reiterate the point made in section \[sec:discussion\], using a larger uncertainty in the peculiar velocity measurements will only increase the variance in each component of the bulk flow vector, and any potential biases. Hence by adopting an optimistic error, we are in fact being conservative in our estimates of potential biases.
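The expression above is easy to evaluate; the sketch below (our own function name, with $c$ in km s$^{-1}$) illustrates why individual SN Ia peculiar velocities are so noisy:

```python
import numpy as np

C_KMS = 299792.458  # speed of light [km/s]

def sigma_v_ia(zbar, sigma_mu=0.1):
    """Peculiar velocity uncertainty for a SN Ia at recession redshift
    zbar, given a distance-modulus uncertainty sigma_mu."""
    return C_KMS * sigma_mu * (np.log(10.0) / 5.0) \
        * zbar * (1.0 + zbar / 2.0) / (1.0 + zbar)
```

Even with the optimistic $\sigma_\mu = 0.1$, a supernova at $\bar z = 0.05$ carries an uncertainty of roughly 670 km s$^{-1}$, larger than typical bulk flow amplitudes, and the uncertainty grows nearly linearly with redshift at low $\bar z$.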
[^1]: email: [](mailto:perandersen@dark-cosmology.dk)
[^2]: http://camb.info/readme.html
[^3]: For details and link to the source code see https://github.com/per-andersen/MV-MLE-BulkFlow
[^4]: To derive this use $p(V)dV$ from equation \[eq:Maxwellian\] in the definition of variance $\sigma^2\equiv\int_0^\infty p(V)(V-\bar{V})^2dV$, where the standard integral $\int_0^{\infty}x^ne^{-bx^2}dx = \frac{(2k-1)!!}{2^{k+1}b^k}\sqrt{\frac{\pi}{b}}$ with $n=2k$ and $b>0$ comes in handy; note $(2k-1)!! \equiv \prod_{i=1}^k(2i-1) = \frac{(2k)!}{2^kk!}$.
[^5]: Which is always 100, per the definition $h = H_0 / (100$ km$\,$s$^{-1}\,$Mpc$^{-1})$.
---
abstract: 'When matter orbits around a central mass obliquely with respect to the mass’s spin axis, the Lense-Thirring effect causes it to precess at a rate declining sharply with radius. Ever since the work of Bardeen & Petterson (1975), it has been expected that when a fluid fills an orbiting disk, the orbital angular momentum at small radii should then align with the mass’s spin. Nearly all previous work has studied this alignment under the assumption that a phenomenological “viscosity" isotropically degrades fluid shears in accretion disks, even though it is now understood that internal stress in flat disks is due to anisotropic MHD turbulence. In this paper we report a pair of matched simulations, one in MHD and one in pure (non-viscous) HD in order to clarify the specific mechanisms of alignment. As in the previous work, we find that disk warps induce radial flows that mix angular momentum of different orientation; however, we also show that the speeds of these flows are generically transonic and are only very weakly influenced by internal stresses other than pressure. In particular, MHD turbulence does not act in a manner consistent with an isotropic viscosity. When MHD effects are present, the disk aligns, first at small radii and then at large; alignment is only partial in the HD case. We identify the specific angular momentum transport mechanisms causing alignment and show how MHD effects permit them to operate more efficiently. Lastly, we relate the speed at which an alignment front propagates outward (in the MHD case) to the rate at which Lense-Thirring torques deliver angular momentum at smaller radii.'
author:
- 'Kareem A. Sorathia, Julian H. Krolik'
- 'John F. Hawley'
bibliography:
- 'Bib.bib'
title: 'MHD Simulation of a Disk Subjected to Lense-Thirring Precession'
---
Introduction {#sec:int}
============
There are many reasons why accretion disks around rotating masses may not be aligned with the spin axis of the central mass. If the matter is supplied from a companion star, the orbital plane of the binary may be oblique; if the matter is supplied from the interstellar medium, its mean orbital plane may similarly be inclined; if the matter is supplied from a tidally-disrupted star, the orbital plane should be entirely uncorrelated with the black hole spin. Whatever the origin of the misalignment, general relativity predicts that there is a torque exerted on the orbiting material. To lowest post-Newtonian order, the torque on an orbiting ring of radius $r$ and angular momentum ${\bf L}$ is $2 (G/c^2)\vs \times {\bf L}/r^3$ when the spin angular momentum of the central mass is $\vs$; the result is precession about the central mass’s spin axis at a rate $\omega = 2G |\vs|/(r^3 c^2)$. The precession rate is most interesting, of course, when $r$ is not an extremely large number of gravitational radii $r_g \equiv GM/c^2$, so the effect is normally associated with black holes, or perhaps neutron stars.
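To make the radial scaling concrete: combining $\omega = 2G|\vs|/(r^3c^2)$ with the Keplerian $\Omega = (GM/r^3)^{1/2}$ and the dimensionless spin $a_* = |\vs| c/(GM^2)$ gives $\omega/\Omega = 2a_*(r/r_g)^{-3/2}$. A small sketch (our own helper functions; this omits the inclination-dependent correction appearing in fuller expressions):

```python
def lt_to_orbital_ratio(r_over_rg, a_star):
    """omega / Omega = 2 a_* (r / r_g)^(-3/2), lowest post-Newtonian order."""
    return 2.0 * a_star * r_over_rg**-1.5

def radius_for_ratio(a_star, ratio):
    """Radius (in units of r_g) where omega/Omega equals `ratio`."""
    return (2.0 * a_star / ratio)**(2.0 / 3.0)
```

For a maximally spinning hole ($a_* = 1$), a ratio of $1/15$ is reached near $r = 30^{2/3}\,r_g \approx 9.7\,r_g$, i.e. not far outside the ISCO.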
Because the torque grows so rapidly with smaller $r$, it has been supposed ever since the truly seminal paper of [@BP75] that the strong differential precession at small radii will induce internal disk friction, causing the inner part of the disk to settle into the equatorial plane of the central mass’s rotation. [@PP83] invented an angular momentum-conserving formalism to encompass this picture, in which they pointed out that local disk warps can be smoothed hydrodynamically because the warps create radial pressure gradients by shifting neighboring rings vertically relative to one another. Radial fluid motions are then induced, which can mix the differently-oriented angular momenta of the adjacent rings. The question that arises, however, is how to relate the radial velocities to the radial pressure gradients. [@PP83] proposed that when the disk is very thin, the flow velocities would be limited by the same “viscosity" accounting for angular momentum transport in flat disks, but operating isotropically on all shears. Conversely, they argued, when the disk is relatively thick, this “viscosity" would damp bending waves. [@Pringle92] constructed a simpler vehicle for analyzing this geometrically complicated problem, heuristically separating the angular momentum transport into two pieces. The first of these was the radial transport of angular momentum due directly to the action of the isotropic viscosity. The second was a lumped-parameter description of the evolution of local warps, in which they were supposed to be smoothed diffusively. Following [@PP83], [@Pringle92] argued that the diffusivity for local warps would scale inversely with the putative isotropic viscosity. [@O99] developed a nonlinear theory linking the mutual scaling of the two transport coefficients. 
When the viscosity is normalized to the local pressure via the dimensionless coefficient $\alpha$, [@O99] confirmed the expected inverse scaling for small values of $\alpha$, but found a somewhat more complicated relation for larger values, provided $\alpha < 1$. It is therefore natural to describe the magnitude of the diffusion coefficient in the same pressure-normalized fashion, scaling it in terms of $\alpha_2$ [@LP07]. [@NP00] performed SPH simulations in which the numerical diffusion of the algorithm provided an effective isotropic viscosity and found behavior more or less in keeping with these expectations, but [@LP07] and [@Lodato10], using an explicit isotropic viscosity, argued that $\alpha_2$ was limited to be $\lesssim 3$, and also found that a diffusive description did not well match the evolution of their simulations when the warp was “nonlinear" (see § \[sec:warp\] for the definition of “nonlinear" in this context). Perhaps more surprisingly, the recent SPH simulations of [@Nixon12] develop sharp breaks in the disk profile when the degree of misalignment is large.
There is, however, a fundamental worry concerning this entire approach: the assumption that an isotropic viscosity acts in accretion disks. For more than fifteen years [@mri91; @hgb; @brand95; @shgb96; @bh98], it has been clear that the angular momentum transport governing accretion is [*not*]{} due to any sort of viscosity, but rather to MHD turbulence driven by the magneto-rotational instability. These stresses, although related in the mean to orbital shear, are far from isotropic [@shgb96; @HGK11], do not scale linearly with the shear, and do not respond in any direct way to fluctuating shears [@Pessah08]. In fact, when radial motions are sheared vertically in a magnetized orbiting plasma, they are unstable when, as is the case here, the vertical scale of the shear is longer than the distance an Alfven wave travels in a dynamical time [@mri91]. All these contrasts call into question whether, or under what circumstances, MHD-derived stresses might either limit radial flows or damp bending waves. To date, only one numerical simulation has been used to investigate the effects produced by Lense-Thirring torques on disks with internal MHD turbulence [@Fragile07], but its interpretability was limited by the nearness of the disk to the innermost stable circular orbit, the disk’s relatively large scale height, and the difficulty of adequately resolving the MHD turbulence. Thus, the applicability of this central assumption to the theory is still unclear.
In addition to this concern, there is also another reason to revisit the dynamics of the Bardeen-Petterson problem. Despite all the effort devoted to its study, there is still no clear understanding of angular momentum flows during the process of inner-disk alignment. If the angular momentum given the disk material by the Lense-Thirring torque remained with the material initially receiving it, the matter would simply precess around the central mass’s spin axis while very gradually drifting inward (this was, in fact, the way [@BP75] originally envisioned it). On the other hand, if hydrodynamic effects redistribute the angular momentum given the disk by the torques [@PP83], where does it go? Could the MHD turbulence carry the unaligned angular momentum a substantial distance? Moreover, why should that redistribution lead to alignment? After all, averaged over many precession periods, the net integrated angular momentum due to the torque goes to zero.
To begin answering these questions, we have performed a new MHD simulation of a disk evolving under the influence of Lense-Thirring torques. In its definition, we have made the strategic decision to forego a fully relativistic treatment so that the computational effort can be focused on resolving the MHD turbulence in a reasonably thin disk and running for a sufficiently long time, rather than on the dynamical complications of general relativity. It therefore assumes Newtonian dynamics except for a single term expressing the gravito-magnetic torque to lowest post-Newtonian order. We have also chosen parameters such that the precession frequency in the middle of the disk is smaller than the orbital frequency, but large enough that the mid-point of the disk (if unencumbered by hydrodynamics) would precess through a full rotation over the course of the simulation. The Newtonian approximation is an advantage here, too, because such a comparatively rapid precession rate is found in a genuine relativistic context only in the region not far outside the ISCO, where the warp dynamics would be obscured by a mass inflow rate comparable to the precession frequency.
Simulations {#sec:sims}
===========
The simulation code we employ is a contemporary translation (in Fortran-95) of the 3D finite-difference MHD code [*Zeus*]{} [@zeus1; @zeus2]. The magnetic field is updated using the “method of characteristics constrained transport (MOCCT)" algorithm to maintain zero divergence to machine accuracy [@HawleyStone95]. The [*Zeus*]{} code solves the standard equations of Newtonian fluid dynamics, but we augment its momentum equation with a term of the form $\rho \vv \times \vh$ to represent the gravitomagnetic force per unit mass, where $\rho$ is the mass density, $\vv$ is the fluid velocity, and $$\vh = \frac{2\vs}{r^3} - \frac{6(\vs \cdot \vr)\vr}{r^5}.$$ Here $\vs$ represents the magnitude and direction of the spin vector of the central mass and $r$ is spherical radius.
In this paper we report two simulations, one employing full 3D MHD and the other purely hydrodynamic, so that we may identify the special properties due to MHD by contrasting the two. The initial condition for the MHD simulation is a hydrostatic torus orbiting a point-mass in Newtonian gravity (see [@Hawley00]) defined by the parameters $q=1.65$, $r_{in} = 7.5$, $r_{\rm M} = 10$, $\rho_{\rm M} = 100$, and $\Gamma = 5/3$. That is, in this initial state the orbital frequency $\Omega \propto R^{-q}$ for cylindrical radius $R$, and the disk extends from an inner radius $r_{\rm in}$ to an outer radius $r_{\rm out} \simeq 14$. Its pressure maximum is found at $r_{\rm M}$, where the density is $\rho_{\rm M}$. We assume an adiabatic equation of state with index $\Gamma$. This combination of parameters results in a disk whose aspect ratio $H/R \simeq 0.06$–0.1 over its entire radial extent when $H$ is defined as $\sqrt{2}c_{s0}/\Omega$, for $c_{s0}$ the isothermal sound speed. Equivalently, the angle subtended by one scale-height is $\simeq 4^\circ$–$6^\circ$. The initial magnetic field in the disk is a set of nested poloidal field loops defined by the vector potential $$A_{\phi} = \rho - \rho_{C},$$ where $\rho_{C} = 0.1$ and $\vb = \nabla \times \vec{A}$. The field is scaled so that the volume-integrated ratio of the gas to magnetic pressure is initially $25$.
At the beginning of the MHD simulation (which we call [`BP-m`]{}), we perturb the pressure with random fluctuations whose rms amplitude is $\simeq 1\%$. From these perturbations, the magneto-rotational instability grows, and we follow its development [*without any external torques*]{} for 15 orbits at $r_{\rm M}$. At this point, the MHD turbulence is fully saturated. In addition, the internal stresses due to the anisotropy of the turbulence have led to significant disk spreading. Roughly a third of the initial disk mass is lost, mostly via accretion through the inner boundary of the simulation (at $r = 4$). In addition, dissipation associated with the artificial bulk viscosity necessary to describe shocks properly has heated the gas so that $H/R \simeq 0.12$–0.2 across most of its extent.
The Lense-Thirring torque is turned on only at this point, when the MHD turbulence has saturated. For reasons we explain momentarily, we choose the spin-axis of the central mass to lie two initial scale-heights ($12^{\circ}$) away from the initial orbital axis. We set the magnitude of this torque so that $\omega(r_{\rm M})/\Omega(r_{\rm M}) = 1/15$ for precession frequency $\omega$ and orbital frequency $\Omega$. In terms of the exigencies of simulation, this is a very natural choice: $\omega \ll \Omega$ through all the disk except its innermost rings, but $\omega$ is not so small that to follow a precession period would take a prohibitively large amount of computer time. Regrettably, in terms of actual physics this is a very unnatural (and nominally inconsistent) choice because such a ratio is achieved in real life only when $r/r_g = \{6.1(a/M)\sin\theta[1 + \sqrt{1 + 0.133/\sin\theta}]\}^{2/3}$, where $\theta$ is the inclination angle between the orbital angular momentum and the central mass’s spin angular momentum. Newtonian dynamics are, of course, a very poor approximation in such a location. We justify it, however, on the grounds that what is most important to our effort to elucidate disk response to Lense-Thirring torque is the ability to explore the consequences of significant precession while assuring that it is still significantly slower than the orbital frequency. For similar reasons, we also ignore the relativistic contribution to the apsidal precession rate, even though one contribution to it is independent of black hole spin and larger than the Lense-Thirring rate, while another contribution is comparable to the Lense-Thirring precession frequency. There are also two more reasons to ignore the relativistic apsidal precession. 
It is not the only mechanism causing the radial epicyclic frequency to differ from the orbital frequency; radial pressure gradients also do this, and are likely to be at least as large, especially at the large distance from the black hole at which the Bardeen-Petterson transition radius is most likely to fall in real disks. In addition, because only part of the apsidal precession is proportional to black hole spin, our Newtonian approximation makes it impossible to scale the apsidal precession frequency with the Lense-Thirring (nodal) precession frequency. We then follow the evolution of the disk for 15 orbits at $r_{\rm M}$, i.e., one full precession period at $r_{\rm M}$, or $T = 30\pi/\Omega_M$ (throughout the remainder of this paper, we will describe time in units of a “fiducial orbit", the orbital period at $r_M$). Over the course of this torqued phase of evolution, $\simeq 10\%$ of the disk mass is accreted through the inner radial boundary. In addition, the heating associated with a large number of weak shocks increases its scale height by $\simeq 10\%$. The contrasting hydrodynamic simulation (called [`BP-h`]{}) begins from an initial condition whose radial profiles of midplane density and midplane scale-height match the azimuthally-averaged values in the MHD simulation immediately before the torque is turned on. The vertical density structure is what would be expected in hydrostatic equilibrium: $$\rho(R,z) = \rho_{0}(R) \exp \left[ \frac{-z^2}{H^2(R)} \right],$$ where $R$ is cylindrical radius and $z$ is vertical distance away from the disk plane. The local pressure is simply $H^2(R)\Omega^2(R)\rho/2$. The velocity field is chosen so that it is Keplerian in the disk midplane, but the disk rotates on cylinders. Although the disk so defined is in vertical equilibrium, there are unbalanced radial pressure gradients, but they are relatively small.
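The stated pressure indeed closes vertical hydrostatic balance in the thin-disk limit, where vertical gravity is $\simeq \Omega^2 z$: since $P = H^2\Omega^2\rho/2$ with $\rho \propto \exp(-z^2/H^2)$, one has $\partial P/\partial z = -\rho\Omega^2 z$. A quick numerical check of this identity (our own script, with arbitrary illustrative parameter values):

```python
import numpy as np

# Arbitrary illustrative values for scale height, orbital frequency, density
H, Omega, rho0 = 1.5, 0.7, 100.0

z = np.linspace(-3.0 * H, 3.0 * H, 20001)
rho = rho0 * np.exp(-z**2 / H**2)      # vertical density profile
P = H**2 * Omega**2 * rho / 2.0        # stated local pressure

# Hydrostatic balance: dP/dz + rho * Omega^2 * z should vanish
residual = np.gradient(P, z) + rho * Omega**2 * z
```

The residual is at the level of the finite-difference truncation error, confirming the balance.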
For the hydrodynamic simulation, the torque begins immediately. Like the MHD simulation, it is run for 15 orbits at $r_{\rm M}$.
Simulating a warped accretion disk using a grid-based method presents certain challenges. In a warped disk it is guaranteed that at least some orbital velocities are oblique to the grid coordinates, and this obliquity of the strongly supersonic flow must cause at least some numerical dissipation (see [@SKH13] for numerical experiments quantifying these effects). We have made several choices designed to minimize this dissipation. Following the results of the numerical experiments in [@SKH13], we adopt a spherical grid. We also align the initial orbital plane of [`BP-m`]{} with the equatorial plane of the coordinates in order to minimize numerical dissipation during the 15 orbits in which the MHD turbulence grows. This is also why we chose a relatively small inclination between the spin-axis and the initial orbital axis, and, as we are about to discuss, used as fine a resolution as possible.
The spatial domain for both simulations was $(r,\theta,\phi) \in [4,28] \times \pi [0.2,0.8] \times 2\pi [0,1]$. Recent work on convergence in MHD disk simulations [@HGK11; @Sorathia12] has shown that at least 32 ZPH (Zones Per vertical scale Height) are required to approach convergence in flat disks. It is possible that the additional complexity of external torques and disk warping raises that standard, but no systematic studies yet exist to determine whether they do and to what degree. On the other hand, a sufficiently long simulation even with this minimal resolution requires a large amount of computer time. We have therefore chosen to just meet that standard (and our MHD simulation still consumed $1.3 \times 10^6$ processor-hours). Our spherical grid had $(288,384,1024)$ cells in the radial, polar, and azimuthal directions respectively. In order to maximize our effective resolution in the regions we most care about, we space the cells logarithmically in the radial direction (i.e., $\Delta r/r$ is constant), uniformly in the azimuthal direction, and employ a polynomial spacing in the polar dimension (Eqn. 6 of @NKH10, with $\xi = 0.65$ and $n = 13$). This sort of polynomial spacing focuses cells near the equatorial plane, giving a resolution of more than $32$ ZPH within $\pm 20^{\circ}$ of the midplane. At the pressure maximum of the torus, $r_M = 10$, the cell dimensions in the radial and polar directions are approximately equal, while the azimuthal cell dimensions are about a factor of 2 larger. We present detailed resolution quality data for this simulation in the Appendix.
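The logarithmic radial spacing ($\Delta r/r$ constant) amounts to a geometric progression of cell edges; a minimal sketch using the domain quoted above (our own function name):

```python
import numpy as np

def log_radial_edges(r_in=4.0, r_out=28.0, n_cells=288):
    """Cell edges with constant Delta r / r: a geometric progression
    from r_in to r_out with n_cells cells (n_cells + 1 edges)."""
    return r_in * (r_out / r_in)**np.linspace(0.0, 1.0, n_cells + 1)
```

Each edge is a constant factor $(r_{\rm out}/r_{\rm in})^{1/n_{\rm cells}}$ larger than the previous one, so the resolution follows the disk's natural scaling with radius.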
The requirements for hydrodynamic resolution are not nearly so demanding because a purely hydrodynamic simulation is not turbulent. Although the flow is complicated, it is laminar, and has rather little structure on small spatial scales. Consequently, [`BP-h`]{} is well-resolved when run on a very similar grid to [`BP-m`]{}, but one whose cell dimensions are exactly twice as large in each dimension.
Boundary conditions for the two simulations are also identical. For hydrodynamic quantities, we use zero-gradient extrapolations and enforce an outwardly-directed velocity in the ghost zones. For the magnetic field, we set the transverse field components in the ghost zones to zero and require the normal component to satisfy the divergence-free constraint.
Results {#sec:results}
=======
At a qualitative level, the MHD and purely hydrodynamic simulations appear to resemble one another strongly. As we will emphasize throughout the remainder of this paper, the dominant mechanisms underlying the Bardeen-Petterson effect are [*hydrodynamic*]{}, not magneto-hydrodynamic. That being said, MHD does create an important difference between them whose consequences have numerous implications: the MHD system is turbulent, whereas the HD system is laminar.
Precession {#sec:precession}
----------
The ultimate driver of the entire process is Lense-Thirring torque. To describe it, as well as the rest of the angular momentum flow, we use a Cartesian coordinate system $(x,y,z)$ oriented to the central mass’s spin axis. That is, the $z$-direction is defined by the spin axis. The $x$ direction in this system is defined so that the initial disk angular momentum is in the $x$-$z$ plane with $L_x < 0$ and $L_z > 0$. In terms of these coordinates, the torque ${\bf G}$ has only two non-zero components, $G_x$ and $G_y$. Their dependence on radius and time is shown in Figures \[fig:Gx\] (the $x$ component) and \[fig:Gy\] (the $y$ component).
![Color contours (see color bar) of $G_x$ as a function of radius and time. Upper panel is [`BP-m`]{}; lower panel is [`BP-h`]{}.[]{data-label="fig:Gx"}](fig1a.ps "fig:"){width="60.00000%"}\
![Color contours (see color bar) of $G_x$ as a function of radius and time. Upper panel is [`BP-m`]{}; lower panel is [`BP-h`]{}.[]{data-label="fig:Gx"}](fig1b.ps "fig:"){width="60.00000%"}\
In both runs, $G_y$ is initially negative and relatively strong within $r \simeq 10$. Likewise, in both cases $G_y$ passes through zero and changes sign 5–6 orbits after the torque begins, becoming positive at later times. The only contrast in this regard is that $G_y$ at late times is rather smaller in the MHD case. Similarly, a short time after the torque begins, $G_x$ becomes positive in both cases, particularly for $r \lesssim 10$, but diminishes in magnitude with increasing time. A few orbits before the end of both simulations, $G_x$ also changes sign, a couple of orbits earlier in [`BP-m`]{} than in [`BP-h`]{}. In both cases, too, the time-integral of $G_x$ is dominated by $r \lesssim 10$.
![Color contours (see color bar) of $G_y$ as a function of radius and time. Upper panel is [`BP-m`]{}; lower panel is [`BP-h`]{}.[]{data-label="fig:Gy"}](fig2a.ps "fig:"){width="60.00000%"}\
![Color contours (see color bar) of $G_y$ as a function of radius and time. Upper panel is [`BP-m`]{}; lower panel is [`BP-h`]{}.[]{data-label="fig:Gy"}](fig2b.ps "fig:"){width="60.00000%"}\
These trends reflect the progress of disk precession and alignment. We measure the precession angle at radius $r$ by $\arctan{[(\partial L_y/\partial r)/(\partial L_x/\partial r)]}$, where the derivatives are taken of the angular momentum components integrated over the spherical shell at radius $r$. We measure the (mis)alignment angle $\beta$ by $\arctan{[(\partial L_\perp/\partial r)/(\partial L_z/\partial r)]}$, where $L_\perp^2 =
L_x^2 + L_y^2$. The sign change in $G_y$ is associated with precession through an angle of $\pi/2$, while the decrease in magnitude in the torque is a signature of disk alignment. Figure \[fig:phi\_prec\] shows the precession in greater detail. Like the torque, of course, the precession rate in the two simulations is overall similar. However, they are by no means identical. When MHD effects operate, the mean disk precession is slightly faster than when they are absent. The largest precession angle at any radius found in [`BP-m`]{} after 15 orbits is $\simeq 1.4\pi$, but only $\simeq 1.1\pi$ in [`BP-h`]{}. Especially in the HD case, the precession is not far from solid-body, a result previously seen in other simulations [@NP00; @Fragile05; @Fragile08]. After 2–3 orbits of torque, the color contours for [`BP-h`]{} run almost flat across the radius–time plane. Differential precession is weak in an absolute sense in [`BP-m`]{}, but nonetheless noticeably stronger than in [`BP-h`]{}. For the first $\simeq 5$ orbits, compared to [`BP-h`]{} it precesses more rapidly at small radii, but more slowly at large. These contrasts diminish over time. At the end of the simulation, the contrast in precession angle across the entire radial span even in [`BP-m`]{} is only $\simeq 0.4\pi$, even though the precession phase difference between test-particles at $r=10$ and $r=20$ would have been $15\pi/8$, and between $r=5$ and $r=10$, $15\pi$! The rate of this approximate solid-body precession corresponds to the test-particle precession frequency at $r \simeq 11.5$, slightly outside the pressure maximum, and rather close to the radius corresponding to the mean specific angular momentum of the disk.
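These angle diagnostics can be sketched as below, assuming the shell-integrated components $L_x(r)$, $L_y(r)$, $L_z(r)$ are available on a radial grid. The radial profiles are synthetic test data, not simulation output, and `arctan2` is used as a sign-resolving form of the arctangents of the ratios defined above.

```python
import numpy as np

def shell_angles(r, Lx, Ly, Lz):
    """Precession and misalignment angles from shell-integrated L(r)."""
    dLx, dLy, dLz = (np.gradient(q, r) for q in (Lx, Ly, Lz))
    phi_prec = np.arctan2(dLy, dLx)              # arctan[(dLy/dr)/(dLx/dr)]
    beta = np.arctan2(np.hypot(dLx, dLy), dLz)   # arctan[(dLperp/dr)/(dLz/dr)]
    return phi_prec, beta

# Synthetic check: a rigidly tilted, untwisted disk must return the imposed
# tilt and twist angles at every radius.
r = np.linspace(4.0, 28.0, 100)
phi0, beta0 = 0.3 * np.pi, 12.0 * np.pi / 180.0
L = r ** 0.5                                     # arbitrary |L| profile
Lx = L * np.sin(beta0) * np.cos(phi0)
Ly = L * np.sin(beta0) * np.sin(phi0)
Lz = L * np.cos(beta0)
phi_prec, beta = shell_angles(r, Lx, Ly, Lz)
```

Using the radial derivatives, rather than the shell-integrated angular momentum itself, makes the diagnostic local: each radius reports the orientation of its own ring of gas.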
On the other hand, although the end-result is nearly solid-body precession, there are noticeable departures from rigid precession, particularly in [`BP-m`]{} (a possible explanation for why the MHD case is farther from solid-body precession than the HD case will be presented in § \[sec:warp\]). As expected, the sense of the contrast is almost monotonic—outer rings precess more slowly than inner rings. This sense is not without exception, however—in the inner disk there can be small departures from monotonicity at the $\simeq 0.1\pi$ level.
![Color contours (see color bar) of precession angle in units of $\pi$ as a function of radius and time. Upper panel is [`BP-m`]{}; lower panel is [`BP-h`]{}.[]{data-label="fig:phi_prec"}](fig3a.ps "fig:"){width="60.00000%"}\
![Color contours (see color bar) of precession angle in units of $\pi$ as a function of radius and time. Upper panel is [`BP-m`]{}; lower panel is [`BP-h`]{}.[]{data-label="fig:phi_prec"}](fig3b.ps "fig:"){width="60.00000%"}\
Over much of the region where the disk departs from solid-body precession, the slope of the contours of fixed precession angle is very nearly constant at $\simeq 1.5$ length units per fiducial orbital period. Because the disk is close to precessing at a single rate, this near-constant slope translates to a near-constant twist rate: $\partial \phi/\partial r \simeq 0.14$ radians per radial length unit.
Local warping {#sec:warp}
-------------
The degree of local warp can be quantified in terms of $$\hat\psi \equiv \frac{|d{\bf l}/d\ln r|}{H/r},$$ where ${\bf l}(r) \equiv {\bf L}(r)/|{\bf L}(r)|$ is the direction of the angular momentum ${\bf L}$ at radius $r$. When we compute $\hat\psi$, we use the actual value of $H$ at that location and time. As shown initially in [@NP00] and discussed at greater length in [@SKH13], the magnitude of this quantity relative to unity gives a good indication of the degree of nonlinearity of the warp. That is, the radial contrast in pressure across a distance $\sim r$ becomes order unity when $\hat\psi \sim 1$ so that the speed of the corresponding radial flow becomes transonic.
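Numerically, $\hat\psi$ can be sketched as below. The tilt profile and the value $H/r = 0.06$ are illustrative assumptions, not simulation values; the test case is a disk whose tilt grows linearly in $\ln r$, for which $|d{\bf l}/d\ln r|$ is constant by construction.

```python
import numpy as np

r = np.linspace(4.0, 28.0, 200)
beta = 0.05 * np.log(r / r[0])            # synthetic tilt, linear in ln r
lx, ly, lz = np.sin(beta), np.zeros_like(r), np.cos(beta)

lnr = np.log(r)
dl = np.array([np.gradient(c, lnr) for c in (lx, ly, lz)])   # d l / d ln r
h_over_r = 0.06                           # assumed aspect ratio H/r
psi_hat = np.sqrt((dl ** 2).sum(axis=0)) / h_over_r

# For an untwisted tilt, |dl/dln r| = |dbeta/dln r| = 0.05, so
# psi_hat ~ 0.05/0.06 everywhere: a linear warp by the criterion above.
```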
[@SKH13] found a further significance for $\hat\psi \gtrsim 1$: the rate at which warps decay as a result of the angular momentum mixing associated with these radial flows increases sharply when $\hat\psi$ becomes greater than unity. That finding also applies to these simulations. Despite the strong radial-dependence of the precession frequency, $d{\bf l}/d\ln r$ never exceeds $\simeq 0.6$ in either simulation; test-particle precession would have made this figure $\simeq 50$ between $r=5$ and $r=10$ by the end of the simulation.
Figure \[fig:psihat\] shows how the warp parameter varies as a function of radius and time in both simulations. One way to view this pair of figures is to focus on behavior as a function of time at a fixed radius. From this perspective, we see that $\hat\psi$ oscillates between quite small and $\simeq 2.5$ on timescales of order a fiducial orbit. In other words, the warp induced by the radially-varying external torque appears to exhibit a “stick-slip" behavior: the differential torquing builds the warp until $\hat\psi$ reaches this maximum value, and then the strong radial flows associated with such a nonlinear warp rapidly erase it. The warp grows larger at the outside of the disk, where the surface density is small. It is also noteworthy that the “stick-slip" oscillation is considerably more sharply defined in the HD case than in the MHD case; this is not surprising in view of the turbulence that is the hallmark of the latter, but nonexistent in the former.
The locations of strong warping propagate coherently outward over time. Although the correspondence is not perfect, the trajectory of the first and strongest pulse in both simulations is close to what would be expected for a bending wave. The time-width of this pulse defines a characteristic frequency, $\omega_* \sim 2$ radians per fiducial period. Bending waves with frequencies more than a few times greater than the local precession frequency travel at half the local isothermal sound speed; bending waves with lower frequencies travel more slowly, becoming non-propagating when their frequency drops below the local precession frequency [@Lubow02]. Because the precession frequency reaches $\omega_*$ only for $r \lesssim 7$ and decreases rapidly outward ($\propto r^{-3}$), the asymptotic wave speed applies to most of the mode content for these pulses for all radii $\gtrsim 7$. We plot tracks defined by this speed in Figure \[fig:psihat\], where it appears to provide a reasonable approximation to the propagation of the first pulse in both [`BP-m`]{} and [`BP-h`]{}. That the first pulse in [`BP-h`]{} spreads slowly with radius indicates a spread in propagation speeds, suggesting the presence of wave components with frequencies ranging down from $\omega_*$ to $\gtrsim \omega(r)$.
In [`BP-h`]{}, but not in [`BP-m`]{}, there is also a rough correlation between the trajectory of that first pulse and the establishment of approximately solid-body precession. A clue to the origin of this contrasting behavior may be found in the much more irregular variation of warp magnitude along the trajectory of the pulse in [`BP-m`]{} than in [`BP-h`]{}. We suggest that the turbulence permeating the disk in [`BP-m`]{} disrupts smooth propagation of such a wave: the ratio $\left(\langle v_{Ar}v_{A\phi}\rangle/c_s^2\right)^{1/2} \simeq 0.2$–0.3 (the average is over spherical shells) immediately before the torques begin (here $v_{A(r,\phi)} \equiv B_{(r,\phi)}/\sqrt{4\pi\rho}$). Laminar magnetic field would probably be less effective in interfering with a bending wave because the magnetic tension on such long length scales (vertical wavelength $\simeq 2H$) is relatively weak; for example, the growth rate of an MRI mode with the vertical wavelength of the bending wave is only $\sim \Omega/7$. That solid-body precession is never quite achieved in [`BP-m`]{} will prove crucial in our analysis of the contrasting alignment behavior shown by these two simulations.
Later $\phat$ pulses, however, propagate substantially more slowly and decelerate outward. In fact, their speeds decrease steadily from each late pulse to the next. These observations suggest that these pulses are not driven by bending wave dynamics. As we have just seen, the speeds of bending waves are controlled by the isothermal sound speed and the relationship between their frequency and the local precession frequency. Neither the sound speed nor the local precession frequency is a function of time, while the time-dependence of the pulse widths suggests that $\omega_*$ increases slowly. Thus, the speeds of bending waves should vary little with time or possibly become even closer to the asymptotic value of half the isothermal sound speed, whereas the speeds of these pulses become progressively slower and slower at later times. Instead, the later pulses appear to be better described by differential precession twisting the disk from flat to a critical value of $\hat\psi$. In this essentially kinematic picture, the disk begins in a state in which it is locally flat (more precisely, it is flat near radius $r$ at time $t_0(r)$). The differential torques then build a warp without (at first) any coupling between adjacent rings of gas. Once the warp grows to the point at which $\hat\psi = \hat\psi_{\rm crit}$, neighboring rings couple through radial flows and, after about one local orbit, that region of the disk is once again flat. In this picture, the radius $r_{\rm crit}$ at which $\hat\psi = \hat\psi_{\rm crit}$ moves outward in time according to $$\label{eqn:precessray}
r_{\rm crit} = r_M\left\{\frac{6\pi}{H/r}\frac{\omega(r_M)}{\Omega(r_M)}
\frac{\sin\beta}{\hat\psi_{\rm crit}} \, \left[t - t_0(r)\right]\right\}^{1/3},$$ where $\beta$ is the angle between the orbital axis and the central mass’s spin-axis, and time is measured in fiducial periods (orbital periods at $r_M$). The dotted curve in Figure \[fig:psihat\] assumes $\hat\psi_{\rm crit} \simeq 2.5$, consistent with the most common value of $\hat\psi$ along these ridgelines and sets $t_0(r)$ to one local orbit after the peak warp induced by the bending wave passes that radius. The delay of one orbit is a rough approximation to the time required for warp relaxation from a $\phat$ of that magnitude [@SKH13]. In both cases, but especially for [`BP-h`]{}, this model does a fairly good job of reproducing the track followed by the second major pulse in $\phat$. It appears, therefore, that although the twist induced by the differential torques is initially propagated outward by a bending wave, subsequent twists—which also have considerably smaller amplitude—propagate purely kinematically.
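A sketch evaluating this kinematic track is given below. Apart from $r_M = 10$, $\hat\psi_{\rm crit} \simeq 2.5$, and the $t^{1/3}$ scaling, all parameter values are illustrative assumptions, and $t_0(r)$ is simplified to a constant rather than the radius-dependent delay used for the dotted curve in Figure \[fig:psihat\].

```python
import numpy as np

r_M = 10.0   # pressure maximum of the torus (from text)

def r_crit(t, t0=0.0, h_over_r=0.06, om_ratio=2.0e-3,
           beta=12.0 * np.pi / 180.0, psi_crit=2.5):
    """Kinematic pulse track r_crit(t); t in fiducial orbital periods.
    All default parameter values are illustrative, not simulation values."""
    coeff = (6.0 * np.pi / h_over_r) * om_ratio * np.sin(beta) / psi_crit
    return r_M * (coeff * (t - t0)) ** (1.0 / 3.0)

t = np.linspace(1.0, 15.0, 50)
rc = r_crit(t)   # critical radius marches outward as (t - t0)**(1/3)
```

The $t^{1/3}$ dependence follows directly from the $r^{-3}$ scaling of the test-particle precession frequency: the time to twist a ring to $\hat\psi_{\rm crit}$ grows as the cube of its radius.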
![Color contours (see color bar) of $\hat\psi$. Upper panel is [`BP-m`]{}; lower panel is [`BP-h`]{}. The color scale goes to black for $\hat\psi \leq 0.8$ in order to emphasize the boundary between linear and nonlinear warps. In each panel, there are two superposed white curves. The solid one represents the trajectory of a bending wave (with speed one half the mass-weighted isothermal sound speed: [@Lubow02]), the dotted one shows the trajectory implied by eqn. \[eqn:precessray\] with $t_0(r)$ as described in the text. []{data-label="fig:psihat"}](fig4a.ps "fig:"){width="60.00000%"}\
![Color contours (see color bar) of $\hat\psi$. Upper panel is [`BP-m`]{}; lower panel is [`BP-h`]{}. The color scale goes to black for $\hat\psi \leq 0.8$ in order to emphasize the boundary between linear and nonlinear warps. In each panel, there are two superposed white curves. The solid one represents the trajectory of a bending wave (with speed one half the mass-weighted isothermal sound speed: [@Lubow02]), the dotted one shows the trajectory implied by eqn. \[eqn:precessray\] with $t_0(r)$ as described in the text. []{data-label="fig:psihat"}](fig4b.ps "fig:"){width="60.00000%"}\
Alignment
---------
Figure \[fig:align\] shows the alignment angle $\beta$ (in units of $\pi$) in the two simulations as a function of radius and time. Alignment is the respect in which MHD makes the greatest difference. For the first several orbits, [`BP-h`]{} aligns significantly more quickly than [`BP-m`]{} at large radii, while at small radii the opposite is true. At later times, however, alignment virtually stops in [`BP-h`]{}, while continuing steadily in [`BP-m`]{}. As a result, whereas half the initial misalignment at $r=7$ has been eliminated after $\simeq 5$ orbits in [`BP-m`]{}, that degree of alignment is not achieved even by the end of 15 orbits in [`BP-h`]{}; [`BP-h`]{} diminishes its misalignment by $\simeq 40\%$ at $6 < r < 9$ by $\simeq 4$ orbits, and then ceases to change alignment thereafter. By contrast, the entire range of radii interior to $r \simeq 15$ in [`BP-m`]{} has diminished its misalignment by at least half by the end of its 15 orbits, and the misalignment has been sanded down to $< 0.02\pi$ for all $r < 11$.
![Color contours (see color bar) of the inclination angle (in units of $\pi$) as a function of radius and time. Upper panel is [`BP-m`]{}; lower panel is [`BP-h`]{}.[]{data-label="fig:align"}](fig5a.ps "fig:"){width="60.00000%"}\
![Color contours (see color bar) of the inclination angle (in units of $\pi$) as a function of radius and time. Upper panel is [`BP-m`]{}; lower panel is [`BP-h`]{}.[]{data-label="fig:align"}](fig5b.ps "fig:"){width="60.00000%"}\
Comparison of Figures \[fig:align\] and \[fig:psihat\] also shows a close correspondence between the regions where the inclination angle has been reduced to less than $\simeq H/r$ and regions where the warp is always in the linear regime. This is, of course, a natural consequence of the fact that when the inclination is $< H/r$, there cannot be radial contrasts in inclination or orientation any larger than that. What is more noteworthy about this region of permanently linear warp is that it is also the region where the inclination in [`BP-m`]{} continues to decline, whereas no such improvement in alignment occurs in [`BP-h`]{}. We will return to this point later.
MHD vs. HD {#sec:mhdvshd}
----------
As we have already pointed out, at least through the initial stages of alignment, MHD effects appear to be secondary to hydrodynamic effects, although the sense of that secondary contribution is to promote alignment. The data shown in Figure \[fig:forces\] illustrate explicitly the relative importance of magnetic and pressure forces. The disk curves up and down in these coordinates because at this radius and time it has already moved out of the equatorial plane of the grid. Compared at the same location during the time when the disk is aligning most rapidly, the radial gas pressure gradient is generally $\sim 10$–$100$ times larger than the radial magnetic pressure gradient, while the radial magnetic pressure gradient is $\sim 3$–$10$ times larger than the total magnetic tension force. At later times, the magnetic forces rise relative to the fluid forces, but only by a factor of 2–3. Thus, in terms of instantaneous forces, magnetic effects are always considerably weaker than hydrodynamic forces.
![Color contours on a logarithmic scale of the magnitudes of three force densities: the radial component of the gas pressure gradient (top panel), the radial component of the magnetic pressure gradient (middle panel), and the total magnetic tension (bottom panel). All are measured as a function of $\phi$ and $\theta$ on the $r=10$ shell at 4 orbits after the torque was turned on in [`BP-m`]{}.[]{data-label="fig:forces"}](fig6a.ps "fig:"){width="35.00000%"}\
![Color contours on a logarithmic scale of the magnitudes of three force densities: the radial component of the gas pressure gradient (top panel), the radial component of the magnetic pressure gradient (middle panel), and the total magnetic tension (bottom panel). All are measured as a function of $\phi$ and $\theta$ on the $r=10$ shell at 4 orbits after the torque was turned on in [`BP-m`]{}.[]{data-label="fig:forces"}](fig6b.ps "fig:"){width="35.00000%"}\
![Color contours on a logarithmic scale of the magnitudes of three force densities: the radial component of the gas pressure gradient (top panel), the radial component of the magnetic pressure gradient (middle panel), and the total magnetic tension (bottom panel). All are measured as a function of $\phi$ and $\theta$ on the $r=10$ shell at 4 orbits after the torque was turned on in [`BP-m`]{}.[]{data-label="fig:forces"}](fig6c.ps "fig:"){width="35.00000%"}\
The relative weakness of magnetic forces is enhanced by the fact that after the torque begins, the total magnetic energy in the disk declines sharply, falling by about a factor of 2 over the first 5 orbits of torque. From then until the end of the simulation, the total magnetic energy varies hardly at all. It is possible that some of the field loss is a numerical artifact, caused by a combination of newly-created flows oblique to the coordinate grid and an artificially large rate of magnetic reconnection as the radial flows driven by the disk warps push regions of oppositely-directed field toward one another. However, we believe that it is not entirely artificial. We have several reasons to think so. The first is that the degree of obliquity is never terribly large: an inclination of $12^\circ$ is not very great, and the numerical diffusion experiments of [@SKH13] found that even with a grid a factor of 4 coarser than ours, only $\simeq 0.5\%$ of the angular momentum was lost after 10 orbits of integration at a slightly greater inclination ($15^\circ$). The second is that before carrying out [`BP-m`]{}, we ran the same problem on a grid a factor of 2 coarser in each dimension. The magnetic energy loss after the initiation of the torque in that run was larger, but not by much, a factor of 3 decrease rather than a factor of 2. The third reason is that the radial flows, which sometimes lead to shocks, do contain regions where reconnection is driven by the fluid motions; in these cases, the local rate of reconnection may be resolution-dependent, but its end-result is not. Thus, some of the field loss we see is likely due to lack of resolution, but it probably does not account for the entire effect. It is also possible that the development of the magneto-rotational instability is altered in the presence of a warp.
Thus, although our simulation includes a full treatment of MHD turbulence, it turns out to have a relatively small effect on the magnitude of the radial flows primarily responsible for transporting misaligned angular momentum through the disk. Even in a magnetized disk, the situation closely resembles that explored in [@SKH13], which studied the relaxation of disk warps in a purely hydrodynamic context. Just as was found in that paper, we find that when $\hat\psi > 1$, which is the generic situation when there is any substantial inclination, the magnitudes of these radial flows are primarily controlled by the fluid dynamics of order unity pressure contrasts in an orbital setting, i.e., quasi-free expansion limited by orbital mechanics. Despite the dominance of hydrodynamic effects over magnetohydrodynamic effects in most aspects of warped disk evolution, we have also seen that MHD appears both to accelerate alignment and to continue it longer. It is noteworthy that the purely hydrodynamic disk ceases alignment progress when its inclination reaches $\simeq 6^\circ$, here one scale-height (Fig. \[fig:align\]b). At that point (see also Fig. \[fig:psihat\]b), it becomes almost impossible for any warps to reach nonlinear amplitude. Consequently, the Reynolds stress responsible for radial mixing of unaligned angular momentum drops rapidly because it is a strongly increasing function of $\hat\psi$ when $\hat\psi \simeq 1$ [@SKH13]. On the other hand, when MHD effects are present, the turbulence they cause creates much structure on short lengthscales in the velocity field, enhancing the angular momentum mixing rate. This mixing rate is considerably faster than the inflow rate associated with Reynolds stresses in a flat disk because the scale of the gradients is much smaller: $\sim 0.1r$ rather than $\sim r$.
We close this section with an examination of an assumption frequently made in other studies of warped disks: that the vertical shear of radial motions induces a stress that can be phenomenologically modeled as an isotropic “$\alpha$ viscosity" [@PP83]. Such a viscosity would create a viscous stress proportional to the shear (but, of course, with opposite sign) whose magnitude is $\sim \alpha p$ when $\partial v_r/\partial z \sim \Omega$. Some support was given to this hypothesis by [@Tork2000], who measured the decay of epicyclic motions in a numerical simulation of a vertically-stratified shearing box with MHD turbulence, although their conclusions were somewhat clouded by the limitations of their approximations and by their finding that larger amplitude motions were primarily damped by a different mechanism, the excitation of inertial waves.
Here we test a form of this hypothesis: that the Maxwell stress of MHD turbulence (whose $r$-$\phi$ component is responsible for accretion) acts in the same manner independent of the orientation of the shear. In this context, the relevant component of the Maxwell stress is $r$-$\theta$. We therefore compute the ratio $$\alpha_* = \frac{B_r B_\theta \Omega}{4\pi p \, \partial v_r/\partial z}$$ on a sample spherical shell when alignment is beginning at that location.[^1] If the isotropic $\alpha$ viscosity hypothesis were true, $\alpha_*$ would be consistently positive and have magnitude $\sim 0.01$–$0.1$, similar to the ratio of the time-averaged and vertically-integrated $r$-$\phi$ component of the magnetic stress to the similarly averaged and integrated pressure. As Figure \[fig:alpha\] demonstrates quite clearly, neither of these expectations is confirmed. The quantity $\alpha_*$ is equally likely to be positive or negative, and its absolute magnitude in the disk body—including where the shear and Reynolds stress are greatest—is generally $\sim 10^{-5}$–$10^{-4}$. The mass-weighted mean is $\langle \alpha_* \rangle \simeq 3 \times 10^{-5}$. When averaged over snapshots spanning 0.3–1 fiducial orbit (which is also the local orbital period for the data shown in Fig. \[fig:alpha\]), the magnitude of the shear diminishes only somewhat, because the radial pressure gradients induced by the warp maintain some overall coherence. On the other hand, time-averaging over even as brief a time as half a fiducial orbit (five snapshots) reduces the magnitude of the $r$-$z$ magnetic stress by almost an order of magnitude. To calibrate the measured magnitude of $\alpha_*$ quantitatively, we note that the rms magnitude of the $r$-$z$ shear is $\simeq 2.6\Omega$ when weighted by mass and $\simeq 6.9\Omega$ when weighted by volume.
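The pointwise $\alpha_*$ diagnostic can be sketched on synthetic data as below. The field and shear amplitudes are illustrative (only the mass-weighted rms shear $\simeq 2.6\Omega$ is taken from the text), and the real quantities live on a spherical shell of the simulation rather than an abstract sample.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
Omega, p = 1.0, 1.0                       # orbital frequency and pressure (code units)
Br = rng.normal(0.0, 0.03, n)             # radial field fluctuations (illustrative)
Bt = rng.normal(0.0, 0.01, n)             # polar field fluctuations (illustrative)
dvr_dz = rng.normal(0.0, 2.6 * Omega, n)  # r-z shear, rms ~ 2.6 Omega (from text)

alpha_star = Br * Bt * Omega / (4.0 * np.pi * p * dvr_dz)

# With field and shear mutually uncorrelated, alpha_* has no preferred sign,
# mirroring the lack of sign correlation reported above.
frac_positive = np.mean(alpha_star > 0)
```

The point of this toy version is the sign statistics: an isotropic viscosity would force the numerator to anti-correlate with the shear in the denominator, making $\alpha_*$ consistently positive, which uncorrelated fluctuations do not do.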
A further sense of scale may be gleaned from the fact that both the $r$-$r$ and the $r$-$z$ components of the Reynolds stress are frequently $\sim (1$–$10)p$ (and the $r$-$z$ Reynolds stress has no more correlation with the corresponding shear than the same component of the Maxwell stress). In other words, the radial flow speeds are generally transonic, the lengthscale of vertical variation is several times smaller than the pressure scale height (i.e., turbulence is important), and the overall dynamics are dominated by pressure gradients and gravity, with only small contributions from any other sources of stress.
As a final comment, it is worth noting that in several respects this result can be understood on the basis of qualitative arguments. First, it is not surprising that the $r$-$z$ component of Maxwell stress should be much smaller in magnitude than the $r$-$\phi$ component when the magnetic fields are associated with MRI-driven turbulence. Simulations of MRI-driven turbulence in flat disks have consistently found that $|B_\phi| \sim 3|B_r| \sim 10|B_z|$ [@hgb; @shgb96; @HGK11]; even without allowance for the degree of correlation between these components, one would therefore expect the $r$-$z$ component to be an order of magnitude smaller than the $r$-$\phi$ component. Because the consistency of orbital shear imposes a strong correlation between $B_r$ and $B_\phi$, yet $r$-$z$ shear has no consistent value, one might also expect that the degree of correlation in $r$-$z$ would be much weaker than in $r$-$\phi$. The lack of sign correlation can also be understood intuitively. Magnetic stresses in conducting fluids result from [*strain*]{} in the fluid, not shear. If the flow is oscillatory, strain is $\pi/2$ different in phase from shear, and such a phase offset would entirely eliminate any sign correlation. Fluctuations due to turbulence further diminish any direct tie between magnetic stress and fluid shear.
![Color contours (see color bar) of $\log_{10}(|\alpha_*|)$ as defined in text, measured on the same spherical shell and at the same time in [`BP-m`]{} as in Fig. \[fig:forces\]. In order to display these quantities on a logarithmic scale, we separate the data into two cases: where $\alpha_* > 0$ (upper panel) and where $\alpha_* < 0$ (lower panel). Black indicates a region where $\alpha_*$ has the opposite sign from the colored data in that panel.[]{data-label="fig:alpha"}](fig7a.ps "fig:"){width="60.00000%"}\
![Color contours (see color bar) of $\log_{10}(|\alpha_*|)$ as defined in text, measured on the same spherical shell and at the same time in [`BP-m`]{} as in Fig. \[fig:forces\]. In order to display these quantities on a logarithmic scale, we separate the data into two cases: where $\alpha_* > 0$ (upper panel) and where $\alpha_* < 0$ (lower panel). Black indicates a region where $\alpha_*$ has the opposite sign from the colored data in that panel.[]{data-label="fig:alpha"}](fig7b.ps "fig:"){width="60.00000%"}\
Analysis
========
The alignment rate {#sec:align}
------------------
To understand these results, it is helpful to frame them in terms of the global angular momentum budget. Alignment is often described as due to “dissipation" associated with angular momentum diffusion [@PP83]. However, this description is a bit imprecise. Changing the direction of a disk region’s angular momentum first and foremost requires a torque. The mechanism producing this torque may or may not be dissipative, and any dissipation involved may or may not be associated with a process described by a classical diffusion equation. What is truly essential is that new angular momentum introduced into the system must be brought to a location where it can cancel the misaligned angular momentum. More specifically, there are only three ways the angular momentum of a given disk region can change: by a divergence of Reynolds stress, a divergence of Maxwell stress, or an external torque. However, the Lense-Thirring torque by its very nature cannot change $|{\mathbf L_\perp}|$ at the location where the torque is exerted because it is always exactly perpendicular to ${\mathbf L_\perp}$ at that location. It follows that to align a ring at radius $r$, there must be a way to bring it angular momentum from a region with a precession phase [*different*]{} from that of radius $r$, where the Lense-Thirring torque has a component opposite in direction to ${\mathbf L_\perp}(r)$. It is possible for diffusive mixing to accomplish this, but it can also be accomplished by other means, and any mixing process must satisfy certain specific conditions. The region that is mixed must contain a large enough range of precession phase that some portion of it has a torque with a direction that can cancel ${\mathbf L_\perp}(r)$, but not so large that mixing leads to complete cancellation in the net torque. In addition, as we have already seen, magnetic forces are in general quite small compared to hydrodynamic forces, so the Maxwell stress contributes little to the alignment.
Consequently, alignment must be due to divergences in the Reynolds stress. Moreover, because the interesting gradients are all in the radial direction, it makes sense to think only about radial angular momentum flows.
More formally, we define the radial angular momentum flux as $$S_{r,(x,y,\perp)} \equiv r^2 \int \, d\theta \sin\theta \, \int \, d\phi \, \rho v_r \ell_{(x,y,\perp)},$$ where $\ell_{x,y,\perp}$ is the local specific angular momentum in the $x$, $y$, or perpendicular (i.e., combining $x$ and $y$) direction. The magnitudes of these fluxes are shown in Figures \[fig:srx\] and \[fig:sry\]. The global shape of the radius and time dependence of the fluxes is similar in the two simulations, and in units of shell-integrated $\rho r v_{\rm orb}^2$, the magnitudes of the fluxes in both simulations are similar to those seen in [@SKH13] when $\phat \simeq 1$. In that previous paper, fluxes of this magnitude led to approximate disk flattening on orbital timescales; much the same result is seen in both of these new simulations.
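The shell integral defining $S_r$ can be sketched by direct quadrature on a $(\theta,\phi)$ mesh. A uniform integrand makes the result checkable against the closed form $S_r = 4\pi r^2 \rho v_r \ell$ over the full sphere; the grid sizes and field values below are illustrative.

```python
import numpy as np

r = 10.0
n_t, n_p = 384, 1024                      # polar, azimuthal zones (illustrative)
dth, dph = np.pi / n_t, 2.0 * np.pi / n_p
theta = (np.arange(n_t) + 0.5) * dth      # cell-centered polar angles
rho_vr_ell = 0.5                          # uniform rho * v_r * ell (test value)

# Midpoint-rule discretization of r^2 * int sin(theta) dtheta dphi (rho v_r ell).
integrand = np.full((n_t, n_p), rho_vr_ell) * np.sin(theta)[:, None]
S_r = r ** 2 * integrand.sum() * dth * dph
```

In practice the integrand varies over the shell, but the same quadrature applies cell by cell; only the choice of $\ell_x$, $\ell_y$, or $\ell_\perp$ changes between the three flux components.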
However, there are also significant contrasts. Most importantly, in the hydrodynamic case, but not in the MHD simulation, there is a sequence of three large-amplitude flux pulses of alternating sign. The first two are due to a transient in which the hydrodynamic disk relaxes from its initial state, which is not exactly in equilibrium; their effects nearly cancel. The third, although smaller in magnitude, is in the long run more significant. It follows the track already seen in Figure \[fig:psihat\] that we interpreted as a bending wave. This track also corresponds to the flattening of the disk seen in Figure \[fig:phi\_prec\] and coincides with the last stage of alignment seen in Figure \[fig:align\]. In other words, it appears that this bending wave pulse effectively flattens the hydrodynamic disk, so that it precesses very nearly as a solid body thereafter. Once the bending wave has passed, [`BP-h`]{} maintains a generally higher level of outward angular momentum flux than found in [`BP-m`]{} because it remains misaligned to a greater degree at small radius, where the torques operate (see also Figs. \[fig:Gx\] and \[fig:Gy\]). By contrast, in [`BP-m`]{}, although there is an initial bending wave, it is partially disrupted and is much less effective at flattening the disk, as demonstrated also by the generally higher levels of $\phat$ seen in the MHD panel of Figure \[fig:psihat\] between the tracks of the bending wave and the kinematic precession pulse.
![Color contours (see color bar) of the radial flux of the $x$ component of angular momentum. Upper panel is [`BP-m`]{}; lower panel is [`BP-h`]{}.[]{data-label="fig:srx"}](fig8a.ps "fig:"){width="60.00000%"}\
![Color contours (see color bar) of the radial flux of the $x$ component of angular momentum. Upper panel is [`BP-m`]{}; lower panel is [`BP-h`]{}.[]{data-label="fig:srx"}](fig8b.ps "fig:"){width="60.00000%"}\
![Color contours (see color bar) of the radial flux of the $y$ component of angular momentum. Upper panel is [`BP-m`]{}; lower panel is [`BP-h`]{}.[]{data-label="fig:sry"}](fig9a.ps "fig:"){width="60.00000%"}\
![Color contours (see color bar) of the radial flux of the $y$ component of angular momentum. Upper panel is [`BP-m`]{}; lower panel is [`BP-h`]{}.[]{data-label="fig:sry"}](fig9b.ps "fig:"){width="60.00000%"}\
To gain a sense of scale, it is also useful to look at a normalized version of the angular momentum flux, $$\hat S_{r\perp} \equiv \frac{|S_{r\perp}|}{c_s\, \partial L_\perp/\partial r}.$$ This quantity captures the efficiency with which local fluid is able to pass along its angular momentum. As shown in Figure \[fig:srpnorm\], this quantity is typically a few tenths; that is, if the mean flow rate were exactly the sound speed, the flux would carry $\sim 15$–30% of the local angular momentum. Although the absolute level of the fluxes in the MHD case was always somewhat smaller than in the HD case, $\hat S_{r\perp}$ is always larger in MHD. In other words, the MHD case puts more of its available misaligned angular momentum into motion. The contrast is especially noticeable in locations where the swing into alignment is most rapidly taking place. On this basis, it might be reasonable to identify the magnitude of $\hat S_{r\perp}$ found here with $h/\Delta r$, where $\Delta r$ is the radial scale of the warp. We caution, however, that, as shown by [@SKH13], the actual functional relationship between $S_{r\perp}$ and the warp magnitude $\hat\psi$ is nonlinear, exhibits time delays, and also depends on the global character of the warp. Consistent with those results, $\hat S_{r\perp}$ varies by a factor $\sim 2$, both as a function of time and as a function of radius. Because $\hat S_{r\perp}$ is proportional to the ratio of angular momentum flux to the radial gradient of angular momentum direction, these fluctuations support the conclusion of our previous paper that a simple diffusion model does not fully describe the behavior of this system.
![Color contours (see color bar) of $\hat S_{r\perp}$, the normalized radial flux of the perpendicular component of angular momentum. Upper panel is [`BP-m`]{}; lower panel is [`BP-h`]{}.[]{data-label="fig:srpnorm"}](fig10a.ps "fig:"){width="60.00000%"}\
![Color contours (see color bar) of $\hat S_{r\perp}$, the normalized radial flux of the perpendicular component of angular momentum. Upper panel is [`BP-m`]{}; lower panel is [`BP-h`]{}.[]{data-label="fig:srpnorm"}](fig10b.ps "fig:"){width="60.00000%"}\
The next step in our inquiry is to examine the effect of divergence in the angular momentum flux. Comparing Figures \[fig:srx\] and \[fig:sry\] with their torque counterparts (Figs. \[fig:Gx\] and \[fig:Gy\], respectively), it is apparent that the fluxes are largest at radii considerably greater than where the torques are largest. In other words, the angular momentum delivered at small radii by the torques is collected and swept outward. Beyond $r \simeq 14$–17, where the fluxes peak, the transported angular momentum is deposited, a bit like silt dropping out of a slowing river. The distribution of the net rate of change in angular momentum can be seen in Figures \[fig:netx\] and \[fig:nety\]. Both the initial bending wave and the later, slower pulses seen in Figure \[fig:psihat\] can be discerned in the HD panel of Figure \[fig:netx\]. These pulse trains are much less apparent in the MHD case. One fact uniting the HD and MHD plots of Figures \[fig:netx\] and \[fig:nety\], however, is that in both cases the net rate of change in angular momentum in regions where the rate of angular momentum delivery by torque is high is in fact quite small. In other words, where the torque is delivered, increased outward angular momentum flux removes the great majority of it and transports it outward. This is why the local precession rate in the inner disk is substantially smaller than the test-particle model would predict and also why, especially in [`BP-m`]{}, much of that angular momentum is used for alignment rather than precession.
![Color contours (see color bar) of the net rate of change per unit time in the $x$ component of angular momentum in each radial shell. Upper panel is [`BP-m`]{}; lower panel is [`BP-h`]{}.[]{data-label="fig:netx"}](fig11a.ps "fig:"){width="60.00000%"}\
![Color contours (see color bar) of the net rate of change per unit time in the $x$ component of angular momentum in each radial shell. Upper panel is [`BP-m`]{}; lower panel is [`BP-h`]{}.[]{data-label="fig:netx"}](fig11b.ps "fig:"){width="60.00000%"}\
![Color contours (see color bar) of the net rate of change per unit time in the $y$ component of angular momentum in each radial shell. Upper panel is [`BP-m`]{}; lower panel is [`BP-h`]{}.[]{data-label="fig:nety"}](fig12a.ps "fig:"){width="60.00000%"}\
![Color contours (see color bar) of the net rate of change per unit time in the $y$ component of angular momentum in each radial shell. Upper panel is [`BP-m`]{}; lower panel is [`BP-h`]{}.[]{data-label="fig:nety"}](fig12b.ps "fig:"){width="60.00000%"}\
The effects shown in Figures \[fig:netx\] and \[fig:nety\] can be summarized by the fact that $$\label{eqn:alignmentrate}
\frac{\partial^2 L_\perp}{\partial r\partial t} =
-\left(\frac{\partial L_\perp}{\partial r}\right)^{-1}
\left(\frac{\partial L_x}{\partial r}\frac{\partial S_{rx}}{\partial r} +
\frac{\partial L_y}{\partial r}\frac{\partial S_{ry}}{\partial r}\right).$$ If the ratio $(\partial S_{rx}/\partial r)/(\partial S_{ry}/\partial r)$ were equal to $G_x/G_y$, the perpendicular angular momentum would not change at all because that is the condition for precession. However, the ratio of the angular momentum deposition rates in the $x$ and $y$ directions does not necessarily match the ratio required for precession at that location. To accomplish alignment, all that is required is for $$\frac{\partial S_{ry}/\partial r}{\partial S_{rx}/\partial r} <
-\frac{\partial L_x/\partial r}{\partial L_y/\partial r},$$ where the RHS is the exact precession ratio. Alignment proceeds most rapidly where this inequality is most strongly satisfied.
Another way of putting the same point is to observe that alignment is achieved most efficiently when the vector $$\frac{\partial^2 {\bf L_\perp}}{\partial r\partial t} =
-\frac{\partial S_{rx}}{\partial r}\hat x
- \frac{\partial S_{ry}}{\partial r}\hat y$$ is exactly anti-parallel to $\partial {\bf L_\perp}/\partial r$. In principle the angle $\gamma$ between $-\partial {\bf L_\perp}/\partial r$ and the rate at which it is changed might be anywhere from 0 to $\pi$. The angle optimally efficient for alignment is $0$; the angle that produces pure precession is $\pi/2$. In both [`BP-h`]{} and [`BP-m`]{}, we find that during times of alignment $\langle\cos\gamma\rangle \simeq 0.5$ although there are sizable fluctuations around this value at specific times and locations. On the other hand, $\langle\cos\gamma\rangle$ decreases over time in [`BP-h`]{} from approximately this value during the first $\simeq 6$ orbits to close to zero during the remainder of the simulation.
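The alignment-efficiency angle $\gamma$ can be computed per shell directly from these radial derivatives. The following sketch assumes the four 1-D radial profiles are already available; the function and argument names are hypothetical:

```python
import numpy as np

def cos_gamma(dLx_dr, dLy_dr, dSrx_dr, dSry_dr):
    """Cosine of the angle gamma between -dL_perp/dr and its rate of
    change, d^2 L_perp/(dr dt) = -(dS_rx/dr) x_hat - (dS_ry/dr) y_hat.

    cos(gamma) = 1 -> maximally efficient alignment;
    cos(gamma) = 0 -> pure precession.
    All inputs are per-shell radial profiles (1-D arrays).
    """
    # (-dL/dr) . (-dS/dr) = (dL/dr) . (dS/dr)
    dot = dLx_dr * dSrx_dr + dLy_dr * dSry_dr
    return dot / (np.hypot(dLx_dr, dLy_dr) * np.hypot(dSrx_dr, dSry_dr))
```

Averaging this profile over the shells participating in alignment gives the $\langle\cos\gamma\rangle$ diagnostic quoted in the text.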
In large part, the angle $\gamma$ is determined by the relative precession angles of the region where the torque occurs, which supplies the angular momentum for the outward flux, and the region where the angular momentum is deposited. For maximal alignment rate, the direction of the deposited angular momentum should be the direction of the torque exerted when the precession angle is $\pi/2$ in advance of the local precession angle. The value of $\langle\cos\gamma\rangle$ seen in [`BP-m`]{} indicates a precession angle difference closer to $\simeq \pi/6$ than $\pi/2$, but there is nonetheless sufficient offset to drive alignment. During early times in [`BP-h`]{} the situation is similar. At late times in [`BP-h`]{} however, $\langle\cos\gamma\rangle \simeq 0$ because the disk orientation is very nearly the same at all radii, and the time required to transport angular momentum from the small radii where the torques operate to larger radii is short compared to the solid-body precession period. In other words, having at least some warp in the disk is essential to alignment.
The detailed radial and time dependence of the net rate of change of misaligned angular momentum $\partial^2 L_\perp/\partial r\partial t$ in [`BP-m`]{} is shown in Figure \[fig:alignmentrate\]. Several things stand out in this plot. One is that the local rate of change of misaligned angular momentum is predominantly, but not exclusively, negative. That is, there are frequently moments when an individual ring becomes [*less*]{}, not [*more*]{} aligned, even though the long-term trend is toward alignment. Another point is that, not surprisingly, the largest part of the change in angular momentum is associated with the range of radii ($8 \lesssim r \lesssim 15$) with the greatest mass and therefore the greatest amount of misaligned angular momentum to change.
Perhaps more surprisingly, this figure is also marked by a large number of streaks indicating rapid outward motion. The white curve in the figure follows the path of an adiabatic sound wave directed radially outward. The very close correspondence between its slope in this diagram and the slopes of the streaks demonstrates clearly that these are the traces of sound waves. Although they are certainly not regular, there is a typical time interval between these waves, $\simeq 0.5$ fiducial orbits.
![Color contours (see color bar) of $\partial \ln L_\perp/\partial t$ in [`BP-m`]{}. The white curve shows the trajectory of a sound wave traveling radially outward.[]{data-label="fig:alignmentrate"}](fig13.ps "fig:"){width="60.00000%"}\
In [`BP-m`]{}, alignment is achieved beginning at small radii. Consequently, one can speak of the outward motion of an alignment front. On dimensional grounds, one might estimate the rate at which this front moves outward by the ratio of the rate at which unaligned angular momentum is given to the disk by torque to the magnitude of the local angular momentum requiring alignment. However, as the previous discussion would suggest, this estimate should be corrected by a factor $\langle \cos\gamma\rangle$. Our estimated rate of motion would then be $$\label{eqn:alignrate}
\frac{dr_f}{dt} = \langle\cos\gamma\rangle \frac{G(<r_f)}{dL_{\perp}(r_f)/dr},$$ where $r_f$ is the radius of the alignment front and $G(<r)$ is the magnitude of the torque integrated over the matter interior to $r$. At the order of magnitude level, $dr_f /dt \sim \Delta r_f \omega$, where $\Delta r_f$, the radial width of the alignment front, is $\sim r_f$ in [`BP-m`]{}. This estimate might also be further reduced by an allowance for some of the torque being deposited at radii between where it is given to the disk and the alignment front. However, if the transition region from an aligned inner disk to the inclined outer disk is reasonably narrow, this loss may not be very large. Confirmation of this guess is provided in Figure \[fig:alignadj\], where we show the track of an alignment front moving at the speed we estimate assuming $\langle \cos\gamma\rangle = 0.5$. As can be seen, it follows the contour of half-alignment quite well.
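The front-speed estimate above can be integrated numerically once $G(<r)$ and $dL_\perp/dr$ are specified. The sketch below uses a simple forward-Euler step and treats both profiles as user-supplied callables; the function names and any profiles passed in are assumptions for illustration, not the measured simulation quantities:

```python
def evolve_front(G_cum, dLperp_dr, r0, t_end, dt=1e-3, cos_gamma=0.5):
    """Integrate dr_f/dt = <cos(gamma)> * G(<r_f) / (dL_perp/dr)(r_f)
    by forward-Euler steps, returning the front radius at t_end.

    G_cum(r):     cumulative torque interior to r (callable)
    dLperp_dr(r): misaligned angular momentum per unit radius (callable)
    """
    r, t = r0, 0.0
    while t < t_end:
        r += dt * cos_gamma * G_cum(r) / dLperp_dr(r)
        t += dt
    return r
```

Passing in power-law fits to the torque and angular momentum profiles would produce a track analogous to the white curve of Figure \[fig:alignadj\].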
![Color contours (see color bar) of the inclination angle (in units of $\pi$) in [`BP-m`]{}. The white curve shows the path of the alignment front traveling at the speed indicated by Eqn. \[eqn:alignrate\].[]{data-label="fig:alignadj"}](fig14.ps "fig:"){width="60.00000%"}\
This model as stated implicitly assumes that the angular momentum given to the disk by the external torques is instantaneously mixed radially across the entire transition region. In reality, of course, the mixing speed is of order the radial flow speed, which, we have argued, is roughly the sound speed in the presence of nonlinear warps. However, as shown by [@SKH13], the Mach number of these radial motions is quite sensitive to $\hat\psi$ and the radial width of the warp, so that the sound speed is at best a rather crude estimator of the mixing speed. Nonetheless, in the conditions of [`BP-m`]{}, the sound speed is $3$–$6 \times$ larger than the alignment front speed, so that our instantaneous-delivery approximation has little effect. A slower sound speed might lead to a narrower transition region.
Alignment, stalled and completed
--------------------------------
As is readily apparent in Figure \[fig:align\], although [`BP-h`]{} diminishes its misalignment, it is never able to remove more than $\simeq 40\%$ of its tilt, whereas [`BP-m`]{} continuously eliminates the offset between its angular momentum direction and the central mass’s spin, achieving hardly any difference between the two throughout its inner radii by the end of the run. Given that even in [`BP-m`]{} magnetic forces are thoroughly dominated by pressure forces, what accounts for this contrast?
We suggest that the answer lies in a combination of two facts. First, as we have already mentioned, the HD case rapidly achieves a state in which it precesses nearly as a solid body. In the MHD case, by contrast, turbulence interferes with the ability to enforce solid-body precession. As a consequence, in [`BP-h`]{} but not [`BP-m`]{}, the direction of the planar angular momentum brought to a given radius is close to the direction of the precession torque. In the language of the preceding section, after the first $\sim 5$ orbits or so, $\cos\gamma \simeq 0$ in [`BP-h`]{}.
Second, as found by [@Sorathia12], the Reynolds stresses capable of mixing angular momentum radially are strongly increasing functions of disk warp when $\hat\psi > 1$. Below that level of warp, the radial pressure gradients are incapable of driving radial motions to speeds comparable to or greater than the sound speed; above that level, such speeds are generically attained. Comparing Figures \[fig:align\] and \[fig:psihat\], one can see that alignment drastically slows in the HD case when $\hat\psi$ drops below unity, in line with the expectation that when the tilt angle becomes $< H/r$, the warp-induced Reynolds stresses weaken. Because a purely hydrodynamic disk is always laminar, the alignment process therefore stops at this point. The MHD case differs because the magneto-rotational instability ensures ubiquitous turbulence. Even where $\hat\psi$ is too small to drive strong radial flows, MHD turbulence nonetheless continues to mix neighboring regions. It is this process that allows MHD turbulence to complete the work of alignment after Reynolds stresses reduce the misalignment angle to only of order the disk aspect ratio.
The inclination transition radius in an accreting disk
------------------------------------------------------
We have already estimated the alignment speed in terms of the torque integrated interior to some radius relative to the unaligned angular momentum at that radius. Presumably, if we had run simulation [`BP-m`]{} still longer, the alignment front would have propagated all the way out through our finite disk, and that would have been the end of further evolution. In real disks, however, the reservoir of matter with inclined angular momentum extends much farther out, and new unaligned matter is continually fed from the outside, while matter already in the disk gradually moves inward toward the central object. Because the alignment speed inevitably must diminish outward as the torques weaken, in such a disk an outwardly moving alignment front would eventually find itself moving so slowly relative to the inward flow of misaligned angular momentum that its motion relative to the central mass would be reduced to zero. Thus, in a disk with time-steady accretion, both in terms of mass inflow and orientation, and a central object with a mass very large compared to the accreted mass, the disk would bend from its initial orientation to the orientation of the central object’s angular momentum at a fixed transition radius where these two speeds cancel.
The local torque scales with the surface density and $\sin\beta$, for misalignment angle $\beta$. The local misaligned angular momentum does likewise. Because the alignment front propagation speed is proportional to the ratio between the integrated torque interior to a given radius and the misaligned local angular momentum, it is therefore $$\frac{d r_f}{dt} = \frac{2 \langle \cos\gamma \rangle a_* (GM)^2}{\sin\beta(r) c^3 r^{3/2} \Sigma(r)}
\int_0^r \, dr^\prime \sin\beta(r^\prime)\Sigma(r^\prime)/{r^\prime}^{3/2}$$ for black hole spin parameter $a_* \equiv a/M$.
In real disks, fresh misaligned angular momentum can be brought inward either by accretion of new material or by warp-induced radial flows; gravitational interaction with a binary companion or the mass of the outer disk may also contribute [@Tremaine2013]. Outward motion of the alignment front can then be brought to a halt in position (though not in mass coordinate) when the inward advection speed matches the outward progress of the front. Parameterizing the characteristic timescale of the inward advection of misaligned angular momentum by $t_{\rm in}$, we find that the time-steady position of the inclination transition can be estimated as $$R_T/r_g = \left[ 2\langle \cos\gamma\rangle a_* \Omega(R_T) t_{\rm in}
\int_0^1 \, dx \, x^{-3/2} \frac{\sin\beta(x)}{\sin\beta(R_T)}
\frac{\Sigma(x)}{\Sigma(R_T)} \right]^{2/3},$$ where $r_g \equiv GM/c^2$ and the integral has been nondimensionalized by setting $x = r^\prime/r = r^\prime/R_T$.
The dimensionless integral may often have a value rather greater than unity. Partly this is due to an effect we have already pointed out: the rapid increase inward of the precession frequency allows inner radii that are already nearly aligned to account for a significant part of the total torque. In addition, however, in some commonly encountered accretion regimes, the surface density also increases inward (in time-steady accretion, for example, $\Sigma \propto x^{-3/5}$ when gas pressure dominates radiation pressure and the principal opacity is electron scattering). The dimensionless integral would then be rather greater than unity when the outermost part of the transition region lies at a radius a factor of a few or more greater than the innermost part. This happens, for example, in the later stages of [`BP-m`]{} when $r_f$ (as defined by the white curve in Fig. \[fig:alignadj\]) passes the radius of maximum surface density, $r \simeq 10$. From then onward, the dimensionless integral is $> 1$, reaching $\simeq 5$ or more by the end of the simulation, when $r_f \simeq 22$. However, we caution that this is at best illustrative: in addition to the shape of the radial surface density profile, the disk thickness profile and perhaps other variables may also influence the detailed shape of the alignment transition.
If the dominant misaligned angular momentum inflow mechanism is accretion, the inflow speed $v_{\rm in} \simeq \alpha (h/r)^2 v_{\rm orb}$, where $\alpha$ is the usual ratio between integrated internal (Maxwell) stress and integrated pressure, $h/r$ is the local aspect ratio of the disk, and $v_{\rm orb}$ is the Keplerian orbital velocity. In this case, we find $$\label{eqn:RTinflow}
R_T/r_g = \left[ \frac{2\langle \cos\gamma\rangle a_*}{\alpha (h/R_T)^2}
\int_0^1 \, dx \, x^{-3/2} \frac{\sin\beta(x)}{\sin\beta(R_T)}
\frac{\Sigma(x)}{\Sigma(R_T)} \right]^{2/3}.$$ At the order of magnitude level, this estimate is consistent with the original estimate given by [@BP75] and [@Hatchett81], although there are also ways in which our estimate differs from theirs. In particular, we note the quantitative importance of the dimensionless integral in equation \[eqn:RTinflow\].
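For a constant aspect ratio $h/r$, equation \[eqn:RTinflow\] gives $R_T/r_g$ in closed form once the dimensionless integral is supplied. A minimal sketch follows; the default $\langle\cos\gamma\rangle = 0.5$ is the value measured above, while the unit default for the integral is purely an illustrative assumption:

```python
def transition_radius(a_star, alpha, h_over_r, cos_gamma=0.5, integral=1.0):
    """R_T/r_g from the accretion-fed estimate, assuming h/r is constant
    so that h/R_T equals the supplied aspect ratio.  `integral` is the
    dimensionless integral over the transition-region profiles."""
    return (2.0 * cos_gamma * a_star * integral
            / (alpha * h_over_r**2)) ** (2.0 / 3.0)
```

For example, $a_* = 1$, $\alpha = 0.1$, and $h/r = 0.1$ give $R_T \simeq 100\,r_g$.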
Not long after these original estimates, [@PP83] argued that the radial flows driven by warping should carry misaligned angular momentum much more rapidly than the mass flow of accretion. Moreover, in the model presented by that paper and elaborated by many since [@Pringle92; @NP00; @LP07], the inward mixing can be described as a diffusion process with effective diffusion coefficient $\alpha_2 \simeq 1/(2\alpha)$ when $\alpha \ll h/r$. In that case, the estimate for $R_T/r_g$ is (modulo the dimensionless integral) identical to that of eqn. \[eqn:RTinflow\], but multiplied by $\alpha^{4/3}$.
However, our previous study of purely hydrodynamic warp relaxation [@SKH13] demonstrated that, while the radial mixing of misaligned angular momentum qualitatively resembles diffusion, it differs from diffusion in a number of quantitative aspects; even in SPH simulations with an isotropic viscosity, the diffusion approximation appears to break down for nonlinear warps [@Lodato10]. In addition, as shown in Section \[sec:mhdvshd\] of this paper, there are no stresses limiting the radial motions in a fashion described by an isotropic “$\alpha$-viscosity"; consequently, there is no reason to expect the radial mixing to scale $\propto \alpha^{-1}$.
A better estimate of the inward mixing rate might be $\sim c_s^2/v_{\rm orb}(r_f/\Delta r_f)^2$, similar to the speed [@NP00] identify with the case in which $\alpha \sim h/r$. This estimate is also closer to the rate found by [@LP07] and [@Lodato10] when $\alpha$ was small; in that limit, their SPH simulations with an isotropic $\alpha$ viscosity indicated that $\alpha_2$ saturated at $\simeq 3$. The basis of our estimate is that, as found in Sec. \[sec:align\], $\hat S_{r\perp} \sim h/\Delta r_f$ because nonlinear warps generically create transonic radial flows, and they can travel a distance $\sim h$ in radius before being turned back by gravity. It must be recognized, however, that this is a very rough estimator, as it hides the fact that the magnitude of the misaligned angular momentum flux also depends on the shape of the transition region [@SKH13]. The simulation data presented here do not bear directly on the effectiveness of inward mixing because the adherence of the alignment front propagation to our model (as well as the detailed radial dependence of the angular momentum flux) demonstrates that inward radial mixing plays at most a minor role in [`BP-m`]{}. Thus, the best we can do here is place bounds on $R_T$: the estimate of equation \[eqn:RTinflow\] is likely a solid upper bound, while a rough lower bound is given by the same expression with $\alpha \sim 1$.
Conclusions {#sec:con}
===========
We have carried out the first calculation of the Bardeen-Petterson effect in which the internal stresses are grounded entirely in known physical mechanisms (i.e., Reynolds and Maxwell stresses), and the disk configuration is thin enough that warp relaxation can be separated from accretion. It is also the first calculation making use of physical internal stresses in which disk alignment is observed. As predicted early on by analytic arguments [@PP83], the heart of the mechanism is the creation of radial pressure gradients due to the warps induced by the radial gradient in the Lense-Thirring precession rate. These radial pressure gradients drive radial fluid flows that convey misaligned angular momentum with them. By radially mixing misaligned angular momentum, these flows help to bind together the disk, compelling it to precess almost as a solid body. In addition, as these flows move outward through the disk, the direction of the misaligned angular momentum they carry can, given some departure from solid-body precession, become sufficiently opposed to the local direction that, when mixed, the result is a reduction in the net magnitude of misaligned angular momentum. In this fashion, the disk gradually aligns with the spin axis of the central mass, first in its inner portions and later at larger radii.
By contrasting a pair of matched simulations, one including MHD, the other including only pure hydrodynamics, we were able to highlight the effects due to MHD and clarify those depending only on hydrodynamics. When the local warp is nonlinear (the generic situation), the radial flows are always transonic in speed. Internal stresses other than pressure are much too small to significantly influence them; in particular, we find no evidence for anything resembling an “isotropic $\alpha$ viscosity" acting to limit these radial motions. Although the magnetic forces are always small compared to pressure forces, they nonetheless have a significant effect on both disk precession and the rate at which disks align with the angular momentum of the central mass they orbit. In particular, MHD effects cause more rapid alignment and more complete alignment—hydrodynamic alignment appears to stall when the offset angle falls to a value comparable to the disk aspect ratio $H/r$.
We believe there are two reasons for this contrast, both due to the omnipresent MHD turbulence. The first is that MHD turbulence disrupts the phase coherence of bending waves without necessarily damping them. By doing so, it prevents the enforcement of solid-body precession that occurs in the purely hydrodynamic case. As a result, the angular momentum delivered at small radii has a component directed [*antiparallel*]{} to the misaligned angular momentum at the rather larger radii where that angular momentum is ultimately deposited by the radial flows. This is the central mechanism of alignment. Outward carriage of “corrective" angular momentum is essential because the Lense-Thirring torques diminish so rapidly with increasing radius that an inner radius with only a small remaining inclination may nonetheless feel a greater torque than a radius only a factor of a few farther away that has a considerably larger inclination. The second effect of MHD turbulence is that it continues to mix unaligned angular momentum, even when the local warp is small enough (less than a scale height) that the radial motions induced by pressure gradients due to the warp are weak.

Our detailed treatment of the disk’s internal dynamics also reveals that the flow of misaligned angular momentum within the disk is by no means smooth and regular. Radial fluxes of angular momentum are sharply increased when the radial gradients in the Lense-Thirring torque build a local disk warp whose angular contrast across a radius is at least as large as the disk scale height. The evolution of orientation at a fixed radius is therefore a sort of “stick and slip" process in which differential torque gradually builds local warp, which is then erased quickly when it becomes nonlinear. Radial propagation of angular momentum is further modulated by acoustic waves.
This picture suggests a model for the speed at which alignment in an initially-inclined disk moves outward: the speed of the alignment front $v_f \simeq 0.5 G(<r)/(dL_\perp/dr)$, where the factor 0.5 comes from computing the mean (anti-)alignment between the angular momentum brought outward from the torqued regions and deposited at the alignment front. In a time-steady disk, the outward motion is eventually brought to a halt by inflow of misaligned angular momentum due to a combination of radial mixing induced by disk warp and the accretion flow itself. To order of magnitude accuracy, we can use this picture to estimate where the transition from inclined to aligned orbits takes place in a time-steady disk. If all the uncertainties associated with the rate of inward misaligned angular momentum flux and the radial distribution of torque are wrapped into a single parameter $\Phi$, we suggest that its magnitude may be roughly bounded by $1 < \Phi < \alpha^{-2/3}$, where $\alpha$ is the usual time-averaged ratio of vertically-integrated $r$-$\phi$ stress to vertically-integrated pressure. The transition radius would then be found at $R_T \sim \Phi a_*^{2/3} (h/R_T)^{-4/3} r_g$.
Acknowledgements {#acknowledgements .unnumbered}
================
We would like to thank Cole Miller and Steve Lubow for extensive and valuable discussions. This work was partially supported under National Science Foundation grants AST-1028111 and AST-0908326 (JHK and KAS), AST-0908869 (JFH), and NASA grant NNX09AD14G (JFH). The National Science Foundation also supported this research in part through XSEDE resources on the Kraken cluster through Teragrid allocation TG-MCA95C003.
Appendix
========
In Figure \[fig:quality\], we show the density-weighted MHD resolution quality factors as functions of radius in [`BP-m`]{} at two times, one during the saturated MHD turbulence immediately prior to the beginning of the torques ($t=14.5$) and one well into the evolution with torques ($t=17$). The quality factors are defined as $$Q_{x} = \frac{2\pi v_{A,x}}{\Omega\,\Delta x},$$ where $v_{A,x}$ is the Alfvén speed restricted to the $x$-component of the magnetic field and $\Omega$ is the local orbital frequency. Although the disk tilts out of the equatorial plane of the coordinate system as [`BP-m`]{} evolves past $t=15$, at $t=17$, only the smallest radii, $r \lesssim 6$, have changed their orientation in a noticeable way, so we make the approximation that the directions of the axes for the disk remain the coordinate axes.
At $t=14.5$, $\langle Q_z\rangle_{\rho} \simeq 12$–25, while $\langle Q_{\phi}\rangle_{\rho} \simeq 30$–50. Although the magnetic field is significantly weakened immediately after the torques begin, at $t=17$ the quality factors are still fairly good: $\langle Q_z\rangle_{\rho} \simeq 8$–20, while $\langle Q_{\phi}\rangle_{\rho} \simeq 20$–50. We also checked later times and found that they were not much different from $t=17$ in terms of these numbers. [@HGK11; @Sorathia12] recommended that both $Q_z$ and $Q_\phi$ should be $\gtrsim 10$ and preferably $\gtrsim 20$, but also remarked that a particularly large value of one could compensate for a smaller value of the other. On this basis, we regard the simulation as reasonably well-resolved throughout. However, it is possible, particularly when larger inclination angles are explored, that the quality factors required for resolving MRI-driven MHD turbulence may be different in warped disks.
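A density-weighted quality factor of this kind can be computed as in the sketch below, which uses the standard normalization $Q = 2\pi v_A/(\Omega\,\Delta x)$ and Gaussian units for the Alfvén speed; the array shapes and names are hypothetical, not the paper's actual diagnostic code:

```python
import numpy as np

def density_weighted_Q(rho, B_x, Omega, dx):
    """Density-weighted quality factor <Q_x>_rho over a set of cells,
    with Q_x = 2 pi v_Ax / (Omega dx) and v_Ax = |B_x| / sqrt(4 pi rho)
    (Gaussian units)."""
    v_Ax = np.abs(B_x) / np.sqrt(4.0 * np.pi * rho)
    Q = 2.0 * np.pi * v_Ax / (Omega * dx)
    return np.sum(rho * Q) / np.sum(rho)
```

Applied shell by shell, this yields radial profiles like those plotted in Figure \[fig:quality\].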
![Density-weighted $Q_z$ (solid curve) and $Q_\phi$ (dashed curve) at times $t=14.5$ (upper panel) and $t=17$ (lower panel) in [`BP-m`]{}. We define $Q_\phi$ as the quality factor in the azimuthal direction in the coordinate equatorial plane.[]{data-label="fig:quality"}](fig15a.ps "fig:"){width="60.00000%"}\
![Density-weighted $Q_z$ (solid curve) and $Q_\phi$ (dashed curve) at times $t=14.5$ (upper panel) and $t=17$ (lower panel) in [`BP-m`]{}. We define $Q_\phi$ as the quality factor in the azimuthal direction in the coordinate equatorial plane.[]{data-label="fig:quality"}](fig15b.ps "fig:"){width="60.00000%"}\
[^1]: Because of the disk’s warp and twist, the $\theta$-direction is exactly normal to the disk only at large radii where there has been little precession or alignment. However, the relatively small initial misalignment angle ($12^\circ$) means the error due to imprecise identification of the disk normal is quite small compared to the magnitude of the effect we measure.
---
author:
- Dinakar Ramakrishnan
title: 'A mild Tchebotarev theorem for GL$(n)$'
---
Introduction {#introduction .unnumbered}
============
As is well known, the Tchebotarev density theorem implies that two irreducible $\ell$-adic representations $\rho_\ell$, $\rho'_\ell$ of the absolute Galois group of a number field $K$ are isomorphic if the corresponding characteristic polynomials of Frobenius elements agree on a set $S$ of primes of density $1$. It is then natural to ask, in view of the Langlands conjectures, whether an analogous assertion holds for cuspidal automorphic representations of GL$_n({\mathbb A}_K)$. The object of this Note is to establish such an automorphic analogue for a simple, but useful, class of sets $S$ of density $1$. To be precise, we prove the following:
[**Theorem A**]{} *Let $K/k$ be a cyclic extension of number fields of degree a prime $p$, and let $\Sigma^1_{K/k}$ denote the set of primes $v$ of $K$ which are of degree $1$ over $k$. Suppose $\pi$, $\pi'$ are cusp forms on GL$(n)/K$ such that $\pi_v \, \simeq
\pi'_v$, for all but a finite number of $v$ in $\Sigma^1_{K/k}$. Then $\pi, \pi'$ are twist equivalent. More precisely, they have isomorphic base changes over the cyclotomic extension $K(\zeta)$, where $\zeta$ is a non-trivial $p$-th root of unity.*
We refer to the book [@AC] for facts on solvable base change for GL$(n)$ due to Arthur and Clozel.
When we say that $\pi, \pi'$ are twist equivalent, we mean $\pi' \simeq \pi\otimes \chi$ for a finite order character $\chi$ of (the idele classes of) $K$. In particular, if $n$ is relatively prime to $p-1$, or if the conductors of $\pi, \pi'$ are prime to $p$, we may even conclude that $\pi, \pi'$ are isomorphic (over $K$). When $p=2$, we thus get the following:
[**Corollary B**]{} *Let $K/k$ be a quadratic extension of number fields. Then any cuspidal automorphic representation $\pi$ of GL$_n({\mathbb A}_K)$ is determined (up to isomorphism) by its components $\pi_v$ for all (but a finite number of) places $v$ of degree $1$ over $k$.*
Clearly, Theorem A refines the strong multiplicity one theorem, which gives the desired global isomorphism if $\pi_v \simeq \pi'_v$ for all but a [*finite*]{} number of $v$ ([@JS]). For GL$(2)$, there is a stronger result known, requiring the isomorphism $\pi_v \simeq \pi'_v$ only for a set $S'$ of $v$ of density $> 7/8$ ([@Ra]). For GL$(n)$ with $n>2$, we conjectured elsewhere that such a stronger result should hold with $7/8$ replaced by $1- 1/2n^2$, which is a theorem for $\pi$ attached to an $\ell$-adic representation $\rho_\ell$ by an elegant result of Rajan ([@Raj]). We are far from such a precise result for those cusp forms $\pi$ on GL$(n)$, $n \geq 3$, which are not known to be associated to such a $\rho_\ell$.
Given a finite cyclic extension $K/k$, if $G$, resp. $\tilde G$, is a reductive group over $k$, resp. $K$, such that $\tilde G =
G\times_k K$, let us say that a cuspidal automorphic representation $\pi$ of $G({\mathbb A}_k)$ admits a [*soft base change*]{} to $K$ if there is an automorphic representation $\Pi$ of $\tilde G({\mathbb A}_K)$ such that for all but a finite number of primes $v$ in $\Sigma_{K/k}^1$, we have $\Pi_v \simeq \pi_u$, where $u$ is the prime of $k$ below $v$. When $\tilde G$ is GL$(n)/K$, Theorem A says that a soft base change $\Pi$ is unique up to isomorphism when cuspidal. Theorem A has been used for $K/k$ quadratic and $G=U(n)$ by J. Getz and E. Wambach in their recent preprint. In a similar setup, it has been used by D. Whitehouse in his ongoing work concerning the pair $($GL$(2n)/k, $GL$(n)/K)$, again with $K/k$ quadratic.
Now a few words about the proof of Theorem A. A well known, basic theorem of Luo, Rudnick and Sarnak ([@LRS]), which is of importance to us, says that for any cusp form $\pi$ on GL$(n)/K$, the coefficient $a_v$ of $\pi$ at any unramified $v$ satisfies the bound $\vert a_v\vert <
(Nv)^{1/2-1/(n^2+1)}$. (What is essential for us is that $a_v$ is bounded in absolute value by $(Nv)^{1/2-t_n}$ for a fixed positive number $t_n$ independent of $v$, not the exact shape of $t_n$.) Feeding this into the framework of [@Ra], we see that it suffices, under our hypotheses, to prove that for all but a finite number of $v$ whose degree lies in $[2,(n^2+1)/2]$, $\pi_v$ and $\pi'_v$ are isomorphic. We cannot achieve this directly, but can show, using some Kummer theory, that it holds for the base changes $\pi_L, \pi'_L$ to a carefully chosen solvable extension $L$ of $K'=K(\zeta)$, which will be a compositum (over $K$) of a finite number of disjoint $p^r$-extensions $L^{(1)}, L^{(2)}, \dots$ with $2p^r>n^2+1$; each $L^{(j)}$ will be a nested chain of cyclic $p^2$-extensions (see section 4). From this data we prove by descent that $\pi_{K'}$ and $\pi'_{K'}$ are isomorphic. There is an added subtlety if $\pi_{K'}$ or $\pi'_{K'}$ is not cuspidal, and this forces us to work with isobaric sums of unitary cuspidal automorphic representations, which are analogues of semisimple Galois representations of pure weight. These steps together form the core of the argument.
We will investigate elsewhere this problem for more general extensions $K/k$.
We thank Peter Sarnak, Jayce Getz and David Whitehouse for their interest. Thanks are also due to the NSF for partial support through the grant DMS-0701089.
Basic Facts: A Review
=====================
Let $F$ be a global field with adèle ring ${\mathbb A}_F$. Let $\Sigma_F$ denote the set of all places of $F$. If $v\in\Sigma_F$ is finite, let $q_v$ denote the cardinality of the residue field at $v$. For $n \geq 1$, let $A_0(n,F)$ denote the set of isomorphism classes of irreducible unitary cuspidal automorphic representations of GL$(n,{\mathbb A}_F)$. Every $\pi$ representing a class in $A_0(n,F)$ is (isomorphic to) a tensor product $\otimes_v \, \pi_v$, where $v$ runs over all the places of $F$, such that each $\pi_v$ is an irreducible generic representation of GL$(n,F_v )$ and such that $\pi_v$ is unramified at almost all $v$. The strong multiplicity one theorem ([@JS]) asserts that, for any [*finite*]{} subset $S$ of $\Sigma_F$, $\pi$ is determined up to isomorphism by the collection $\{\pi_v \, | \, v\not\in S\}$.
For any irreducible, automorphic representation $\pi$ of $GL(n,{\mathbb A}_F),$ let $L(s, \pi) = L(s, \pi_{\infty})L(s, \pi_f)$ denote the associated [*standard*]{} $L-$function of $\pi;$ it has an Euler product expansion $$L(s,\pi) \, = \, \prod_v \, L(s, \pi_v),$$ convergent in a right-half plane. If $v$ is a finite place where $\pi_v$ is unramified, there is a corresponding semisimple (Langlands) conjugacy class $A_v(\pi)$ (or $A(\pi_v)$) in GL$(n,{\mathbb C})$ such that $$L(s,\pi_v) \, = \, {\rm {det}}(1-A_v(\pi)T)^{-1}|_{T=q_v^{-s}}.$$ One may find a diagonal representative diag$(\alpha_{1,v}(\pi), ...
, \alpha_{n,v}(\pi))$ for $A_v(\pi),$ which is unique up to permutation of the diagonal entries. Let $[\alpha_{1,v}(\pi), ...
, \alpha_{n,v}(\pi) ]$ denote the resulting unordered $n-$tuple. One knows (by Godement-Jacquet) that for any non-trivial cuspidal representation $\pi$ of GL$(n,{\mathbb A}_F)$, $L(s,\pi)$ is entire.
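As an illustration of the unramified local factor, one can expand ${\rm det}(1-A_vT)^{-1}$ as a power series in $T$ and read off the Dirichlet coefficients; in particular the $T$-coefficient is ${\rm tr}\, A_v(\pi)$, the Hecke eigenvalue at $v$. The Satake parameters below are made-up rational numbers chosen only so the arithmetic is exact:

```python
import sympy as sp

# Sketch: unramified local factor L(s, pi_v) = det(1 - A_v T)^{-1},
# T = q_v^{-s}, expanded as a power series in T.  The diagonal
# entries (Satake parameters) here are purely illustrative.
T = sp.symbols('T')
alphas = [sp.Rational(1, 2), sp.Rational(-1, 3)]   # diag entries of A_v
local_factor = sp.prod([1 / (1 - a * T) for a in alphas])
series = sp.expand(sp.series(local_factor, T, 0, 4).removeO())
# coefficient of T is the trace of A_v, i.e. the Hecke eigenvalue a_v
print(series)
```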
By Langlands’s theory of Eisenstein series, one has a sum operation $\boxplus$, called the isobaric sum ([@JS]): Given any $m-$tuple of cuspidal representations $\pi_1, ..., \pi_m$ of GL$(n_1,{\mathbb A}_F), ... ,$ GL$(n_m,{\mathbb A}_F)$ respectively, there exists an irreducible, automorphic representation $\pi_1 \boxplus ...
\boxplus \pi_m$ of GL$(n,{\mathbb A}_F),$ $n \, = \, n_1 + ... + n_m$, which is unique up to equivalence, such that for any finite set $S$ of places, $$L^S(s, \boxplus_{j=1}^m \pi_j) \, = \, \prod_{j=1}^m L^S(s,
\pi_j).$$ Call such a (Langlands) sum $\pi \simeq \boxplus_{j=1}^m \pi_j$, with each $\pi_j$ cuspidal, an [*isobaric*]{} representation.
Denote by $\mathcal A(n,F)$ the set, up to equivalence, of isobaric automorphic representations of GL$_n({\mathbb A}_F)$, and by $\mathcal A_u(n,F)$ the subset of isobaric sums of [*unitary*]{} cuspidal automorphic representations. If $\pi=\boxplus_{i=1}^m \pi_i$, resp. $\pi'=\boxplus_{j=1}^r \pi_j'$, is in $\mathcal A_u(n,F)$, resp. $\mathcal A_u(n',F)$, with $\pi_i, \pi'_j$ unitary cuspidal, we will need to consider the associated Rankin-Selberg $L$-function $$L(s, \pi \times \pi') \, = \, \prod_{i, j} \, L(s, \pi_i\times \pi'_j),$$ with $$L(s, \pi_{i,v} \times \pi'_{j,v}) \, = \, {\rm {det}}(1-A_v(\pi_i)\otimes A_v(\pi'_j)T)^{-1}|_{T=q_v^{-s}}.$$
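Numerically, the Langlands class of the local Rankin-Selberg factor is the tensor (Kronecker) product of the two classes, so its eigenvalues are the pairwise products $\alpha_{i,v}\alpha'_{j,v}$. The following toy check uses invented diagonal classes:

```python
import numpy as np

# Sketch: eigenvalues of A_v(pi) (x) A_v(pi') are all products
# alpha_i * alpha'_j.  The parameter values are illustrative only.
A = np.diag([0.5, 2.0])        # stand-in for A_v(pi)
Ap = np.diag([1.0, -1.0])      # stand-in for A_v(pi')
tensor_eigs = np.sort(np.linalg.eigvals(np.kron(A, Ap)).real)
expected = np.sort([a * b for a in [0.5, 2.0] for b in [1.0, -1.0]])
print(np.allclose(tensor_eigs, expected))
```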
If $L(s)=\prod_{v\in\Sigma_\infty\cup\Sigma_f}L_v (s)$ is any global $L$-function and $Y$ a set of places of $F$, then we will denote by $L^Y(s)$ (resp. $L_Y(s)$) the product of $L_v (s)$ over all $v$ outside $Y$ (resp. in $Y$). We have the following basic result ([@JS]):
[**Theorem 1.1**]{} (Jacquet–Piatetski-Shapiro–Shalika, Shahidi) *Let $\pi=\boxplus_{i=1}^m \pi_i$, $\pi'=\boxplus_{j=1}^r \pi_j'$ be in $\mathcal A_u(n,F)$, with $\pi_i, \pi'_j$ unitary cuspidal. Suppose $Y$ is a finite set of places of $F$ containing the archimedean places such that $\pi, \pi'$ are unramified outside $Y$. Then $L^Y(s, \pi \times \overline\pi')$ has a pole at $s=1$ iff for some $(i, j)$, $\pi_i$ is isomorphic to $\pi'_j$, in which case the pole is simple.*
Here $\overline\pi'$ denotes the complex conjugate representation of $\pi'$, which, by unitarity, is the contragredient of $\pi'$.
The general Ramanujan conjecture predicts that for any $\pi\in \mathcal A_u(n,F)$, $\pi_v$ is tempered at all $v$. In particular, if $v$ is a finite place where $\pi$ is unramified, the unordered $n$-tuple $\{\alpha_{1,v}(\pi), ...
, \alpha_{n,v}(\pi)\}$ representing $A_v(\pi)$ should satisfy $\vert \alpha_{i,v}\vert =1$ for every $i$. This is far from being proved, and the best known bound to date (for general $n$) is given by the following:
[**Theorem 1.2**]{} (Luo–Rudnick–Sarnak [@LRS]) *Let $\pi \in \mathcal A_u(n, F)$, and $v$ a finite place where $\pi$ is unramified, with $A_v(\pi)=\{\alpha_{1,v}(\pi), ...
, \alpha_{n,v}(\pi)\}$. Then for every $j\leq n$, one has $$\vert \alpha_{j,v}\vert \, < \, q_v^{\frac12-\frac{1}{n^2+1}}.$$*
To be precise, Luo, Rudnick and Sarnak only address the case of cusp forms. But for $\pi \in \mathcal A_u(n,F)$, any $\alpha_j(\pi)$ must be associated to a cuspidal isobaric constituent $\pi_i$ on GL$(n_i)/F$ with $n_i \leq n$, and so the assertion above follows immediately from [@LRS].
We will also need the following (weak) version of the base change theorem for GL$(n)$:
[**Theorem 1.3**]{} (Arthur–Clozel [@AC]) *Let $M/F$ be a finite extension of number fields obtained as a succession of cyclic extensions. Then for every $\pi \in \mathcal A_u(n,F)$, there exists a corresponding $\pi_M \in \mathcal A_u(n,M)$ such that for every finite place $v$ of $F$ where $\pi$ and $M$ are unramified, and for all places $w$ of $M$ dividing $v$, we have $$A_v(\pi)=\{\alpha_{1,v}, ...
, \alpha_{n,v}\} \, \implies \, A_w(\pi_{M})=\{\alpha_{1,v}^{f_v}, ...
, \alpha_{n,v}^{f_v}\},$$ where $f_v=[M_w:F_v]$.*
A word of explanation may be helpful. In [@AC], it is proved that for every cuspidal $\pi$, the base change $\pi_M$ is equivalent to an isobaric sum of unitary cuspidal automorphic representations; when $M/F$ is cyclic of prime degree $p$, for example, $\pi_M$ is either cuspidal or of the form $\boxplus_{j=0}^{p-1} (\eta\circ\tau^j)$, where $\tau$ is a generator of Gal$(M/F)$. Since base change is additive relative to isobaric sums, it follows that for any $\pi$ in $\mathcal A_u(n,F)$, $\pi_M$ lies in $\mathcal A_u(n,M)$.
A Preliminary Step
==================
[**Proposition 2.1**]{} *Let $F$ be a number field and $n\geq 1$ an integer. Suppose $\pi, \pi' \in \mathcal A_u(n,F)$ are such that for every positive integer $m\leq (n^2+1)/2$, and for all but a finite number of primes $v$ of $F$ of degree $m$, we have $\pi_v \simeq \pi'_v$. Then $\pi$ and $\pi'$ are isomorphic.*
This is essentially an immediate consequence of the bound of Luo-Rudnick-Sarnak. For completeness, we quickly go through the relevant points of [@Ra] to make it evident that they carry over, modulo the basic results cited in section 1 and induction on the number of cuspidal isobaric summands, from ($n=2$; $\pi, \pi'$ cuspidal) to ($n$ arbitrary; $\pi, \pi'$ isobaric sums of unitary cuspidal representations).
[*Proof*]{}. Denote by $X$ the complement in $\Sigma_F$ of the union of the archimedean places and the finite places where $\pi$ or $\pi'$ is ramified. Given any subset $Y$ of $X$ we set (as in [@Ra]): $$Z_Y(s)=L_Y(\bar\pi\times\pi ,s)L_Y(\bar\pi'\times\pi',s)L_Y(\bar\pi
\times\pi',s)L_Y(\bar\pi'\times\pi ,s).\leqno(2.1)$$ Write $$\pi=\boxplus_{i=1}^\ell m_i\pi_i, \, \, \, \pi'=\boxplus_{j=1}^r m_j' \pi_j',$$ with $m_i, m_j' \in {\mathbb N}$, and $\pi_i$, $\pi'_j$ unitary cuspidal, with $\pi_i\not\simeq \pi_a$ if $i\ne a$ and $\pi_j'\not\simeq \pi_b'$ if $j\ne b$.
Suppose $\pi_i\not\simeq \pi'_j$ for all $i, j$. Then, using Theorem 1.1, we see that $Z_X(s)$ is holomorphic at every $s\neq 1$, with $$-{\rm ord}_{s=1}Z_X(s) \, = \, \mu+\mu',\leqno(2.2-a)$$ where $$\mu=\sum_{i=1}^\ell m_i^2, \, \mu'=\sum_{j=1}^r {m'_j}^2.\leqno(2.2-b)$$
We note that one knows (see [@HRa]) that $Z_Y(s)$ is of positive type, i.e., $\log Z_Y(s)$ is a Dirichlet series with non-negative coefficients.
As the subproduct of an absolutely convergent Euler product is absolutely convergent, we have the following
[**Lemma 2.3**]{} *Let $S$ denote the subset of $X$ consisting of finite places $v$ of degree $> \frac{n^2+1}{2}$. Then the incomplete Euler products $L_S(\bar\pi\times\pi ,s)$ and $L_S
(\bar\pi\times\pi',s)L_S(\bar\pi'\times\pi ,s)$ converge absolutely in $\{s\in{\mathbb C}\, | \, \Re(s)>1\}$.*
We may write $$\log (L_Y(\bar\pi\otimes\pi ,s))=\sum_{m\geqq 1}c_m(Y)m^{-s}\leqno(2.4)$$ for all subsets $Y$ of $X$. Then $c_m(Y)=0$ unless $m$ is of the form $Nv^r$ for some $v\in Y$ and $r\in{\mathbb N}$, and when $m$ is of this form, $$c_m(Y)=\sum_M \, \frac{1}{r}\sum_{1\leqq i,j\leqq n}\overline{\alpha^r_{i,v}}
\alpha^r_{j,v},$$ where $M$ is the set of pairs $(v ,r)\in Y\times{\mathbb N}$ such that $m=Nv^r$.
When $v \in S$, as the degree of $v$ exceeds $\frac{n^2+1}{2}$, the Luo-Rudnick-Sarnak bound (Theorem 1.2) implies that $\sum_{m\geqq 1}c_m(S)m^{-s}$ converges in $\{\Re(s)\geq 1\}$.
One has a similar statement for $\log (L_S(\bar\pi\otimes\pi' ,s))$, $\log (L_S(\bar\pi'\otimes\pi ,s))$, and $\log (L_S(\bar\pi'\otimes\pi' ,s))$. So we get the following
[**Lemma 2.5**]{} *Let $S$ be as in Lemma 2.3. As $s$ goes to $1$ from the right on the real line, we have $$\log Z_S(s) \, = \, o\left(\log \frac{1}{s-1}\right).$$*
Now, since $\pi_v \simeq \pi'_v$ for all but a finite number of places of $X$ outside $S$, we get, thanks to this Lemma, the following: $$\log Z_X(s) \, = \, 4\log L_X(\bar\pi\otimes\pi ,s) + o\left(\log \frac{1}{s-1}\right) \, = \,
4\log L_X(\bar\pi'\otimes\pi' ,s) + o\left(\log \frac{1}{s-1}\right).\leqno(2.6)$$ Applying (2.2-b), we then get $$\mu \, = \, \mu',\leqno(2.7)$$ and $$\log Z_X(s) \, = \, 4\mu\log \frac{1}{s-1} + o\left(\log \frac{1}{s-1}\right).\leqno(2.8)$$ This contradicts (2.2-a) since $\mu=\mu' \geq 1$.
Thus we must have $\pi_i \simeq \pi'_j$ for [*some*]{} $(i,j)$. If $\pi$ or $\pi'$ is cuspidal, then both will need to be cuspidal with $\pi=\pi_i \simeq \pi'_j=\pi'$, and so we are done in this case. We may assume that $\pi, \pi'$ are non-cuspidal. Consider then the isobaric automorphic representations $\Pi$, $\Pi'$ such that $$\pi = \Pi\boxplus \pi_i, \, \pi' = \Pi'\boxplus \pi'_j.$$ The $\Pi, \Pi'$ satisfy the hypotheses of Proposition 2.1, and we may find as before cuspidal isobaric summands $\pi_k$ of $\Pi$ and $\pi'_m$ of $\Pi'$ which are isomorphic. Continuing thus, by descent on the number of cuspidal summands, we arrive finally at the situation when one of the isobaric forms is cuspidal, which we have already taken care of. This proves Proposition 2.1.
Central character and unitarity
===============================
Suppose $\pi$, $\pi'$ are cuspidal automorphic representations of GL$_n({\mathbb A}_F)$ of respective central characters $\omega, \omega'$, such that $\pi_v \simeq \pi'_v$ for all but a finite number of primes $v$ of $F$ of degree $1$. Then $\omega$ and $\omega'$ agree at all (but a finite number of) the degree one places $v$, which forces the global identity $$\omega \, = \, \omega'.\leqno(3.1)$$ In fact, by Hecke, this conclusion will result as soon as $\omega$ and $\omega'$ agree at a set of primes of density $> 1/2$.
It is a standard fact that, given a cuspidal $\pi$, there is a unique real number $t(\pi)$ such that $\pi\otimes\vert\cdot\vert^{-t(\pi)}$ is unitary; here $\vert\cdot\vert$ denotes the $1$-dimensional representation $g\mapsto \vert{\rm det}(g)\vert$. Taking central characters, we see then that $\omega\vert\cdot\vert^{-nt(\pi)}$ is a unitary character. Thanks to (3.1), we will then get $$t(\pi) \, = \, t(\pi').\leqno(3.2)$$
This allows us, in the proof of Theorem A, to assume that $\pi, \pi'$ are unitary cuspidal automorphic representations.
Nested chains of cyclic $p^2$-extensions
========================================
Let $p$ be a prime. We will call an extension $L/F$ of number fields of degree $p^r$, for some $r \geq 2$, a [*nested chain of cyclic $p^2$-extensions*]{} if there is an increasing filtration of fields $$F=L_0 \subset L_1 \subset L_2 \subset \dots \subset L_{r-2} \subset L_{r-1} \subset L_r=L,\leqno(4.1)$$ with $$[L_j : L_{j-1}] = p, \, \, \forall \, j \in\{1, 2, \dots, r\},\leqno(4.2)$$ and $$L_j/L_{j-2}: \, \, {\rm cyclic}, \, \, \forall \, j \in\{2, \dots, r\}.\leqno(4.3)$$
An easy example is given by a cyclic $p^r$-extension, while a better example is the following. Let $F$ contain $\mu_{p^2}$. (As usual, we write $\mu_n$ for the group of $n$-th roots of unity in the algebraic closure of $F$.) Let $\alpha$ be an element of $F$ which is not a $p$-th power. Put $\alpha_0=\alpha$ and define $\alpha_j$, for $j=1, \dots, r$, recursively by taking it to be a $p$-th root of $\alpha_{j-1}$, and set $L_j=L_{j-1}(\alpha_j)$ and $L_0=F$. Note that for $j \geq 2$, $L_j/L_{j-2}$ is cyclic of order $p^2$ by Kummer theory, because $\alpha_j^{p^2}=\alpha_{j-2}$, and $\mu_{p^2}\subset L_{j-2}$, making all the conjugates of $\alpha_j$ over $L_{j-2}$ lie in $L_j$. (For this example, it is in fact sufficient to have $\mu_p \subset F$ and $\mu_{p^2}\subset L_1$, as seen by the case $L_1=F(\mu_{p^2})$.)
[**Lemma 4.4**]{} *Let $L/F$ be a nested chain of cyclic $p^2$-extensions (of number fields), with $[L:F]=p^r$ and filtration $\{L_j\}$ as above. Suppose $v_0$ is a finite place of $F$, unramified in $L$, which is inert in $L_1$. Then there exists, for each $j\geq 1$, a unique place $v_j$ of $L_j$ lying over $v_{j-1}$, so that $Nv_j=(Nv_{j-1})^p$. In particular, $Nv_r=(Nv_0)^{p^r}$.*
[*Proof*]{}. Let us first treat the case when $r=2$, i.e., when $L/F$ is cyclic of degree $p^2$. Since $v_0$ is inert in the intermediate field $L_1$, we need to check that $v_0$ does not split into $p$ places in $L$. Suppose, to the contrary, that it does split that way. Let $u$ be one of the $p$ places of $L$ above $v_0$. It must then be fixed by a subgroup $H$ of Gal$(L/F)$ of order $p$, with $H$ giving the local Galois group ${\rm Gal}(L_u/F_{v_0})$. Since $v_0$ is inert in $L_1$ with divisor $v_1$, $u$ necessarily has degree $1$ over $v_1$, and so $H = {\rm Gal}(L_{1, v_1}/F_{v_0})$. If $\sigma$ is a non-trivial element of $H$, then it acts non-trivially on $L_{1,v_1}$, and hence on $L_1$. On the other hand, since $L/F$ is cyclic, it has a unique subgroup of order $p$, which forces $H$ to be Gal$(L/L_1)$, implying that $\sigma$ acts trivially on $L_1$, yielding a contradiction. Put another way, if $v_0$ has degree $p$ in $L$, then the corresponding Frobenius class $Fr_{v_0}$ is given by an element $\sigma$ of Gal$(L/F)$ of order $p$, which has trivial image in the quotient by $H=\langle \sigma\rangle$, making $v_0$ split in the fixed field $L^H$ of $H$. Clearly, $L^H$ must be $L_1$ by the cyclicity of $L/F$. Either way, the case $r=2$ is now settled.
Now let $r > 2$, and assume by induction that the Lemma holds for $r-1$. So for every $j \leq r-1$, there is a unique place $v_j$ of $L_j$ above $v_{j-1}$ (of $L_{j-1}$). Now all we have to show is that $v_{r-1}$ is inert in $L=L_r$. Since $L_r/L_{r-2}$ is cyclic of order $p^2$, and since (by induction) the place $v_{r-2}$ of $L_{r-2}$ is inert in $L_{r-1}$, we conclude what we want by appealing again to the $r=2$ scenario.
The assertion about the norm of $v_r$ follows.
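The group theory behind the $r=2$ case can be checked mechanically: in ${\rm Gal}(L/F)\simeq {\mathbb Z}/p^2$, any element with nontrivial image in the quotient ${\mathbb Z}/p$ has full order $p^2$, so a Frobenius that is nontrivial on $L_1$ forces $v_1$ to be inert in $L$. The brute-force verification below is our own toy check, not part of the proof:

```python
from math import gcd

# Model Gal(L/F) as the additive group Z/p^2.  An element g has
# nontrivial image in the quotient Z/p exactly when g % p != 0,
# and we verify that every such g has order p^2.
p = 5
n = p * p   # |Gal(L/F)| for a cyclic p^2-extension

for g in range(1, n):
    order = n // gcd(g, n)              # order of g in Z/p^2 (additive)
    image_nontrivial = (g % p != 0)     # nontrivial image in Z/p
    if image_nontrivial:
        # Frobenius nontrivial in Gal(L_1/F)  =>  full order p^2,
        # i.e. v_1 is inert in L, as in the r = 2 case of Lemma 4.4.
        assert order == n
print("checked p =", p)
```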
[**Lemma 4.5**]{} *Let $L^{(i)}/F$, $1\leq i \leq k$ be disjoint $p^r$-extensions. Suppose moreover that every $L^{(i)}$ is a nested chain of cyclic $p^2$-extensions with respective filtrations $$F=L_0^{(i)}\subset L_1^{(i)} \subset \dots \subset L_r^{(i)}=L^{(i)}.$$ Let $v_0^{(i)}$, $1\leq i \leq k$, be distinct primes of $F$, unramified in the compositum $M:=L^{(1)}L^{(2)}\dots L^{(k)}$, such that each $v_0^{(i)}$ is inert in $L_1^{(i)}$. Then, if ${\tilde v}^{(i)}$ is a prime of $M$ lying above $v_0^{(i)}$, we have $$N{\tilde v}^{(i)} \, \geq \, (Nv_0^{(i)})^{p^r}, \, \, \, \forall \, i\leq k.$$*
[*Proof*]{}. Fix any $i \leq k$. By Lemma 4.4, for each $j\geq 2$, there is a unique prime $v_j^{(i)}$ of $L_j^{(i)}$ lying above $v_{j-1}^{(i)}$. Then $\tilde v^{(i)}$ must lie above $v_r^{(i)}$ in the extension $M/L^{(i)}$. So $$N{\tilde v^{(i)}} \, \geq \, Nv_r^{(i)}.\leqno(4.6)$$ On the other hand, by Lemma 4.4, we have $$Nv_r^{(i)} \, = \, (Nv_0^{(i)})^{p^r}.\leqno(4.7)$$ The assertion of Lemma 4.5 now follows by combining (4.6) and (4.7).
Isomorphism over suitable solvable extensions $L/K$, $L\supset E$
=================================================================
Let $K/k$ be a cyclic $p$-extension. For $j\geq 1$, denote by $\Sigma^j_{K/k}$ the set of finite places $v$ of $K$ which are unramified over $k$ and of degree $j$ over $k$; of course this set is non-empty only for $j\in \{1,p\}$. Let $\pi, \pi'$ be cuspidal automorphic representations of GL$_n({\mathbb A}_K)$ such that, as in the setup of Theorem A, $$\pi_v \, \simeq \, \pi'_v, \, \, \forall \, v \in \Sigma^1_{K/k}.\leqno(5.1)$$
As noted in section 3, the central characters of $\pi$ and $\pi'$ must be the same, and moreover, we may assume that $\pi, \pi'$ are unitary.
If $p > (n^2+1)/2$, then Theorem A follows immediately from Proposition 2.1. In general, fix a positive integer $r$ such that $$p^r \, > \, (n^2+1)/2.\leqno(5.2)$$
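Concretely, (5.2) asks for the smallest exponent with $2p^r > n^2+1$; a quick sketch (the function name is ours, purely for illustration):

```python
# Smallest r with p^r > (n^2 + 1)/2, i.e. 2*p^r > n^2 + 1, as in (5.2).
def smallest_r(p, n):
    r, power = 1, p
    while 2 * power <= n * n + 1:
        r += 1
        power *= p
    return r

# For p = 2, n = 3: (n^2+1)/2 = 5, and 2^3 = 8 > 5, so r = 3.
print(smallest_r(2, 3))
```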
The object of this section is to prove the following:
[**Proposition 5.3**]{} *Let $K/k$, $\pi, \pi'$ be as in Theorem A. Then there is a finite solvable extension $L/K$ containing $E:=K(\mu_{p^2})$ such that the base changes $\pi_L$, $\pi'_L$, satisfy $$\pi_L \, \simeq \, \pi'_L.$$*
In fact $L/K$ will be much nicer than just being solvable. The extension $L/E$ will turn out to be the compositum of a finite number of $p^r$-extensions $L^{(i)}$, with each $L^{(i)}$ a nested chain of cyclic $p^2$-extensions. The Galois closure of $L$ over $K(\mu_p)$ will again be a $p$-power extension, hence nilpotent. We will also have some freedom in the choice of the $L^{(i)}$, and their filtrations, which will become relevant in the next section when we descend to $E$.
Put $K'=K(\mu_p)$ and $k'=k(\mu_p)$. Then $K'/k'$ is still a cyclic $p$-extension. The following Lemma is clear since $K'/K$ and $k'/k$ are of degree dividing $p-1$.
[**Lemma 5.4**]{} *Let $v \in \Sigma^j_{K/k}$, for $j\in\{1,p\}$. Then, for every prime $v'$ of $K'$ above $v$, we have $v' \in \Sigma^j_{K'/k'}$.*
Consequently, the hypotheses of Theorem A are preserved for $K'/k'$, and we may assume from here on, after replacing $k$ (resp. $K$) by $k'$ (resp. $K'$), that $$\mu_p \, \subset \, k.\leqno(5.5)$$
[**Proof of Proposition 5.3 when $K=E$**]{}
Since $\mu_p \subset k$, we may realize the cyclic $p$-extension $K$ as $k(\alpha^{1/p})$, for an element $\alpha$ in $k$ which is not a $p$-th power (in $k$). Choose a sequence of elements $\alpha_{-1}=\alpha$, $\alpha_0, \dots, \alpha_r$ in the algebraic closure of $K$, and the corresponding chain of fields $k=L_{-1}, K=L_0, \dots, L_r$ such that for each $j\geq 0$, $$L_j=L_{j-1}(\alpha_j), \, \, {\rm with} \, \, \alpha_j^p = \alpha_{j-1}.\leqno(5.6)$$ Clearly, every $L_j/L_{j-1}$ is cyclic of order $p$, and so $[L_r:K]=p^r$. Moreover, since $\mu_{p^2}\subset E=K$, each $L_j/L_{j-2}$ is also cyclic by Kummer theory. In other words, $L_r/K$ is a nested chain of cyclic $p^2$-extensions. In fact, $L_r/k$ is also such a nested chain, but of degree $p^{r+1}$.
Now put $L=L_r$. Applying Lemma 4.4, we then see that for every prime $\tilde v$ in $L$ lying over some $v$ in $\Sigma^p_{K/k}$, the degree of $\tilde v$ over $k$ is $p^{r+1}$, hence $\tilde v$ has degree at least $p^r$ over ${\mathbb Q}$. On the other hand, every other prime $\tilde u$ of $L$ unramified over $k$ lies above some $u$ in $\Sigma^1_{K/k}$. So the hypotheses of Theorem A imply (by base change [@AC]) that $\pi_{L,\tilde u}\simeq \pi'_{L,\tilde u}$. (Such a $\tilde u$ could have small degree, like $p$, over $K$, but nevertheless it must lie over a prime $u$ of degree $1$ over $k$, which is all that matters to us.) Putting these together, and applying Proposition 2.1 over $L$, we get Proposition 5.3 when $K=E$.
[**Proof of Proposition 5.3 when $K\ne E$**]{}
Here we want to base change and consider the cyclic $p$-extension $$E/F, \, \, {\rm with} \, \, F = k(\mu_{p^2}), \, E=KF.\leqno(5.7)$$ Clearly, the $(p,p)$-extension $E/k$ contains $p+1$ subfields $F^{(i)}$, $0\leq i \leq p$, of degree $p$ over $k$, with one of them being $K$; say $K=F^{(0)}$. We need the following
[**Lemma 5.8**]{} *Let $v\in \Sigma^p_{K/k}$ be unramified in $E$. Then $v$ splits into $p$ places $v_1, \dots, v_p$ in $E$, and there is a (unique) cyclic $p$-extension $F^{(i)}$ of $k$ (depending on $v$), $1\leq i \leq p$, such that each $v_j$ lies in $\Sigma^p_{E/F^{(i)}}$. In other words, if $z$ is the unique place of $k$ below $v$, then $z$ splits into $p$ places in $F^{(i)}$, each of which is inert in $E$.*
[*Proof of Lemma 5.8*]{}. Since $G:=$Gal$(E/k)$ is ${\mathbb Z}/p \times {\mathbb Z}/p$, the decomposition groups are either trivial or of order $p$. So, if $z$ is the place of $k$ lying below $v$, its Frobenius class $Fr_z$ in $G$ is given by an element $\sigma$ of order $p$ (since $z$ is inert in $K$). So $v$ must split in $E$. If we put $H=\langle \sigma\rangle$, then $E^H$ is $F^{(i)}$ for a unique $i\in \{1, \dots, p\}$. Then $z$ splits in $F^{(i)}$ and then becomes inert in $E$, as claimed.
Fix an index $i\in \{1, \dots, p\}$. As $\mu_p\subset k\subset F^{(i)}$, we may find an element $\alpha^{(i)}$ in $F^{(i)}$ which is not a $p$-th power such that $$E \, = \, F^{(i)}((\alpha^{(i)})^{1/p}).\leqno(5.9)$$ Choose a sequence of elements $\alpha_{-1}^{(i)}=\alpha^{(i)}$, $\alpha_0^{(i)}, \dots, \alpha_r^{(i)}$ in the algebraic closure of $E$, and the corresponding chain of fields $F^{(i)}=L_{-1}^{(i)}, E=L_0^{(i)}, \dots, L_r^{(i)}$ such that for each $j\geq 0$, $$L_j^{(i)}=L_{j-1}^{(i)}(\alpha_j^{(i)}), \, \, {\rm with} \, \, (\alpha_j^{(i)})^p = \alpha_{j-1}^{(i)}.\leqno(5.10)$$ By construction, every $L_j^{(i)}/L_{j-1}^{(i)}$ is cyclic of order $p$, and so $[L_r^{(i)}:E]=p^r$. Moreover, since $\mu_{p^2}\subset E$, each $L_j^{(i)}/L_{j-2}^{(i)}$ is also cyclic by Kummer theory. In other words, $L_r^{(i)}/E$ is a nested chain of cyclic $p^2$-extensions. In fact, $L_r^{(i)}/F^{(i)}$ is also such a nested chain (of degree $p^{r+1}$).
This way we get $p$ nested chains $L^{(i)}/E$, disjoint over $E$ from each other. Let $L$ be the compositum of the $L^{(i)}$, as $i$ runs over $\{1, \dots, p\}$. Pick any place $v$ in $\Sigma^p_{K/k}$. Then we know (by Lemma 5.8) that there is a unique $i \leq p$ such that each of the divisors $v_k$ of $v$ in $E$, $1\leq k \leq p$, lies in $\Sigma^p_{E/F^{(i)}}$. Then by the $r=2$ case of Lemma 4.4, $v_k$ is inert in $L_1^{(i)}$. Applying Lemma 4.5, we then see that every prime $\tilde v$ of $L$ lying over some $v_k$ (and hence over $v$) is of degree $\geq p^r > (n^2+1)/2$. So one may apply Proposition 2.1 and conclude that $\pi_L$ and $\pi'_L$ are isomorphic.
Descent to $E=K(\mu_{p^2})$
===========================
Let us preserve the notations of the previous section. Thanks to Proposition 5.3, we know that for the $p$-power extension $L/E$ we constructed there, one has $$\pi_L \, \simeq \pi'_L.\leqno(6.1)$$ In order to prove Theorem A, we need to descend this isomorphism down to $E$. For this we will make use of the fact that there is quite a bit of freedom in choosing $L$.
[**Proof of descent when $K=E$**]{}
After realizing $E$ as $k(\alpha^{1/p})$ for some $\alpha$ ($=\alpha_{-1}$) in $k$ which is not a $p$-th power, we chose a sequence of elements $\alpha_j, 0\leq j \leq r$, with $\alpha_j=\alpha_{j-1}^{1/p}$, and set $L_j=L_{j-1}(\alpha_j)$. We may replace $\alpha$ by $\alpha\beta^p$ for any $\beta$ in $k-k^{p}$, which will have the effect of leaving $E=L_0$ intact, but changing $L_1$ from $E(\alpha_1)$ to $E(\alpha_1\beta_1)$ for a $p$-th root $\beta_1$ of $\beta$. Using this we can ensure, for a suitable choice of $\beta$, that the discriminant of $L_1/E$ is divisible by a prime $P_1$ not dividing the conductor of either $\pi_E$ or $\pi'_E$. Next we may choose a $\gamma \in k-k^p$ and replace $\alpha$ further by $\alpha\beta^p\gamma^{p^2}$, which will not change $L_0$ and $L_1$, but will change $L_2$, and we may arrange for the discriminant of the new $L_2/L_1$ to be divisible by a prime $P_2$ of $L_1$ whose norm down to $E$ is relatively prime to ${\mathfrak c}(\pi_E){\mathfrak c}(\pi'_E)P_1$. This way we may continue and modify all the $L_j$ so that at each stage $L_j/L_{j-1}$, the relative discriminant is divisible by a new prime $P_j$ of $L_{j-1}$ whose norm down to $E$ is relatively prime to ${\mathfrak c}(\pi_E){\mathfrak c}(\pi'_E)P_1N_{L_1/E}(P_2)\dots N_{L_{j-2}/E}(P_{j-1})$.
Now look at the top stage $L_r/L_{r-1}$. Thanks to (6.1), we know by the properties of base change ([@AC]) that every cuspidal isobaric component $\eta$, say, of $\pi_{L_{r-1}}$ will be twist equivalent to a cuspidal isobaric component $\eta'$ of $\pi'_{L_{r-1}}$. More precisely, we will need to have, for some integer $j$ mod $p$, $$\eta' \, \simeq \, \eta\otimes\delta_r^j,\leqno(6.2)$$ where $\delta_r$ is the character of order $p$ of (the idele classes of) $L_{r-1}$ attached to $L_r$. But the conductor of $\delta_r$ is divisible by the prime $P_r$, whose norm down to $E$ is, by construction, relatively prime to the conductors of $\pi_E$ and $\pi'_E$ and to the discriminant of $L_{r-1}/E$. This forces $j=0$, i.e., $\eta \simeq \eta'$. Peeling off this way isomorphic cuspidal components of $\pi_{L_{r-1}}$ and $\pi'_{L_{r-1}}$ one at a time, we conclude that $\pi_{L_{r-1}}$ is isomorphic to $\pi'_{L_{r-1}}$. Next, by an easy induction on $r-j$, we deduce similarly that, for every $j\in \{0, \dots, r-1\}$, $$\pi_{L_j} \, \simeq \, \pi'_{L_j},\leqno(6.3)$$ which proves the assertion of Theorem A.
[**Proof of descent when $K\ne E$**]{}
For each $i\in\{1, \dots, p\}$, we may modify the elements $\alpha_j^{(i)}$ and thus the fields $L_j^{(i)}$ as above, with a new prime divisor $P_j^{(i)}$ of the discriminant of $L_j^{(i)}/L_{j-1}^{(i)}$ popping up at stage $j$, which is prime to the conductors of $\pi_E$, $\pi'_E$, and the discriminant of $L_{j-1}^{(i)}/E$. Now we may, and we will, also choose these primes in such a way that the sets $\{P_1^{(i)}, \dots, P_r^{(i)}\}$ and $\{P_1^{(k)}, \dots, P_r^{(k)}\}$ are disjoint whenever $i \ne k$. Now we may realize $L$ as a sequence of cyclic $p$-extensions, such that at each stage there is a new prime divisor of the relative discriminant. We may then descend each step as above and finally conclude that $$\pi_E \, \simeq \, \pi'_E,\leqno(6.4)$$ as asserted.
Descent to $K(\mu_p)$
=====================
As before, we may assume that $\mu_p \subset k\subset K$. If $\mu_{p^2}\subset K$, i.e., if $E=K$, then we have already seen above that we have an isomorphism $\pi\simeq \pi'$ over $K$.
So we may, and we will, assume below that $K\ne E$. Then $$E=KF, \, k=K\cap F, \, \, \, {\rm where} \, \, F=k(\mu_{p^2}),\leqno(7.1)$$ with $$[E:F]=[K:k]=[E:K]=[F:k]=p,$$ and by section 6, $$\pi_E \, \simeq \, \pi'_E.\leqno(7.2)$$ This implies that if $v$ is any prime of $K$ which splits into $p$ primes $w_1, \dots, w_p$ in $E$, then by [@AC], we have ($\forall j\leq p$) $$\pi_v \, \simeq \, \pi_{w_j} \, \simeq \, \pi'_{w_j} \, \simeq \, \pi'_{v}.\leqno(7.3)$$
On the other hand, since $E/k$ is a $(p,p)$-extension, in particular not cyclic of order $p^2$, any prime $u$ of $k$ which is inert in $K$ must split in $E$ (assuming $u$ is unramified in $E$). This implies, thanks to (7.3), the following: $$\pi_v \, \simeq \, \pi'_v, \, \, \, \forall \, v\in \Sigma^p_{K/k}-{{\rm finite} \, \, \, {\rm set}}.\leqno(7.4)$$
When we combine (7.4) with the hypothesis of Theorem A that $$\pi_v \, \simeq \, \pi'_v, \, \, \forall \, v \in \Sigma^1_{K/k},\leqno(7.5)$$ we immediately get the desired isomorphism $$\pi \, \simeq \, \pi' \, \, ({\rm over} \, \, K).$$
We are now done with the proof of Theorem A. The assertion of Corollary B is obvious given Theorem A (since $\mu_2\subset {\mathbb Q}\subset K$).
Dinakar Ramakrishnan
253-37 Caltech
Pasadena, CA 91125, USA.
dinakar@caltech.edu
---
address: |
Peter Connor\
Department of Mathematical Sciences\
Indiana University South Bend\
1700 Mishawaka Ave\
South Bend\
IN 46634\
USA\
pconnor@iusb.edu
author:
- Peter Connor
bibliography:
- 'minlit.bib'
title: A vase of catenoids
---
[Abstract. ]{} In this note we construct a vase of catenoids: a symmetric immersed minimal surface with planar and catenoid ends.
[2000 *Mathematics Subject Classification*. Primary 53A10; Secondary 49Q05, 53C42. ]{}
[*Key words and phrases*. Minimal surface, catenoid. ]{}
Introduction
============
The building blocks for minimal surfaces with finite total curvature and embedded ends are planes and catenoids. One can consider the possible arrangements of planar and catenoid ends that yield a minimal surface. This note proves the existence of two beautiful families of minimal surfaces on punctured spheres. The first, which we call a vase of catenoids, has a horizontal planar end, a downward pointing catenoid end with vertical normal, and symmetrically placed upward pointing catenoid ends with non-vertical normals. See figure \[figure:kvase\].
![Vase of catenoids[]{data-label="figure:kvase"}](kvase){height="3.45in"}
The second is a variation of the vase of catenoids. Imagine taking two copies of a vase of catenoids. Cut off the bottom catenoid end on each copy, and glue the copies together along the resulting closed curves. See figure \[figure:doublekvase\] for the resulting surface. It has two horizontal planar ends, symmetrically placed downward pointing catenoid ends with non-vertical normals, and symmetrically placed upward pointing catenoid ends with non-vertical normals.
![Two vases glued together[]{data-label="figure:doublekvase"}](doublekvase){height="3.45in"}
The construction of these surfaces was inspired by the Finite Riemann minimal surface constructed in [@we5] with a horizontal planar end together with two catenoid ends with non-vertical normals, and also by the k-noid surface constructed by Jorge and Meeks [@jm1] with catenoid ends at each root of unity with horizontal normals in the direction of that root of unity. None of these surfaces are embedded. By the Lopez-Ros theorem [@lor1], the plane and catenoid are the only complete embedded minimal surfaces on punctured spheres.
Weierstrass Representation
==========================
We use the Weierstrass Representation of minimal surfaces on a punctured sphere, which may be written as $$X(z)={\operatorname{Re}}\int_{z_0}^z\left(\frac{1}{2}\left(\frac{1}{G}-G\right)dh,\frac{i}{2}\left(\frac{1}{G}+G\right)dh,dh\right)$$ where $z,z_0\in\Sigma=\overline{{\mathbb{C}}}-\{p_1,p_2,\ldots,p_n\}$. The points $p_1,p_2,\ldots,p_n$ are the ends of the surface, $G$ is the composition of stereographic projection with the Gauss map, and $dh$ is a meromorphic one-form called the height differential. A good reference for the Weierstrass representation is [@os1]. One issue is that $X$ depends on the path of integration. The map $X:\Sigma\rightarrow{\mathbb{R}}^3$ is well defined provided that $${\operatorname{Re}}\int_\gamma\left(\frac{1}{2}\left(\frac{1}{G}-G\right)dh,\frac{i}{2}\left(\frac{1}{G}+G\right)dh,dh\right)=(0,0,0)$$ for all closed curves $\gamma$ in $\Sigma$. This is called the period problem, and it can be expressed as $$\text{Res}_{p_j}\left(\left(\frac{1}{G}-G\right)dh\right)\in{\mathbb{R}}, \text{Res}_{p_j}\left(i\left(\frac{1}{G}+G\right)dh\right)\in{\mathbb{R}}, \text{Res}_{p_j}\left(dh\right)\in{\mathbb{R}}$$ for $j=1,2,\ldots,n$.
A second issue is that we want $X$ to be regular. This is ensured by requiring that $G$ has either a zero or pole at $p\in\Sigma$ if and only if $dh$ has a zero at $p$ with the same multiplicity.
Constructions
=============
One can use the desired geometry of a minimal surface to create potential Weierstrass data. A vertical normal at $p_j$ corresponds to $G(p_j)=0$ (downward pointing normal) or $G(p_j)=\infty$ (upward pointing normal). If $X$ has a horizontal planar end at $p_j$ then $G$ has a zero or pole of order $k+1$ and $dh$ has a zero of order $k-1$ at $p_j$, for some positive integer $k$. If $X$ has a catenoid end at $p_j$ with vertical normal then $G$ has a simple pole or zero and $dh$ has a simple pole at $p_j$. If $X$ has a catenoid end at $p_j$ with non-vertical normal then $G$ has neither a pole nor a zero and $dh$ has a double order pole at $p_j$.
When the domain is a punctured sphere, the total order of the zeros of $G$ equals the total order of its poles, and the total order of the zeros of $dh$ is two less than the total order of its poles.
We can use the images of our desired surfaces to construct the Weierstrass data $G$ and $dh$. For the vase of catenoids, place the horizontal planar end at $z=\infty$ and the downward pointing catenoid end with vertical normal at $z=0$. Place the upward pointing catenoid ends with non-vertical normals at each root of unity. Fix $G(0)=0$. Then $G(\infty)=\infty$. There will also be a point on each catenoid at the roots of unity with vertical downward pointing normal. In keeping with the symmetry of the surface, fix $$G\left(ae^{i2\pi j/k}\right)=0$$ for $j=0,1,\ldots,k-1$. Let $G(z)$ have simple zeros at $z=0$ and the roots of $z^k=a^k$, with $a\in{\mathbb{R}}$. Then, $G$ has a pole of order $k+1$ at $z=\infty$, and $$G(z)=\rho z(z^k-a^k).$$ The height differential has simple zeros at the roots of $z^k=a^k$, double order poles at the $k$-th roots of unity, and a simple pole at $z=0$. This forces $dh$ to have a zero of order $k-1$ at $z=\infty$, and $$dh=\frac{a^k-z^k}{z(z^k-1)^2}dz.$$
For each positive integer $k>1$ and real number $a\in(0,1)$ there exists a $\rho>0$ such that $$\begin{split}
G(z)&=\rho z(z^k-a^k)\\
dh&=\frac{a^k-z^k}{z(z^k-1)^2}dz
\end{split}$$ is the Weierstrass data for a minimal surface with a horizontal planar end at $z=\infty$, a downward pointing vertical catenoid end at $z=0$, and upward pointing non-vertical catenoid ends at the roots of unity.
All that remains is to solve the period problem. As the residues of $dh$ are all real, $dh$ has no periods. The residues of $Gdh$ and $1/Gdh$ are zero at $z=0$ and $z=\infty$. Assuming $\rho\in{\mathbb{R}}$, the symmetries of the surface we are constructing reduce the period problem to the equation $$0=\text{Res}_1\left(\left(\frac{1}{G}+G\right)dh\right)=\frac{\rho(a^k-1)(ka^k+k-a^k+1)}{k^2}+\frac{k+1}{\rho k^2}$$ which is solved when $$\rho=\sqrt{\frac{k+1}{(1-a^k)(ka^k+k-a^k+1)}}.$$
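The algebra above can be checked numerically. The following Python sketch (ours, not part of the paper, with arbitrary sample values $k=3$, $a=0.5$) verifies that the stated $\rho$ makes the residue expression vanish.

```python
import math

# Sanity check (ours, not from the paper): the stated rho annihilates the
# residue expression for the vase of catenoids, here at k = 3, a = 0.5.
def rho(k, a):
    return math.sqrt((k + 1) / ((1 - a**k) * (k * a**k + k - a**k + 1)))

def residue(k, a, r):
    # value of Res_1((1/G + G) dh) as given in the text
    return r * (a**k - 1) * (k * a**k + k - a**k + 1) / k**2 \
        + (k + 1) / (r * k**2)

print(residue(3, 0.5, rho(3, 0.5)))  # vanishes up to round-off
```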
Examining figure \[figure:doublekvase\], the second family has horizontal planar ends at $z=0$ and $z=\infty$, downward pointing catenoid ends with non-vertical normal at the roots of $z^k=b^k$, and upward pointing catenoid ends with non-vertical normal at the roots of $z^k=1/b^k$. The Gauss map is $0$ at the roots of $z^k=a^k$ and $\infty$ at the roots of $z^k=1/a^k$. The height differential $dh$ has double order poles at the roots of $z^k=b^k$ and $z^k=1/b^k$ and simple zeros at the roots of $z^k=a^k$ and $z^k=1/a^k$. In order for the surface to have horizontal planar ends at $0$ and $\infty$, we need $dh$ to have zeros at $0$ and $\infty$. Thus, set $dh$ with zeros of order $k-1$ at $0$ and $\infty$. This forces $G$ to have a zero of order $k+1$ at $z=0$ and a pole of order $k+1$ at $z=\infty$. Hence, $$G(z)=\frac{\rho z^{k+1}(z^k-a^k)}{a^kz^k-1}$$ and $$dh=\frac{b^{2k}z^{k-1}(z^k-a^k)(a^kz^k-1)}{a^k(z^k-b^k)^2(b^kz^k-1)^2}dz.$$ If $\rho=1$ then, similar to the first example, the period problem reduces to a single equation.
For each positive integer $k>1$ and real number $b\in(0,1)$ there exists an $a>0$ such that $$\begin{split}
G(z)&=\frac{z^{k+1}(z^k-a^k)}{a^kz^k-1}\\
dh&=\frac{b^{2k}z^{k-1}(z^k-a^k)(a^kz^k-1)}{a^k(z^k-b^k)^2(b^kz^k-1)^2}dz
\end{split}$$ is the Weierstrass data for a minimal surface with horizontal planar ends at $z=0$ and $z=\infty$, upward pointing non-vertical catenoid ends at solutions to $z^k=b^k$, and downward pointing non-vertical catenoid ends at solutions to $z^k=1/b^k$.
As with the vase of catenoids, the symmetries of the surface reduce the period problem to the equation
$$\begin{split}
0=&\text{Res}_b\left(\left(\frac{1}{G}+G\right)dh\right)\\
=&\frac{b^{2k}\left(k-1+b^{2+2k}(k-1)+(b^2+b^{2k})(k+1)\right)a^{2k}}{a^kb(b^k-1)^3(b^k+1)^3k^2}\\
&+\frac{2b^k\left(1+b^{2+4k}-(b^{2k}+b^{2+2k})(2k+1)\right)a^k}{a^kb(b^k-1)^3(b^k+1)^3k^2}\\
&+\frac{-k-1+b^{2k}+b^{2+4k}-b^{2+6k}+3kb^{2k}+3kb^{2+4k}-kb^{2+6k}}{a^kb(b^k-1)^3(b^k+1)^3k^2}
\end{split}$$
which is solved when
[$$a=\sqrt[k]{\frac{-1-b^{2+4k}+(b^{2k}+b^{2+2k})(2k+1)+(1-b^{2k})\sqrt{k^2+b^2(1-b^{2k})^2(2k+1)+k^2b^2(1+b^{4k}+b^{2+4k})}}{b^k(k-1+b^{2+2k}(k-1)+(b^2+b^{2k})(k+1))}}$$]{}
and $0<b<1$. For example, $a\approx 3.97667$ when $b=0.25$ and $k=6$.
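The sample value quoted above can be reproduced by evaluating the closed-form expression for $a$ directly; the short Python sketch below is our transcription of that formula.

```python
import math

# Our transcription of the closed-form expression for a, evaluated at the
# paper's sample point b = 0.25, k = 6.
def solve_a(k, b):
    s = math.sqrt(k**2 + b**2 * (1 - b**(2*k))**2 * (2*k + 1)
                  + k**2 * b**2 * (1 + b**(4*k) + b**(2 + 4*k)))
    num = (-1 - b**(2 + 4*k) + (b**(2*k) + b**(2 + 2*k)) * (2*k + 1)
           + (1 - b**(2*k)) * s)
    den = b**k * (k - 1 + b**(2 + 2*k) * (k - 1)
                  + (b**2 + b**(2*k)) * (k + 1))
    return (num / den)**(1.0 / k)

print(solve_a(6, 0.25))  # approx 3.97667, matching the value in the text
```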
---
abstract: 'In cloud radio access network (C-RAN), remote radio heads (RRHs) and users are uniformly distributed in a large area such that the channel matrix can be considered as sparse. Based on this phenomenon, RRHs only need to detect the relatively strong signals from nearby users and ignore the weak signals from far users, which is helpful to develop low-complexity detection algorithms without causing much performance loss. However, before detection, RRHs need to obtain the real-time user activity information through the dynamic grant procedure, which causes enormous latency. To address this issue, in this paper, we consider a grant-free C-RAN system and propose a low-complexity Bernoulli-Gaussian message passing (BGMP) algorithm based on the sparsified channel, which jointly detects the user activity and signal. Since active users are assumed to transmit Gaussian signals at any time, the user activity can be regarded as a Bernoulli variable and the signals from all users obey a Bernoulli-Gaussian distribution. In the BGMP, the detection functions for signals are designed with respect to the Bernoulli-Gaussian variable. Numerical results demonstrate the robustness and effectiveness of the BGMP. That is, for different sparsified channels, the BGMP can approach the mean-square error (MSE) of the genie-aided sparse minimum mean-square error (GA-SMMSE) estimator, which exactly knows the user activity information. Meanwhile, the fast convergence and strong recovery capability for user activity of the BGMP are also verified.'
author:
- |
\
State Key Lab of ISN, Xidian University, China, Singapore University of Technology and Design, Singapore, City University of Hong Kong, China, Doshisha University, Kyoto, Japan,\
Nanyang Technological University, Singapore [^1]
title: 'Message Passing in C-RAN: Joint User Activity and Signal Detection'
---
C-RAN, Bernoulli-Gaussian, message passing, user activity and signal detection.
Introduction
============
To support massive data demands in wireless communications, cloud radio access network (C-RAN) emerges as a candidate for the next generation network architecture, which can significantly improve spectral efficiency and energy efficiency [@C-ran; @FanMaga; @Zuo]. Unlike traditional multiuser multiple-input multiple-output (MU-MIMO) systems, C-RAN consists of hundreds of remote radio heads (RRHs) deployed in a large area and a pool of baseband units (BBUs) centralized in a data cloud center. All RRHs collect signals from users and forward them to the BBUs for signal recovery.
In order to reliably recover signals with low complexity, a promising detection method is the message passing algorithm based on a factor graph [@Loeliger2006; @Donoho2009; @GAMP; @Chongwen], which transforms the optimal cost function for signal recovery into iterative calculations among the nodes of the factor graph. For different networks, the message passing algorithm needs to be specially designed. In C-RAN, affected by the path loss, the signals from far users are very weak when they arrive at the RRHs, which results in a nearly sparse channel. Authors in [@Fan_Dynamic2016] proved that the C-RAN channel could be sparsified without causing much performance loss, where RRHs only needed to detect the relatively strong signals from nearby users and ignored the weak signals from far users. The channel sparsification is helpful for developing low-complexity message passing algorithms. Nevertheless, due to the different statistical distributions of the channels, the Gaussian message passing (GMP) algorithms proposed for MU-MIMO [@Lei2015; @Lei2016; @Lei20162] cannot be directly extended to the C-RAN. As a result, authors in [@Fan2015; @Fan2016] proposed a sparse message passing algorithm for the C-RAN with the sparsified channel [@Fan_Dynamic2016].
However, in the above works [@Lei2015; @Lei2016; @Lei20162; @Fan_Dynamic2016; @Fan2015; @Fan2016], receivers are assumed to exactly know the real-time user activity information and then detect the signals of the active users. In practice, the user activity information is obtained by a complicated grant procedure. When the number of users is large and the activity of each user changes at any time, the dynamic grant procedure causes enormous latency. To address this issue, authors in [@XXu2015; @Utkovski2017] considered a grant-free C-RAN system and proposed a modified Bayesian compressive sensing algorithm and a hybrid generalized approximate message passing (GAMP) algorithm, respectively, to estimate the channel state information and user activity. However, in [@XXu2015; @Utkovski2017], the channel models do not take into account the geographical distributions of RRHs and users, and these algorithms do not consider the signal recovery for active users.
In this paper, we consider joint user activity and signal detection over the grant-free C-RAN. Since the activity of each user changes at any time, the user activity can be regarded as a Bernoulli variable at the RRHs. Moreover, we assume that active users transmit Gaussian signals and the transmissions of inactive users can be treated as zeros for the RRHs. Statistically, the signals from all users obey a Bernoulli-Gaussian distribution. Therefore, based on the sparsified channel and corresponding factor graph, we propose a low-complexity Bernoulli-Gaussian message passing (BGMP) algorithm to jointly detect the user activity and signal. In the BGMP, the messages passed among nodes and the relevant update functions at the nodes are associated with the Bernoulli-Gaussian variable. Numerical results demonstrate the robustness and effectiveness of the BGMP. That is, for different sparsified channels, the BGMP can approach the mean-square error (MSE) of the genie-aided sparse minimum mean-square error (GA-SMMSE) estimator, which exactly knows the user activity information. Moreover, the fast convergence and strong recovery capability for user activity of the BGMP are also verified.
System Model
============
![Illustration of an uplink grant-free C-RAN system, where RRHs and users are uniformly located over a large coverage area.[]{data-label="Model"}](Model.pdf "fig:"){width="0.8\columnwidth"}\
Figure \[Model\] shows an uplink grant-free C-RAN system with $M$ RRHs and $K$ users uniformly located over a large coverage area, where active users transmit signals at any time without the complicated grant procedure. Each RRH has $N$ antennas and each user has one antenna. Signal ${{\bm{y}}^m} \in {\mathcal{R}}^{{N} \times {\rm{1}}}$ arrived at the $m$-th RRH is $$\label{rev}
{{\bm{y}}^m}=P^{\frac{1}{2}}{\bm{H}}^m {\bm{x}}+{\bm{z}}^m,\quad m=1, ..., M $$ where ${\bm{H}}^m$ $\in$ ${\mathcal{R}}^{N \times K}$ denotes the channel matrix from $K$ users to the $m$-th RRH, $P$ is the transmit power allocated to each user, ${\bm{x}}$ $\in$ ${\mathcal{R}}^{K \times {\rm{1}}}$ is the transmitted signal from $K$ users, and ${\bm{z}}^m$$\in$ ${\mathcal{R}}^{N \times {\rm{1}}}$ is a Gaussian noise vector obeying $\mathcal{N}(0, \sigma_{n}^{2}\bm{I}_{N})$ with an $N\times N$ identity matrix $\bm{I}_{N}$. The $(n, k)$-th entry $h^m_{n,k}$ of ${\bm{H}}^m$ is assumed as $\gamma^m_{n,k}d_{m,k}^{-{\alpha}}$, where $\gamma^m_{n,k}$ is an independent and identically distributed (i.i.d.) fading coefficient obeying ${\mathcal{N}}(0,1/K)$, $d_{{m,k}}$ is the geographic distance between the $k$-th user and the $m$-th RRH, and $\alpha$ is a path loss exponent. Note that $d_{m,k}^{-{\alpha}}$ denotes the path loss from the $k$-th user to the $m$-th RRH. Here, we assume that the $m$-th RRH perfectly knows channel state information ${\bm{H}}^m$.
Due to the effect of path loss, the received signals from far users are drastically degraded such that RRHs can ignore the detections for far users without causing much performance loss [@Fan_Dynamic2016]. The channel sparsification can provide a sparse factor graph to develop low-complexity message passing algorithms [@Fan2015; @Fan2016]. Therefore, as [@Fan2015; @Fan2016], we set a distance threshold $d_0$ to sparsify the channel in Fig. \[Model\]. Specifically, the $(n, k)$-th entry ${\hat{h}}^{m}_{n,k}$ of sparsified channel matrix ${\bm{\hat{H}}}^m$ is $$\nonumber
{\hat{h}}^{m}_{n,k} = \left\{
{\begin{array}{*{20}c}
\!\!\!{h}^m_{n,k},\quad \;{d_{m,k} < d_0}, \\
{0,\quad \quad\rm{otherwise}.}
\end{array}} \right.$$ Then, Eq. (\[rev\]) is rewritten as $$\begin{aligned}
\nonumber{\bm{y}}^m&=P^{\frac{1}{2}}{\bm{\hat{H}}}^m{\bm{x}}+P^{\frac{1}{2}}{\bm{\tilde{H}}}^m{\bm{x}}+{\bm{z}}^m \\ \label{newRev}
&=P^{\frac{1}{2}}{\bm{\hat{H}}}^m{\bm{x}}+\bm{\eta}^m\rm{,}\end{aligned}$$ where ${\bm{\tilde{H}}}^m$$=$${\bm{H}}^m$$-{\bm{\hat{H}}}^m$ and $\bm{\eta}^m$$=$$P^{\frac{1}{2}}$${\bm{\tilde{H}}}^m$${\bm{x}}$$+{\bm{z}}^m$ is an interference vector of length $N$. The variance of the $n$-th entry $\eta^m_n$ of $\bm{\eta}^m$ is ${\sigma}^2_{mn}=PE[\sum_{k}|\tilde{h}^m_{n,k}x_k|^2]+\sigma_{n}^{2}$, where $\tilde{h}^m_{n,k}$ is the ($n$, $k$)-th entry of ${\bm{\tilde{H}}}^m$, $x_k$ is the $k$-th entry of $\bm{x}$, $k=1, ..., K$ and $n=1, ..., N$.
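The distance-based sparsification rule above can be illustrated with a minimal numpy sketch (our notation; the sizes $N$, $K$, the path-loss exponent $\alpha$, the threshold $d_0$, and the distance values are arbitrary sample choices, not from the paper). Columns of ${\bm{H}}^m$ belonging to users beyond $d_0$ are simply zeroed.

```python
import numpy as np

# Illustrative sketch (ours) of distance-based channel sparsification:
# entries of H^m for users farther than d0 from this RRH are set to zero.
rng = np.random.default_rng(0)
N, K = 4, 6                     # antennas per RRH, users (sample sizes)
alpha, d0 = 3.5, 1.0            # path-loss exponent and distance threshold
d = np.array([0.3, 0.6, 0.9, 1.2, 1.5, 1.8])       # distances d_{m,k}
gamma = rng.standard_normal((N, K)) / np.sqrt(K)   # fading ~ N(0, 1/K)
H = gamma * d**(-alpha)         # full channel H^m with path loss
H_hat = np.where(d < d0, H, 0.0)  # sparsified channel \hat{H}^m
```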
Note that in the grant-free C-RAN, RRHs cannot obtain the user activity information in advance. Thus, the user activity can be regarded as a Bernoulli variable at the RRHs. Moreover, we assume that active users transmit Gaussian signals. The transmissions of inactive users can be treated as zeros for RRHs. Statistically, entries of ${\bm{x}}$ obey a Bernoulli-Gaussian distribution. That is, $$\nonumber
x_k = \left\{
{\begin{array}{*{20}c}
{0,\quad \quad \quad\quad\;{\rm{with~probability}}\; 1-\rho, } \\
{\!\!\!\!\!\!\!\!\!\!\!{\mathcal{N}}(0,\rho^{-1} ),\;\;\;{\rm{with~probability }}\;\rho },
\end{array}} \right.$$ where $0 < \rho < 1$ is the probability of user activity and the power of $x_k$ is normalized to $1$. Our goal is to develop a low-complexity message passing algorithm based on the sparsified channel, which jointly detects the user activity and signal.
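The Bernoulli-Gaussian prior can be sampled directly; the following sketch (ours, with an arbitrary sample $\rho$) confirms empirically that the normalization $E[x_k^2]=1$ holds.

```python
import numpy as np

# Sampling the Bernoulli-Gaussian prior (ours): x_k = lambda_k * g_k with
# lambda_k ~ Bernoulli(rho) and g_k ~ N(0, 1/rho), so E[x_k^2] = 1.
rng = np.random.default_rng(1)
rho, K = 0.3, 200_000
lam = rng.random(K) < rho                    # user activity, Bernoulli(rho)
g = rng.standard_normal(K) / np.sqrt(rho)    # Gaussian part, variance 1/rho
x = lam * g                                  # Bernoulli-Gaussian signal
print(np.mean(x**2))  # close to 1
```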
Bernoulli-Gaussian Message Passing
==================================
![Factor graph of the grant-free C-RAN with the sparsified channel. There are $M$ multiple-antenna RRHs in which each RRH has $N$ antennas denoted as sum nodes, and $K$ single-antenna users denoted as variable nodes. Bernoulli vector ${\bm{\lambda}}=[{\lambda_1, ..., \lambda_K}]^T$ and Gaussian signal vector ${\bm{g}}=[g_1, ..., g_K]^T$ are denoted as Bernoulli and Gaussian nodes respectively.[]{data-label="FG"}](FG.pdf "fig:"){width="1\columnwidth"}\
{width="1.3\columnwidth"}\
To identify active users and recover their signals, we propose a Bernoulli-Gaussian message passing (BGMP) algorithm for the C-RAN with the sparsified channel. To simplify the analysis, Bernoulli-Gaussian vector $\bm{x}$ is transformed into the componentwise product of a Bernoulli vector ${\bm{\lambda}}$ obeying i.i.d. $\mathcal{B}(1,\rho\bm{I}_K)$ and a Gaussian vector ${\bm{g}}$ obeying i.i.d. $N(0, \rho^{-1}\bm{I}_K)$, i.e., $${\bm{x}}={\bm{\lambda}} \circ \bm{g},
\vspace{-1mm}$$ where ${\bm{\lambda}}$ and ${\bm{g}}$ are independent of each other and $\circ$ refers to the element-wise multiplication. Thus, the recovery for $\bm{x}$ is transformed into the joint recovery for $\bm{\lambda}$ and $\bm{g}$. Fig. \[FG\] shows the factor graph of the C-RAN, where antennas of all RRHs, users, $\bm{\lambda}$, and $\bm{g}$ are denoted as sum, variable, Bernoulli, and Gaussian nodes respectively.
As with the signal detection in conventional message passing algorithms, such as the GMP algorithm [@Lei2015] and belief propagation (BP) decoding of LDPC codes [@LDPC], in the proposed BGMP algorithm we decompose the global calculation based on the full channel matrix into many local calculations at the nodes of the factor graph. This effectively reduces the computational complexity. Note that the messages in GMP or BP decoding relate to Gaussian or discrete signals. Different from GMP and BP decoding, the messages in the BGMP are associated with both Bernoulli and Gaussian signals. Fig. \[MessPass\] illustrates the update processes of messages between sum nodes and variable nodes in the BGMP algorithm. To be specific, we present the message updates at the sum and variable nodes as follows.
Bernoulli-Gaussian Message Update at Sum Node
---------------------------------------------
For simplicity, we consider the detection for user $k$ at the $m$-th RRH, where user $k$ is nearby the $m$-th RRH, i.e., ${\hat{h}}^{m}_{n, k}=1, n=1, ..., N$. Fig. \[MessPass\](a) shows the update process for messages passing from the $n$-th sum node of the $m$-th RRH to the $k$-th variable node. Then, we rewrite Eq. (\[newRev\]) as $$\begin{aligned}
\nonumber
y^m_n &=\hat{h}^m_{n,k} \lambda_k g_k+ \sum_{i \in \mathcal{K}\setminus k} \hat{h}^m_{n,i} \lambda_i g_i + \eta^m_n \\ \nonumber
&=\hat{h}^m_{n,k} \lambda_k g_k+ {\hat{\eta}}^m_{nk},\end{aligned}$$ where ${\hat{\eta}}^m_{nk}=\sum_{i \in \mathcal{K}\setminus k}\hat{h}^m_{n,i} \lambda_i g_i + \eta^m_n$, $\mathcal{K}=\{1, ..., K\}$, and $i \in \mathcal{K} \setminus k$ denotes that $i \in \mathcal{K}$ and $i \neq k$. Due to independent transmissions of all users, based on the central limit theorem, ${\hat{\eta}}^m_{nk}$ can be regarded as a Gaussian variable with mean $e^m_{nk}$ and variance $v^m_{nk}$. At the $t$-th iteration, $$\begin{aligned}
\label{ETAM}
\lefteqn{\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!
e^m_{nk}(t)=E[{\hat{\eta}}^m_{nk}(t)]=\sum_{i \in \mathcal{K}\setminus k}\hat{h}^m_{n,i}p^m_{i\rightarrow n}(t) e^m_{i\rightarrow n}(t),}\\
\label{ETAV}
\lefteqn{\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!
v^m_{nk}(t)=Var[{\hat{\eta}}^m_{nk}(t)]}\end{aligned}$$ $$\nonumber
\resizebox{1\hsize}{!}{$=\sum_{i \in \mathcal{K}\setminus k}(\hat{h}^m_{n,i})^2 p^m_{i\rightarrow n}(t)\big(v^m_{i\rightarrow n}(t)+
(1-p^m_{i\rightarrow n}(t))e^m_{i\rightarrow n}(t)^2\big) + {\sigma}^2_{mn},$}$$ where $E[a]$ and $Var[a]$ denote the expectation and variance of variable $a$, $e^m_{i\rightarrow n}(t)$ and $v^m_{i\rightarrow n}(t)$ are the mean and variance of $g_i$, and $p^m_{i\rightarrow n}(t)$ is the non-zero probability of $\lambda_i$. These input messages associated with $g_i$ and $\lambda_i$ are from the $i$-th variable node. Based on these priori inputs, the $n$-th sum node of the $m$-th RRH outputs mean $e^m_{n\rightarrow k}(t)$ and variance $v^m_{n\rightarrow k}(t)$ for $g_k$, and non-zero probability $p^m_{n\rightarrow k}(t)$ for $\lambda_k$, which are sent to the $k$-th variable node.
*1) Gaussian message update for $g_k$$\sim$${\mathcal{N}}$$($$e^m_{n\rightarrow k}(t)$, $v^m_{n\rightarrow k}$$(t))$*: $$\begin{aligned}
\nonumber
\lefteqn{\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!
e^m_{n\rightarrow k}(t)=E[g_k|y^m_n, {\hat{\eta}}^m_{nk}, \lambda_k=1]}\\ \label{mUpSum}
\lefteqn{\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!
=(\hat{h}^m_{n,k})^{-1}(y^m_n-e^m_{nk}(t)),}\end{aligned}$$ $$\begin{aligned}
\nonumber
\lefteqn{\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!
v^m_{n\rightarrow k}(t)=Var[g_k|y^m_n, {\hat{\eta}}^m_{nk}, \lambda_k=1]} \\ \label{vUpSum}
\lefteqn{\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!
=(\hat{h}^m_{n,k})^{-2}v^m_{nk}(t),}\end{aligned}$$ where $E[a|b]$ and $Var[a|b]$ denote the conditional expectation and variance of variable $a$ when given variable $b$ and Eq.(\[mUpSum\]) and Eq.(\[vUpSum\]) are derived from the fact that $\lambda_k$ and $g_k$ are independent of each other. Let initial mean vector ${{\bm{e}}^m_n}(0)=[e^m_{1\rightarrow n}(0), ..., e^m_{K\rightarrow n}(0)]^T$ and variance vector ${\bm{v}}^m_n(0)=[v^m_{1\rightarrow n}(0), ..., v^m_{K\rightarrow n}(0)]^T$ be $\bm{0}$ and $+\bm{\infty}$ respectively, where $\bm{0}$ and $+\bm{\infty}$ denote the vector forms of $0$ and $+ \infty$.
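The Gaussian part of the sum-node update can be sketched compactly: the interference from all users other than $k$ is summarized by the mean and variance of Eqs. (\[ETAM\])–(\[ETAV\]), then cancelled as in Eqs. (\[mUpSum\])–(\[vUpSum\]). The following Python sketch is our own illustration, not the authors' implementation; all variable names are ours.

```python
import numpy as np

# Sketch (ours) of the sum-node Gaussian update at antenna n of RRH m.
# h, p, e, v: length-K arrays of channel entries, activity probabilities,
# and input means/variances of g from the variable nodes.
def sum_node_update(y_n, h, p, e, v, sigma2, k):
    mask = np.arange(len(h)) != k
    # interference mean and variance, excluding user k
    e_nk = np.sum(h[mask] * p[mask] * e[mask])
    v_nk = np.sum(h[mask]**2 * p[mask]
                  * (v[mask] + (1 - p[mask]) * e[mask]**2)) + sigma2
    # output Gaussian message for g_k: cancel the interference mean
    return (y_n - e_nk) / h[k], v_nk / h[k]**2
```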
*2) Bernoulli message update for $\lambda_k$*: $$\begin{aligned}
\nonumber \lefteqn{\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!
p^m_{n\rightarrow k}(t)=\big[{1+\frac{P(y^m_n|\lambda_k=0,{\hat{\eta}}^m_{nk})}
{P(y^m_n|\lambda_k=1,{\hat{\eta}}^m_{nk})}}\big]^{-1}} \\ \label{pUpsum}
\lefteqn{\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!
{=\frac{1}{1+\frac{f(y^m_n,~e^m_{nk}(t),~v^m_{nk}(t))}{f(y^m_n,~\hat{h}^m_{n,k}{e}^m_{k \rightarrow n}(t)+e^m_{nk}(t),~(\hat{h}^m_{n,k})^2
{v}^m_{k \rightarrow n}(t)+v^m_{nk}(t))}}},
}\end{aligned}$$ where $f(y, a, b)$ is the Gaussian probability density function (PDF) of variable $y$ with mean $a$ and variance $b$. Let the initial non-zero probability vector ${{\bm{p}}^m_n}(0)=[p^m_{1\rightarrow n}(0), ..., p^m_{K\rightarrow n}(0)]^T$ be $0.5 \times {\bm{1}}$, where $\bm{1}$ denotes the all-ones vector.
Bernoulli-Gaussian Message Update at Variable Node {#BGVN}
--------------------------------------------------
As shown in Fig. \[MessPass\](b), we present the update process for messages passing from the $k$-th variable node to the $n$-th sum node of the $m$-th RRH. At first, let ${\bm{\bar{e}}}=[\bar{e}_1, ..., \bar{e}_K]^T$, ${\bm{\bar{v}}}=[\bar{v}_1, ..., \bar{v}_K]^T$ be the priori mean and variance of $\bm{g}$, and ${\bm{\bar{p}}}=[\bar{p}_1, ..., \bar{p}_K]^T$ be the priori non-zero probability of $\bm{\lambda}$. We assume that $\bar{e}_k=0$, $\bar{v}_k=\rho^{-1}$, and $\bar{p}_k=\rho$, $k \in \mathcal{K}$. Then, based on the estimated messages from all sum nodes, at the ($t+1$)-th iteration, the $k$-th variable node outputs mean $e^m_{k \rightarrow n}(t+1)$ and variance $v^m_{k \rightarrow n}(t+1)$ for $g_k$, and non-zero probability $p^m_{k \rightarrow n}(t+1)$ for $\lambda_k$, which are sent to the $n$-th sum node of the $m$-th RRH.
*1) Gaussian message update for $g_k$$\sim$${\mathcal{N}}$$($$e^m_{k\rightarrow n}(t+1)$, $v^m_{k\rightarrow n}$$(t+1))$*: According to the update rules of Gaussian message [@Lei2015; @Lei2016; @Lei20162], PDF of the output Gaussian message from a variable node is the normalized product of PDFs of the input Gaussian messages. Therefore, we can obtain $$\begin{aligned}
\nonumber \lefteqn{\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!
v^m_{k \rightarrow n}(t+1)=Var[g_k|{\bm{V}}^k_{\setminus\{m, n\}}(t), {\bar{v}_k}]} \\ \label{VUpVa}
\lefteqn{\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!
={\big[{\bar{v}}^{-1}_k\!\!+\!\!\!\!\sum_{\small{i\in \mathcal{M}\setminus m}}\sum_{j\in \mathcal{D}}v_{j \rightarrow k}^i(t)^{-1}\!\!+\!\!\!\sum_{d \in \mathcal{D}\setminus n}\!\!\!v_{d\rightarrow k}^m(t)^{-1}\big]^{-1},}} $$ $$\begin{aligned}
\nonumber \lefteqn{\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!
e^m_{k \rightarrow n}(t+1)=E[g_k|{\bm{V}}^k_{\setminus\{m, n\}}(t), {\bm{E}}^k_{\setminus\{m, n\}}(t), {\bar{v}_k, {\bar{e}}_k}]} \\ \label{mUpVa}
\lefteqn{\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!
=v^m_{k \rightarrow n}(t+1)\big[\frac{\bar{e}_k}{\bar{v}_k}\!+\!\!\!\!\!\sum_{i \in \mathcal{M}\setminus m}\sum_{j\in \mathcal{D}}\frac{e_{j \rightarrow k}^i(t)}{v_{j \rightarrow k}^i(t)}\!+\!\!\!\!\sum_{d \in \mathcal{D}\setminus n}\!\!\frac{e_{d \rightarrow k}^m(t)}{v_{d\rightarrow k}^m(t)}\big],}\end{aligned}$$ where ${\bm{V}}^k(t)$$=$$[v_{j \rightarrow k}^i(t)]_{MN \times 1}$ and ${\bm{E}}^k(t)=[e_{j \rightarrow k}^i(t)]_{MN \times 1}$ denote the variance and mean vectors associated with ${g}_k$ from all sum nodes, $i \in \mathcal{M}$, $j \in \mathcal{D}$, $\mathcal{M}$$=$$\{$$1,$ $...,$ $M\}$, $\mathcal{D}=\{1, ..., N\}$, and $\setminus\{m, n\}$ denotes that the term with $(i, j)=(m, n)$ is excluded.
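Underlying this variable-node update is the standard product rule for Gaussian PDFs: the output message is a precision-weighted combination of the prior and the incoming messages. A minimal sketch (ours, with our own function name):

```python
import numpy as np

# Product rule for Gaussian messages (ours): combining N(e_i, v_i) inputs
# yields variance 1/sum(1/v_i) and a precision-weighted mean.
def combine_gaussians(means, variances):
    prec = 1.0 / np.asarray(variances, dtype=float)
    v = 1.0 / prec.sum()                                   # combined variance
    e = v * np.sum(np.asarray(means, dtype=float) * prec)  # combined mean
    return e, v
```

In the BGMP the prior $(\bar{e}_k, \bar{v}_k)$ is simply one more input to this combination, with the message from the target sum node $(m, n)$ left out.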
*2) Bernoulli message update for $\lambda_k$*: By combining non-zero probability ${\bm{P}}^k(t)=[p_{j \rightarrow k}^i(t)]_{MN \times 1}$ associated with ${\lambda}_k$ from all sum nodes, where $i \in \mathcal{M}$ and $j \in \mathcal{D}$, we can obtain $$\begin{aligned}
\nonumber \lefteqn{\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!
p^m_{k \rightarrow n}(t+1)=\big[{1+\frac{P(\lambda_k=0|{\bm{P}}_{\setminus\{m, n\}}(t),{\bar{p}}_k)}{P(\lambda_k=1|{\bm{P}}_{\setminus\{m, n\}}(t),{\bar{p}}_k)}}\big]^{-1}} \\ \label{PUpVa}
\lefteqn{\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!
=\!\!\frac{1}
{1\!+\!
\frac{(1-\bar{p}_k) [\prod\nolimits_{i\in \mathcal{M}\setminus m}\prod\nolimits_{j\in \mathcal{D}}(1-p^i_{j \rightarrow k}(t))]\prod\nolimits_{d\in \mathcal{D}\setminus n}(1-p^m_{d \rightarrow k}(t))}
{{\bar{p}_k [\prod\nolimits_{i\in \mathcal{M}\setminus m}\prod\nolimits_{j\in \mathcal{D}}p^i_{j \rightarrow k}(t)]\prod\nolimits_{d\in \mathcal{D}\setminus n}p^m_{d \rightarrow k}(t)}}}.}\end{aligned}$$
Since the large number of probability multiplications in Eq. (\[PUpVa\]) can easily cause numerical underflow in the simulations, we transform the probability calculations into log-likelihood ratio (LLR) calculations by using the function $\mathcal{L}(p)={\rm{log}}\frac{p}{1-p}=-{\rm{log}}(p^{-1}-1)$. Specifically, we denote the LLR forms of $\bar{p}_k$ and $p^m_{k \rightarrow n}(t)$ as $\bar{{\ell}}_k=\mathcal{L}(\bar{p}_k)$ and ${\ell}^m_{k \rightarrow n}(t)=\mathcal{L}(p^m_{k \rightarrow n}(t))$ respectively. Then, Eq. (\[PUpVa\]) is transformed into $$\begin{aligned}
\label{LPUpVa}
\lefteqn{\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!
{\ell}^m_{k \rightarrow n}(t+1)=\bar{{\ell}}_k\!+\!\!\!\sum_{i\in \mathcal{M}\setminus m}\sum_{j\in \mathcal{D}}{\ell}^i_{j \rightarrow k}(t)\!+\!\!\sum_{d\in \mathcal{D}\setminus n}\!\!{\ell}^m_{d \rightarrow k}(t),}\end{aligned}$$ where initialization of ${{\bm{L}}^m_n}(0)$$=$$\mathcal{L}({{\bm{p}}^m_n}(0))$ is $\bm{0}$. Correspondingly, input message ${p}^m_{k\rightarrow n}(t)$ of Eq. (\[ETAM\]) and Eq. (\[ETAV\]) is equal to $[{{\rm{tanh}}(\frac{{\ell}^{m}_{{k} \rightarrow {n}}(t)}{2})+1}]/{2}$.
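As a sanity check, the LLR transform and its inverse used above can be sketched in a few lines (a minimal sketch; the helper names `llr` and `prob` are ours, not the paper's):

```python
import math

def llr(p):
    # LLR of a Bernoulli probability: L(p) = log(p / (1 - p))
    return math.log(p / (1.0 - p))

def prob(llr_value):
    # Inverse map used after Eq. (LPUpVa): p = (tanh(l/2) + 1) / 2
    return (math.tanh(llr_value / 2.0) + 1.0) / 2.0
```

Working with sums of LLRs instead of products of probabilities is exactly what avoids the underflow: the product of many small probabilities becomes a sum of moderate-sized log-ratios.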
Decision and Output of BGMP
---------------------------
The BGMP algorithm is performed iteratively between the sum nodes and variable nodes, where Eq. (\[mUpSum\])–Eq. (\[pUpsum\]) are the update functions for messages at sum nodes, and Eq. (\[VUpVa\])–Eq. (\[LPUpVa\]) are the update functions for messages at variable nodes. The iterative process stops when the preset maximum number of iterations is reached or the MSE requirement is satisfied. According to the message-passing rules [@LDPC; @Lei2015], the decision depends on the full messages at the Gaussian and Bernoulli nodes, which combine the a priori messages and the input messages from the sum nodes. The full messages of the mean and variance of $\bm{g}$ are denoted as $\bm{\tilde{e}}$ and $\bm{\tilde{v}}$ respectively, and those of the non-zero probability and the corresponding LLR of $\bm{\lambda}$ are denoted as $\bm{\tilde{{P}}}$ and $\bm{\tilde{{\ell}}}$ respectively. The $k$-th entries of $\bm{\tilde{e}}$, $\bm{\tilde{v}}$, $\bm{\tilde{{P}}}$, and $\bm{\tilde{{\ell}}}$, $k\in \mathcal{K}$, are $$\begin{aligned}
\label{tlv}
\tilde{v}_k &= \big[\bar{v}_k^{-1}+\sum_{i\in \mathcal{M}}\sum_{j\in \mathcal{D}} v^i_{j\rightarrow k}(t)^{-1}\big]^{-1}, \\ \label{tlm}
\tilde{e}_k &= \tilde{v}_k\big[{\bar{e}_k}{\bar{v}_k}^{-1}+\sum_{i\in \mathcal{M}}\sum_{j\in \mathcal{D}}{e_{j \rightarrow k}^i(t)}{v_{j \rightarrow k}^i(t)}^{-1}\big],\\ \label{tlL}
\tilde{\ell}_k &= \bar{\ell}_k+\sum_{i\in \mathcal{M}}\sum_{j\in \mathcal{D}}{\ell}^i_{j \rightarrow k}(t),\\
\label{tlp}
\tilde{p}_k &= \big[{\rm{tanh}}(\tilde{\ell}_k/2)+1\big]/2.\end{aligned}$$ Based on Eq. (\[tlv\])–Eq. (\[tlp\]), the $k$-th entry of the final estimation $\bm{\tilde{\lambda}}$ of $\bm{\lambda}$ is $$\nonumber
\tilde{\lambda}_k = \left\{ {\begin{array}{*{20}c}
{1,\quad {\rm{when}}\;\tilde{\ell}_k > 0}, \\
{0,\quad {\rm{when}}\;\tilde{\ell}_k \leq 0},
\end{array}} \right.$$ and the final estimation $\bm{\tilde{x}}$ of $\bm{x}$ is $\bm{\tilde{x}}= \bm{\tilde{\lambda}} \circ \bm{\tilde{{P}}}\circ \bm{\tilde{e}}$.
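The full-message combination in Eq. (\[tlv\])–Eq. (\[tlp\]) together with the hard decision can be sketched as follows (an illustrative NumPy sketch; the function and argument names are ours, and `E`, `V`, `L` stack the incoming sum-node messages row-wise, one row per sum node and one column per variable $k$):

```python
import numpy as np

def combine_and_decide(e_bar, v_bar, l_bar, E, V, L):
    """Full-message combination (Eqs. (tlv)-(tlp)) and hard decision."""
    v_t = 1.0 / (1.0 / v_bar + np.sum(1.0 / V, axis=0))   # tilde v_k, Eq. (tlv)
    e_t = v_t * (e_bar / v_bar + np.sum(E / V, axis=0))   # tilde e_k, Eq. (tlm)
    l_t = l_bar + np.sum(L, axis=0)                       # tilde ell_k, Eq. (tlL)
    p_t = (np.tanh(l_t / 2.0) + 1.0) / 2.0                # tilde p_k, Eq. (tlp)
    lam_t = (l_t > 0).astype(float)                       # hard decision on activity
    x_t = lam_t * p_t * e_t                               # element-wise product
    return x_t, lam_t
```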
Complete BGMP Algorithm
-----------------------
Now we present the complete BGMP algorithm. Assume $i\in\mathcal{M}$, $j\in\mathcal{D}$, and $k\in\mathcal{K}$. Let $\bm{\hat{H}}$$=$$[\bm{\hat{H}}^1,$ $...,$ $\bm{\hat{H}}^M]^T$$=$$[\hat{h}_{j,k}^i]_{MN\times K}$ be the overall sparsified channel matrix. We define $\mathcal{J}(i)$ as the set of neighbors of the $i$-th node, i.e., there is an edge connecting the $i$-th node and any $d$-th node, $d$$\in$$\mathcal{J}(i)$, such that $\hat{h}_{j, d}^i$$\neq$$0$. Moreover, let $\bm{y}$ $=$ $[y_{j}^i]_{MN\times \rm{1}}$, $\bm{E^{\eta}}(t)$ $=$ $[e^m_{nk}(t)]_{MN\times K}$, $\bm{V^{\eta}}(t)$ $=$ $[v^m_{nk}(t)]_{MN\times K}$, $\bm{\sigma^{\eta}}$$=$$[\sigma_{mn}^2]_{MN \times \rm{1}}$, $\bm{E}^s(t)=[e_{j \rightarrow k}^i(t)]_{MN \times K}$, $\bm{V}^s(t)$$=$$[v_{j \rightarrow k}^i(t)]_{MN \times K}$, $\bm{P}^s(t)$$=$$[p_{j \rightarrow k}^i(t)]_{MN \times K}$, $\bm{L}^s(t)$ $=$ $[\ell_{j \rightarrow k}^i(t)]_{MN \times K}$, $\bm{{E}}^v(t)$$=$$[e_{k \rightarrow j}^i(t)]_{K\times MN}$, $\bm{V}^v(t)$ $=$ $[v_{k \rightarrow j}^i(t)]_{K\times MN}$, $\bm{P}^v(t)$ $=$ $[p_{k \rightarrow j}^i(t)]_{K\times MN}$, $\bm{L}^v(t)$ $=$ $[\ell_{k \rightarrow j}^i(t)]_{K\times MN}$. The output of the function ${\text{sign}}(a)$ is equal to $1$ when $a>0$ and $0$ when $a\leq 0$. The complete BGMP algorithm is given in Algorithm 1.
Numerical Results
=================
In this section, we investigate the performance of the proposed BGMP algorithm over the grant-free C-RAN system. We assume that the RRHs and users are uniformly located over a square network whose side length is $5~\text{km}$. The path loss exponent is $\alpha=2.25$. The numbers of RRHs and users are $M=120$ and $K=200$, respectively, where each RRH has $N=10$ antennas, and the probability of user activity is $\rho=0.3$. The maximum iteration number of the BGMP algorithm is $\tau_{\text{max}}=50$. The average receive signal-to-noise ratio (RSNR) is $\frac{PE[\sum_{m \in \mathcal{M}} ||\bm{H}^m||^2_2]}{MN\sigma_n^2}$. We measure the performance in terms of the average MSE and the user state error (USE), i.e., $\text{MSE}=\frac{1}{K}E[||\bm{x}-\bm{\tilde{x}}||^2_2]$ and $\text{USE}=\frac{1}{K}E[||\bm{\lambda}-\bm{\tilde{\lambda}}||_1]$.
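The two performance metrics can be estimated by Monte-Carlo averaging over simulated realizations; a minimal sketch (function names are ours):

```python
import numpy as np

def mse(x, x_hat):
    # average MSE over the K entries for one realization;
    # the expectation E[.] is obtained by averaging over Monte-Carlo trials
    return np.mean(np.abs(x - x_hat) ** 2)

def use(lam, lam_hat):
    # user state error: average fraction of wrongly detected activity states
    return np.mean(np.abs(lam - lam_hat))
```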
Benchmark Detections
--------------------
To evaluate the recovery accuracy of the BGMP algorithm, we present three benchmark detectors: the genie-aided minimum mean-square error (GA-MMSE), the genie-aided sparse MMSE (GA-SMMSE), and the general SMMSE, where the genie aid denotes that the detector knows the non-zero locations of $\bm{x}$ in advance, and the sparseness denotes that the detector exploits the sparsified matrix $\bm{\hat{H}}$ instead of the original matrix $\bm{H}$. The estimations of these detectors are $$
\bm{x}^{\rm{GA-MMSE}}_{\setminus\{0\}}=\big(\bm{H}_{\setminus \{0\}}^T\bm{H}_{\setminus \{0\}}+\sigma_n^{2}\rho \bm{I}\big)^{-1}\bm{H}_{\setminus \{0\}}^T\bm{Y},$$ $$
\bm{x}^{\rm{GA-SMMSE}}_{\setminus\{0\}}=\big(\bm{\hat{H}}_{\setminus \{0\}}^T\bm{\hat{H}}_{\setminus \{0\}}+\rho \bm{\sigma^{\eta}}_{\setminus \{0\}}\bm{I}\big)^{-1}\bm{\hat{H}}_{\setminus \{0\}}^T\bm{Y},$$ $$
\bm{x}^{\rm{SMMSE}}=\big(\bm{\hat{H}}^T\bm{\hat{H}}+\rho\bm{\sigma^{\eta}}\bm{I}\big)^{-1}\bm{\hat{H}}^T\bm{Y},$$ where $\setminus \{0\}$ denotes that the entries with respect to the zero locations of $\bm{x}$ are excluded. Since the GA-MMSE and GA-SMMSE exactly know the non-zero locations of $\bm{x}$, they provide the ideal limit and lower-bound performances, respectively. In contrast, the SMMSE only makes use of $\bm{\hat{H}}$, so it provides the upper-bound performance.
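Each of these estimators is a single regularized least-squares solve; a minimal sketch of the SMMSE case (assuming, for simplicity, a scalar noise-variance term in place of the vector $\bm{\sigma^{\eta}}$) might look like:

```python
import numpy as np

def smmse(H_hat, Y, rho, sigma_eta):
    """SMMSE estimate x = (H^T H + rho * sigma * I)^{-1} H^T Y.
    sigma_eta is taken as a scalar here, standing in for the vector sigma^eta."""
    K = H_hat.shape[1]
    A = H_hat.T @ H_hat + rho * sigma_eta * np.eye(K)
    return np.linalg.solve(A, H_hat.T @ Y)
```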
Complexity Comparison
---------------------
Note that the realizable SMMSE detector has a computational complexity of $\mathcal{O}(K^3+K^2MN+KMN)$. In contrast, the computational complexity of the BGMP is much lower. Defining the channel sparsity $\it{\gamma}$ of $\bm{\hat{H}}$ as the ratio of the number of non-zero entries to that of all entries, the number of edges in the factor graph is ${\it{\gamma}}MNK$. In each iteration, the message update along one edge requires $20$ multiplications/divisions and $2$ exponent/logarithm operations. Therefore, the computational complexity of the proposed BGMP is $\mathcal{O}({\it{\gamma}}MNK\tau_{\text{max}})$. As $\it{\gamma}$ decreases, the BGMP achieves very low complexity.
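With the simulation parameters used in this section ($M=120$, $N=10$, $K=200$, ${\it{\gamma}}=0.7$, $\tau_{\text{max}}=50$), the two complexity orders can be compared directly (order-of-magnitude operation counts only, ignoring the constant factors):

```python
M, N, K = 120, 10, 200
gamma, tau_max = 0.7, 50

# SMMSE order: K^3 + K^2*M*N + K*M*N
smmse_ops = K**3 + K**2 * M * N + K * M * N

# BGMP order: gamma * M * N * K * tau_max
bgmp_ops = gamma * M * N * K * tau_max
```

Here the SMMSE order comes to about $5.6\times 10^7$ against about $8.4\times 10^6$ for the BGMP, roughly a factor of seven even at this moderate sparsity, and the gap widens as $\it{\gamma}$ shrinks.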
MSE Performance Comparison {#MSEComp}
--------------------------
In Fig. \[MSE\], we show the MSEs of the proposed BGMP, GA-MMSE, GA-SMMSE, and SMMSE over the C-RAN system, where the distance threshold is $d_0=3.5~\text{km}$ and the channel sparsity is ${\it{\gamma}}=0.7$. Note that the MSE curve of the BGMP is very close to those of the GA-MMSE and GA-SMMSE over the entire range of RSNRs, and the gap between the MSE curves of the BGMP and GA-SMMSE is less than $3$ dB at high RSNRs. Compared with the SMMSE, the BGMP has about a $5$ dB performance gain. Moreover, we also present the MSEs of GAMP [@GAMP] and basis pursuit de-noising (BPDN) [@BPDN]. Note that the GAMP still achieves a high MSE even when the RSNR is high, and the MSE of BPDN merely converges to that of the SMMSE, where $\tau_{\text{max}}$ for GAMP and BPDN is also $50$.
BGMP Convergence and USE Performance
------------------------------------
Fig. \[BER\] shows the USEs of the BGMP with different RSNRs and iterations, where the simulation conditions are the same as in Section \[MSEComp\]. Note that for each RSNR, the BGMP takes fewer than $12$ iterations to converge, which illustrates the fast convergence of the BGMP. Additionally, as the RSNR increases, the USE of the BGMP drops as low as $3\times 10^{-2}$.
Effect of Channel Sparsity
--------------------------
To investigate the robustness of the BGMP, in Fig. \[SMSE\] we provide the MSEs of the BGMP, GA-MMSE, and GA-SMMSE over the C-RAN with different channel sparsities and RSNRs. By directly changing the value of $d_0$, $\it{\gamma}$ changes accordingly from a small value near $0$ up to $1$. Fig. \[SMSE\] shows that for each RSNR, the MSE of the BGMP is close to that of the GA-SMMSE over the entire range of $\it{\gamma}$. In addition, as the RSNR increases, the gaps between the MSE curves of the GA-MMSE and GA-SMMSE become larger. The reason is that the eligible $d_0$ increases with the RSNR [@Fan_Dynamic2016], which results in an increase of the eligible $\it{\gamma}$.
Moreover, we present the USEs of the BGMP with different $\it{\gamma}$ and RSNRs. Fig. \[SBER\] illustrates that for a given RSNR, $\it{\gamma}$ has almost no effect on the USE performance of the BGMP.
Furthermore, in the random network, the distance between each user and each RRH is different, so the variances of the entries of $\bm{\hat{H}}$ are different. As a result, the entries of $\bm{\hat{H}}$ are independent but not identically distributed. Fig. \[SMSE\] and Fig. \[SBER\] further verify that the BGMP is robust to the statistical distribution of the channel.
![MSE curves of the proposed BGMP, GA-MMSE, GA-SMMSE, SMMSE, GAMP [@GAMP] and BPDN [@BPDN]. GA-MMSE, GA-SMMSE, and SMMSE provide the limit, lower-bound, and upper-bound performances, respectively. The MSE of the proposed BGMP approaches that of the GA-SMMSE over the considered RSNRs, with a performance loss within $3$ dB at high RSNRs.[]{data-label="MSE"}](MSE.pdf "fig:"){width="0.9\columnwidth"}\
![USE curves of the proposed BGMP with different RSNRs and iterations. For each RSNR, the BGMP takes only a few iterations to converge.[]{data-label="BER"}](USE.pdf "fig:"){width="0.9\columnwidth"}\
![MSE curves of the proposed BGMP, GA-MMSE (limit) and GA-SMMSE (lower bound) over the sparsified C-RAN with different channel sparsity $\it{\gamma}$ and RSNRs. For each RSNR, the MSE curve of the proposed BGMP approaches that of the GA-SMMSE over the entire range of $\it{\gamma}$.[]{data-label="SMSE"}](SMSE.pdf "fig:"){width="0.9\columnwidth"}\
![USE curves of the proposed BGMP with different channel sparsity $\it{\gamma}$ and RSNRs. For each RSNR, the USE performance of the BGMP is robust to $\it{\gamma}$, which varies from a value near $0$ to $1$.[]{data-label="SBER"}](SparseUSE.pdf "fig:"){width="0.85\columnwidth"}\
Conclusion
==========
In this paper, we proposed a low-complexity Bernoulli-Gaussian message passing (BGMP) algorithm for the grant-free C-RAN system. Based on the sparsified channel, the BGMP can jointly detect user activity and signals with low complexity. Numerical results showed that for different sparsified channels, the BGMP took only a few iterations to approach the MSE of the GA-SMMSE and achieved low USEs. In future work, we will provide a convergence analysis for the BGMP.
“C-RAN: The road towards green RAN,” China Mobile Res. Inst., Beijing, China, White Paper, ver. 2.5, Oct. 2011.
C. Fan, Y. Zhang, and X. Yuan, “Advances and challenges toward a scalable cloud radio access network,” *IEEE Commun. Magazine*, vol. 54, no. 6, pp. 29-35, Jun. 2016.
J. Zuo, J. Zhang, C. Yuen, W. Jiang, and W. Luo, “Energy efficient user association for cloud radio access networks,” *IEEE Access*, vol. 4, pp. 2429-2438, May 2016.
H. A. Loeliger, J. Dauwels, J. Hu, S. Korl, L. Ping, and F. R. Kschischang, “The factor graph approach to model-based signal processing,” *Proc. IEEE*, vol. 95, no. 6, pp. 1295-1322, Jun. 2007.
D. L. Donoho, A. Maleki, and A. Montanari, “Message passing algorithms for compressed sensing,” *Proceedings of the National Academy of Sciences*, 2009.
S. Rangan, “Generalized approximate message passing for estimation with random linear mixing,” in *Proc. IEEE ISIT*, Aug. 2011.
C. Huang, L. Liu, C. Yuen, and S. Sun, “A LSE and sparse message passing-based channel estimation for mmWave MIMO systems,” in *Proc. IEEE GLOBECOM Workshops*, Dec. 2016.
C. Fan, Y. Zhang, and X. Yuan, “Dynamic nested clustering for parallel PHY-layer processing in Cloud-RANs,” *IEEE Trans. Wireless Commun.*, vol. 15, no. 3, pp. 1881-1894, Mar. 2016.
L. Liu, C. Yuen, Y. L. Guan, Y. Li, and Y. Su, “A low-complexity Gaussian message passing iterative detection for massive MU-MIMO systems,” in *Proc. IEEE ICICS*, Dec. 2015.
L. Liu, C. Yuen, Y. L. Guan, Y. Li, and Y. Su, “Convergence analysis and assurance Gaussian message passing iterative detection for massive MU-MIMO systems,” *IEEE Trans. Wireless Commun.*, vol. 15, no. 9, pp. 6487-6501, Sept. 2016.
L. Liu, C. Yuen, Y. L. Guan, Y. Li, and C. Huang, “Gaussian message passing iterative detection for MIMO-NOMA systems with massive users,” in *Proc. IEEE GLOBECOM*, Dec. 2016.
C. Fan, Y. Zhang, and X. Yuan, “Scalable uplink processing via sparse message passing in C-RAN,” in *Proc. IEEE GLOBECOM*, Dec. 2015.
C. Fan, X. Yuan, and Y. Zhang, “Randomized Gaussian message passing for scalable uplink signal processing in C-RANs,” in *Proc. IEEE ICC*, May 2016.
X. Xu, X. Rao, and V. K. N. Lau, “Active user detection and channel estimation in uplink CRAN systems,” in *Proc. IEEE ICC*, Jun. 2015.
Z. Utkovski, O. Simeone, T. Dimitrova, and P. Popovski, “Random access in C-RAN for user activity detection with limited-capacity fronthaul,” *IEEE Signal Process. Lett.*, vol. 24, no. 1, pp. 17-21, Jan. 2017.
T. J. Richardson and R. L. Urbanke, “The capacity of low-density parity-check codes under message-passing decoding,” *IEEE Trans. Inf. Theory*, vol. 47, no. 2, pp. 599-618, Feb. 2001.
S. S. Chen, D. L. Donoho, and M. A. Saunders, “Atomic decomposition by basis pursuit,” *SIAM J. Scientif. Comput.*, vol. 20, no. 1, pp. 33-61, 1998.
D. Moltchanov, “Distance distributions in random networks,” *Ad Hoc Netw.*, vol. 10, no. 6, pp. 1146-1166, Mar. 2012.
[^1]: This work was supported in part by the National Natural Science Foundation of China under Grants 61671345, in part by the Singapore A\*STAR SERC Project under Grant 142 02 00043, in part by the Japan Society for the Promotion of Science through the Grant-in-Aid for Scientific Research (C) under Grant 16K06373, and in part by the Ministry of Education, Culture, Sports, Science and Technology through the Strategic Research Foundation at Private Universities (2014-2018) under Grant S1411030. The first author was also supported by the China Scholarship Council under Grant 201606960042.
---
abstract: 'Predicting which crystalline modifications can be present in a chemical system requires the global exploration of its energy landscape. Due to the large computational effort involved, in the past this search for sufficiently stable minima has been performed employing a variety of empirical potentials and cost functions followed by a local optimization on the [*ab initio*]{} level. However, this entails the risk of overlooking important modifications that are not modeled accurately using empirical potentials. In order to overcome this critical limitation, we develop an approach to employ [*ab initio*]{} energy functions during the global optimization phase of the structure prediction. As an example, we perform a global exploration of the landscape of LiF on the [*ab initio*]{} level and show that the relevant crystalline modifications are found during the search.'
author:
- 'K. Doll, J.C. Sch[ö]{}n and M. Jansen'
title: 'Global exploration of the energy landscape of solids on the [*ab initio*]{} level'
---
Introduction
============
A fundamental issue in solid state theory is the crystalline structure a given chemical system exhibits in the solid state[@Maddox88; @Cohen89; @Hawthorne90; @Catlow90; @Schoen96b; @Jansen02b]. Why is a particular periodic atomic configuration adopted, which among several modifications is the preferred one at a particular temperature and pressure, and which thermodynamically metastable but kinetically stable modifications are possible in the first place? Answering these questions requires the global exploration of the energy landscape of the chemical system [@Schoen96b; @Jansen02b; @Schoen01]. Every metastable modification of a solid compound corresponds to a locally ergodic region on the energy landscape[@Schoen96b; @Schoen01], i.e. a set of atomic configurations which exhibits the property that the equilibration time $\tau_{eq}$ of the chemical system within the region is much smaller than the observational time scale $t_{obs}$, which in its turn is much smaller than the time scale $\tau_{esc}$ on which the system is expected to leave this region, $\tau_{eq} \ll t_{obs} \ll \tau_{esc}$. In particular at low temperatures such locally ergodic regions constitute basins around one or several local minima of the energy landscape, and the kinetic stability of the corresponding compounds is controlled by the energetic and entropic barriers surrounding the region[@Schoen01; @Schoen03]. Thus, the first step in the prediction of the possible structures in a chemical system is the determination of the local minima on the energy landscape using a global optimization algorithm. One should note that it is not sufficient to obtain only the global minimum: all local minima that are surrounded by sufficiently high energy barriers correspond to metastable modifications that may be of interest regarding their physical and/or chemical properties both in scientific and in technological applications [@Schoen96b; @Jansen02b].
Since the beginning of the 1990’s, methods for theoretical structure determination and prediction employing global optimization techniques have been developed [@Liu90; @Pannetier90; @Freeman93; @Schoen94; @Boisen94; @Schoen95; @Bush95; @Putz98a; @Woodley99; @Mellot00; @Allan00; @Mellot02; @Winkler01; @Oganov06; @Pentin06a], using e.g. simulated annealing [@Kirk83; @Czerny85], genetic algorithms [@Holland75; @Johnston04; @Woodley07], or the threshold algorithm[@Sibani93; @Schoen96a]. Recently, the combination of molecular dynamics with a history dependent potential was suggested in the framework of the metadynamics approach [@Laio2002], in order to explore energy landscapes and phase transitions, e.g. [@Martonak2005]. The general idea, i.e. starting at a local minimum and exploring the neighborhood on the landscape bears some resemblance to the lid or threshold algorithm[@Sibani93; @Schoen96a], but is different from typical global optimization techniques such as simulated annealing, where long jumps on the energy landscape are allowed (sometimes called ’basin hopping’). The metadynamics employs molecular dynamics and focuses on a set of a few relevant variables which are used to define excluded regions of the landscape and to describe the reaction mechanism, while in the threshold algorithm the energy surface is stochastically sampled by a Monte-Carlo random walk below a sequence of energy thresholds, and the lid algorithm systematically explores the landscape below such energy lids by excluding all parts of phase space that have already been visited.
Since a typical set of global optimization runs involves millions or even billions of energy evaluations, a modular approach has become standard[@Schoen96b; @Woodley04], where a global search on an empirical energy / cost function landscape generates structure candidates, which are subsequently locally optimized on the full quantum mechanical level using e.g. the Hartree-Fock approximation or density functional theory[@Putz98a]. Of course, this use of empirical potentials contains the risk that good candidates are overlooked because they do not correspond to a minimum (or only to a high-lying shallow one) on the empirical landscape, and there are many chemical systems where no straightforward empirical energy function based on simple or refined potentials or a crystal-chemically inspired cost function such as a bond-valence potential exists. But even for those systems such as ionic compounds where supposedly good model potentials have been constructed, the question to what degree the empirical energy landscape globally agrees with the [*ab initio*]{} energy landscape has been debated since the inception of work on structure prediction[@Schoen96b; @Cancarevic06; @Martonak06]. Clearly, a careful comparison between these two energy landscapes for a particular system should yield much insight into the foundations of the current standard modular approach to structure prediction in solids. Only now are computers reaching the speed and ubiquity that will allow us to perform the global optimization on the [*ab initio*]{} level, as Oganov and co-workers[@Oganov06] have shown using a hybrid genetic algorithm for this purpose.
However, the step from model potentials to [*ab initio*]{} calculations is absolutely non-trivial, and requires careful adjustment of parameters to use only a minimum of CPU time and thus keep the calculations tractable. In this study, we investigate the energy landscape of the LiF-system using stochastic simulated annealing. This system was chosen as a test case, since we had studied the landscapes of the alkali halides in earlier work[@Schoen95] using a Coulomb- plus Lennard-Jones-potential. Thus, the most important structure candidates in the system are known at the empirical potential level, and we can better judge both the success of the global exploration on the [*ab initio*]{} level and the degree of agreement between the empirical and [*ab initio*]{} energy landscapes than would have been possible when choosing a not-yet-investigated chemical system.
There are thus two major goals of this article: Firstly, to show that the [*ab initio*]{} exploration of energy landscapes with a Monte Carlo random-walk-based technique such as simulated annealing is feasible. Secondly, to investigate to what degree crucial features of the landscape, such as the relevant minima, are the same on the level of the empirical potentials and on the [*ab initio*]{} level. Finally, we note that the [*ab initio*]{} energy landscape is expected to be an appropriate choice for any system, whether ionic, covalent, or metallic. Thus, being able to globally explore such an [*ab initio*]{} energy landscape will open the path to structure prediction in systems which can no longer be reasonably described with straightforward empirical potentials.
Methods {#meth}
=======
General approach. {#gen-ap}
-----------------
Our general approach to the determination of structure candidates has been given in detail elsewhere[@Schoen96b; @Schoen01; @Schoen05]. To summarize: First, the minima on the energy landscape are determined using simulated annealing, possibly combined with a stochastic quench, as a global optimization algorithm, where both atom positions and cell parameters are freely varied. Next, the corresponding configurations are analyzed regarding their symmetries using an algorithm to find symmetries [@Hundt99a] and the space group [@Hannemann98a] as implemented in the program KPLOT[@Hundt79], followed by a comparison using an algorithm to compare cells[@Hundt06], in order to eliminate duplicate structures. Finally, the structure candidates are locally optimized on the [*ab initio*]{} level using both a heuristic algorithm[@Cancarevic04a; @Schoen04b] and the energy minimization routines included in the various [*ab initio*]{} codes. We always employ several [*ab initio*]{} methods (Hartree-Fock and density functional theory), since this allows us to compare the ranking of the candidates by energy and thus to gain some estimate of their thermodynamic stability. This is crucial since, by definition, no comparison of the predicted structures with experimental data is possible and thus we cannot “tune” the parameters of the quantum mechanical methods to reproduce the experiment. Furthermore, comparing the outcomes of the local optimizations for different methods yields insights into the connectedness of the candidates via low-lying saddles on the energy landscape. Finally, if sufficient computational power is available, we can employ the lid or threshold algorithm[@Sibani93; @Schoen96a], in order to quantitatively study the energetic and entropic barriers on the landscape, which control the kinetic stability of the metastable modifications of the solid compound.
[*Ab initio*]{} calculations and global exploration: technical details. {#abin}
------------------------------------------------------------------------
For the [*ab initio*]{} energy calculations we employ the program CRYSTAL2006 [@CRYSTAL2006]. A set of preliminary tests was performed to optimize the efficiency of our approach when applied to structure prediction on the [*ab initio*]{} level. The most important parameters tested were: the basis sets for the [*ab initio*]{} calculations; parameters such as integral thresholds for the [*ab initio*]{} calculations; the length of the simulated annealing run and of the subsequent quench run; the move classes involved.
This preliminary step is actually crucial for the task of performing [*ab initio*]{} explorations of energy landscapes: the energy calculations are performed without the use of symmetry, since all possible structures must be accessible during the random walk. However, with the default parameters in CRYSTAL2006, a single Hartree-Fock calculation for an eight-atom simulation cell whose side length equals the experimental lattice constant, without symmetry (space group P1), takes $\sim$ 13 minutes. A typical run may consist of 100,000 or more simulated annealing steps, and thus the total CPU time for a single run would be on the scale of 10$^6$ minutes, i.e. roughly 2 years. For the exploration of a landscape, dozens or even hundreds of such runs are necessary. In addition, it is necessary to achieve convergence of the self-consistency cycles in calculations which start from a random geometry. This makes it obvious that a very careful calibration of all parameters is necessary. One has to exploit the fact that only a rough knowledge about the possible local minima is required in the first stage of the global optimization. The final local optimization, based on an optimization via analytical gradients, can be subsequently done with good parameters.
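The back-of-the-envelope cost estimate above can be reproduced directly (the 13 minutes per energy evaluation is the figure quoted in the text for the default CRYSTAL2006 parameters):

```python
minutes_per_step = 13            # one HF energy evaluation in P1, default settings
steps_per_run = 100_000          # typical length of a simulated annealing run
total_minutes = minutes_per_step * steps_per_run
total_years = total_minutes / (60 * 24 * 365)   # roughly 2.5 years per run
```

This is what motivates the relaxed SCF and integral thresholds described next: the per-step cost must drop by several orders of magnitude before a single run, let alone dozens of them, becomes tractable.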
The initial tests resulted in the following choices: The basis sets from reference [@Prencipe] were selected, with a slightly modified fluorine basis set: slightly tighter $sp$ functions were chosen for the two outermost exponents (0.45 instead of 0.437, 0.2 instead of 0.147) to enhance the numerical stability and the speed of the calculations. As the global optimization only has to provide rough information about the energy landscape, it is not necessary to converge the solution very accurately. The threshold for the convergence of the self-consistent field (SCF) cycle was therefore reduced from $10^{-5}$ to $10^{-3} \ E_h$. Similarly, the thresholds for neglecting integrals were reduced from the default values of $10^{-6}$, $10^{-6}$, $10^{-6}$, $10^{-6}$, $10^{-12}$ to $10^{-4}$, $10^{-4}$, $10^{-4}$, $10^{-4}$, $10^{-8}$. For the $\vec k$-point sampling, a shrinking factor of 2 was used. The calculations were performed at the Hartree-Fock level. Note that since only the approximate positions of the basins need to be found during the initial global exploration, the level of theory does not play a crucial role.
The initial cell was cubic with a cell parameter of 7.07 Å. Four lithium and four fluorine atoms were randomly placed in this cell. No symmetry was used, i.e. the simulated annealing and quenching were performed in $P1$. The probabilities of the individual moves were as follows: moving individual atoms (70 %; maximal step size was 5 pm), exchanging atoms (10 %), moving atoms with (10%) and without (5%) simultaneous change of the unit cell, and the change of the origin (5 %; this move is important when the cell is subsequently truncated due to a change of the cell vectors). For all the moves which change the cell parameters, the probability of a suggested move to shorten the cell was set to 70%, in order to speed up the shrinking of the cell. To avoid atoms coming too close, a minimum distance of the sum of the radii, multiplied by 0.8, was required. The radii were determined by using the Mulliken charges and linearly interpolating between the tabulated radii [@Emsley90] for the neutral atoms and the ions. With this choice of parameters, it turned out that only a very short simulated annealing run was required (5000 steps, with the starting and final temperatures corresponding to 1 eV and 0.9 eV, respectively), followed by a quench of $\sim$ 5000 steps. The reason why the simulated annealing part could be kept so short is probably due to the fact that there are only two atom types and a very simple bonding mechanism involved. Clearly, when testing this approach with more demanding systems, exhibiting covalent bonds or a larger number of atom types, considerably longer runs are to be expected.
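The move-class selection described above can be sketched as a simple roulette-wheel draw (an illustrative sketch only; the move names are ours, and the actual bookkeeping in the simulated-annealing code may differ):

```python
import random

# Move classes and probabilities as described in the text
MOVES = [
    ("move_single_atom", 0.70),        # max step size 5 pm
    ("exchange_atoms", 0.10),
    ("move_atoms_and_change_cell", 0.10),
    ("change_cell_only", 0.05),
    ("shift_origin", 0.05),
]

def pick_move(rng=random):
    """Roulette-wheel selection of the next trial move."""
    r = rng.random()
    acc = 0.0
    for name, prob in MOVES:
        acc += prob
        if r < acc:
            return name
    return MOVES[-1][0]   # guard against floating-point round-off
```

Each trial move drawn this way is then accepted or rejected by the usual Metropolis criterion at the current annealing temperature.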
We generated different initial atom positions by using different start values for the random number generator. About 70 runs were performed, of which about half converged to reasonable structure candidates, whereas the other half remained in energetically unfavorable situations such as very low densities or two-dimensional structures. This could certainly have been improved by performing longer simulated annealing runs, but only at a much higher computational cost.
After the quench was finished, the space group of the configuration was analyzed with the program KPLOT [@Hundt79], using algorithms to identify the symmetry [@Hundt99a] and to find the space group [@Hannemann98a]. A subsequent local optimization was performed with the CRYSTAL code, using analytical gradients for the nuclear positions [@IJQC; @CPC] and the unit cell [@KlausDovesiRO; @KlausDovesiRO1d2d] and the full geometry optimization as implemented in the present release [@Mimmo2001; @CRYSTAL2006]. As this local optimization is not too demanding in terms of CPU time and as a high accuracy is desirable, the integral thresholds and the threshold for SCF convergence were set to the default values, and a shrinking factor of 4 was used for the $\vec k$ net. Also, the original basis set as in [@Prencipe] was used. The optimization was performed both at the Hartree-Fock level and at the level of the local density approximation (LDA). The fully optimized structures were again analyzed with KPLOT.
The computational effort was typically a few days for the simulated annealing and subsequent quench runs, and a few minutes up to one hour for the local optimization, on a single CPU of a standard PC.
Results {#res}
=======
The results of these optimizations are displayed in Tables 1 and 2. Eight promising low-energy structure candidates were found (shown in Figs. \[fig1\] - \[fig3\], generated with XCrysDen [@XCrysDen]): the rock salt structure as observed experimentally, the zincblende structure (sphalerite), the wurtzite structure, the so-called 5-5 structure[@Schoen95; @Schoen96b] (an ionic analogue to the $B_k$ structure of hexagonal BN), the NiAs structure and three structures with space group 62, 7 and 36, denoted LiF(I), LiF(II) and LiF(III), respectively (see Table 1 regarding the fraction of runs that resulted in the various structure types). LiF(I) and LiF(II) consist of nets of LiF$_4$-tetrahedra, with the first one containing narrow channels and the second one resembling a twisted sphalerite or wurtzite structure. Finally, LiF(III) consists of a network of LiF$_5$ square-pyramids. All these structure candidates had also been observed in earlier global searches in alkali halide systems[@Schoen95] using empirical potentials consisting of a Coulomb-term and a van-der-Waals-term, and one should note that LiF(I), LiF(II) and LiF(III) are quite typical representatives of the higher-lying local minima. This result clearly demonstrates that the global exploration of the energy landscape on the [*ab initio*]{} level is feasible and provides reasonable structures.
Discussion and Conclusion {#disc}
=========================
Concerning the accuracy of the [*ab initio*]{} calculations, we note that at the Hartree-Fock level the wurtzite structure exhibits the lowest energy, in contrast to the experimental observation that LiF is found in the rock salt type. The failure of the Hartree-Fock approach to account for the proper ground state may be attributed to the neglect of the van der Waals interaction, which is important for alkali halides, as was shown in [@Dollalkali1; @Dollalkali2]. Although the van der Waals interaction is not considered in the LDA either, the LDA performs better and predicts the rock salt structure as the ground state, perhaps due to LDA’s inherent tendency to over-bind and favor higher coordinations. These observations are in good agreement with [*ab initio*]{} calculations by [Č]{}an[č]{}arevi[ć]{} et al. [@Cancarevic06b; @Chane06] using Hartree-Fock and density functional theory, which showed that for LiF the LDA functional finds the rock salt type as the minimum structure, while Hartree-Fock and the hybrid functional B3LYP find the wurtzite type. Regarding the thermodynamic stability of the various modifications as a function of pressure, there is no change compared to the results of [Č]{}an[č]{}arevi[ć]{} et al. [@Cancarevic06b; @Chane06], which were based on the global exploration of the empirical-potential based enthalpy landscapes of LiF for ten different pressures ranging from -16 GPa to +160 GPa. Here, we only give the transition pressures based on the LDA functional: $\beta$-BeO $\rightarrow$ wurtzite at $\sim$ -8.5 GPa, wurtzite $\rightarrow$ 5-5 at $\sim$ -5.5 GPa, and 5-5 $\rightarrow$ rock salt at $\sim$ -5.0 GPa.
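Such static (T = 0) transition pressures follow from equating the enthalpies $H = E + pV$ of the two phases, which gives $p_t = -(E_2 - E_1)/(V_2 - V_1)$. A minimal sketch of this step; the numbers below are purely illustrative placeholders, not results of this work:

```python
# Sketch: static transition pressure from the enthalpy condition
# H1(p) = H2(p), i.e. p_t = -(E2 - E1)/(V2 - V1).
# Example values are illustrative only, not computed in this work.
EV_PER_A3_IN_GPA = 160.2176  # 1 eV/Angstrom^3 expressed in GPa

def transition_pressure(e1, v1, e2, v2):
    """p_t in GPa; energies in eV and volumes in Angstrom^3 per formula unit."""
    return -(e2 - e1) / (v2 - v1) * EV_PER_A3_IN_GPA

# A dense phase (E1, V1) lying 0.15 eV below a more open phase (E2, V2):
# p_t is negative, i.e. the open phase only becomes stable under tension.
p_t = transition_pressure(0.0, 16.0, 0.15, 20.0)
```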
Possible high-pressure phases might be the NiAs type or the CsCl type according to some density functionals; however, even if these modifications were to become thermodynamically stable, in both instances the transition pressures would be very high ($>$ 100 GPa), beyond the range of validity of straightforward [*ab initio*]{} calculations.
The comparison between the outcome of the global landscape explorations on the [*ab initio*]{} level and on the level of empirical potentials shows that the present investigation determines essentially all the minima with the lowest energies that had been found during the earlier work [@Schoen95]. Due to the relatively short simulated annealing runs, at the end of the global exploration phase many of the candidates had ended up in side-minima corresponding to distorted versions of the structures belonging to the main minimum of the various basins, typically exhibiting lower symmetries. However, the subsequent local minimization using the standard high accuracy for the [*ab initio*]{} energy calculations resulted in the system reaching the main minima of the various basins. We note that in several instances the Hartree-Fock and the DFT (LDA) calculations reached different structure types, wurtzite and 5-5, respectively. This supports our earlier observations that the wurtzite and the 5-5-structure are close neighbors on the energy landscape. [@footnote1] Similarly, we observed that the local minimization on the DFT (LDA) level, starting from four slightly distorted versions of the LiF(I) structure type, resulted in one case in the rock salt and in the other three cases in the 5-5-structure. [@footnote2] Again, this confirms our earlier results that the 5-5-type is located on the landscape close to the rock salt structure, possibly constituting a transitional modification on the route from the wurtzite to the rock salt type (c.f. Fig. \[fig1\]). [@footnote3] Finally, we note that, in contrast to the study using the empirical energy landscape, the structures exhibiting four- and five-fold coordination were found quite often.
This clearly reflects the fact that their energies on the [*ab initio*]{} level are very similar to those of the structure types with six-fold coordination (rock salt, NiAs), which are more strongly preferred when using the empirical potential.
To summarize, the comparison between the empirical and [*ab initio*]{} energy landscapes shows that, at least for a simple ionic system such as LiF: a) the minima representing the most relevant modifications, and similarly most of the other chemically interesting structure types, are present on both energy landscapes; b) their ranking in energy depends on the type of energy calculation, and similarly the likelihood of observing the minima can also be quite different; c) structures which are closely related on the empirical landscape, i.e. separated by relatively small barriers, are also close neighbors on the [*ab initio*]{} energy landscape; and d) even classes of structures that are rather unusual, such as those containing channels or square pyramids, are found on both landscapes. We can thus conclude that one of the fundamental assumptions behind the standard approach to structure prediction of solids, namely that empirical potentials can be employed in global landscape explorations to identify the relevant modifications of a solid compound, is valid at least for ionic systems where reasonably suitable potentials are available.
Of course, the number of simulated annealing runs possible when using the Hartree-Fock energy function is still much smaller than the number of runs with an empirical energy function. As a consequence, in particular the many high-lying and/or shallow minima, associated with structures containing channels or belonging to transitions between two large basins, were detected relatively rarely. Nevertheless, the fact that all relevant modifications have been observed in the present study shows that the global exploration of the energy landscape of solids on the [*ab initio*]{} level, using standard simulated annealing as the global optimization tool, has finally become feasible. This will allow us to predict the possible modifications of crystalline compounds also in those chemical systems where no simple empirical potentials can be constructed, thus overcoming one of the major hurdles facing crystal structure prediction in general chemical systems.
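The global optimization tool referred to throughout is standard simulated annealing. As a schematic illustration only (the actual implementation moves atoms and cell parameters and evaluates ab initio energies, none of which is shown here), a generic Metropolis-based annealing loop looks like:

```python
# Generic simulated-annealing loop (schematic). `energy` and `move` are
# problem-specific; here they are toy stand-ins, not the ab initio versions.
import math
import random

def anneal(energy, move, x0, t0=1.0, alpha=0.999, steps=20000, rng=None):
    rng = rng or random.Random(0)
    x, e, t = x0, energy(x0), t0
    best_x, best_e = x, e
    for _ in range(steps):
        y = move(x, rng)
        ey = energy(y)
        # Metropolis criterion: always accept downhill moves,
        # uphill moves with probability exp(-dE / T).
        if ey <= e or rng.random() < math.exp(-(ey - e) / t):
            x, e = y, ey
            if e < best_e:
                best_x, best_e = x, e
        t *= alpha  # exponential cooling schedule
    return best_x, best_e

# Toy usage: a 1-d double well whose global minimum lies near x = -1,
# started from the basin of the higher-lying local minimum near x = +1.
f = lambda x: (x * x - 1) ** 2 + 0.2 * x
step = lambda x, rng: x + rng.uniform(-0.3, 0.3)
xmin, emin = anneal(f, step, x0=2.0)
```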
We would like to thank Z. [Č]{}an[č]{}arevi[ć]{} and U. Wedig for valuable discussions. The work was funded by the MMM-initiative of the Max-Planck-Society.
[99]{}
J. Maddox, Nature, [**335**]{} (1988) 201.
M. L. Cohen, Nature, [**338**]{} (1989) 291.
F. C. Hawthorne, Nature, [**345**]{} (1990) 297.
C.R.A. Catlow, G. D. Price, Nature, [**347**]{} (1990) 243.
J.C. Sch[ö]{}n, M. Jansen, Angew. Chem. Int. Ed., [**35**]{} (1996) 1286.
M. Jansen, Angew. Chem. Int. Ed., [**41**]{} (2002) 3747.
J.C. Sch[ö]{}n, M. Jansen, Z. Kristallogr., [**216**]{} (2001) 307, 361.
J.C. Sch[ö]{}n, M.A.C. Wevers, M. Jansen, J. Phys.: Cond. Matter, [**15**]{} (2003) 5479.
A.Y. Liu, M.L. Cohen, Phys. Rev. B, [**41**]{} (1990) 10727.
J. Pannetier, J. Bassas-Alsina, J. Rodriguez-Carvajal, V. Caignart, Nature, [**346**]{} (1990) 343.
C. M. Freeman, J. M. Newsam, S. M. Levine, C.R.A. Catlow, J. Mater. Chem., [**3**]{} (1993) 531.
J. C. Sch[ö]{}n, M. Jansen, Ber. Bunsenges., [**98**]{} (1994), 1148.
M. B. Boisen Jr., G. V. Gibbs, M.S.T. Bukowinski, Phys. Chem. Miner., [**21**]{} (1994) 269.
J. C. Sch[ö]{}n, M. Jansen, Comput. Mater. Sci., [**4**]{} (1995) 43.
T. S. Bush, C.R.A. Catlow, P. D. Battle, J. Mater. Chem., [**5**]{} (1995) 1269.
H. Putz, J. C. Sch[ö]{}n, M. Jansen, Comput. Mater. Sci., [**11**]{} (1998) 309.
S. M. Woodley, P. D. Battle, J. D. Gale, C.R.A. Catlow, Phys. Chem. Chem. Phys., [**1**]{} (1999) 2535.
N.L. Allan, G.D. Barrera, M.Yu. Lavrentiev, I.L. Todorov, J.A. Purton, J. Mat. Chem., [**11**]{} (2001) 63.
C. Mellot-Draznieks, J. M. Newsam, A.M. Gorman, C. M. Freeman, G. Férey, Angew. Chem. Int. Ed., [**39**]{} (2000) 2270.
B. Winkler, C.J. Pickard, V. Milman, G. Thimm, Chem. Phys. Letters, [**337**]{} (2001) 36.
C. Mellot-Draznieks, S. Girard, G. Férey, J.C. Sch[ö]{}n, [Ž]{}. [Č]{}an[č]{}arevi[ć]{}, M. Jansen, Chem. Eur. J., [**8**]{} (2002) 4102.
A.R. Oganov, C.W. Glass, J. Chem. Phys, [**124**]{} (2006) 244704.
J.C. Sch[ö]{}n, I.V. Pentin, M. Jansen, Phys. Chem. Chem. Phys., [**8**]{} (2006) 1778.
S. Kirkpatrick, C.D. Gelatt Jr., M.P. Vecchi, Science, [**220**]{} (1983) 671.
V. Czerny, J. Optim. Theory Appl., [**45**]{} (1985) 41.
J. Holland, Adaptation in Natural and Artificial Systems, Ann Arbor: Univ. Michigan Press (1975).
R. L. Johnston, eds., Applications of Evolutionary Computation in Chemistry, Struct. Bonding 110, New York: Springer (2004).
S.M. Woodley, Phys. Chem. Chem. Phys., [**9**]{} (2007) 1070.
P. Sibani, J. C. Sch[ö]{}n, P. Salamon, J.-O. Andersson, Europhys. Lett., [**22**]{} (1993) 479.
J. C. Sch[ö]{}n, H. Putz, M. Jansen, J. Phys.: Cond. Matter, [**8**]{} (1996) 143.
A. Laio and M. Parrinello, Proc. Natl. Aca. Sci. U. S. A., [**99**]{} (2002) 12562.
R. Martoňák, A. Laio, M. Bernasconi, C. Ceriani, P. Raiteri, F. Zipoli, M. Parrinello, Z. Kristallogr., [**220**]{} (2005) 489.
S.M. Woodley, in: Struct. Bond. 110, eds. R. L. Johnston, New York: Springer (2004), p. 95.
[Ž]{}. [Č]{}an[č]{}arevi[ć]{}, J.C. Schön, M. Jansen, Phys. Rev. B, [**73**]{} (2006) 224114.
R. Martoňák, D. Donadio, A. R. Oganov, and M. Parrinello, Nature Materials, [**5**]{}, 623 (2006).
J.C. Sch[ö]{}n, M. Jansen, Mat. Res. Soc. Symp. Proc.: Solid State Chemistry of Inorganic Materials V, edited by J. Li et al. 848 (2005) p. 333.
R. Hundt, J.C. Sch[ö]{}n, A. Hannemann, M. Jansen, J. Appl. Cryst., [**32**]{} (1999) 413.
A. Hannemann, R. Hundt, J.C. Sch[ö]{}n, M. Jansen, J. Appl. Cryst., [**31**]{} (1998) 922.
R. Hundt, KPLOT, University of Bonn, Germany, (1979); Version 9 (2007).
R. Hundt, J.C. Sch[ö]{}n, M. Jansen, J. Appl. Cryst., [**39**]{} (2006) 6.
J.C. Sch[ö]{}n, [Ž]{}. [Č]{}an[č]{}arevi[ć]{}, M. Jansen, J. Chem. Phys., [**121**]{} (2004) 2289.
[Ž]{}. [Č]{}an[č]{}arevi[ć]{}, J.C. Schön, M. Jansen, Proc. Progress in Materials Science and Processes, Mat. Sci. Forum, [**453**]{} (2004) 71.
R. Dovesi, V. R. Saunders, C. Roetti, R. Orlando, C. M. Zicovich-Wilson, F. Pascale, B. Civalleri, K. Doll, N. M. Harrison, I. J. Bush, Ph. D’Arco, M. Llunell, CRYSTAL2006, University of Torino, Torino (2006).
M. Prencipe, A. Zupan, R. Dovesi, E. Aprà, and V. R. Saunders, Phys. Rev. B [**51**]{}, 3391 (1995).
J. Emsley, The Elements, Oxford: Oxford Univ. Press (1990).
K. Doll, V. R. Saunders, N. M. Harrison, Int. J. Quantum Chem., [**82**]{} (2001) 1.
K. Doll, Comp. Phys. Comm., [**137**]{} (2001) 74.
K. Doll, R. Dovesi and R. Orlando, Theor. Chem. Acc., [**112**]{} (2004) 394.
K. Doll, R. Dovesi and R. Orlando, Theor. Chem. Acc., [**115**]{} (2006) 354.
B. Civalleri, Ph. D’Arco, R. Orlando, V. R. Saunders, and R. Dovesi, Chem. Phys. Lett., [**348**]{} (2000) 131.
A. Kokalj, Comp. Mater. Sci., [**28**]{} (2003) 155.
K. Doll and H. Stoll, Phys. Rev. B, [**56**]{} (1997) 10121.
K. Doll and H. Stoll, Phys. Rev. B, [**57**]{} (1997) 4327.
[Ž]{}. [Č]{}an[č]{}arevi[ć]{}, Ph.D. thesis, Univ. Stuttgart (2006).
[Ž]{}. [Č]{}an[č]{}arevi[ć]{}, J.C. Schön, M. Jansen, in preparation (2007).
S. Limpijumnong, W.R.L. Lambrecht, Phys. Rev. B [**63**]{} (2001) 104103.
V.A. Streltsov, V.G. Tsirelson, R.P. Ozerov, O.A. Golovanov, Kristallografiya [**33**]{} (1987) 90.
---------------- ------------- ------------- -------------- -------- ---------
structure type    space group   energy (HF)   energy (LDA)   N (HF)   N (LDA)
rock salt         225           -428.2210     -427.0665      9        11
zincblende        216           -428.2211     -427.0445      10       9
5-5               194           -428.2222     -427.0534      4        12
wurtzite          186           -428.2250     -427.0484      8        4
NiAs              194           -428.2051     -427.0515      1        1
LiF(I)            62            -428.2162     -              4        0
LiF(II)           7             -428.2089     -427.0374      1        1
LiF(III)          36            -428.2054     -              1        0
---------------- ------------- ------------- -------------- -------- ---------
: \[energy\] Structures obtained corresponding to local minima, the total energy (without a correction for the basis set superposition error) for four formula units, in hartree units ($E_h$), and the number of times the structures were found.
[ccc]{} structure type & HF & LDA\
(space group) & &\
\
rock salt$^a$ (225) & $a$=4.01 Å& $a$=3.94 Å\
& Li (0,0,0) &(0,0,0)\
& F (1/2,0,0) & (1/2,0,0)\
\
zincblende (216) & $a$=4.31 Å& $a$=4.23 Å\
& Li (0,0,0) & (0,0,0)\
& F (1/4, 1/4, 1/4) & (1/4, 1/4, 1/4)\
\
5-5 (194) & $a$=3.28 Å; $c$=4.05 Å& $a$=3.24 Å; $c$=3.94 Å\
& Li (1/3,2/3,1/4) & (1/3,2/3,1/4)\
& F (2/3,1/3,1/4) & (2/3,1/3,1/4)\
\
wurtzite (186) & $a$=3.09 Å; $c$=4.86 Å& $a$=3.07 Å; $c$=4.64 Å\
& Li (1/3,2/3,0) & (1/3,2/3,0)\
& F (1/3,2/3,0.386) & (1/3,2/3,0.399)\
\
NiAs (194) & $a$=2.79 Å; $c$=4.80 Å& $a$=2.73 Å; $c$=4.74 Å\
& Li (0,0,0) & (0,0,0)\
& F (1/3,2/3,1/4) & (1/3,2/3,1/4)\
\
LiF(I) (62) & $a$=5.60 Å, $b$=3.14 Å, $c$=5.17 Å&\
& Li (0.831, 3/4, 0.587) & -\
& F (0.677, 1/4, 0.895) &\
\
LiF(II) (7)& $a$=2.92 Å, $b$=5.62 Å, $c$=5.34 Å, $\beta=$115.9$^\circ$ & $a$=2.84 Å, $b$=5.50 Å, $c$=5.25 Å; $\beta$=115.5$^\circ$\
& Li (0, 0.407,0) & (0, 0.405,0)\
& Li (0.648, 0.121, 0.428) & (0.638, 0.122, 0.430)\
& F (0.955, 0.880, 0.676) & (0.941, 0.878, 0.678)\
& F (0.613, 0.403, 0.616) & (0.599, 0.401, 0.617)\
\
LiF(III) (36) & $a$=2.72 Å, $b$=5.30 Å, $c$=4.81 Å&\
& Li (0, 0.083, 0) & -\
& F (0, 0.617,0.714) &\
![Ball-and-stick models of the structurally related rock salt (left), 5-5 (middle) and wurtzite (right) structures, viewed along the c-axis. Large and small spheres correspond to Li- and F-atoms, respectively. The wurtzite-type transforms into the 5-5-type by slightly displacing the Li- and F-atoms along the c-axis such that the local coordination changes from LiF$_4$-tetrahedra to LiF$_5$-trigonal bipyramids. Similarly, one obtains the rock salt-type from the 5-5-type by compressing the 5-5-structure along the a-axis such that the LiF$_5$-trigonal bipyramids become LiF$_6$-’square bipyramids’, i.e. LiF$_6$-octahedra.[]{data-label="fig1"}](HFsinglepoint_run34_3005-2.eps "fig:"){width="5cm"} ![Ball-and-stick models of the structurally related rock salt (left), 5-5 (middle) and wurtzite (right) structures, viewed along the c-axis. Large and small spheres correspond to Li- and F-atoms, respectively. The wurtzite-type transforms into the 5-5-type by slightly displacing the Li- and F-atoms along the c-axis such that the local coordination changes from LiF$_4$-tetrahedra to LiF$_5$-trigonal bipyramids. Similarly, one obtains the rock salt-type from the 5-5-type by compressing the 5-5-structure along the a-axis such that the LiF$_5$-trigonal bipyramids become LiF$_6$-’square bipyramids’, i.e. LiF$_6$-octahedra.[]{data-label="fig1"}](HFsinglepoint_run46_3005-2.eps "fig:"){width="5cm"} ![Ball-and-stick models of the structurally related rock salt (left), 5-5 (middle) and wurtzite (right) structures, viewed along the c-axis. Large and small spheres correspond to Li- and F-atoms, respectively. The wurtzite-type transforms into the 5-5-type by slightly displacing the Li- and F-atoms along the c-axis such that the local coordination changes from LiF$_4$-tetrahedra to LiF$_5$-trigonal bipyramids. 
Similarly, one obtains the rock salt-type from the 5-5-type by compressing the 5-5-structure along the a-axis such that the LiF$_5$-trigonal bipyramids become LiF$_6$-’square bipyramids’, i.e. LiF$_6$-octahedra.[]{data-label="fig1"}](HFsinglepoint_run38_3005-2.eps "fig:"){width="5cm"}
![Ball-and-stick models of LiF(I) (left), LiF(II) (middle) and LiF(III) (right), respectively. For notation, c.f. Fig. \[fig1\]. These networks of LiF$_4$-tetrahedra and LiF$_5$-square pyramids are characteristic of metastable higher-lying local minima on the energy landscape of the alkali halides. [@Schoen95][]{data-label="fig2"}](HFsinglepoint_run18_3005-2.eps "fig:"){width="5cm"} ![Ball-and-stick models of LiF(I) (left), LiF(II) (middle) and LiF(III) (right), respectively. For notation, c.f. Fig. \[fig1\]. These networks of LiF$_4$-tetrahedra and LiF$_5$-square pyramids are characteristic of metastable higher-lying local minima on the energy landscape of the alkali halides. [@Schoen95][]{data-label="fig2"}](HFsinglepoint_run94_3005-2.eps "fig:"){width="5cm"} ![Ball-and-stick models of LiF(I) (left), LiF(II) (middle) and LiF(III) (right), respectively. For notation, c.f. Fig. \[fig1\]. These networks of LiF$_4$-tetrahedra and LiF$_5$-square pyramids are characteristic of metastable higher-lying local minima on the energy landscape of the alkali halides. [@Schoen95][]{data-label="fig2"}](HFsinglepoint_run17_3005-2.eps "fig:"){width="5cm"}
![Ball-and-stick models of the sphalerite (left) and NiAs (right) structure, respectively. For notation, c.f. Fig. \[fig1\].[]{data-label="fig3"}](HFsinglepoint_run53_3005-2.eps "fig:"){width="5cm"} ![Ball-and-stick models of the sphalerite (left) and NiAs (right) structure, respectively. For notation, c.f. Fig. \[fig1\].[]{data-label="fig3"}](HFsinglepoint_run50_3005-2.eps "fig:"){width="5cm"}
---
abstract: |
We present a study of the broadband UBV color profiles for 257 Sbc barred and non–barred galaxies, using photoelectric aperture photometry data from the literature. Using robust statistical methods, we have estimated the color gradients of the galaxies, as well as the total and bulge mean colors. A comparative photometric study using CCD images was done. In our sample, the color gradients are negative (reddish inward) in approximately 59% of the objects, are almost null in 27%, and are positive in 14%, considering only the face–on galaxies, which represents approximately 51% of the sample. The results do not change, essentially, when we include the edge–on galaxies.
As a consequence of this study we have also found that barred galaxies are over–represented among the objects having null or positive gradients, indicating that bars act as a mechanism of homogenization of the stellar population. This effect is more evident in the (U$-$B) color index, although it can also be detected in the (B$-$V) color.
A correlation between the total and bulge colors was found, which is a consequence of an underlying correlation between the colors of bulges and disks found by other authors. Moreover, the mean total color is the same irrespective of the gradient regime, while bulges are bluer in galaxies with null or positive gradients, which indicates an increase of the star formation rate in the central regions of these objects.
We have also made a quantitative evaluation of the amount of extinction in the center of these galaxies. This was done using WFPC2 and NICMOS HST archival data, as well as CCD B, V and I images. We show that although the extinction in the V–band can reach values of up to 2 magnitudes in the central region, it is unlikely that dust plays a fundamental role in global color gradients.
We found no correlation between color and O/H abundance gradients. This result could suggest that the color gradients are more sensitive to the age rather than to the metallicity of the stellar population. However, the absence of this correlation may be caused by dust extinction. We discuss this result considering a picture in which bars are a relatively fast recurrent phenomenon.
These results are not compatible with a pure classical monolithic scenario for bulge and disk formation. On the contrary, they favor a scenario where both these components are evolving in a correlated process, in which stellar bars play a crucial role.
author:
- 'D. A. Gadotti'
- 'S. dos Anjos'
title: |
Homogenization of the Stellar Population along Late–Type\
Spiral Galaxies[^1]
---
Introduction
============
Several recent works have contributed to our understanding of the dynamical evolutionary processes related to stellar bars in galaxies (see [@fri99] for a review). Many of these works point to the possibility that these processes may be related to the formation and/or building of galactic bulges, as opposed to a pure monolithic scenario ([@egg62]) and the hierarchical scenario (e.g., [@kau93; @kau94; @bau96; @bou98]).
We know that bars are very easy to form in stellar disks due to non–circular orbits of the stars in the disk, or due to instabilities generated by the presence of a companion. In the RC3 ([@dev91]), for instance, 30% of the spiral galaxies are strongly barred. Recent theoretical studies based on N–body simulations (e.g., [@fri95] and references therein) show that, once formed, the stellar bar induces a series of dynamical processes in the host galaxy. Basically, these studies show two routes for the formation and/or building of galactic bulges. In the first one, stellar bars could collect gas from the outer disk, generating bursts of star formation and a chemical enrichment in the central regions. Another possibility is that the stars themselves might be transported from the disk to the bulge, through, for instance, the hose mechanism ([@too66]), orbital resonances ([@com81]) and the onset of irregular stellar orbits (e.g., [@ber98]). Moreover, [@nor96], among other theoretical works, showed that the central concentration of mass, induced by the bar, could destroy its orbital structure and eventually the bar itself. These authors suggest that the formation of the bar, its dissolution and the consequent formation and/or building of the bulge may be a fast recurrent process (i.e., $\sim 10^{8}$ years). [@fri93] suggest that the continuous building of the bulge in a galaxy could actually change its overall morphology. An Sc galaxy, for instance, might first become an SBb, and then an Sb, giving an evolutionary meaning to the late–type spiral scheme along the Hubble sequence.
From the observational point of view, comparative studies related to the general properties of barred and non–barred galaxies seem to give some support to the formation and/or building of galactic bulges through this secular evolutionary scenario. [@kor82] found that triaxial bulges, which are normally associated with bars, rotate faster than bulges of non–barred galaxies. [@kor83] showed that bulges of barred galaxies have a central velocity dispersion smaller than the one presented by bulges in non–barred galaxies. [@mar94] and [@zar94] show that barred galaxies have less pronounced O/H gradients than non–barred galaxies. [@sak99] show that barred galaxies present a higher degree of central concentration of CO molecular gas than non–barred galaxies.
Moreover, box–shaped bulges, representing at least 20–30% of edge–on S0’s ([@des87; @sha87]), seem to show this morphology as a consequence of steps in the secular dynamical evolutionary processes in bars, as indicated by a series of recent results ([@kui95; @mer99; @bur99a; @ath99; @bur99b; @bur99c]).
Studies related to general properties of spirals (e.g., [@pel96]) revealed similar broadband colors of the inner disk and bulge; [@cou96] and [@dej96b] found a correlation between the scale lengths of disks and bulges. These results indicate the existence of an evolutionary connection between these two components (see [@wys97] for a review), and have been interpreted as a consequence of the dynamical secular evolutionary scenario.
Another way to obtain clues about the formation and evolution processes in galaxies is through the study of the radial color distribution. Surprisingly, however, there are few statistical works in the literature exploring the broadband colors to study the bulge and the disk components separately. The study of the integrated broadband colors of galaxies has been done to obtain information concerning the stellar population (e.g., [@sea73; @tin80; @fro85; @pel89; @sil94]), as well as the internal extinction caused by the interstellar dust (e.g., [@eva94; @pel94]). Exceptions are the works of [@dej94] and [@dej96a], for instance. Such studies certainly bring clues about the bulge formation scenarios.
The main goal of this paper is to compare the behaviour of color gradients in barred and non–barred late–type galaxies and to verify whether the results agree with the predictions of these evolutionary processes. A very fast way to test alternative scenarios for the formation and/or building of bulges, exploring the radial color distribution from a statistical point of view, is to use the data available in the literature.
With this objective, we have selected a sample of 257 Sbc galaxies with broadband colors available in the literature and observed through photoelectric aperture photometry (Sect. 2). Using robust statistical methods, we estimated, for each galaxy, the color gradient as well as the mean total and bulge characteristic color indices (Sect. 3). Moreover, we have also acquired CCD images for 14 galaxies in the sample (Sect. 4) in order to test the accuracy of our results. In Sect. 5 we present the main results of our analysis and, finally, in Sect. 6 we present a general discussion and our main conclusions.
Sample Selection
================
The photoelectric data used in this analysis were extracted from the compilation by [@lon83] and its supplement ([@lon85]). Both compilations will hereafter be referred to as LdV83,85, respectively. Among other information, the catalogue presents for different galaxies the (U$-$B) and (B$-$V) aperture color indices extracted from the literature. We have selected galaxies with Hubble stage index T = 3, 4 or 5, corresponding to morphological types Sb, Sbc and Sc, barred and non–barred, and having B$_{T}$ brighter than 14, according to the Third Reference Catalogue ([@dev91]; hereafter RC3). This criterion assures that the morphological classification is more reliable, since fainter objects are, in general, more difficult to classify. Nevertheless, it is worth noting that several galaxies were classified differently in LdV83,85 and in the RC3. Since the rms uncertainty associated with the morphological type is of order 2 units ([@lah95]), we will consider all galaxies in our sample as belonging to one unique mean morphological class (T = 4 $\pm$ 1).
We remark that our choice of galaxies in this specific type range was motivated by the fact that these are the most luminous objects in the B broadband along the Hubble sequence ([@van97; @rob94]), possibly indicating that these systems have the highest rate of star formation among spirals. Another reason for this choice comes from the observation that the dynamical evolutionary processes possibly occur mainly in late–type spirals rather than in early–type ones ([@wys97]).
A first selection of the data was partially done with the electronic version of the catalogue, available at the CDS ([*Centre de Données Astronomiques de Strasbourg*]{}), and resulted in a sample containing 531 objects. In order to have an equally representative set of data, we have removed from the sample those objects with less than 5 different color aperture data. Thus, we selected only those objects for which a more careful study of the distribution of the color indices could be done.
We know that extinction by dust can strongly affect studies of the radial color distribution in the U, B and V bands, in particular for late–type galaxies. However, these bands are well suited for the study proposed here, since in them we can detect recent star formation, which is a possible consequence of the dynamical secular evolutionary scenario. In order to minimize the effects of dust, we have also performed a visual inspection of all galaxies, using images of the DSS (Digitized Sky Survey), eliminating peculiar systems (e.g., NGC 891) presenting clear perturbations, such as strong dust lanes or close companions in strong interaction, that could disturb the analysis. After this last step we ended up with a final sample of 257 galaxies, used in the present analysis.
Estimating Gradients and Colors
===============================
Color Gradients
---------------
As mentioned before, we have used the LdV83,85 data to estimate the (U$-$B) and (B$-$V) color gradients of the galaxies in our sample. Since this is a compilation of data acquired by different observers, telescopes, instruments and in different atmospheric conditions, it is natural that, for any given galaxy, some data will not appear consistent, due to larger internal errors. For instance, different authors could assign quite distinct values to the color index of the same galaxy at the same aperture. Indeed, this is the case, for example, of NGC 2377 at the aperture of 2.6 arcminutes, where three different sources gave the (U$-$B) color index the values 0.11, 0.20 and 0.38! Therefore, trying to fit a straight line to the color data, using these discrepant values and the classical least squares regression (LS), will result in a quite uncertain estimate of the color gradient.
Since we do not know a priori how to identify the bad data, it is mandatory to use a robust statistical technique that is less sensitive to the presence of these uncertainties. In our analysis, we chose to apply the Least Median of Squares (LMS) method ([@rou84]). Contrary to the classical LS regression, this method minimizes the [*median*]{} of the squared residuals. The results obtained are more resistant to the effects of contamination in the data. More specifically, the estimation of the color gradient was done with the program [progress]{} ([@rou87]), available at the StatLib (http://www.lib.stat.cmu.edu/). This program performs a robust regression analysis by means of the LMS method, yielding more reliable estimates of the regression parameters and allowing outliers in the data to be identified. [progress]{} first calculates the regression parameters by LS, then by LMS, and finally by a reweighted LS (in which the outliers have weight zero). Through this algorithm, the estimated gradient has, in most cases, the same value obtained through the LMS method alone. However, the reweighted LS method works better than the LMS method when the number of data points is small ([@rou84; @rou87]).
The color gradient was estimated following the same definition of [@pru98], i.e.,
$${G} = {{\Delta (X-Y)} \over {\Delta \log {A}}},$$
where $(X-Y)$ represents the integrated color index in magnitudes within an aperture $A$ in units of 0.1 arcminute.
The estimation of each gradient was also accompanied by a visual inspection of the fits, since, in some cases, the results from the non–reweighted LS were more representative than those from LMS or the reweighted LS. This can happen because, when trying to minimize the errors, the LMS method can be fooled by a small group of data points that fits a straight line very well. In these cases, we defined the gradient by the parameters obtained through the classical LS regression.
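The core LMS idea can be sketched as follows; this is a brute-force illustration, not the [progress]{} implementation (which additionally reweights and refits by least squares). Candidate lines are scored by the median of their squared residuals, and with $\log A$ as abscissa the fitted slope is the gradient $G$ defined above:

```python
# Schematic least-median-of-squares (LMS) line fit: random two-point
# candidate lines are scored by the median squared residual, and the best
# one is kept. This is an illustration of the idea, not the PROGRESS code.
import random
import statistics

def lms_line(x, y, trials=2000, rng=None):
    rng = rng or random.Random(1)
    best = None
    n = len(x)
    for _ in range(trials):
        i, j = rng.sample(range(n), 2)
        if x[i] == x[j]:
            continue
        slope = (y[j] - y[i]) / (x[j] - x[i])
        inter = y[i] - slope * x[i]
        med = statistics.median((yk - (slope * xk + inter)) ** 2
                                for xk, yk in zip(x, y))
        if best is None or med < best[0]:
            best = (med, slope, inter)
    return best[1], best[2]  # slope (the gradient G) and intercept

# Toy data: true gradient -0.3, with one wildly discrepant measurement
# of the NGC 2377 kind; the LMS slope is unaffected by the outlier.
log_a = [0.2, 0.4, 0.6, 0.8, 1.0, 1.2]
color = [0.9 - 0.3 * a for a in log_a]
color[2] = 1.5  # outlier
g, c0 = lms_line(log_a, color)
```

An ordinary least-squares fit to the same toy data would be pulled visibly toward the outlier, which is exactly the failure mode the robust method avoids.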
From our sample of 257 galaxies, we obtained 239 (B$-$V) and 202 (U$-$B) color gradients. The other estimates were rejected either because the number of data points was too small and/or the points were too inconsistent to yield a reliable value. Figure 1 shows four examples of the radial color distribution in galaxies. NGC 1425 and NGC 2613 are examples of objects having the more typical negative color gradient. An example of an object with a clear null gradient is NGC 1672. The rarer case of objects with a positive gradient is represented here by UGC 3973. In this figure, we can also compare the fits using the three different methods discussed above. The dashed lines refer to the standard LS method, while the dotted lines refer to the LMS method and the solid lines to the [progress]{} algorithm. Note the importance of using a robust statistical method to determine color gradients in cases such as NGC 2613 and UGC 3973.
The LdV83,85 data are corrected for neither Galactic reddening nor internal reddening. In determining the color gradients, the correction for Galactic reddening is not necessary, since it only introduces a constant vertical shift of the points, not affecting the gradient evaluation. In contrast, the correction for internal reddening is quite difficult to predict correctly, due to the still unsolved problems related to the optical thickness and the inclination of galaxies (e.g., [@gio95; @dej96c]). Nevertheless, although such a correction could be important for any particular object, it would only produce minor changes compared to the uncertainties involved in the measurement and determination of the color gradients.
Indeed, models of the dust distribution in disk–dominated galaxies ([@dej96c]) show that only a small fraction of the color gradients can be due to dust reddening, i.e., dust reddening plays a minor role in color gradients. Furthermore, color gradients induced by dust are small from the U to the R broadbands, because the absorption properties do not change very much among these bands.
In Fig. 2 we show the color gradients plotted against both the Galactic reddening and the inclination of the galaxies. We can see from this figure that the two corrections mentioned above do not interfere with the distribution of color gradients obtained in our sample. The top panels show that there is no correlation between color gradients and Galactic reddening, represented by the color excess $E(B-V)$ determined from the recently obtained maps of [@sch98]. Similarly, since the internal reddening varies with the inclination of the galaxy along the line of sight, which can be represented by the $\log R_{25}$ parameter of the RC3, the bottom panels of Fig. 2 show that there is no clear correlation between color gradients and internal reddening either. Since no correlation was found, we opted to neglect the internal reddening when estimating the color gradients. We remark, however, that both effects are still relevant when dealing with the integrated colors, as we discuss in the next subsection.
Total and Bulge Colors
----------------------
We have used two different procedures to determine the total and the bulge characteristic color indices. In the first one, we adopt the bulge color as the one observed through the smallest aperture, and the total color as the one observed through the aperture that reaches the 25 mag arcsec$^{-2}$ B isophotal level, as presented in the RC3. In some cases, when the data did not reach the required dimensions, an extrapolation was done using the estimated gradient; conversely, when several data points were available at a given aperture, their average was taken. No reddening corrections were applied in this method.
We stress that this method is completely unbiased, in the sense that we use the original data, and therefore it is useful to verify whether the total colors and the colors of bulges are correlated. Such a correlation should in fact exist, since the colors of bulges and disks are correlated ([@pel96]). Those authors used the (U$-$R), (B$-$R), (R$-$K) and (J$-$K) colors in a sample of 30 early–type spirals (earlier than Sbc). As shown in Sect. 5.5, we also found a good correlation, consistent with their findings.
Since galaxies have different angular sizes and were observed through different sets of apertures, the method described above is only an approximate procedure, because, in many cases, the measurements were made at different galactocentric distances. In order to compare the data of galaxies at the same physical dimension, we have defined a characteristic bulge color index as the one measured within 1/5 of the galaxy effective radius. Even if there is some disk contamination at this aperture, the major contribution comes from the bulge, and therefore we made no attempt to correct for this contamination. Using the definition of the gradient (Eq. (1)), this bulge color was derived from our fits as
$$(X-Y)_{b} = (X-Y)_{eff} - 0.7\,G,$$
where $(X-Y)_{eff}$ is the effective color index, measured within the effective aperture in the B band. We have also defined a characteristic total color as the one measured within 2 effective radii, corresponding therefore to
$$(X-Y)_{T} = (X-Y)_{eff} + 0.3\,G.$$
Equations (2) and (3), and the effective color indices given by the RC3, were used to determine these characteristic colors.
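The offsets in Eqs. (2) and (3) follow directly from the gradient definition of Eq. (1): the bulge aperture sits at $\log_{10}(1/5) \approx -0.70$ and the total aperture at $\log_{10} 2 \approx 0.30$ in log aperture relative to the effective one. A minimal Python sketch of this conversion (the function name is ours, not from the paper):

```python
import math

def characteristic_colors(color_eff, gradient):
    """Bulge and total characteristic colors from the effective
    color index and the gradient G = d(color)/d(log A) of Eq. (1).
    Bulge: 1/5 of the effective aperture -> delta log A = log10(1/5);
    total: 2 effective radii          -> delta log A = log10(2)."""
    bulge = color_eff + gradient * math.log10(1 / 5)  # Eq. (2): eff - 0.7 G
    total = color_eff + gradient * math.log10(2)      # Eq. (3): eff + 0.3 G
    return bulge, total
```

For instance, an effective (B$-$V) of 0.70 with a typical gradient $G = -0.10$ gives a bulge color of about 0.77 and a total color of about 0.67, i.e., a bulge redder than the integrated light, as expected for a negative gradient.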
As already mentioned, this second method is more suitable for comparing values from different galaxies. However, we could not use it to verify the correlation between the total colors and the colors of bulges, since Eqs. (2) and (3) already impose such a correlation, as one can see by subtracting them. This justifies our first, rougher method, which is used only to verify the existence of a real correlation, since it does not suffer from this kind of bias.
We have corrected these characteristic color indices for Galactic reddening using the maps of [@sch98] to obtain the (B$-$V) color excess, and used the relation
$$\frac{E(U-B)}{E(B-V)} = 0.72\pm0.03,$$
which can be found in [@kit98].
We did not correct these values for any differential internal reddening between bulge and disk. Instead, we have applied an integrated correction to account for the effects of inclination. According to [@gio94], the internal extinction as a function of the inclination of the galaxy, derived from I–band images of Sc galaxies, is
$$A_{I} = 1.12(\pm 0.05) \log \frac{a}{b},$$
where $a$ and $b$ are, respectively, the major and minor axes of the galaxy. For the U, B and V bands, [@elm98] shows that the extinction coefficients are, respectively, 3.81, 3.17 and 2.38 times the extinction coefficient in the I band, according to observations made in the Galaxy. Since the Galaxy is likely an Sbc galaxy, we used these same relations for the objects in our sample. Using the definition of the color excess and the fact that $\log a/b$ is approximately equivalent to the $\log R_{25}$ parameter of the RC3, we finally arrive at the relations used in our work:
$${E(U-B)} = {0.68 \log R_{25}}$$
and
$${E(B-V)} = {0.87 \log R_{25}}.$$
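Eqs. (6) and (7) can be reproduced, to within rounding, from the ingredients above; the sketch below (with naming of our own) simply combines Eq. (5) with the quoted band coefficients. For $\log R_{25} = 1$ it yields $E(U-B) \approx 0.72$ and $E(B-V) \approx 0.88$, close to the adopted 0.68 and 0.87; the small offsets presumably reflect rounding in the original derivation.

```python
# Band extinction coefficients relative to the I band (as quoted
# in the text from Elmegreen's Galactic observations).
COEF = {"U": 3.81, "B": 3.17, "V": 2.38, "I": 1.0}

def color_excess(band1, band2, log_r25):
    """E(X-Y) = A_X - A_Y, with A_I = 1.12 log(a/b) (Eq. 5)
    and log(a/b) taken as the RC3 parameter log R25."""
    a_i = 1.12 * log_r25
    return (COEF[band1] - COEF[band2]) * a_i
```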
It is interesting to observe that the corrections we have applied are actually $\sim$ 2–3 times larger than the ones adopted in the RC3. Indeed, earlier works (see, e.g., [@dev59]) argued that spiral galaxies were nearly transparent, but more recent studies (e.g., [@bos94; @gio95]) show that the optical thickness of spiral galaxies is higher. The corrections adopted in Eqs. (6) and (7) assume that spiral galaxies have a large optical thickness, and are thus much more realistic.
Although the galaxies in our sample can be considered local ($-295$ km/s (NGC 224) $\leq cz \leq$ 8720 km/s (UGC 4013), with a typical value of $cz \sim 2000$ km/s), we have also applied the K–correction, using the equations of the RC3.
The galaxies analyzed in this work, as well as the results from the determination of the (B$-$V) and (U$-$B) gradients, and of the total and bulge color indices, can be seen in Table 1.
Comparative Studies
===================
CCD Images
----------
The ideal set of data to study the radial color distribution in the disk and bulge components is obtained with CCD photometry, which permits a differential evaluation of the color along the galaxies. However, as mentioned before, we chose a faster approach in order to have a statistically significant set of data. This was the main reason that led us to use the available data from LdV83,85. It is interesting, therefore, to compare the color distributions obtained from CCD and aperture photoelectric photometry.
In this subsection, we present a comparison with the CCD data of 14 galaxies observed at the Pico dos Dias Observatory (PDO/LNA – CNPq, Brazil). The CCD observations were made with a 24 inch telescope with a focal ratio of f/13.5, using a thin back–illuminated CCD SITe SI003AB with 1024 $\times$ 1024 pixels. The plate scale is 0.57 arcsec/pixel, resulting in a field of view of approximately 10 $\times$ 10 arcmin. The CCD gain was fixed at 5 electrons/ADU and the read–out noise at 5.5 electrons. All objects were observed in the B, V, R and I passbands of the Cousins system. For each object, we typically made 6 exposures in the B band, 5 in V, and 3 in the R and I bands, with exposure times of 300 seconds. The multiple exposures were intended to ease cosmic ray removal. The data were calibrated with a set of standard stars from [@gra82] and corrected for atmospheric and Galactic extinction. The latter correction was done using the maps of [@sch98].
The standard processing of the CCD data includes bias subtraction, flatfielding and cosmetic corrections. The first step in the sky subtraction was to edit the combined images in each filter, removing the galaxy and stars. We then determined the mean sky background and its standard deviation ($\sigma$), and removed all pixels whose values deviated by more than 3 $\sigma$ from the mean background. A sky model was obtained by fitting a linear surface to the image, and this model was subtracted from the combined image. We finally removed objects such as stars and H[ii]{} regions. All these procedures were done using the [iraf]{}[^2] package.
We then used the [ellipse]{} task to calculate the surface brightness profiles of each galaxy in each band. By subtracting the profiles we obtained color profiles, which we tabulated in the same units as those of LdV83,85. These tables were fed to the [progress]{} algorithm to provide values for the gradients in the same way as was done for our whole sample.
In Fig. 3, we plot the CCD gradients against those obtained from the photoelectric aperture data, showing that both estimations are essentially the same. The good correlation between these two sets of values (Pearson correlation coefficient R = 0.93) supports the results obtained with the LdV83,85 data. The mean difference is G$_{LdV}$ $-$ G$_{CCD} \simeq -0.06$.
We have also made a comparison with the CCD observations of [@dej94], who studied color profiles in a sample of 86 face–on disk galaxies. For the 8 galaxies our samples have in common, we applied the same method used in this work to the B and V CCD images kindly provided by de Jong, simulating photometric apertures on these images with the [imexamine]{} task from [iraf]{}. The comparison of the (B$-$V) gradients obtained from the photoelectric data of LdV83,85 and from de Jong's CCD images gives a Pearson correlation coefficient of $R=0.74$; if two outliers are excluded, the coefficient rises to $R=0.97$.
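The sensitivity of the Pearson coefficient to a couple of outliers, as seen here (0.74 with them, 0.97 without), is easy to reproduce with a short sketch (a generic illustration on made-up numbers, not the actual gradient data):

```python
import math

def pearson_r(xs, ys):
    # Sample Pearson correlation coefficient.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return cov / (sx * sy)

# Six perfectly correlated points plus two discrepant ones (made up):
xs = [0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.0, 0.5]
ys = [0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.5, 0.0]
r_all = pearson_r(xs, ys)            # strongly degraded by the outliers
r_clean = pearson_r(xs[:6], ys[:6])  # ~1.0 for the clean points alone
```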
Comparison with Prugniel & Héraudeau
------------------------------------
[@pru98], hereafter PH98, have also estimated (U$-$B) and (B$-$V) gradients, using both CCD and photoelectric aperture photometry, for a large fraction of the galaxies in our sample. To avoid the uncertainties due to inconsistent measurements, these authors attributed different statistical weights to each source of data.
In Fig. 4 we present a comparison between the gradients determined in the present work and those estimated by PH98. The correlation coefficient R is 0.85 for (B$-$V) and 0.81 for (U$-$B). Moreover, there are no systematic differences between the two works: the mean value of the differences is 0.004 in (B$-$V) and 0.011 in (U$-$B).
Analysing Gradients, Colors and Bars
====================================
In this section, we analyse the results obtained in Sect. 3, regarding the color gradients as well as the total and bulge color indices. We have separated our sample into barred (SAB+SB) and non–barred (S+SA) galaxies, in order to test bulge formation in the evolutionary and monolithic scenarios. Since the identification of bars is much more difficult in edge–on systems, and the effects of dust extinction are minimized in face–on galaxies, we took special care to analyse the face–on and the edge–on galaxies separately. We use the same criterion as [@dej94], defining as face–on those galaxies with $\log R_{25} \leq 0.20$, corresponding to $b/a \geq 0.625$; galaxies that do not obey this criterion are regarded as edge–on. The following galaxies, whose gradients were too uncertain to be used, were removed from our sample: IC 983, NGC 253, NGC 1169, NGC 1625, NGC 1964, NGC 2276, NGC 2377, NGC 2525, NGC 3344, NGC 3646, NGC 4394, NGC 4402, NGC 5054, NGC 6215, NGC 6300, NGC 6878A, NGC 7307 and UGC 11555.
Table 1 shows the color gradients and their errors for all galaxies in our sample, as well as the bulge and total characteristic color indices. The errors of the gradients are the ones obtained through the [progress]{} algorithm and are thus fit errors, which are larger than the photometric errors alone. The mean error on the (B$-$V) gradient is 0.03, and on the (U$-$B) gradient 0.05. The mean errors for the bulge and total color indices are, respectively, 0.04 and 0.03 for (B$-$V), and 0.05 and 0.04 for (U$-$B).
Gradients’ Distributions
------------------------
The distributions of the color gradients for barred and non–barred galaxies, considered separately in both face–on and edge–on projections, can be seen in Fig. 5. The statistical data from this figure are presented in Table 2, where column (1) contains the description of each subsample, while columns (2) and (5) contain the total number of objects in each subsample for each color index. Columns (3) and (6) show the mean values and their respective standard errors. Finally, columns (4) and (7) contain the standard deviations of these distributions. These values were obtained through a Gaussian fit to the observed distribution.
We can observe from Fig. 5 that the (U$-$B) distribution for barred galaxies, in both the face–on and edge–on projections, is wider than the distribution for non–barred galaxies. The results in Table 2 also show that barred galaxies have wider distributions. From this table, we can see that the differences in the standard deviations are larger than the expected photometric errors, indicating that this is indeed a real effect. With a smaller amplitude, the same effect is also present in the (B$-$V) gradients. Even considering that the photometric errors are larger in the U band, this can hardly explain the effect, since such errors affect both kinds of objects, barred and non–barred, in the same way. Therefore, this is a real characteristic of barred galaxies, namely, to present a larger interval of (U$-$B) color gradients, probably associated with recent episodes of star formation. We note that the wider distributions are caused by a larger fraction of barred galaxies having zero or positive gradients. In (U$-$B), for instance, 55% $\pm$ 8% of the face–on barred galaxies have zero or positive gradients, whereas for the face–on non–barred galaxies this fraction is reduced to 32% $\pm$ 12%. Considering the (B$-$V) index, however, these fractions are more similar, being 41% $\pm$ 6% among barred galaxies and 31% $\pm$ 11% among non–barred galaxies. At this point we might suspect that this difference between the two color indices is caused by a larger age/metallicity sensitivity of the (U$-$B) color index. We remark that this effect is present even when we do not separate the edge–on and face–on galaxies.
Moreover, one can see in Fig. 5 that the majority of barred galaxies have less pronounced (U$-$B) gradients than the non–barred galaxies, as can also be verified through the mean values presented in Table 2. Interestingly, this behaviour does not occur in the (B$-$V) color, probably because any enhancement in the star formation rate affects the (U$-$B) color more than the (B$-$V) color.
Another interesting effect is that the edge–on galaxies show a tendency to have more pronounced negative gradients than the face–on systems, especially in (U$-$B). This effect may well be related to the fact that the internal reddening is stronger in edge–on galaxies, and points to the presence of a small differential internal correction that affects the bulge and the disk in different ways. Indeed, one can conclude that the light emitted by the central regions is more affected by reddening, a result that agrees with those presented by [@dej96c].
We present in Fig. 6 the (U$-$B) versus the (B$-$V) gradients for the non–barred galaxies (a), the barred galaxies (b) and the total sample (c). We can see from this figure that the gradients in both colors are well correlated, and that there is no difference in the correlation between barred and non–barred galaxies. In fact, the Pearson correlation coefficient R is 0.71 for non–barred galaxies, 0.80 for barred galaxies, and 0.78 for the whole sample. The same correlation was observed when separating face–on and edge–on galaxies, without noticeable differences. Again, we can see that barred galaxies have a more extended color gradient amplitude in these plots. These correlations are indeed expected, since the same physical reason rules the gradients in both colors, namely, variations between the stellar populations of the inner and outer regions of the galaxies. The models of [@lar78], for instance, show that, for a population formed in a single burst, the variation in (B$-$V) between populations with an age difference of 10 Gyr is 1.1, while for (U$-$B) it is 1.5. Thus, under these conditions, we should expect $\Delta (U-B) / \Delta (B-V) = 1.4$. Since the color gradients are $G_{B-V} = \Delta (B-V) / \Delta \log {A}$ and $G_{U-B} = \Delta (U-B) / \Delta \log {A}$, we should then have $G_{U-B} / G_{B-V} = 1.4$. The correlations in Fig. 6 give us $G_{U-B} / G_{B-V} = 1.2$, close to what is predicted by these simple models. The small difference might indicate that we are seeing stellar populations mixed with dust, since Larson and Tinsley's models do not take dust into account.
We interpret this agreement, as will be seen in Sect. 5.5, as reflecting the fact that the total color index is relatively stable among the galaxies in our sample, while the color of the bulge varies noticeably between the barred and non–barred populations. Therefore, the amplitude of variation in the color gradients shown in Figs. 5 and 6 is related to variations in the stellar population of the bulges.
It is interesting to ask what would happen if the weakly–barred galaxies (SAB's) had been analysed separately. The answer is that the analysis would remain essentially the same. Indeed, barred and weakly–barred galaxies show essentially the same mean color gradients in both (B$-$V) and (U$-$B). The values for barred galaxies alone are $-0.12 \pm 0.02$ and $-0.11 \pm 0.03$, respectively, while for the weakly–barred galaxies they are $-0.14 \pm 0.02$ and $-0.13 \pm 0.03$.
Negative, Zero and Positive Gradients
-------------------------------------
The vast majority of the galaxies in our sample have negative gradients, as one can see from Fig. 5, implying that the bulge is redder than the disk. This result, in principle, is consistent with the monolithic scenario, where the older and redder population is located in the central parts, whereas the younger and bluer populations predominate in the outer regions of spiral galaxies.
In order to gain further insight, we have considered three arbitrary categories of color gradients, according to their values: objects with negative gradients, $G \leq -0.10$; galaxies with almost null gradients, $-0.10 < G < 0.10$; and galaxies with positive gradients, $G \geq 0.10$. In Table 3 we show, for the face–on galaxies in our sample, where the distinction between barred and non–barred is more reliable, the distribution among these three classes of objects in both colors. There are in total 124 face–on galaxies with a (B$-$V) gradient, and 104 with a (U$-$B) gradient. Column (1) presents the total number of galaxies in each class of color gradient, while column (2) gives their fraction of the total sample. Columns (3), (4) and (5) show, respectively, the fractions of non–barred, weakly–barred and barred galaxies in each gradient interval. Column (6) shows the total fraction of barred (SAB+SB) galaxies and, finally, column (7) shows the number of galaxies hosting AGN. Galaxies with AGN were identified through the catalog of [@ver98]. The reason to investigate this class of galaxies comes from the suggestion of other authors (e.g., [@shl89; @shl90]) that bars can fuel AGN through processes similar to those of secular evolution. We verify that, with small variations in each color index, approximately 59% of the galaxies present negative gradients, 27% null gradients, and 14% positive gradients. We remark that this result does not change considerably when we consider a more restrictive definition of the null gradient class, such as $-0.05 < G < 0.05$. Moreover, essentially the same result is obtained when we consider the whole sample, taking face–on and edge–on galaxies together.
The total fraction of face–on barred galaxies in our sample is 79%. We can see in Table 3 that there is an excess of barred galaxies among the ones with null or positive gradients. In (B$-$V), the fraction of barred galaxies with a negative gradient is 75%, while it rises to 91% among the ones with a null gradient. In (U$-$B), 73% of the galaxies with a negative gradient are barred, while 83% of the ones with a null gradient are barred, and 90% of the positive gradient galaxies are barred. If we consider the more restrictive criterion for a null gradient ($-0.05 < G < 0.05$), this excess is substantially emphasized: the fraction of barred galaxies with a null (B$-$V) gradient rises to 94%, and with a null (U$-$B) gradient to 88%. This result indicates that barred galaxies are over–represented among the objects having null or positive gradients. Therefore, bars seem to act as a mechanism of homogenization of the color indices, and thus of the stellar populations, along galaxies. As a consequence, we are forced to conclude that a classical monolithic scenario would have difficulty explaining this result.
Another interesting feature of Table 3 is that the fraction of galaxies with AGN increases from $\sim$ 8% for systems with negative gradients to $\sim$ 36% for objects with positive gradients. Even considering the low number statistics, this might be an indication that the homogenization of the stellar population induced by bars is related to the AGN phenomenon.
Color Gradients and Abundances
------------------------------
Recent theoretical studies (e.g., [@fri95]) of dynamical secular evolution show that a stellar bar is able to drive gas from the outer to the inner regions of the disk, through shocks and gravitational torques that remove angular momentum from the gas. Thus, a large–scale mixing of the gas must occur along the galaxy, which could, in principle, be observed in the radial abundance profiles of certain chemical elements. [@mar94], hereafter MR94, and [@zar94], hereafter ZKH94, present O/H abundance gradients in spiral galaxies determined through observations of H[ii]{} regions. Both studies show that barred galaxies tend to have less pronounced gradients. Moreover, MR94 conclude that the gradients become less pronounced as the normalized length of the bar, or its apparent ellipticity, increases. On the other hand, studies by [@sak99] show that barred galaxies have a higher central concentration of molecular gas (CO) than non–barred galaxies. Both results agree with the predictions of the theoretical studies of dynamical secular evolution. Then, if the abundances are affected by this mechanism, we should also expect it to affect the color gradients.
In order to verify this possibility, we have compared the 12 galaxies in common with MR94, and the 18 in common with ZKH94. In Fig. 7, we plot our color gradients against the abundance gradients of MR94 (top panel) and ZKH94 (bottom panel). We can see that there is no clear correlation between the photometric and the abundance gradients. This absence of correlation can hardly be a consequence of errors in the photometric gradients, which typically range from 0.02 to 0.05. The errors in the abundance gradients are more difficult to determine, as we can see from the quite different values of the NGC 2997 gradient as estimated by MR94 and ZKH94; however, these errors are also hardly larger than 0.02 dex kpc$^{-1}$ (ZKH94). One can interpret the absence of such a correlation as a real feature, and thus it is interesting to explore its consequences. Since the color indices are sensitive to both age and metallicity, this result could indicate that the excess of barred galaxies with null color gradients, found in Sect. 5.2, reflects a difference in the behaviour of the mean [*age*]{} of the stellar population between barred and non–barred galaxies, and not of its metallicity. In principle, however, this absence of correlation could also be attributed to the effects of dust extinction. We argued in Sect. 3.1 that these effects should be small, but in Sect. 5.6 below we present a quantitative analysis of them and conclude that it is possible that the lack of this correlation is caused by dust extinction.
We would expect to find such a correlation in the dynamical secular evolutionary scenario. However, if we consider that bars are a relatively fast, recurrent phenomenon, this absence of correlation would be natural. Indeed, we can imagine the following picture. A galaxy formed through the monolithic scenario would show both negative abundance and negative color gradients, and would thus be placed in the lower left region of Fig. 7. This galaxy can develop a bar, which makes its abundance gradient shallower, while its color gradient remains the same, because the time scale to mix the gas in the disk is smaller than the time required to form new stars in the central region. Galaxies at that stage would occupy the lower right part of Fig. 7. After the gas accumulates in the central region, it forms new stars, the color gradient becomes shallower, and the galaxy moves to the upper right part of Fig. 7. Instabilities generated by the mass accumulated in the central region then destroy the bar, interrupting the transfer of gas along it and steepening the abundance gradient, while keeping the color gradient unchanged; the galaxy is now in the upper left part of Fig. 7. The lack of new star formation in the central region and the aging of the stars then turn the color gradient negative again, and the galaxy returns to the lower left part of Fig. 7. If a new bar develops, the changes in the abundance and color gradients can occur again.
Color Gradients and the Morphology of Bars
------------------------------------------
In an attempt to perform a quantitative morphological study of bars in galaxies, [@mar95], hereafter M95, made visual estimates of the axial ratio, $b/a$, the major axis length (normalized by the 25 mag arcsec$^{-2}$ isophote), $L_{b}$, and the apparent ellipticity of bars in spiral galaxies. In that work, a relation was found between the length of the bar and the diameter of the bulge, in the sense that galaxies with large bulges also have large bars. Moreover, M95 found an apparent correlation between the presence of intense nuclear star formation and the axial ratio of the bar, in the sense that strong bars, those with $b/a \leq 0.6$, are present in galaxies with nuclear bursts of star formation.
A total of 45 galaxies in our sample were studied in M95, allowing us to search for correlations between our color gradients and the parameters of the bar morphology. Figure 8 shows, for these objects, our color gradients plotted against the bar parameters: axial ratio, $b/a$, length, $L_{b}$, and apparent ellipticity, $\varepsilon_{b}$. We detect no correlation of these morphological bar parameters with the color gradients, meaning that the color gradient does not depend on the morphology of the bar. Thus, there are galaxies with the same gradient and bars with quite distinct morphologies and, conversely, systems with the same bar morphology and quite different color gradients. It is worth noticing that MR94 found that the O/H abundance gradients in barred galaxies become less pronounced as the ellipticity or the length of the bar increases, i.e., galaxies with stronger bars have less pronounced O/H abundance gradients. Again, it is not unlikely that extinction by dust is masking a correlation. Alternatively, these results may be explained by the different time scales of homogenization of abundance gradients, measured in the gas, and color gradients, measured in the stars.
Total and Bulge Color Indices
-----------------------------
We remark that our total color indices are obviously affected by the contributions of both the bulge and the disk. The relative importance of these two components can be measured by the factor $f_B = L_{Bb}/L_{Bd}$, representing the bulge to disk luminosity ratio in the B band. On the other hand, both components have intrinsic colors (B$-$V)$_d$, (B$-$V)$_b$ and (U$-$B)$_d$, (U$-$B)$_b$. The total color is related to these component colors through the relations
$$(B-V)_T = (B-V)_b - 2.5 \log\frac{f_B+1}{\;\;\;\;f_B + 10^{ 0.4\Delta_{BV}}}$$
and
$$(U-B)_T = (U-B)_b + 2.5 \log\frac{f_B+1}{\;\;\;\;f_B + 10^{-0.4\Delta_{UB}}}$$
where $\Delta_{BV}=(B-V)_d-(B-V)_b$ and $\Delta_{UB}=(U-B)_d-(U-B)_b$.
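As a minimal sketch (with function names of our own), Eqs. (8) and (9) can be checked against their limiting cases: for $\Delta = 0$ or $f_B \to \infty$ the total color reduces to the bulge color, while for $f_B \to 0$ it reduces to the disk color.

```python
import math

def total_bv(bv_bulge, bv_disk, f_b):
    """Eq. (8): total (B-V) from the component colors and the
    bulge-to-disk B-band luminosity ratio f_B = L_Bb / L_Bd."""
    d = bv_disk - bv_bulge  # Delta_BV
    return bv_bulge - 2.5 * math.log10((f_b + 1) / (f_b + 10 ** (0.4 * d)))

def total_ub(ub_bulge, ub_disk, f_b):
    """Eq. (9): total (U-B), with Delta_UB defined analogously."""
    d = ub_disk - ub_bulge  # Delta_UB
    return ub_bulge + 2.5 * math.log10((f_b + 1) / (f_b + 10 ** (-0.4 * d)))
```

A bulge-dominated galaxy (large $f_B$) therefore has a total color close to its bulge color, while a disk-dominated one approaches the disk color, which is why a roughly constant total color still allows large bulge color variations.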
Table 4 shows the median values of the characteristic total and bulge color indices for the galaxies in our sample, separated by gradient class, together with their standard errors. For those objects with a null color gradient we show a single color value. In the right part of the table, we present the data relative to the face–on objects only. We can see that the same trend is present in both samples.
Considering both the face–on galaxies and the total sample, one can observe that the total colors remain almost unchanged among the three classes of gradients; the differences are small in both (U$-$B) and (B$-$V), within the errors. However, the bulges of null or positive gradient objects are systematically bluer than those of negative gradient objects. The differences are much larger than the errors, indicating a real effect: there is a difference of order 0.40 magnitudes between the colors of bulges in negative and positive gradient objects, while the estimated errors are within $\sim$ 0.03 magnitudes. Therefore, one major factor determining the value of the gradient is the bulge color. Moreover, the disk colors of objects with null or positive gradients should also be redder, in order to keep the total colors almost unchanged, as observed. This effect is not compatible with the monolithic scenario, since it indicates that, in the process of homogenization of the stellar population induced by bars, bursts of star formation occur in the bulge, in complete agreement with the secular evolutionary scenario.
Another way of looking at this effect is shown in Fig. 9, where we plot the relation between the total and bulge color indices for the different classes of gradients, considering only face–on galaxies. Although we use the total color instead of the disk color, these correlations have the same meaning as the ones found by other authors ([@pel96]), showing that the formation of the bulge and of the disk are parts of the same process. However, this figure also shows that the zero point of the correlation is quite different for objects having negative and positive color gradients. While the correlations go in the same sense, we can see again that the bulge is much bluer in objects with positive gradients, while the mean total color is the same, irrespective of the gradient category. These results do not change when we also consider the edge–on galaxies.
Once again, it is interesting to verify whether there are any differences between the properties of barred and weakly–barred galaxies. As with the color gradients, the characteristic total and bulge mean color indices for SB's and SAB's are essentially the same. The bulge colors for SB's are $0.56 \pm 0.02$ and $-0.01 \pm 0.04$ in (B$-$V) and (U$-$B), respectively, while for SAB's they are $0.60 \pm 0.02$ and $0.06 \pm 0.03$. The total colors for SB's are $0.45 \pm 0.02$ and $-0.11 \pm 0.02$ in (B$-$V) and (U$-$B), and $0.46 \pm 0.02$ and $-0.08 \pm 0.02$ for SAB's.
Dust Extinction
---------------
A fundamental point to be considered in this study is the effect of dust extinction and reddening. In principle, dust can disturb the analysis of the color distribution in galaxies. To minimize its effects we have made a careful sample selection, excluding galaxies presenting strong dust lanes.
Moreover, we have also performed our analysis on a sub–sample containing only the face–on galaxies of our total sample, for which it is well known that the effects of dust are minimized. We also consider the results from the models of dust distribution in disk dominated galaxies by [@dej96c], which show that dust reddening plays a minor role in color gradients. This author also argues that color gradients produced by dust are small from the U to the R bands, because the absorption properties do not change much in these bands. Furthermore, we have shown that there is an excess of barred galaxies with blue bulges in comparison with non–barred galaxies, and we conclude that this is related to recent bursts of star formation. Since the effects of dust do not depend on whether or not the galaxy hosts a bar, this main conclusion remains unaltered even if the extinction is considerable.
Nevertheless, although the extinction in face–on galaxies is smaller than in edge–on galaxies, it might be considerable in the central regions (see [@pel95]). Moreover, extinction and reddening depend on the geometry of the system and on the distribution of dust and stars (see, e.g., [@jan94]), so it is prudent to verify empirically the role of dust in color gradients. With this aim, we have used HST archival data (NICMOS and WFPC2), and some CCD images obtained at Pico dos Dias, to determine the optical (B,V,I) and near–IR (H,K) color gradients for some galaxies, which are useful to evaluate the role of dust. These galaxies were chosen to have an inclination representative of our sample. As we have no photometry in all selected passbands for all galaxies used in this analysis (see Table 5), we will assume that gradients such as ($H-K$) indicate variations in the old stellar population, while those such as ($B-V$) or ($B-I$) are especially sensitive to recent star formation. Color gradients such as ($I-H$) or ($V-H$) will primarily show the extinction caused by dust, as well as old stellar population gradients (see [@pel99]). All galaxies belong to our main sample (Sect. 2). As the HST data were measured only in the central region of the galaxies (inner $\sim$ 2 kpc), these central gradients should not be compared with the global ones obtained in Sect. 3.
Since most of the dust is accumulated in the central region, its role in the color gradients evaluated here may be considered an upper limit. Let us first evaluate the HST data. As the dust and gas contributions are not the same for all galaxies, we will discuss the results for each one individually, and summarize them in Table 5. NGC 3310 shows a very small old population gradient ($G(H-K)=-0.04$) and a small old population/dust gradient ($G(I-K)=-0.11$), while the color gradient produced by recent star formation is large ($G(B-I)=-0.41$). Thus one can conclude that, for this galaxy, dust may be responsible for $\sim 17\%$ of the observed central color gradient. NGC 5033 also has a very small old population gradient ($G(V-H)=+0.01$) but a [*positive*]{} and large star formation gradient ($G(B-H)=+0.39$). This means that, even with the dust present in the centre of this galaxy (as can be seen in the HST images), the blue light emitted by the young population is strong enough to produce positive color gradients. Another possible explanation for this behaviour would be a strong off–centered dust lane, but no such lane was found in the images. NGC 5194 also has a very small old population gradient ($G(H-K)=-0.02$) and a considerable old population/dust gradient ($G(V-K)=-0.21$). From the star formation gradient values one can conclude that, in this galaxy, dust may cause nearly half of the observed central color gradient. Finally, NGC 5248 has a considerable old population/dust gradient ($G(V-H)=-0.24$), but a [*positive*]{} star formation gradient. The conclusions are the same as for NGC 5033.
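The $\sim 17\%$ figure quoted above for NGC 3310 follows from a simple decomposition. A minimal sketch, assuming (as in the text) that $G(H-K)$ traces the old population alone, so that the excess of $G(I-K)$ over $G(H-K)$ can be attributed to dust:

```python
# Dust-fraction estimate for NGC 3310, using the gradients quoted in the text.
g_hk = -0.04   # G(H-K): old-population gradient alone
g_ik = -0.11   # G(I-K): old population + dust gradient
g_bi = -0.41   # G(B-I): observed central (star-formation sensitive) gradient

dust_gradient = g_ik - g_hk           # part of G(I-K) attributable to dust
dust_fraction = dust_gradient / g_bi  # share of the observed central gradient

print(f"dust gradient  ~ {dust_gradient:+.2f}")   # ~ -0.07
print(f"dust fraction  ~ {dust_fraction:.0%}")    # ~ 17%
```

The same decomposition, applied with ($V-K$) and ($B-V$) gradients, yields the "nearly half" estimate for NGC 5194.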
Another way to study the role of dust in color gradients is to determine the reddening it causes. Using the HST data again, we can estimate an upper limit by assuming that there is no dust reddening beyond 1 $R_{eff}$ and that there are no stellar population gradients. Thus, the difference in color from the center to 1 $R_{eff}$ can be attributed entirely to dust extinction. When the data do not reach 1 $R_{eff}$ we used the farthest available radius. We thus estimated such color excesses in ($I-K$) for NGC 3310, ($V-K$) for NGC 5194, and ($V-H$) and ($I-H$) for NGC 5248. The results are in Table 5. With the Galactic extinction law [@rie85] we have determined the extinction $A_{V}$ in the centre of these galaxies. Its average value is $A_{V}=1.5$. [@pel99] applied the same analysis to a sample of early–type spirals, obtaining $A_{V}=0.6-1.0$.
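The conversion from a color excess to $A_V$ can be sketched as follows. This is only an illustration: the $A_\lambda/A_V$ ratios below are indicative values in the spirit of a Rieke & Lebofsky–type Galactic law, and the $E(V-K)=1.3$ input is a hypothetical color excess chosen for the example, not a value from Table 5:

```python
# Assumed A_lambda/A_V ratios (illustrative, Rieke & Lebofsky-like law).
ratio = {"V": 1.00, "I": 0.48, "H": 0.18, "K": 0.11}

def av_from_excess(excess, band1, band2):
    """A_V implied by a color excess E(band1 - band2), assuming the
    excess is produced entirely by dust obeying the adopted law."""
    return excess / (ratio[band1] - ratio[band2])

# A hypothetical central color excess E(V-K) = 1.3 mag would imply:
av = av_from_excess(1.3, "V", "K")
print(f"A_V ~ {av:.1f}")   # ~ 1.5, comparable to the average quoted in the text
```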
We applied the same procedure used for the HST data to 5 galaxies observed by us at the Pico dos Dias observatory in the B, V and I bands (see Table 5). Assuming that ($B-V$) gradients are sensitive to recent star formation, while ($V-I$) gradients are old population/dust gradients, we can infer that the dust contribution to the observed color gradients is up to 45% in the central region. As one can see, there are 2 galaxies with negative old population/dust gradients but positive star formation gradients. This result is in agreement with the one obtained using the HST data. We have also estimated an average value for $A_{V}$ using the ($V-I$) color excesses; its value is $A_{V}=0.4$. This value is lower than the one obtained with the HST data simply because it was not derived from optical–near–infrared colors.
Now, assuming that the color excesses obtained truly represent an effect of dust extinction, we can “correct” the colors inside 1 $R_{eff}$ and re–calculate the ($B-V$) and ($U-B$) color gradients, using the Galactic extinction law. Table 6 shows the results and compares them with the gradients determined in Sect. 3. It can be seen that, with the HST data, dust effects can in some cases significantly alter the determined color gradients. In other cases, however, even the high upper-limit values of $A_{V}$ do not change the results. Table 6 also shows that using the color excesses obtained through our B, V and I CCD imaging makes no significant change in the color gradients.
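The correction step can be sketched as below. Everything here is illustrative: the toy color profile, the $A_B/A_V = 1.32$ ratio, and the assumption that the reddening applies only inside 1 $R_{eff}$ (i.e. at $\log(r/R_{eff}) < 0$) are stated assumptions, not measured values from Table 6:

```python
# De-redden the colors inside 1 R_eff and refit the gradient (a sketch).
A_V  = 1.5                    # assumed central extinction from a color excess
E_BV = (1.32 - 1.00) * A_V    # E(B-V) implied by the assumed A_B/A_V ratio

# Toy color profile: (log10(r / R_eff), observed B-V).
profile = [(-0.6, 0.80), (-0.3, 0.72), (0.0, 0.66), (0.3, 0.60)]

def gradient(points):
    """Least-squares slope of color against log(r/R_eff)."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    num = sum((x - mx) * (y - my) for x, y in points)
    den = sum((x - mx) ** 2 for x, _ in points)
    return num / den

# Apply the reddening correction only inside 1 R_eff (log(r/R_eff) < 0).
corrected = [(x, c - E_BV if x < 0 else c) for x, c in profile]

print(f"observed  G(B-V) = {gradient(profile):+.2f}")    # -0.22
print(f"corrected G(B-V) = {gradient(corrected):+.2f}")  # +0.42
```

With these toy numbers the correction even flips the sign of the gradient, which illustrates how, in some cases, dust effects can significantly alter the gradients, as noted above for the HST data.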
This study has led us to conclude that the extinction in the center (inner $\sim$ 2 kpc) of late–type spirals is indeed high, with a typical value of $A_{V}=1-2$ magnitudes. However, the results shown here seem to indicate that dust is strongly concentrated in the center, so that [*global*]{} color gradients are, in general, not much disturbed by dust. The fact that some galaxies have positive gradients even with dust present in the center shows that the excess of barred galaxies with blue bulges found in this work is a result which is not affected by our ignorance of the dust effects. It also means that, in these blue bulges, an underlying old stellar population may lie beneath a recent burst of star formation. On the other hand, it seems that the absence of correlations between color gradients and abundance gradients (Sect. 5.3), and between color gradients and the bar morphology (Sect. 5.4), could possibly be explained by dust extinction.
General Discussion and Conclusions
==================================
In the previous section we noticed that barred galaxies have less pronounced (U$-$B) mean color gradients. Moreover, in both (U$-$B) and (B$-$V) the amplitude of variation of the gradient, as measured by the standard deviation of its distribution, is larger in barred than in non–barred galaxies. These results imply that there is an excess of barred galaxies among the objects with null or positive gradients, as can be seen from Table 3. As a consequence, we conclude that bars act to promote a more homogeneous stellar population in late–type spirals. Besides an underlying old and red stellar population, disks of late–type spirals have ubiquitous young and blue stars. Bulges in general have an old stellar population, but we have shown here that bulges of late–type barred galaxies also have an important young stellar component. Therefore, the stellar population of barred galaxies tends to show a degree of mixing not compatible with the pure monolithic scenario.
We found no correlation between the color and abundance gradients. We must consider here the results of Sect. 5.6, i.e., dust extinction is considerable in the central region of late–type spirals but, in general, does not strongly disturb global color gradients. In spite of the caveat that this lack of correlation may be caused by the effects of dust, judging from the estimated photometric and abundance errors, we believe that this could be a real effect, indicating that color gradients may not be associated with metallicities (but see [@pel99]). Therefore, the presence of color variations inside a given galaxy is quite probably related to an age effect caused by bursts of star formation. The absence of this correlation could also be explained if we consider that bars are a fast recurrent phenomenon.
Another conclusion from this study is that the mean total color indices remain remarkably constant, independently of the galaxy’s color gradient. From the sample of face–on objects we can verify in Table 4 that the total mean colors are (B$-$V)$_{T} \simeq 0.55 \pm 0.02$ and (U$-$B)$_{T} \simeq -0.02 \pm 0.06$. On the other hand, bulges behave quite differently. The mean colors of bulges in null gradient galaxies are $\sim$ 0.20 bluer than the colors of bulges in negative gradient systems. Bulges of positive gradient galaxies are even bluer, $\sim$ 0.50 bluer than bulges in negative gradient objects. We also see in Table 4 that this difference is far too large to be explained by photometric errors. In order to keep the total color unchanged it is necessary that the disks of the null or positive gradient galaxies become redder, i.e., evolve passively.
This same effect can be clearly seen in Fig. 9, where we present the correlation between total and bulge colors. In both the negative and positive gradient regimes there is a correlation between these two colors. These correlations are in agreement with those found by other authors ([@pel96]) for the colors of bulges and disks. However, we can also see from Fig. 9 that the correlation for the positive gradient objects is shifted in the blue direction by $\sim$ 0.50 magnitudes in their bulges. According to these authors, assuming similar metallicities for bulges and disks, their correlations imply a difference of less than $\sim$30% between the ages of the stellar populations in these two components. Again, the presence of a correlation between the total and bulge colors, as well as the bluer colors of bulges in galaxies with null or positive gradients, are not consistent with the pure monolithic scenario.
A more difficult task is to identify the correct evolutionary scenario responsible for these observable properties. The capture of nearby dwarfs in the accretion process of the hierarchical scenario seems to be incompatible with the constancy of the mean total colors of galaxies presenting different classes of color gradients, since this process does not predict a passive evolution for the disk. Moreover, the hierarchical scenario also does not predict an excess of barred galaxies showing null or positive color gradients.
On the other hand, the secular evolution induced by a bar can result in an enhancement of the star formation rate in the central regions of galaxies. This effect can be responsible for the bluer colors observed in bulges of galaxies showing null or positive color gradients. At this point we cannot say, however, whether this enhancement is occurring in the bulge or in the internal region of the disk.
It is a pleasure to thank Ronaldo E. de Souza and Rob Kennicutt for fruitful discussions and suggestions, and for a careful reading of a preliminary version of the paper, and Tim Beers for presenting us the LMS method. Special thanks go to Roelof de Jong for providing us his CCD observations. We also thank G. Longo for helpful answers to our questions. We thank the anonymous referee for helping to improve the article, specially the discussion on dust effects. We acknowledge the Conselho Nacional de Pesquisa e Desenvolvimento (CNPq), the NExGal – ProNEx and the Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP) for the financial support. We would also like to thank the staff at the Pico dos Dias Observatory (OPD/LNA – CNPq) for helping during the observational runs.
Athanassoula, E., and Bureau, M. 1999, accepted for publication in , astro-ph/9904206
Baugh, C.M., Cole, S., and Frenk, C.S. 1996, , 283, 1361
Berentzen, I., Heller, C.H., Shlosman, I., and Fricke, K.J. 1998, , 300, 49
Boselli, A., and Gavazzi, G. 1994, , 283, 12
Bouwens, R., Cayón, L., and Silk, J. 1998, astro-ph/9812193, accepted for publication in
Bureau, M., and Athanassoula, E. 1999, accepted for publication in , astro-ph/9903061
Bureau, M., and Freeman, K.C. 1999, accepted for publication in , astro-ph/9904015
Bureau, M., Freeman, K.C., and Athanassoula, E. 1999, in When and How do Bulges Form and Evolve?, ed. by C.M. Carollo, H.C. Ferguson & R.F.G. Wyse, Cambridge: CUP, astro-ph/9901246
Combes, F., and Sanders, R.H. 1981, , 96, 164
Courteau, S., de Jong, R., and Broeils, A. 1996, , 457, L73
de Jong, R.S. 1996a, , 118, 557
de Jong, R.S. 1996b, , 313, 45
de Jong, R.S. 1996c, , 313, 377
de Jong, R.S., and van der Kruit, P.C. 1994, , 106, 451
de Souza, R.E., and dos Anjos, S. 1987, , 70, 465
de Vaucouleurs, G. 1959, , 64, 397
de Vaucouleurs, G., de Vaucouleurs, A., Corwin, H.G., Buta, R.J., Paturel, G., and Fouque P. 1991, in: Third Reference Catalog of Bright Galaxies, Springer–Verlag, New York [**(RC3)**]{}
Eggen, O.J., Lynden–Bell, D., and Sandage, A.R. 1962, , 136, 748
Elmegreen, D.M. 1998, in Galaxies and Galactic Structure, Prentice Hall
Evans, R. 1994, , 266, 511
Friedli, D. 1999, in The Evolution of Galaxies on Cosmological Timescales, ed. by J.E. Beckman & T.J. Mahoney, ASP Conf. Ser., astro-ph/9903143
Friedli, D., and Benz, W. 1995, , 301, 649
Friedli, D., and Martinet, L. 1993, , 277, 27
Frogel, J.A. 1985, , 298, 528
Giovanelli, R., Haynes, M.P., Salzer, J.J., Wegner, G., da Costa, L.N., and Freudling, W., 1994, , 107(6), 2036
Giovanelli, R., Haynes, M.P., Salzer, J.J., Wegner, G., da Costa, L.N., and Freudling, W., 1995, , 110(3), 1059
Graham, J.A. 1982, , 94, 244
Jansen, R.A. et al. 1994, , 270, 343
Kauffmann, G., and White, S.D.M. 1993, , 261, 921
Kauffmann, G., Guiderdoni, B., and White, S.D.M. 1994, , 267, 981
Kitchin, C.R. 1998, in Astrophysical Techniques, Institute of Physics Publishing, Bristol and Philadelphia
Kormendy, J. 1982, , 257, 75
Kormendy, J., and Illingworth, G. 1983, , 265, 632
Kuijken, K., and Merrifield, M.R. 1995, , 443, L13
Lahav, O., Naim, A, Buta, R.J., Corwin, H.G., and de Vaucouleurs, G. et al. 1995, Science, 267, 859
Larson, R.B., and Tinsley, B.M. 1978, , 219, 46
Longo, G., and de Vaucouleurs, A. 1983, Univ. Texas Monographs in Astronomy, No. 3 [**(LdV83)**]{}
Longo, G., and de Vaucouleurs, A. 1985, Univ. Texas Monographs in Astronomy, No. 3A [**(LdV85)**]{}
Martin, P. 1995, , 109(6), 2428 [**(M95)**]{}
Martin, P., and Roy, J.R. 1994, , 424, 599 [**(MR94)**]{}
Merrifield, M.R., and Kuijken, K. 1999, , 345, L47
Norman, C.A., Sellwood, J.A., and Hasan, H. 1996, , 462, 114
Peletier, R.F. 1989, PhD Thesis, University of Groningen, The Netherlands
Peletier, R.F., and Balcells, M. 1996, , 111, 2238
Peletier, R.F., Valentjin, E.A., Moorwood, A.F.M., and Freudling, W. 1994, , 108, 621
Peletier, R.F. et al. 1995, , 300, L1
Peletier, R.F. et al. 1999, , 310, 703
Prugniel, Ph., and Héraudeau, Ph. 1998, , 128, 299 [**(PH98)**]{}
Rieke, G., Lebofsky, M.J. 1985, , 288, 618
Roberts, M.S., and Haynes, M.P. 1994, , 32, 115
Rousseeuw, P.J. 1984, Journal of the American Statistical Association, 79(388), 871
Rousseeuw, P.J., and Leroy, A.M. 1987, in: Robust Regression and Outlier Detection, Wiley–Interscience, New York
Sakamoto, K., Okumura, S.K., Ishizuki, S., and Scoville, N.Z. 1999, in When and How do Bulges Form and Evolve?, ed. by C.M. Carollo, H.C. Ferguson & R.F.G. Wyse, Cambridge University Press, astro-ph/9902005
Schlegel, D.J., Finkbeiner, D.P., and Davis, M. 1998, , 500, 525
Shlosman, I., Begelman, M.C., and Frank, J. 1990, , 345, 679
Shlosman, I., Frank, J., and Begelman, M.C. 1989, , 338, 45
Searle, L., Sargent, W.L.W., and Bagnuolo, W.G. 1973, , 179, 427
Shaw, M.A. 1987, , 229, 691
Silva, D.R., and Elston, R. 1994, , 428, 511
Tinsley, B.M. 1980, Fund. of Cos. Phys., 5, 287
Toomre, A. 1966, in Geophysical Fluid Dynamics, 1966 Summer Study Program at Woods Hole Oceanographic Institution, ref. no. 66-46, 111
van den Bergh, S. 1997, , 113, 2054
Véron–Cetty, M.P., and Véron, P. 1998, in: Quasars and Active Galactic Nuclei (8th Ed.), ESO Sci. Rep., 18, 1
Wyse, R.F.G., Gilmore, G., and Franx, M. 1997, , 35, 637
Zaritsky, D., Kennicutt, R.C., and Huchra, J.P. 1994, , 420, 87 [**(ZKH94)**]{}
[cccccccccc]{} ESO271-010 & SABcd(s) & -0,02 & 0,03 & 0,04 & 0,06 & 0,39 & 0,39 & -0,22 & -0,22\
IC0342 & SABcd(rs) & 0,43 & 0,08 & 0,57 & 0,09 & 0,23 & 0,66 & -0,51 & 0,06\
IC1954 & SBb(s) & -0,06 & 0,03 & 0,15 & 0,10 & 0,24 & 0,24 & -0,40 & -0,25\
IC1993 & SABb(rs) & -0,05 & 0,03 & -0,01 & 0,10 & 0,73 & 0,73 & 0,24 & 0,24\
IC2554 & SBbc(s) & -0,06 & 0,03 & 0,08 & 0,04 & 0,19 & 0,19 & -0,45 & -0,45\
IC4444 & SABbc(rs) & 0,00 & 0,03 & – & – & 0,40 & 0,40 & – & –\
IC4839 & SAbc(s) & -0,28 & 0,03 & -0,44 & 0,06 & 0,88 & 0,60 & 0,41 & -0,03\
IC4845 & Sb(rs) & 0,13 & 0,00 & -0,01 & 0,07 & 0,45 & 0,58 & 0,05 & 0,05\
IC4852 & SBbc(s) & -0,13 & 0,05 & 0,03 & 0,07 & 0,67 & 0,54 & -0,07 & -0,07\
IC5092 & SBc(rs) & -0,01 & 0,03 & -0,02 & 0,09 & 0,69 & 0,69 & 0,09 & 0,09\
IC5179 & Sbc(rs) & -0,17 & 0,04 & -0,11 & 0,03 & 0,46 & 0,29 & -0,12 & -0,23\
IC5186 & SABb(rs) & -0,11 & 0,01 & -0,11 & 0,04 & 0,53 & 0,42 & -0,06 & -0,17\
IC5325 & SABbc(rs) & -0,18 & 0,03 & -0,05 & 0,05 & 0,69 & 0,51 & -0,07 & -0,07\
MCG-2-14-4 & SABcd(rs) & -0,33 & 0,11 & 0,07 & 0,01 & 0,64 & 0,31 & -0,11 & -0,11\
NGC0001 & Sb & -0,14 & 0,01 & -0,19 & 0,05 & 0,76 & 0,62 & 0,19 & 0,00\
NGC0024 & Sc(s) & -0,20 & 0,06 & -0,14 & 0,06 & 0,29 & 0,09 & -0,33 & -0,47\
NGC0134 & SABbc(s) & -0,18 & 0,01 & -0,35 & 0,03 & 0,49 & 0,31 & 0,12 & -0,23\
NGC0150 & SBb(rs) & -0,22 & 0,00 & -0,21 & 0,04 & 0,54 & 0,32 & -0,05 & -0,26\
NGC0151 & SBbc(r) & -0,36 & 0,03 & -0,46 & 0,06 & 0,73 & 0,37 & 0,33 & -0,13\
NGC0157 & SABbc(rs) & -0,23 & 0,03 & -0,27 & 0,04 & 0,60 & 0,37 & 0,07 & -0,20\
NGC0210 & SABb(s) & -0,17 & 0,02 & -0,32 & 0,00 & 0,78 & 0,61 & 0,30 & -0,02\
NGC0224 & Sb(s) & -0,03 & 0,00 & -0,08 & 0,01 & -0,10 & -0,10 & -0,26 & -0,26\
NGC0278 & SABb(rs) & -0,03 & 0,04 & – & – & 0,51 & 0,51 & – & –\
NGC0289 & SBbc(rs) & -0,19 & 0,02 & -0,34 & 0,02 & 0,78 & 0,59 & 0,32 & -0,02\
NGC0309 & SABc(r) & -0,63 & 0,04 & -0,47 & 0,10 & 0,91 & 0,28 & 0,26 & -0,21\
NGC0440 & Sbc(s) & -0,16 & 0,02 & -0,24 & 0,06 & 0,51 & 0,35 & -0,03 & -0,27\
NGC0470 & Sb(rs) & -0,13 & 0,03 & 0,01 & 0,06 & 0,63 & 0,50 & -0,06 & -0,06\
NGC0488 & Sb(r) & -0,13 & 0,02 & -0,28 & 0,04 & 0,90 & 0,77 & 0,58 & 0,30\
NGC0578 & SABc(rs) & -0,21 & 0,03 & -0,10 & 0,01 & 0,50 & 0,29 & -0,15 & -0,25\
NGC0613 & SBbc(rs) & 0,01 & 0,02 & 0,03 & 0,04 & 0,63 & 0,63 & 0,06 & 0,06\
NGC0615 & Sb(rs) & -0,12 & 0,01 & -0,27 & 0,02 & 0,55 & 0,43 & 0,29 & 0,02\
NGC0628 & Sc(s) & -0,14 & 0,01 & -0,23 & 0,02 & 0,64 & 0,50 & 0,09 & -0,14\
NGC0685 & SABc(r) & -0,22 & 0,03 & -0,15 & 0,04 & 0,62 & 0,40 & -0,04 & -0,19\
NGC0779 & SABb(r) & -0,11 & 0,02 & -0,25 & 0,03 & 0,45 & 0,34 & 0,10 & -0,16\
NGC0782 & SBb(r) & -0,31 & 0,00 & -0,53 & 0,06 & 0,82 & 0,51 & 0,44 & -0,09\
NGC0864 & SABc(rs) & -0,11 & 0,03 & – & – & 0,55 & 0,44 & – & –\
NGC0908 & Sc(s) & -0,22 & 0,02 & -0,50 & 0,04 & 0,54 & 0,32 & 0,17 & -0,33\
NGC0958 & SBc(rs) & -0,13 & 0,04 & – & – & 0,48 & 0,35 & – & –\
NGC1055 & SBb & -0,10 & 0,02 & -0,26 & 0,04 & 0,53 & 0,43 & 0,13 & -0,13\
NGC1068 & Sb(rs) & -0,10 & 0,01 & 0,01 & 0,01 & 0,74 & 0,64 & 0,00 & 0,00\
NGC1073 & SBc(rs) & -0,17 & 0,03 & -0,47 & 0,04 & 0,62 & 0,45 & 0,22 & -0,25\
NGC1084 & Sc(s) & -0,14 & 0,04 & -0,26 & 0,01 & 0,50 & 0,36 & 0,03 & -0,23\
NGC1087 & SABc(rs) & -0,01 & 0,04 & 0,11 & 0,04 & 0,31 & 0,31 & -0,36 & -0,25\
NGC1097 & SBb(s) & 0,08 & 0,03 & 0,12 & 0,02 & 0,64 & 0,64 & 0,03 & 0,15\
NGC1187 & SBc(r) & -0,15 & 0,02 & -0,10 & 0,04 & 0,60 & 0,45 & -0,01 & -0,11\
NGC1232 & SABc(rs) & -0,24 & 0,01 & -0,60 & 0,04 & 0,79 & 0,55 & 0,45 & -0,15\
NGC1255 & SABbc(rs) & -0,36 & 0,02 & -0,14 & 0,04 & 0,62 & 0,26 & -0,10 & -0,24\
NGC1288 & SABc(rs) & -0,30 & 0,02 & – & – & 0,88 & 0,58 & – & –\
NGC1300 & SBbc(rs) & -0,18 & 0,03 & -0,22 & 0,03 & 0,71 & 0,53 & 0,21 & -0,01\
NGC1365 & SBb(s) & 0,11 & 0,02 & 0,13 & 0,04 & 0,43 & 0,54 & -0,16 & -0,03\
NGC1421 & SABbc(rs) & -0,30 & 0,02 & -0,22 & 0,03 & 0,24 & -0,06 & -0,28 & -0,50\
NGC1425 & Sb(s) & -0,13 & 0,02 & -0,25 & 0,02 & 0,51 & 0,38 & 0,13 & -0,12\
NGC1483 & SBbc(s) & -0,09 & 0,04 & -0,04 & 0,04 & 0,39 & 0,39 & -0,26 & -0,26\
NGC1515 & SABbc(s) & -0,09 & 0,02 & -0,12 & 0,01 & 0,33 & 0,33 & -0,05 & -0,17\
NGC1530 & SBb(rs) & -0,12 & 0,04 & -0,02 & 0,01 & 0,55 & 0,43 & -0,11 & -0,11\
NGC1536 & SBc(s) & -0,03 & 0,10 & – & – & 0,51 & 0,51 & – & –\
NGC1566 & SABbc(s) & -0,06 & 0,03 & -0,06 & 0,05 & 0,57 & 0,57 & -0,04 & -0,04\
NGC1614 & SBc(s) & 0,02 & 0,03 & 0,16 & 0,04 & 0,44 & 0,44 & -0,34 & -0,18\
NGC1620 & SABbc(rs) & -0,24 & 0,01 & – & – & 0,56 & 0,32 & – & –\
NGC1637 & SABc(rs) & -0,22 & 0,04 & -0,26 & 0,06 & 0,76 & 0,54 & 0,18 & -0,08\
NGC1672 & SBb(s) & 0,00 & 0,01 & 0,04 & 0,01 & 0,57 & 0,57 & -0,05 & -0,05\
NGC1688 & SBd(rs) & 0,15 & 0,04 & 0,42 & 0,12 & 0,30 & 0,45 & -0,44 & -0,02\
NGC1703 & SBb(r) & -0,18 & 0,00 & -0,64 & 0,00 & 0,69 & 0,51 & 0,36 & -0,28\
NGC1784 & SBc(r) & -0,30 & 0,00 & -0,22 & 0,05 & 0,81 & 0,51 & 0,24 & 0,02\
NGC1792 & Sbc(rs) & -0,19 & 0,03 & -0,22 & 0,04 & 0,51 & 0,32 & -0,06 & -0,28\
NGC1796 & SBc(rs) & -0,08 & 0,01 & -0,09 & 0,04 & 0,31 & 0,31 & -0,26 & -0,26\
NGC1832 & SBbc(r) & -0,43 & 0,02 & -0,42 & 0,06 & 0,77 & 0,34 & 0,19 & -0,23\
NGC1888 & SBc(s) & -0,14 & 0,05 & -0,26 & 0,05 & 0,43 & 0,29 & 0,11 & -0,15\
NGC1961 & SABc(rs) & -0,33 & 0,09 & – & – & 0,73 & 0,40 & – & –\
NGC2082 & SBb(r) & -0,18 & 0,03 & -0,13 & 0,06 & 0,67 & 0,49 & -0,08 & -0,21\
NGC2090 & Sc(rs) & -0,10 & 0,02 & -0,20 & 0,03 & 0,61 & 0,51 & 0,17 & -0,03\
NGC2206 & SABbc(rs) & -0,25 & 0,05 & – & – & 0,66 & 0,41 & – & –\
NGC2207 & SABbc(rs) & -0,32 & 0,03 & – & – & 0,64 & 0,32 & – & –\
NGC2223 & SABb(r) & -0,25 & 0,02 & -0,57 & 0,08 & 0,88 & 0,63 & 0,54 & -0,03\
NGC2268 & SABbc(r) & -0,08 & 0,02 & – & – & 0,53 & 0,53 & – & –\
NGC2336 & SABbc(r) & -0,21 & 0,03 & -0,61 & 0,05 & 0,60 & 0,39 & 0,37 & -0,24\
NGC2339 & SABbc(rs) & -0,12 & 0,02 & – & – & 0,75 & 0,63 & – & –\
NGC2347 & Sb(r) & -0,19 & 0,03 & – & – & 0,76 & 0,57 & – & –\
NGC2389 & SABc(rs) & -0,21 & 0,08 & – & – & 0,55 & 0,34 & – & –\
NGC2417 & Sbc(rs) & -0,12 & 0,03 & -0,09 & 0,08 & 0,62 & 0,50 & 0,00 & 0,00\
NGC2442 & SABbc(s) & -0,30 & 0,02 & -0,30 & 0,01 & 0,86 & 0,56 & 0,34 & 0,04\
NGC2487 & SBb & -0,38 & 0,03 & -0,42 & 0,08 & 0,93 & 0,55 & 0,41 & -0,01\
NGC2512 & SBb & -0,08 & 0,01 & 0,02 & 0,05 & 0,54 & 0,54 & -0,05 & -0,05\
NGC2565 & SBbc & 0,02 & 0,01 & -0,08 & 0,12 & 0,52 & 0,52 & 0,07 & 0,07\
NGC2595 & SABc(rs) & -0,06 & 0,02 & -0,01 & 0,02 & 0,60 & 0,60 & 0,07 & 0,07\
NGC2608 & SBb(s) & -0,09 & 0,06 & -0,08 & 0,07 & 0,53 & 0,53 & -0,06 & -0,06\
NGC2613 & Sb(s) & -0,16 & 0,01 & -0,42 & 0,04 & 0,47 & 0,31 & 0,28 & -0,14\
NGC2683 & Sb(rs) & -0,08 & 0,01 & -0,23 & 0,02 & 0,34 & 0,34 & 0,07 & -0,16\
NGC2712 & SBb(r) & -0,19 & 0,03 & – & – & 0,63 & 0,44 & – & –\
NGC2715 & SABc(rs) & -0,11 & 0,06 & – & – & 0,27 & 0,16 & – & –\
NGC2776 & SABc(rs) & 0,04 & 0,05 & 0,04 & 0,05 & 0,52 & 0,52 & -0,11 & -0,11\
NGC2815 & SBb(r) & -0,24 & 0,02 & -0,34 & 0,07 & 0,60 & 0,36 & 0,37 & 0,03\
NGC2841 & Sb(r) & -0,12 & 0,01 & -0,24 & 0,03 & 0,68 & 0,56 & 0,36 & 0,12\
NGC2874 & SBbc(r) & -0,26 & 0,04 & -0,40 & 0,04 & 0,62 & 0,36 & 0,23 & -0,17\
NGC2889 & SABc(rs) & -0,26 & 0,02 & -0,39 & 0,10 & 0,87 & 0,61 & 0,37 & -0,02\
NGC2903 & SABbc(rs) & 0,03 & 0,02 & 0,16 & 0,03 & 0,40 & 0,40 & -0,24 & -0,08\
NGC2935 & SABb(s) & -0,12 & 0,05 & -0,07 & 0,10 & 0,76 & 0,64 & 0,13 & 0,13\
NGC2955 & Sb(r) & 0,19 & 0,07 & – & – & 0,21 & 0,40 & – & –\
NGC2964 & SABbc(r) & -0,08 & 0,04 & -0,06 & 0,01 & 0,49 & 0,49 & -0,18 & -0,18\
NGC2989 & SABbc(s) & -0,17 & 0,03 & -0,12 & 0,00 & 0,42 & 0,25 & -0,16 & -0,28\
NGC2997 & SABc(rs) & 0,08 & 0,02 & 0,19 & 0,03 & 0,68 & 0,68 & 0,01 & 0,20\
NGC3001 & SABbc(rs) & -0,09 & 0,06 & -0,12 & 0,06 & 0,58 & 0,58 & 0,02 & -0,10\
NGC3054 & SABb(r) & -0,14 & 0,02 & -0,23 & 0,03 & 0,74 & 0,60 & 0,26 & 0,03\
NGC3079 & SBc(s) & -0,25 & 0,01 & -0,50 & 0,07 & 0,31 & 0,06 & 0,02 & -0,48\
NGC3095 & SABc(rs) & -0,39 & 0,03 & -0,44 & 0,05 & 0,67 & 0,38 & 0,19 & -0,25\
NGC3124 & SABbc(rs) & -0,19 & 0,03 & – & – & 0,80 & 0,61 & – & –\
NGC3145 & SBbc(rs) & -0,25 & 0,01 & -0,12 & 0,12 & 0,72 & 0,47 & 0,18 & 0,06\
NGC3177 & Sb(rs) & 0,04 & 0,02 & -0,21 & 0,05 & 0,54 & 0,54 & 0,51 & 0,30\
NGC3223 & Sb(s) & -0,25 & 0,00 & -0,34 & 0,04 & 0,76 & 0,50 & 0,36 & 0,02\
NGC3281 & Sab(s) & -0,10 & 0,01 & -0,19 & 0,06 & 0,74 & 0,64 & 0,31 & 0,12\
NGC3289 & SB0+(rs) & -0,04 & 0,03 & -0,03 & 0,02 & 0,31 & 0,31 & -0,01 & -0,01\
NGC3310 & SABbc(r) & 0,00 & 0,01 & 0,05 & 0,04 & 0,19 & 0,19 & -0,54 & -0,54\
NGC3318 & SABb(rs) & -0,34 & 0,02 & -0,50 & 0,04 & 0,58 & 0,24 & 0,13 & -0,37\
NGC3333 & SABbc & -0,19 & 0,04 & 0,13 & 0,15 & 0,20 & 0,01 & -0,55 & -0,42\
NGC3347 & SBb(rs) & 0,03 & 0,01 & -0,07 & 0,02 & 0,61 & 0,61 & 0,16 & 0,16\
NGC3351 & SBb(r) & 0,03 & 0,02 & 0,19 & 0,05 & 0,66 & 0,66 & -0,03 & 0,16\
NGC3353 & Sb & 0,13 & 0,06 & 0,13 & 0,05 & 0,17 & 0,30 & -0,57 & -0,44\
NGC3390 & Sb & -0,13 & 0,02 & -0,16 & 0,08 & 0,32 & 0,19 & -0,06 & -0,22\
NGC3521 & SABbc(rs) & -0,07 & 0,03 & – & – & 0,52 & 0,52 & – & –\
NGC3627 & SABb(s) & -0,20 & 0,02 & -0,21 & 0,02 & 0,62 & 0,42 & 0,18 & -0,03\
NGC3628 & Sb & -0,11 & 0,04 & – & – & 0,28 & 0,17 & – & –\
NGC3689 & SABc(rs) & 0,13 & 0,10 & – & – & 0,37 & 0,50 & – & –\
NGC3810 & Sc(rs) & -0,21 & 0,06 & -0,27 & 0,07 & 0,64 & 0,43 & 0,08 & -0,19\
NGC4051 & SABbc(rs) & 0,28 & 0,04 & 0,50 & 0,05 & 0,37 & 0,65 & -0,44 & 0,06\
NGC4088 & SABbc(rs) & -0,36 & 0,01 & – & – & 0,55 & 0,19 & – & –\
NGC4096 & SABc(rs) & -0,20 & 0,05 & -0,22 & 0,03 & 0,32 & 0,12 & -0,15 & -0,37\
NGC4156 & SBb(rs) & -0,01 & 0,08 & 0,32 & 0,17 & 0,72 & 0,72 & -0,31 & 0,01\
NGC4216 & SABb(s) & -0,20 & 0,02 & -0,29 & 0,03 & 0,59 & 0,39 & 0,35 & 0,06\
NGC4254 & Sc(s) & -0,15 & 0,01 & -0,14 & 0,00 & 0,64 & 0,49 & 0,06 & -0,08\
NGC4258 & SABbc(s) & -0,06 & 0,03 & 0,03 & 0,06 & 0,39 & 0,39 & -0,04 & -0,04\
NGC4273 & SBc(s) & -0,03 & 0,09 & – & – & 0,33 & 0,33 & – & –\
NGC4303 & SABbc(rs) & -0,13 & 0,03 & 0,04 & 0,07 & 0,65 & 0,52 & -0,01 & -0,01\
NGC4321 & SABbc(s) & 0,04 & 0,01 & 0,06 & 0,06 & 0,61 & 0,61 & -0,04 & -0,04\
NGC4388 & Sb(s) & -0,03 & 0,01 & 0,09 & 0,01 & 0,17 & 0,17 & -0,34 & -0,34\
NGC4414 & Sc(rs) & -0,17 & 0,01 & -0,37 & 0,15 & 0,75 & 0,58 & 0,38 & 0,01\
NGC4501 & Sb(rs) & -0,24 & 0,01 & -0,34 & 0,03 & 0,76 & 0,52 & 0,36 & 0,02\
NGC4527 & SABbc(s) & -0,24 & 0,03 & -0,14 & 0,05 & 0,68 & 0,44 & 0,08 & -0,06\
NGC4535 & SABc(s) & 0,09 & 0,02 & 0,07 & 0,09 & 0,53 & 0,53 & -0,06 & -0,06\
NGC4536 & SABbc(rs) & -0,29 & 0,02 & -0,17 & 0,06 & 0,54 & 0,25 & -0,08 & -0,25\
NGC4548 & SBb(rs) & -0,16 & 0,03 & -0,28 & 0,01 & 0,87 & 0,71 & 0,49 & 0,21\
NGC4565 & Sb & -0,19 & 0,02 & -0,20 & 0,07 & 0,32 & 0,13 & 0,06 & -0,14\
NGC4579 & SABb(rs) & -0,12 & 0,01 & -0,16 & 0,02 & 0,84 & 0,72 & 0,41 & 0,25\
NGC4593 & SBb(rs) & 0,28 & 0,11 & 0,62 & 0,03 & 0,65 & 0,93 & -0,40 & 0,22\
NGC4647 & SABc(rs) & -0,32 & 0,12 & – & – & 0,83 & 0,51 & – & –\
NGC4651 & Sc(rs) & -0,26 & 0,00 & -0,52 & 0,00 & 0,73 & 0,47 & 0,34 & -0,18\
NGC4666 & SABc & -0,15 & 0,00 & – & – & 0,42 & 0,27 & – & –\
NGC4699 & SABb(rs) & -0,05 & 0,01 & -0,14 & 0,02 & 0,75 & 0,75 & 0,41 & 0,27\
NGC4900 & SBc(rs) & 0,12 & 0,01 & 0,12 & 0,09 & 0,42 & 0,54 & -0,27 & -0,15\
NGC4902 & SBb(r) & -0,39 & 0,09 & – & – & 0,94 & 0,56 & – & –\
NGC4911 & SABbc(r) & -0,17 & 0,06 & -0,34 & 0,01 & 0,86 & 0,69 & 0,41 & 0,07\
NGC4939 & Sbc(s) & -0,29 & 0,02 & -0,36 & 0,06 & 0,70 & 0,41 & 0,24 & -0,12\
NGC5005 & SABbc(rs) & -0,16 & 0,03 & -0,18 & 0,02 & 0,69 & 0,53 & 0,29 & 0,11\
NGC5033 & Sc(s) & -0,30 & 0,03 & -0,11 & 0,06 & 0,65 & 0,35 & 0,15 & 0,04\
NGC5055 & Sbc(rs) & -0,16 & 0,01 & -0,37 & 0,06 & 0,68 & 0,52 & 0,19 & -0,18\
NGC5188 & SABb(s) & -0,21 & 0,10 & -0,34 & 0,06 & 0,62 & 0,41 & 0,19 & -0,15\
NGC5194 & Sbc(s) & -0,14 & 0,01 & -0,24 & 0,02 & 0,58 & 0,44 & 0,04 & -0,20\
NGC5236 & SABc(s) & 0,32 & 0,01 & 0,25 & 0,04 & 0,47 & 0,79 & -0,20 & 0,05\
NGC5248 & SABbc(rs) & -0,07 & 0,02 & -0,10 & 0,03 & 0,58 & 0,58 & 0,06 & -0,04\
NGC5364 & Sbc(rs) & -0,18 & 0,02 & -0,36 & 0,03 & 0,67 & 0,49 & 0,24 & -0,12\
NGC5371 & SABbc(rs) & -0,35 & 0,03 & -0,55 & 0,04 & 0,91 & 0,56 & 0,51 & -0,04\
NGC5426 & Sc(s) & -0,22 & 0,04 & -0,33 & 0,10 & 0,53 & 0,31 & 0,00 & -0,33\
NGC5427 & Sc(s) & -0,09 & 0,00 & – & – & 0,52 & 0,52 & – & –\
NGC5483 & Sc(s) & -0,14 & 0,03 & – & – & 0,63 & 0,49 & – & –\
NGC5530 & Sbc(rs) & -0,29 & 0,04 & -0,50 & 0,02 & 0,74 & 0,45 & 0,50 & 0,00\
NGC5592 & SBbc(s) & -0,08 & 0,01 & -0,05 & 0,05 & 0,52 & 0,52 & -0,04 & -0,04\
NGC5633 & Sb(rs) & -0,32 & 0,15 & – & – & 0,70 & 0,38 & – & –\
NGC5643 & SABc(rs) & 0.07 & 0.14 & 0.21 & 0.08 & 0.58 & 0.58 & -0.10 & 0.11\
NGC5653 & Sb(rs) & 0.31 & 0.10 & -0.09 & 0.07 & 0.24 & 0.55 & -0.12 & -0.12\
NGC5676 & Sbc(rs) & -0.22 & 0.02 & -0.40 & 0.02 & 0.59 & 0.37 & 0.20 & -0.20\
NGC5746 & SABb(rs) & -0.17 & 0.04 & -0.45 & 0.10 & 0.46 & 0.29 & 0.29 & -0.16\
NGC5792 & SBb(rs) & -0.21 & 0.02 & -0.20 & 0.00 & 0.47 & 0.26 & 0.01 & -0.19\
NGC5850 & SBb(r) & -0.23 & 0.04 & -0.26 & 0.05 & 0.90 & 0.67 & 0.45 & 0.19\
NGC5859 & SBbc(s) & -0.05 & 0.06 & – & – & 0.40 & 0.40 & – & –\
NGC5861 & SABc(rs) & -0.26 & 0.06 & – & – & 0.68 & 0.42 & – & –\
NGC5879 & Sbc(rs) & -0.14 & 0.02 & – & – & 0.36 & 0.22 & – & –\
NGC5899 & SABc(rs) & -0.18 & 0.06 & – & – & 0.59 & 0.41 & – & –\
NGC5907 & Sc(s) & -0.23 & 0.01 & -0.30 & 0.04 & 0.17 & -0.06 & -0.21 & -0.51\
NGC5921 & SBbc(r) & -0.15 & 0.02 & -0.27 & 0.02 & 0.73 & 0.58 & 0.23 & -0.04\
NGC5962 & Sc(r) & -0.19 & 0.04 & -0.16 & 0.06 & 0.66 & 0.47 & 0.11 & -0.05\
NGC5970 & SBc(r) & -0.17 & 0.00 & – & – & 0.71 & 0.54 & – & –\
NGC5985 & SABb(r) & -0.13 & 0.02 & -0.31 & 0.03 & 0.67 & 0.54 & 0.27 & -0.04\
NGC5987 & Sb & -0.10 & 0.04 & – & – & 0.63 & 0.53 & – & –\
NGC6052 & Sc & 0.08 & 0.03 & -0.16 & 0.03 & 0.18 & 0.18 & 0.20 & -0.54\
NGC6181 & SABc(rs) & -0.18 & 0.02 & -0.17 & 0.03 & 0.49 & 0.31 & -0.12 & -0.29\
NGC6207 & Sc(s) & 0.06 & 0.02 & -0.04 & 0.02 & 0.20 & 0.20 & -0.39 & -0.39\
NGC6217 & SBbc(rs) & 0.12 & 0.04 & 0.17 & 0.03 & 0.40 & 0.52 & -0.31 & -0.14\
NGC6221 & SBc(s) & -0.18 & 0.02 & 0.03 & 0.03 & 0.62 & 0.44 & -0.06 & -0.06\
NGC6239 & SBb(s) & -0.05 & 0.02 & 0.06 & 0.03 & 0.18 & 0.18 & -0.39 & -0.39\
NGC6384 & SABbc(r) & -0.26 & 0.03 & -0.26 & 0.04 & 0.70 & 0.44 & 0.28 & 0.02\
NGC6412 & Sc(s) & -0.12 & 0.03 & – & – & 0.57 & 0.45 & – & –\
NGC6574 & SABbc(rs) & -0.12 & 0.03 & -0.12 & 0.06 & 0.67 & 0.55 & 0.10 & -0.02\
NGC6643 & Sc(rs) & -0.14 & 0.01 & -0.17 & 0.05 & 0.49 & 0.35 & -0.10 & -0.27\
NGC6699 & SABbc(rs) & -0.22 & 0.02 & -0.24 & 0.04 & 0.76 & 0.54 & 0.14 & -0.10\
NGC6744 & SABbc(r) & -0.10 & 0.03 & 0.20 & 0.09 & 0.71 & 0.61 & 0.40 & 0.60\
NGC6753 & Sb(r) & -0.13 & 0.02 & -0.17 & 0.03 & 0.86 & 0.73 & 0.24 & 0.07\
NGC6764 & SBbc(s) & 0.13 & 0.06 & 0.26 & 0.06 & 0.32 & 0.45 & -0.35 & -0.09\
NGC6769 & SABb(r) & -0.20 & 0.03 & -0.49 & 0.13 & 0.76 & 0.56 & 0.42 & -0.07\
NGC6780 & SABc(rs) & -0.28 & 0.10 & -0.04 & 0.07 & 0.76 & 0.48 & 0.01 & 0.01\
NGC6814 & SABbc(rs) & -0.11 & 0.05 & 0.28 & 0.13 & 0.82 & 0.71 & -0.06 & 0.22\
NGC6872 & SBb(s) & -0.11 & 0.02 & -0.24 & 0.03 & 0.52 & 0.39 & 0.31 & 0.07\
NGC6887 & Sbc & -0.20 & 0.07 & -0.37 & 0.09 & 0.52 & 0.32 & 0.16 & -0.21\
NGC6890 & Sb(rs) & -0.21 & 0.02 & -0.24 & 0.04 & 0.82 & 0.61 & 0.24 & 0.00\
NGC6923 & SBb(rs) & -0.38 & 0.04 & -0.33 & 0.04 & 0.76 & 0.38 & 0.23 & -0.10\
NGC6925 & Sbc(s) & -0.35 & 0.02 & -0.52 & 0.12 & 0.55 & 0.20 & 0.23 & -0.29\
NGC6951 & SABbc(rs) & 0.00 & 0.05 & 0.07 & 0.07 & 0.62 & 0.62 & 0.10 & 0.10\
NGC6984 & SBc(r) & -0.35 & 0.03 & -0.36 & 0.01 & 0.62 & 0.27 & 0.06 & -0.30\
NGC7038 & SABc(s) & -0.13 & 0.03 & -0.27 & 0.03 & 0.63 & 0.50 & 0.25 & -0.02\
NGC7083 & Sbc(s) & -0.20 & 0.02 & -0.27 & 0.02 & 0.63 & 0.43 & 0.13 & -0.14\
NGC7090 & SBc & -0.01 & 0.06 & -0.05 & 0.01 & -0.11 & -0.11 & -0.59 & -0.59\
NGC7125 & SABc(rs) & -0.14 & 0.01 & 0.09 & 0.09 & 0.36 & 0.22 & -0.20 & -0.20\
NGC7126 & Sc(rs) & -0.17 & 0.02 & -0.30 & 0.05 & 0.49 & 0.32 & 0.01 & -0.29\
NGC7137 & SABc(rs) & 0.02 & 0.01 & 0.07 & 0.02 & 0.51 & 0.51 & -0.12 & -0.12\
NGC7171 & SBb(rs) & -0.20 & 0.01 & -0.35 & 0.05 & 0.67 & 0.47 & 0.15 & -0.20\
NGC7177 & SABb(r) & -0.16 & 0.02 & -0.17 & 0.02 & 0.76 & 0.60 & 0.33 & 0.16\
NGC7184 & SBc(r) & -0.28 & 0.01 & -0.33 & 0.09 & 0.50 & 0.22 & 0.11 & -0.22\
NGC7205 & Sbc(s) & -0.29 & 0.01 & -0.44 & 0.02 & 0.60 & 0.31 & 0.11 & -0.33\
NGC7314 & SABbc(rs) & -0.21 & 0.02 & -0.32 & 0.05 & 0.54 & 0.33 & 0.01 & -0.31\
NGC7329 & SBb(r) & -0.20 & 0.04 & -0.35 & 0.01 & 0.78 & 0.58 & 0.38 & 0.03\
NGC7331 & Sb(s) & -0.13 & 0.01 & -0.33 & 0.04 & 0.57 & 0.44 & 0.25 & -0.08\
NGC7339 & SABbc(s) & 0.08 & 0.11 & 0.17 & 0.03 & 0.32 & 0.32 & -0.37 & -0.20\
NGC7412 & SBb(s) & -0.23 & 0.01 & -0.50 & 0.03 & 0.64 & 0.41 & 0.29 & -0.21\
NGC7448 & Sbc(rs) & -0.14 & 0.02 & -0.22 & 0.00 & 0.30 & 0.16 & -0.17 & -0.39\
NGC7479 & SBc(s) & -0.25 & 0.02 & -0.27 & 0.02 & 0.78 & 0.53 & 0.30 & 0.03\
NGC7496 & SBb(s) & 0.23 & 0.03 & 0.33 & 0.06 & 0.35 & 0.58 & -0.42 & -0.09\
NGC7531 & SABbc(r) & -0.23 & 0.01 & -0.38 & 0.02 & 0.60 & 0.37 & 0.19 & -0.19\
NGC7537 & Sbc & -0.22 & 0.03 & -0.08 & 0.04 & 0.30 & 0.08 & -0.42 & -0.42\
NGC7541 & SBbc(rs) & -0.21 & 0.04 & -0.22 & 0.05 & 0.45 & 0.24 & -0.10 & -0.32\
NGC7590 & Sbc(rs) & -0.36 & 0.03 & -0.37 & 0.10 & 0.55 & 0.19 & 0.05 & -0.32\
NGC7606 & Sb(s) & -0.18 & 0.04 & -0.29 & 0.08 & 0.57 & 0.39 & 0.11 & -0.18\
NGC7640 & SBc(s) & -0.20 & 0.02 & -0.01 & 0.02 & 0.02 & -0.18 & -0.59 & -0.59\
NGC7673 & Sc & 0.12 & 0.04 & 0.23 & 0.14 & 0.19 & 0.31 & -0.58 & -0.35\
NGC7716 & SABb(r) & -0.22 & 0.03 & – & – & 0.81 & 0.59 & – & –\
NGC7723 & SBb(r) & -0.09 & 0.04 & 0.00 & 0.05 & 0.58 & 0.58 & 0.00 & 0.00\
NGC7742 & Sb(r) & -0.15 & 0.05 & -0.37 & 0.20 & 0.79 & 0.63 & 0.32 & -0.05\
NGC7755 & SBc(rs) & -0.13 & 0.03 & -0.07 & 0.04 & 0.74 & 0.61 & 0.05 & 0.05\
NGC7757 & Sc(rs) & -0.24 & 0.07 & -0.29 & 0.12 & 0.51 & 0.27 & -0.07 & -0.36\
NGC7782 & Sb(s) & -0.25 & 0.06 & -0.59 & 0.19 & 0.82 & 0.56 & 0.51 & -0.08\
UGC03973 & SBb & 0.13 & 0.10 & 0.30 & 0.07 & 0.30 & 0.43 & -1.02 & -0.72\
UGC04013 & Sb & 0.13 & 0.05 & 0.44 & 0.12 & 0.13 & 0.26 & -1.08 & -0.64\
[ccccccc]{} (a) non–barred face–on galaxies & 26 & -0.14 $\pm$ 0.01 & 0.06 & 22 & -0.19 $\pm$ 0.03 & 0.14\
(b) barred face–on galaxies & 98 & -0.14 $\pm$ 0.02 & 0.15 & 82 & -0.08 $\pm$ 0.03 & 0.27\
(c) non–barred edge–on galaxies & 46 & -0.16 $\pm$ 0.01 & 0.07 & 41 & -0.26 $\pm$ 0.02 & 0.12\
(d) barred edge–on galaxies & 68 & -0.16 $\pm$ 0.01 & 0.10 & 56 & -0.19 $\pm$ 0.03 & 0.20
[ccccccccc]{} G $\geq$ 0.1 & (B$-$V) & 14 & 11% & 28% & 29% & 43% & 72% & 5(36%)\
-0.1 $<$ G $<$ 0.1 & & 32 & 26% & 9% & 53% & 38% & 91% & 4(12%)\
G $\leq$ -0.1 & & 78 & 63% & 25% & 47% & 28% & 75% & 8(10%)\
G $\geq$ 0.1 & (U$-$B) & 19 & 18% & 10% & 37% & 53% & 90% & 7(37%)\
-0.1 $<$ G $<$ 0.1 & & 30 & 29% & 17% & 53% & 30% & 83% & 2(7%)\
G $\leq$ -0.1 & & 55 & 53% & 27% & 42% & 31% & 73% & 4(7%)\
[ccccccc]{} G $\geq$ 0.1 & (B$-$V) & 0.34$\pm$0.03 & 0.53$\pm$0.04 & (B$-$V) & 0.36$\pm$0.03 & 0.55$\pm$0.05\
-0.1 $<$ G $<$ 0.1 & & 0.52$\pm$0.03 & 0.52$\pm$0.03 & & 0.57$\pm$0.02 & 0.57$\pm$0.02\
G $\leq$ -0.1 & & 0.64$\pm$0.01 & 0.43$\pm$0.01 & & 0.74$\pm$0.01 & 0.53$\pm$0.01\
G $\geq$ 0.1 & (U$-$B) & -0.35$\pm$0.06 & -0.08$\pm$0.05 & (U$-$B) & -0.31$\pm$0.07 & 0.05$\pm$0.07\
-0.1 $<$ G $<$ 0.1 & & -0.06$\pm$0.03 & -0.06$\pm$0.03 & & -0.05$\pm$0.03 & -0.05$\pm$0.03\
G $\leq$ -0.1 & & 0.19$\pm$0.02 & -0.14$\pm$0.01 & & 0.24$\pm$0.03 & -0.05$\pm$0.02\
[cccccccccccccccc]{} NGC 3310 & & -0.41 & & & & &\
-0.11 & -0.11 & -0.04 & & & & & 0.42\
NGC 5033 & +0.01 & & +0.39 & & & +0.01 &\
& & & & & & &\
NGC 5194 & -0.40 & & -0.50 & -0.47 & & & -0.21\
& & -0.02 & & & 1.41 & &\
NGC 5248 & & & & & +0.20 & -0.24 &\
-0.38 & & & & 0.68 & & 0.75 &\
NGC 782 & -0.36 & & & & -0.10 & &\
& & & 0.11 & & & &\
NGC 6769 & +0.16 & & & & -0.18 & &\
& & & 0.25 & & & &\
NGC 6890 & -0.19 & & & & -0.06 & &\
& & & 0.04 & & & &\
NGC 6923 & -0.37 & & & & -0.28 & &\
& & & 0.31 & & & &\
NGC 7496 & +0.31 & & & & -0.30 & &\
& & & 0.22 & & & &\
[ccccc]{} NGC 3310 & 0.00 & +0.26 & +0.05 & 0.35\
NGC 5194 & -0.14 & -0.12 & -0.24 & +0.09\
NGC 5248 & -0.07 & +0.11 & -0.10 & -0.05\
NGC 782 & -0.31 & -0.28 & -0.53 & -0.47\
NGC 6769 & -0.20 & -0.19 & -0.49 & -0.42\
NGC 6890 & -0.21 & -0.18 & -0.24 & -0.09\
NGC 6923 & -0.38 & -0.38 & -0.33 & -0.22\
NGC 7496 & +0.23 & +0.23 & +0.33 & +0.40\
[^1]: Based partly on observations made at the Pico dos Dias Observatory (PDO/LNA – CNPq), Brazil
[^2]: IRAF is distributed by the National Optical Astronomy Observatories, which are operated by the Association of Universities for Research in Astronomy, Inc., under cooperative agreement with the National Science Foundation.
---
abstract: 'We extend the previously studied relativistic mean-field models with hadron masses and meson-baryon coupling constants dependent on the scalar $\sigma$ field to incorporate $\Delta(1232)$ baryons. Available empirical information is analyzed to put constraints on the couplings of $\Delta$s with the meson fields. Conditions for the appearance of $\Delta$s are studied. We demonstrate that with the inclusion of $\Delta$s our equations of state continue to fulfill the majority of known empirical constraints, including the pressure-density constraint from heavy-ion collisions, the constraint on the maximum mass of neutron stars, and the direct Urca and gravitational-baryon mass ratio constraints.'
address:
- 'Matej Bel University, SK-97401 Banska Bystrica, Slovakia'
- 'National Research Nuclear University “MEPhI”, RU-115409 Moscow, Russia'
author:
- 'E.E. Kolomeitsev'
- 'K.A. Maslov'
- 'D.N. Voskresensky'
title: 'Delta isobars in relativistic mean-field models with $\sigma$-scaled hadron masses and couplings'
---
Introduction
============
A nuclear equation of state (EoS) is the key ingredient in the description of neutron stars (NSs) [@Lattimer:2012nd], supernova explosions [@Woosley] and heavy-ion collisions [@Danielewicz:2002pu; @Fuchs]. Relativistic mean-field (RMF) models are widely used for the construction of a hadronic EoS. The original model [@Durr; @Walecka1974] included the interaction of nucleons with scalar ($\sigma$) and vector ($\omega$) meson mean fields. Next, for a better description of the symmetry energy, the isovector ($\rho$) meson field was incorporated, and the work [@Boguta77] included a $\sigma$-field self-interaction in the form of the potential $U(\sigma)=b\sigma^3/3+c\sigma^4/4$. The coupling constants $b$ and $c$ were adjusted to describe the saturation properties of isospin-symmetric nuclear matter: the saturation density $n_0$, the binding and symmetry energies, and the incompressibility coefficient at nuclear saturation.
At present, there exists a vast number of modifications of the RMF models. They differ by extra terms in the effective Lagrangian related to new fields and their interactions, cf. [@SerotWalecka; @Glendenning; @Weber; @Savushkin2015] and references therein. Various experiments indicate modifications of hadron properties in hadronic matter [@Metag]. To improve the agreement between theoretical descriptions and experimental data, RMF models with density-dependent meson-nucleon coupling constants were developed, cf. [@Fuchs; @Typel; @Hofmann; @Niksirc; @Gaitanos; @Long; @Lalazisis; @Typel2005; @Voskresenskaya; @RocaMaza:2011qe; @Dutra:2014qga; @Dutra:2015hxa]. On the other hand, owing to a partial restoration of the chiral symmetry in dense and/or hot matter, the masses of all hadrons except Goldstone bosons, such as pions and kaons, are expected to decrease with increasing density and/or temperature, cf. [@Rapp; @Koch]. According to the conjecture of Brown and Rho [@BrownRho], the nucleon mass and the masses of the vector $\omega$, $\rho$ and scalar $\sigma$ mesons should obey an approximately universal scaling law. Motivated by these ideas, two of us demonstrated in [@Kolomeitsev:2004ff] how one can construct RMF models incorporating simultaneously in-medium modifications of the baryon and meson masses and coupling constants. In [@Kolomeitsev:2004ff] the effective hadron masses are assumed to be $\sigma$-field dependent. The density dependence of the $\sigma$ field can be related to a modification of the chiral condensate in the medium. Also, in lattice QCD in the strong-coupling limit [@Ohnishi:2008yk], meson masses are approximately proportional to the equilibrium value of the chiral condensate, and the latter decreases with an increase of the baryon density.
Remarkably, in the case of infinite matter the effective hadron masses ($m^{*}_h$) and the coupling constants ($g^{*}_h$) enter all relations only in the combinations $m^{*\,2}_h/g_h^{*\,2}$, which leads to an equivalence between different RMF schemes [@Kolomeitsev:2004ff]. Allowing for differences in the scaling functions for hadron masses and coupling constants, one can better fulfill various experimental constraints on the EoS.
A comparison between different nucleon EoSs in how well they satisfy various empirical constraints was performed in [@Klahn:2006ir] for the EoSs obtained in the RMF models, in more microscopic approaches [@APR; @Gandolfi:2009nq], and in Skyrme models [@Dutra12]. Some of the previously used constraints [@Klahn:2006ir] were recently tightened and new constraints were formulated. At present there exists an agreement that the EoS of cold hadronic matter should: ($i$) satisfy experimental information on properties of dilute nuclear matter and not contradict results of microscopically based approaches; ($ii$) fulfill empirical constraints extracted from the description of global characteristics of atomic nuclei, for the baryon density $n$ near the saturation nuclear matter density $n_0\simeq 0.16\,$fm$^{-3}$; ($iii$) not contradict constraints on the pressure of the nuclear matter at densities above $n_0$ extracted from the description of particle transverse and elliptic flows [@Danielewicz:2002pu] and the $K^+$ production [@Lynch] in heavy-ion collisions; ($iv$) allow for the heaviest known compact stars PSR J1614-2230 with the mass $1.97\pm 0.04\,M_{\odot}$ and PSR J0348+0432 with the mass $2.01\pm 0.04 \,M_{\odot}$ [@Demorest:2010bx; @Antoniadis:2013pzd] ($M_\odot$ is the solar mass); ($v$) allow for an adequate description of the compact star cooling, which is possible if the most efficient direct Urca (DU) neutrino processes $n\to p+e+\bar{\nu}_e$, $p+e\to n+\nu_e$ do not occur in the majority of the known pulsars [@Kolomeitsev:2004ff; @Blaschke:2004vq; @Grigorian:2016leu][^1]; ($vi$) explain the gravitational mass and total baryon number of pulsar PSR J0737-3039(B) with at most 1% deviation from the baryon number predicted for this object [@Podsiadlowski; @Kitaura:2005bt]; ($vii$) yield a mass-radius relation comparable with the empirical constraints [@Bogdanov:2012md; @Hambaryan2014; @Heinke:2014xaa]; ($viii$) being extended to non-zero temperatures,
appropriately describe heavy-ion collision data.
Analysis performed in many papers demonstrated that it is most difficult to reconcile the constraint on the maximum NS mass, $1.97\,M_\odot$, cf. [@Demorest:2010bx; @Antoniadis:2013pzd], and the constraints on the stiffness of the EoS extracted from the analyses of the flow in heavy-ion collisions [@Danielewicz:2002pu; @Fuchs].
In [@Kolomeitsev:2004ff] the model MW(n.u., $z=0.65$), labeled in [@Klahn:2006ir] as the KVOR model, was constructed. As shown in [@Klahn:2006ir], the KVOR model allowed one to satisfy the majority of experimental constraints known at that time, including the flow constraint. In [@Khvorostukhin:2006ih; @Khvorostukhin:2008xn] the model was extended to finite temperatures and successfully applied to the description of heavy-ion collisions. However, the KVOR EoS supplemented by the Baym-Pethick-Sutherland EoS for the NS crust [@Baym:1971pw] yields $M_{\rm max}[{\rm KVOR}]=2.01\,M_{\odot}$, which fits the constraint [@Demorest:2010bx; @Antoniadis:2013pzd] only marginally. A possibility of the population of hyperon Fermi seas in dense beta-equilibrium matter (BEM) was not incorporated. The problems with the EoS worsen, however, when strangeness is included, because the appearance of hyperons leads to a softening of the EoS and to a reduction of the maximum NS mass. It is possible to explain the observed massive NSs only if one artificially forbids the appearance of hyperons, which cannot be reconciled with the known information on binding energies of hyperons in nuclear matter extracted from hypernuclei, see [@Djapo:2008au; @Glendenning] and references therein. This is called the “hyperon puzzle”. For reasonable choices of the hyperon coupling constants in the standard RMF approach, the difference between NS masses with and without hyperons proves to be so large that, in order to solve the puzzle, one has to start with a very stiff nucleon EoS that hardly agrees with the results of microscopically-based calculations using the variational [@APR] and auxiliary-field diffusion Monte Carlo [@Gandolfi:2009nq] methods. Such an EoS would also be incompatible with the restrictions on the EoS stiffness extracted from the analyses of the particle flows in heavy-ion collisions [@Danielewicz:2002pu; @Fuchs].
All suggested explanations require additional assumptions, see discussion in [@Fortin:2014mya].
In recent papers [@Maslov:2015msa; @Maslov:2015wba] we proposed two modifications of the KVOR model [@Kolomeitsev:2004ff]. One extension of the model (KVORcut) demonstrates that the EoS stiffens, if a growth of the scalar-field magnitude with an increase of the density is bounded from above at some value for baryon densities exceeding a certain value above $n_0$. This can be realized, if the nucleon – vector-meson coupling constant changes rapidly as a function of the scalar field slightly above the desired value. The other version of the model (MKVOR) assumes a smaller value of the nucleon effective mass at the nuclear saturation density and uses a saturation of the scalar field in the isospin asymmetric matter induced by a strong variation of the nucleon – isovector-meson coupling constant as a function of the scalar field. A possibility of hyperonization of the matter in NS interiors was taken into account. The resulting EoSs fulfill a majority of known empirical constraints including the pressure-density constraint from heavy-ion collisions, direct Urca constraint, gravitational-baryon mass constraint for the pulsar J0737-3039B, and the constraint on the maximum mass of the NSs.
A similar problem may arise if new baryon species are incorporated into RMF models. The next in mass order are the $\Delta(1232)$ isobars. Their appearance in NS interiors may lead to effects similar to those of hyperons. In [@Maslov:2015msa; @Maslov:2015wba] the $\Delta$ isobars were not included.
The $\Delta$ baryons play a very important role in nuclear physics [@Cattapan02]. They contribute essentially to the pion polarization operator in the nuclear medium, leading to an enhancement of the pion softening with an increase of the baryon density and thereby promoting a pion condensation at nucleon densities above a critical density, $n>n_c^{\pi}>n_0$, cf. [@Migdal78; @EricsonWeise; @Migdal:1990vm]. Under certain assumptions about the $\pi N\Delta$ and/or $\Delta\Delta \sigma$ interactions in dense nucleon matter, a possibility of density isomer states was speculated upon in [@Migdal78; @Boguta1982]. Also, $\Delta$s are produced copiously in energetic heavy-ion collisions [@Metag], and their in-medium modifications may lead to important observable consequences [@Cubero:1987pr; @Voskresensky:1993ud; @Khvorostukhin:2006ih; @Khvorostukhin:2008xn].
For a long time the presence of $\Delta$ baryons in NSs was regarded as an important but unresolved issue [@ShapiroTeukolsky; @Sawyer72]. In the RMF model in which $\Delta$s couple to meson fields with the same strength as nucleons [@Glendenning], the critical density for the appearance of $\Delta$ isobars was estimated as $\sim 10 n_0$. Therefore, implying that in the BEM the critical density for the appearance of $\Delta$s should also be high, much less effort was devoted to the study of $\Delta$ baryons in NSs than to the investigation of the possible appearance of hyperons. The issue was reconsidered in [@Xiang; @Chen; @Schurhoff; @Lavagno] and more recently in [@Drago2014; @Cai:2015hya; @Drago:2015cea]. Using different density dependencies of the nuclear symmetry energy and various assumptions about the baryon-meson coupling constants, the authors concluded that $\Delta$s may appreciably affect both the composition and the structure of NSs. References [@Drago2014; @Drago:2015cea] formulated the problem as the “$\Delta$ puzzle", which could exist on equal footing with the hyperon puzzle.
In this work we include $\Delta$ resonances in the RMF models with scaled hadron masses and couplings — KVORcut03 and MKVOR — suggested recently in [@Maslov:2015msa; @Maslov:2015wba]. In the absence of $\Delta$s, these models have appropriately passed the constraints mentioned above. We analyze whether within these models one is able to construct an appropriate EoS with hyperons and $\Delta$ baryons that satisfies the presently known experimental constraints.
Our work is organized as follows. In Section \[sec:model\] we formulate our generalized RMF model with $\sigma$-field scaled hadron masses and couplings with the inclusion of $\Delta$ isobars. In Section \[sec:eos\] we first investigate the KVORcut03 and MKVOR models with $\Delta$ baryons (i.e., the KVORcut03$\Delta$ and MKVOR$\Delta$ models). We show that in the MKVOR$\Delta$ model the effective nucleon mass in isospin-symmetric matter (ISM) drops to zero at $n\sim (4-6)n_0$, if one exploits a relevant value for the $\Delta$ potential, $U_\Delta \sim -(50-100)$ MeV. Then within this model, for higher densities the hadronic EoS cannot be used and should be replaced by a quark one. In order to continue dealing with the hadronic description, we slightly modify the MKVOR model and label it MKVOR\*. Results of numerical calculations are presented in Section \[Numerical\]. We demonstrate that within the KVORcut03-based and MKVOR\*-based models one is still able to construct an appropriate EoS with the inclusion of hyperons and $\Delta$s, satisfying the presently known experimental constraints. Our final results are summarized in the Conclusion.
Lagrangian and energy-density {#sec:model}
=============================
RMF model with scaled hadron masses and couplings. General formalism
--------------------------------------------------------------------
We will closely follow the approach described in [@Kolomeitsev:2004ff; @Maslov:2015msa; @Maslov:2015wba] and include now besides the full SU(3) ground-state baryon octet also the isospin quadruplet of $\Delta$ baryons $\Delta = (\Delta^-, \Delta^0, \Delta^+, \Delta^{++})$. In the mean-field approximation we can disregard all complications related to the structure of the wave function of the spin-$3/2$ baryons and treat $\Delta$ as spin-$1/2$ fermions with the bare mass $m_\Delta=1232$ MeV and the spin degeneracy factor 4. Baryons $b = (n,p,$ $\Lambda,$ $\Sigma^{\pm,0},$ $\Xi^{-,0}; \Delta^{\pm,0,++})$ interact with meson mean fields, $m=(\sigma, \omega, \rho,\phi)$, $\sigma$ is the scalar meson and $\omega, \rho,\phi$ are vector mesons. The baryon contribution to the Lagrangian density is $$\begin{aligned}
\mathcal{L}_{\rm bar}&=
\sum_{b} \bar{\Psi}_{b}\big[i \FMslash{D} -m_b\Phi_b\big] \Psi_b, \quad \FMslash{D}=\gamma^{\mu}D_\mu\,, \\ D_\mu &= \partial_\mu + i g_{\om b} \chi_{\om b} \omega_\mu + i g_{\rho b} \chi_{\rho b}\vec{t}_b \vec{\rho}_\mu + i g_{\phi b} \chi_{\phi b} \phi_\mu\,.\nonumber
\label{Lag-bar}\end{aligned}$$ Here $\Psi_{b}$ stands for the bispinor of the spin-$1/2$ baryon (and symbolically for the Rarita-Schwinger spinor with contracted indices for a spin-$3/2$ particle). The summation runs over all twelve baryonic states $b$; $\gamma^\mu$ are Dirac $\gamma$-matrices, $\vec{t}_b$ is the baryon isospin operator, whose projection is expressed through the baryon electric charge $Q_b$ and strangeness $S_b$ as $t_{3b}=-\frac12+Q_b-\frac12 S_b$ (recall $S_{N,\Delta}=0$, $S_{\Lambda,\Sigma}=-1$ and $S_{\Xi}=-2$). The meson field contribution to the Lagrangian density is $$\begin{aligned}
\mathcal{L}_{\rm mes} &=& \half \partial_\mu \sigma \partial^\mu \sigma
- \half m_\sigma^{2}\Phi_\sigma^2 \sigma^2 - {U}
-\quart\om_{\mu \nu} \om^{\mu \nu} + \half m_\omega^{2}\Phi_\omega^2\, \om_\mu \om^\mu
\nonumber \\
& -& \quart\vec{\rho}_{\mu \nu} \vec{\rho}\,^{\mu \nu}
+ \half m_\rho^{2}\Phi_\rho^2 \vec{\rho}_\mu \vec{\rho}^{\,\mu}
- \quart\phi_{\mu \nu} \phi^{\mu \nu}+ \half m_\phi^{2}\Phi_\phi^2\, \phi_\mu \phi^\mu
,
\label{Lag-mes}\end{aligned}$$ where $\om_{\mu \nu} = \partial_\mu \om_\nu - \partial_\nu \om_\mu$, $\phi_{\mu \nu} = \partial_\mu \phi_\nu - \partial_\nu \phi_\mu$ and for the $\rho$ meson we take into account the self-interaction via the non-Abelian field-strength tensor $\vec{\rho}_{\mu \nu} = \partial_\mu \vec{\rho}_\nu - \partial_\nu \vec{\rho}_\mu + g_\rho' \chi_\rho' [\vec \rho_\mu \times \vec \rho_\nu]$. The latter term proves to be important in the discussion of a charged $\rho$ condensation proposed in [@v97; @Kolomeitsev:2004ff]. In the present work we suppress this possibility.
Within the approach of Ref. [@Kolomeitsev:2004ff] the effective coupling constants in matter depend on the $\sigma$ field via the scaling functions as $g_{\sigma b}^*=g_{\sigma b} \chi_{\sigma b}(\sigma)$, $g_{\omega b}^* =g_{\omega b} \chi_{\omega b}(\sigma)$, $g_{\rho b}^* =g_{\rho b} \chi_{\rho b}(\sigma)$, $ g_{\phi b}^* =g_{\phi b}
\chi_{\phi b}(\sigma)$, $g_\rho^{'*} =g_\rho'\chi_\rho'(\sigma)$. The potential $U(\sigma)$ allows for a self-interaction of the $\sigma$ field. In matter the bare masses of baryons, $m_b$, and mesons, $m_{m}$, are replaced by the effective masses $m_b^*=m_b\Phi_b(\sigma)$, $m_{m}^*=m_m\Phi_m(\sigma)$.
The full Lagrangian density of the model is given by the sum $\mathcal{L} = \mathcal{L}_{\rm bar}
+ \mathcal{L}_{\rm mes} + \mathcal{L}_{\rm lept}$, where to describe the BEM we also include the Lagrangian density of light leptons: electrons and muons, $\mathcal{L}_{\rm lept} = \sum_l \bar{\psi}_l (i \partial_\mu \gamma^\mu - m_l) \psi_l$, $l=e,\mu$; $\psi_l$ stands for the lepton bispinor and $m_l$ is the bare lepton mass. Masses of all particles are taken the same as in [@Maslov:2015wba; @Kolomeitsev:2004ff].
The $\sigma$ field dependence enters the scaling functions $\chi_{mb}$ and $\Phi_{b(m)}$ through the auxiliary variable $$\begin{aligned}
f = g_{\sigma N} \chi_{\sigma N} (\sigma) \frac{\sigma}{m_N}\,.\end{aligned}$$ As in [@Kolomeitsev:2004ff; @Khvorostukhin:2006ih; @Khvorostukhin:2008xn; @Maslov:2015msa; @Maslov:2015wba] we exploit the universal scaling functions for the nucleon and meson masses: $$\begin{aligned}
\Phi_N (f)=\Phi_m (f)=1-f,
\label{PhiN}\end{aligned}$$ but allow for a variation of the scaling functions of coupling constants $\chi_{m b}$. We suppose that $\chi_{\omega b}(f) = \chi_{\omega N}(f)$, $\chi_{\rho b}(f)
= \chi_{\rho N}(f)$. Then the scaling function $\Phi_b$ for all baryons including hyperons and $\Delta$ isobars can be written as $$\begin{aligned}
\Phi_b (f)=\Phi_N\big(g_{\sigma b}\chi_{\sigma b}\frac{\sigma}{m_b}\big)\equiv
\Phi_N\big(x_{\sigma b}\xi_{\sigma b}\, \frac{m_N}{m_b}\,f\big)\,,
\quad \xi_{\sigma b}=\frac{\chi_{\sigma b}}{\chi_{\sigma N}}\,,\end{aligned}$$ where $\xi_{\sigma b}$ is a function of $f$.
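The chain of scaling definitions above, the universal law (\[PhiN\]) and the derived baryon scaling $\Phi_b$, can be sketched numerically. The following is a minimal illustration only; the sample masses and the choice $\xi_{\sigma b}=1$ are assumptions, not fitted model values.

```python
def Phi_N(f):
    """Universal mass-scaling function for nucleons and mesons, Eq. (PhiN): Phi_N(f) = 1 - f."""
    return 1.0 - f

def Phi_b(f, x_sigma_b, xi_sigma_b, m_b, m_N=939.0):
    """Baryon mass scaling: Phi_b(f) = Phi_N(x_sigma_b * xi_sigma_b * (m_N/m_b) * f)."""
    return Phi_N(x_sigma_b * xi_sigma_b * (m_N / m_b) * f)
```

For the nucleon itself ($x_{\sigma N}=\xi_{\sigma N}=1$, $m_b=m_N$) this reduces to $1-f$; a heavier baryon with the same couplings loses a smaller fraction of its mass at a given $f$.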
With the help of the equations of motion for the vector fields, we recover in the standard way the energy-density functional for cold infinite baryonic matter of an arbitrary isospin composition [@Glendenning; @Weber; @Kolomeitsev:2004ff; @Maslov:2015wba]: $$\begin{aligned}
E[\{n_b\};f] &=& \sum_{b } (2 s_b+1) E_{\rm kin}(m_b^*(f),p_{\rmF,b}) + \sum_{l=e,\mu} 2E_{\rm kin}(m_l, p_{\rmF l})\nonumber\\
& +& \frac{m_N^4 f^2}{2 C_\sigma^2} \eta_\sigma(f)
+ \frac{1}{2 m_N^2} \Big[\frac{C_\om^2 n_B^2}{ \eta_\om(f)}
+\frac{C_\rho^2 n_I^2}{\eta_\rho(f)}
+ \frac{C_\phi^2 n_S^2}{\eta_\phi(f)}
\Big]\,,
\label{En}\\
C_M &=& g_{MN} \frac{m_N}{m_M}\,,\,\,M=(\sigma,\om,\rho)\,,\quad C_\phi=C_\om \frac{m_\om}{m_\phi}\,,
\nonumber \end{aligned}$$ where $s_b$ stands for the fermion spin. The fermion energy is given by $$\begin{aligned}
E_{\rm kin}(m,p_\rmF) &=&\frac{1}{16 \pi ^2}\left(p_\rmF \sqrt{m^2 + p_\rmF^2} (m^2+2 p_\rmF^2)-m^4 {\rm arcsinh}(p_\rmF/m) \right)\,,
\nonumber\end{aligned}$$ with the Fermi momentum of species $b$ related to the number density as $p_{\rmF,b}=(6\pi^2\,n_b/(2s_b+1))^{1/3}$. In Eq. (\[En\]) we introduced effective densities of baryon number, isospin and strangeness, $$\begin{aligned}
&& n_B = \sum_b x_{\om b} n_b, \quad n_I = \sum_b x_{\rho b} t_{3b} n_b, \quad
n_S = \sum_b x_{\phi b} n_b, \,
\nonumber\\
&& \mbox{with}\quad x_{\om b} = \frac{g_{\om b}}{g_{\om N}}\,,\quad
x_{\rho b} = \frac{g_{\rho b}}{g_{\rho N}}\,,\quad
x_{\phi b} = \frac{g_{\phi b}}{g_{\om N}},
\label{nBIS}\end{aligned}$$ which determine the contributions from mean fields of the vector mesons to the total energy density.
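As a cross-check of the kinetic-energy term entering (\[En\]), the closed-form $E_{\rm kin}$ above can be compared against a direct numerical integration of the Fermi-sea integral $\frac{1}{2\pi^2}\int_0^{p_{\rmF}} p^2\sqrt{p^2+m^2}\,dp$ per spin degree of freedom. A minimal sketch in natural units ($\hbar=c=1$, momenta in fm$^{-1}$); the sample mass and density are illustrative:

```python
import math

def p_fermi(n_b, s_b=0.5):
    """Fermi momentum from the partial density: p_F = (6 pi^2 n_b / (2 s_b + 1))^(1/3)."""
    return (6.0 * math.pi**2 * n_b / (2.0 * s_b + 1.0)) ** (1.0 / 3.0)

def e_kin(m, p_f):
    """Closed-form kinetic energy density per spin degree of freedom:
    [p_F E_F (m^2 + 2 p_F^2) - m^4 arcsinh(p_F/m)] / (16 pi^2), with E_F = sqrt(m^2 + p_F^2)."""
    e_f = math.sqrt(m * m + p_f * p_f)
    return (p_f * e_f * (m * m + 2.0 * p_f * p_f)
            - m**4 * math.asinh(p_f / m)) / (16.0 * math.pi**2)
```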
The key difference of our approach from the standard non-linear Walecka-like RMF models is the presence of the scaling functions for the vector meson fields $\eta_{\om,\rho,\phi}$, which stand for the ratios of the scaling functions for the hadron mass and the coupling constant $$\begin{aligned}
\eta_\om (f) = \frac{\Phi_\om^2 (f)}{\chi_{\om N}^2 (f)}\,, \quad \eta_\rho (f) = \frac{\Phi_\rho^2 (f)}{\chi_{\rho N}^2 (f)}\,, \quad \eta_\phi (f) = \frac{\Phi_\phi^2 (f)}{\chi_{\phi N}^2 (f)}\,.
\label{eta-def}\end{aligned}$$ We stress that, as long as we consider an infinite system, there is actually no need to specify the scaling functions $\Phi_\om$, $\chi_\om$, $\Phi_\rho$, $\chi_\rho$, $\Phi_\phi$, and $\chi_\phi$ separately, but only their combinations [@Kolomeitsev:2004ff].
The scalar-field self-interaction potential ${U}(\sigma)$ can be hidden in the scaling function $\eta_{\sigma}(f)$, which we henceforth take in the form: $$\begin{aligned}
\eta_{\sigma}(f)=\frac{\Phi_{\sigma}^2[\sigma(f)]}{\chi_{\sigma N}^2[\sigma(f)]} + \frac{ 2 \, C_{\sigma}^2}{m_N^4 f^2} {U}[\sigma(f)]\,.
\label{Uinetaf}\end{aligned}$$
The equation of motion for the remaining field variable $f$ follows from the minimization of the energy density (\[En\]), $$\begin{aligned}
\frac{m_N^3\,f}{C_\sigma^2 } \eta_\sigma(f)&=& n_{B,\rm sc}(f,\{n_b\}) + n_{\rm MF}(f,\{n_b\})\,,
\label{eq_fn}\end{aligned}$$ where the source of the scalar field is now not only the baryon scalar density $$\begin{aligned}
n_{B,\rm sc}(f,\{n_b\}) = -\sum_{b} \frac{m_b}{m_N} \Phi'_b(f)(2s_b+1) \rho_{\rm sc} (m_b \Phi_b(f), p_{\rmF,b}),
\nonumber\\
\rho_{\rm sc}(m,p_\rmF) =
\frac{1}{4 \pi ^2} \big(m p_\rmF \sqrt{m^2 + p_\rmF^2}- m^3 {\rm arcsinh}(p_\rmF/m)\big) \,,
\label{rhoS}\end{aligned}$$ but also meson contributions due to the mean-field scaling functions $$\begin{aligned}
n_{\rm MF}(f,\{ n_b \})
= \frac{C_\om^2 \eta'_\om (f)n_B^2}{2m_N^3\eta^2_\om(f)}
+ \frac{C_\rho^2 \eta'_\rho(f)n_I^2}{2m_N^3\eta^2_\rho(f)}
+ \frac{C_\phi^2 \eta'_\phi(f)n_S^2}{2m_N^3\eta^2_\phi(f)}
-\frac{m_N^3 f^2}{2 C_\sigma^2 } \eta_\sigma'(f).
\label{rho-eta}\end{aligned}$$
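The scalar density (\[rhoS\]) admits the same kind of numerical cross-check: per spin degree of freedom it equals $\frac{1}{2\pi^2}\int_0^{p_{\rmF}} \frac{m\,p^2}{\sqrt{p^2+m^2}}\,dp$, and it coincides with $\partial E_{\rm kin}/\partial m$. A minimal sketch with illustrative values in natural units:

```python
import math

def rho_sc(m, p_f):
    """Closed-form scalar density per spin degree of freedom, Eq. (rhoS):
    [m p_F sqrt(m^2 + p_F^2) - m^3 arcsinh(p_F/m)] / (4 pi^2)."""
    return (m * p_f * math.sqrt(m * m + p_f * p_f)
            - m**3 * math.asinh(p_f / m)) / (4.0 * math.pi**2)
```

Since $m/\sqrt{p^2+m^2}<1$ inside the Fermi sea, the scalar density is always smaller than the corresponding number density $p_{\rmF}^3/(6\pi^2)$.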
The chemical potential for the baryon species $b$ can be calculated as $\mu_b=
\frac{\partial}{\partial n_b} E[\bar{f},\{n_b\}]$, where $\bar{f}$ is the solution of Eq. (\[eq\_fn\]) for given partial densities of baryons $\{n_b\}$, or explicitly $$\begin{aligned}
\mu_b=\sqrt{p_{\rmF,b}^2+m_b^2\Phi_b^2(\bar{f})}
+\frac{1}{m_N^2} \Big[x_{\om b}\frac{C_\om^2 n_B}{ \eta_\om(\bar{f})}
+x_{\rho b}t_{3b}\,\frac{C_\rho^2 n_I}{\eta_\rho(\bar{f})}
+ x_{\phi b}\frac{C_\phi^2 n_S}{\eta_\phi(\bar{f})}
\Big].
\label{mub}\end{aligned}$$
The composition is determined by the conditions of chemical equilibrium with respect to the processes which can occur in the medium. If we consider nuclear matter on a short time scale, so that weak processes have no time to occur, hyperons cannot appear, but a $\Delta$ admixture can be created and balanced by the fast strong processes $N N \leftrightarrow \Delta N$ and $NN\leftrightarrow \Delta\Delta$. These processes impose the relations among the chemical potentials $$\begin{aligned}
\label{eq4muD}
\mu_{\Delta^-}=2\mu_n-\mu_p\,,\quad \mu_{\Delta^0}=\mu_n\,,\quad \mu_{\Delta^+}=\mu_p\,,
\quad \mu_{\Delta^{++}}=2\mu_p -\mu_n\,,\end{aligned}$$ where the nucleon chemical potentials, $\mu_n$ and $\mu_p$, are fixed by the total baryon, $n_B$, and isospin, $n_I$, densities. These conditions will be used to determine the $\Delta$ amount in the ISM, which is defined by the condition $n_I=0$, and therefore $\mu_n=\mu_p$. In a long-living system like a NS the weak processes have enough time to occur and we deal with the BEM. Thus, the composition of the NS core is determined by conditions of the $\beta$-equilibrium, which impose the relations among the particle chemical potentials $$\begin{aligned}
\mu_b=\mu_n - Q_b\,\mu_e \,,\quad \mu_e =\mu_\mu,
\label{chempot-i}\end{aligned}$$ and by the electro-neutrality condition $$\begin{aligned}
\sum_{b} Q_b\,n_b-n_e-n_\mu=0\,,
\label{electroneut}\end{aligned}$$ where the lepton densities $n_l$ are given by $n_l=(\mu_l^2-m_l^2)^{3/2}/(3\pi^2)$, $l=e,\mu$. Solving Eqs. (\[chempot-i\]) and (\[electroneut\]), one can obtain the particle densities $n_i$, $i=(b,l)$, as functions of the total baryon density $n_B \equiv n= \sum_{b} n_b$. Finally, the pressure of the matter in $\beta$-equilibrium can be calculated as $$\begin{aligned}
P[n, n_i]=\sum_i\mu_i\, n_i -E[\bar{f}(n),\{n_i\}]\,.
\label{press}\end{aligned}$$ The sum runs here over all baryons and leptons. For densities $n< 0.7 n_0$ we match the RMF EoS with the BPS crust EoS, see Appendix A in [@Maslov:2015wba] for details. The final NS configuration follows from the solution of the Tolman–Oppenheimer–Volkoff equation.
Now, it remains to specify the ratios of the coupling constants in (\[nBIS\]).
Couplings for hyperons
----------------------
The coupling constants of hyperons to vector mesons can be related to those of nucleons with the help of the SU(6) symmetry relations: $$\begin{aligned}
&x_{\om \Lambda} = x_{\om \Sigma}= 2 x_{\om \Xi}=\frac{2}{3} \,,
\quad
x_{\rho\Lambda}=0\,, \quad x_{\rho \Sigma} = 2x_{\rho \Xi} = 2\,, \nonumber\\
&x_{\phi \Lambda} = x_{\phi \Sigma} = \frac12 x_{\phi \Xi} = -\frac{\sqrt{2}}{3} \,,
\quad x_{\phi N} = 0.
\label{gHm}\end{aligned}$$ The scalar meson coupling constants are constrained by hyperon potentials, $U_H(n_0)$, or, equivalently, by the hyperon binding energies in the nucleon ISM at saturation, which are deduced from extrapolations of hyper-nucleus data, $$\begin{aligned}
x_{\sigma H}=\frac{x_{\omega H} n_0 C_{\omega}^2\eta_\om(\bar{f}_0)/{m_N^2}
-U_{H}(n_0)}{m_N-m_N^{*} (n_0)}\,,
\label{EHbind}\end{aligned}$$ where we put $\xi_{\sigma H}(\bar{f}_0)=1$, and $\bar{f}_0$ is the solution of the equation of motion in the ISM at saturation, $n_p=n_n=n_0/2$. Note that the $\eta_\om$ scaling will be chosen later so that $\eta_\om(\bar{f}_0)\approx 1$. As in [@Maslov:2015msa; @Maslov:2015wba] we will use the values $$U_{\Lambda }(n_0) = -28 \,{\rm MeV},\quad U_{\Sigma }(n_0) = 30 \,{\rm MeV},\quad U_{\Xi }(n_0) = -15 \,{\rm MeV}\,.$$
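As an illustration of Eq. (\[EHbind\]), the scalar couplings can be evaluated with the KVORcut03 inputs collected in the tables below ($C_\omega^2=87.600$, $m_N^*(n_0)=0.805\,m_N$), taking $\eta_\om(\bar{f}_0)\approx 1$ as noted above. The sketch is ours and only approximate for this reason; the exact couplings follow from the full model solution.

```python
HBARC3 = 197.327**3   # MeV^3 fm^3
M_N, N0 = 938.0, 0.16 # nucleon mass (MeV) and saturation density (fm^-3)

def x_sigma_H(x_omega_H, U_H, C_om2, mstar_ratio, eta_om=1.0):
    """Eq. (EHbind): scalar coupling ratio fixed by the hyperon potential
    U_H(n0) (in MeV) in isospin-symmetric matter at saturation."""
    vector_term = x_omega_H * C_om2 * N0 * HBARC3 * eta_om / M_N**2  # MeV
    return (vector_term - U_H) / (M_N * (1.0 - mstar_ratio))

# KVORcut03 inputs with the SU(6) vector couplings and adopted potentials
C_OM2, MSTAR = 87.600, 0.805
print(round(x_sigma_H(2/3, -28.0, C_OM2, MSTAR), 3))  # Lambda, ~0.60
print(round(x_sigma_H(2/3, +30.0, C_OM2, MSTAR), 3))  # Sigma,  ~0.28
print(round(x_sigma_H(1/3, -15.0, C_OM2, MSTAR), 3))  # Xi,     ~0.31
```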
The described scheme leaves us the freedom to choose the scaling functions $\eta_\phi(f)$ and $\xi_{\sigma H}(f)$. Following [@Maslov:2015msa; @Maslov:2015wba], we consider two choices. The first choice (which we label by the H$\phi$ suffix) is to incorporate the $\phi$-meson mean field with the very same scaling of the $\phi$ mass as for all other hadrons, $\Phi_\phi=1-f$, but with unscaled coupling constants $\chi_{\phi b}=1$, $$\begin{aligned}
\eta_\phi=(1-f)^2\,,\quad\mbox{and}\quad \xi_{\sigma H}=1\,.
\label{hyp-Hphi}\end{aligned}$$ In the second choice (labeled by H$\phi\sigma$ suffix) we use $$\begin{aligned}
\eta_\phi=(1-f)^2\,,\quad\mbox{and}\quad \xi_{\sigma H}=
\left\{\begin{array}{cc}
1\,, \quad\mbox{for} \quad n = n_0\\
0\,, \quad\mbox{for} \quad n \geq n_{cH}\end{array}\right.
\,,
\label{hyp-Hphisig}\end{aligned}$$ where $n_{cH}$ is the critical density for hyperonization. With this assumption $\xi_{\sigma H}$ decreases, reaching zero at the baryon density $n = n_{cH}$, and for $n \geq n_{cH}$ we use the vacuum masses for the hyperons. Note that the KVOR model extended to the high-temperature regime in Ref. [@Khvorostukhin:2008xn] (the SHMC model) matches the lattice data well up to temperatures of 250 MeV, provided all the baryon-$\sigma$ coupling constants except the nucleon ones are artificially suppressed, which partially motivates our second choice of $\xi_{\sigma \rm H}=0$ for densities at which the hyperons are produced. Introducing the scalings (\[hyp-Hphi\]) and (\[hyp-Hphisig\]) allowed us to resolve the hyperon puzzle within our models [@Maslov:2015msa; @Maslov:2015wba].
Couplings for $\Delta$ baryons
------------------------------
The coupling constants of the $\Delta$ resonances are poorly constrained empirically, due to the unstable nature of the $\Delta$ particles and the complicated in-medium pion-nucleon dynamics. The simplest choice is the universal coupling of the $\Delta$ to the $\sigma$, $\om$, $\rho$ fields, usually justified by naive quark counting [@Glendenning]: $$\begin{aligned}
x_{\om\Delta}=x_{\rho\Delta}=x_{\sigma\Delta}=1\,, \quad x_{\phi\Delta}=0.
\label{x-QC}\end{aligned}$$
The range of possible deviations from the universal law was investigated in [@SerotWalecka; @Wehrberger1; @Kosov; @Oliveira; @Zschiesche], see also [@Glendenning]. The choice of coupling parameters (\[x-QC\]) in the $\sigma$ and $\om$ sectors assumes that the potentials acting on $\Delta$s and on nucleons are the same. There is, however, experimental evidence that these potentials can be essentially different already in the ISM. To allow for a deviation from the universal scaling we, similarly to the hyperon case, cf. Eq. (\[EHbind\]), include an additional constraint on $x_{\sigma\Delta}$ from the potential of the $\Delta$ baryon, $U_\Delta(n_0)$, in the ISM at saturation density $n_0$: $$\begin{aligned}
x_{\sigma \Delta}=\frac{x_{\omega \Delta}C_{\omega}^2 n_0\eta_\om(\bar{f}_0)/m_N^2
-U_{\Delta}(n_0)}{m_N-m_N^{*} (n_0)}\,,\quad x_{\om\Delta}=x_{\rho\Delta}=1\,, \quad x_{\phi\Delta}=0.
\label{x-QCU}\end{aligned}$$ Here we continue to use the quark-counting relation for $x_{\om\Delta}$ and $x_{\rho\Delta}$ and the Okubo-Zweig-Iizuka suppression of the $\phi$ meson coupling to non-strange baryons [@Okubo].
Unfortunately, the value $U_\Delta(n_0)$ is poorly constrained by existing data, and the results of various analyses are contradictory. From the analysis of electromagnetic excitations of $\Delta$s within a relativistic quantum-hadrodynamic scheme, reference [@Wehrberger1] concluded that $0{\stackrel{\scriptstyle <}{\phantom{}_{\sim}}}x_{\sigma\Delta}-x_{\om\Delta}{\stackrel{\scriptstyle <}{\phantom{}_{\sim}}}0.2$. Reference [@Jin], using the QCD sum rule, estimated the coupling of the $\Delta$ to the $\om$ field to be half of the strength estimated from quark counting, $x_{\om\Delta}\simeq 0.4$–0.5, whereas the coupling to the scalar field was estimated as $x_{\sigma \Delta}\simeq 1.3$. Calculations [@Kosov] within the standard non-linear Walecka model showed that with such coupling parameters the ISM at $n=n_0$ would be metastable, since a second and much deeper minimum of the energy appears at the density $n\sim 3\,n_0$. These coupling parameters correspond to a potential $U_\Delta(n_0)$ that is 3–5 times deeper than the nucleon potential. A possibility of a large value of the $U_\Delta$ potential was advocated in [@Connell], where it was demonstrated that the electron–nucleus scattering can be described with $U_\Delta (n_0)\simeq -115$MeV if the momentum dependence of the $\Delta$-nucleus potential is included. Following the relation (\[x-QCU\]), the variation of the $\Delta$ potential in the interval $-150\,{\rm MeV}\le U_\Delta(n_0)\le -50\,{\rm MeV}$ corresponds to the variation $1.49(1.34)\ge x_{\sigma\Delta}\ge 0.94$ (0.94) for the KVORcut03(MKVOR) models (for $x_{\omega\Delta}=1$). Note that, if we assume the same mass-scaling for $\Delta$s and nucleons, $\Phi_\Delta =\Phi_N$, which corresponds to $x_{\sigma \Delta}=1.32$, we obtain $U_\Delta (n_0)\simeq -119$MeV for the KVORcut03 and $U_\Delta (n_0)\simeq -146$MeV for the MKVOR model.
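The quoted interval of $x_{\sigma\Delta}$ can be checked directly from Eq. (\[x-QCU\]) with the model inputs from the tables below ($C_\omega^2$ and $m_N^*(n_0)$ for the two models) and the approximation $\eta_\om(\bar{f}_0)\approx 1$ noted for Eq. (\[EHbind\]). The following sketch is ours, for the reader's convenience:

```python
HBARC3 = 197.327**3   # MeV^3 fm^3
M_N, N0 = 938.0, 0.16 # nucleon mass (MeV) and saturation density (fm^-3)

def x_sigma_Delta(U_Delta, C_om2, mstar_ratio, x_om=1.0, eta_om=1.0):
    """Eq. (x-QCU): scalar Delta coupling from the potential U_Delta(n0) in MeV."""
    vector_term = x_om * C_om2 * N0 * HBARC3 * eta_om / M_N**2  # MeV
    return (vector_term - U_Delta) / (M_N * (1.0 - mstar_ratio))

# KVORcut03: C_omega^2 = 87.600, m_N*(n0)/m_N = 0.805
print(round(x_sigma_Delta(-50.0, 87.600, 0.805), 2))   # ~0.94
print(round(x_sigma_Delta(-150.0, 87.600, 0.805), 2))  # ~1.49
# MKVOR: C_omega^2 = 134.88, m_N*(n0)/m_N = 0.730
print(round(x_sigma_Delta(-50.0, 134.88, 0.730), 2))   # ~0.94
print(round(x_sigma_Delta(-150.0, 134.88, 0.730), 2))  # ~1.34
```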
However, it seems to us rather unrealistic that $\Delta$ baryons, having an internal quark structure similar to that of nucleons, would feel a much different potential. The same argumentation was used in [@Migdal:1990vm; @Voskresensky:1993ud], where the authors utilized $U_\Delta (n)\simeq U_N (n)$ with the nucleon potential $U_N (n_0)\simeq -(50\mbox{--}60)$MeV.
The coupling of the $\Delta$ baryon to the $\sigma$ field can be estimated by applying the chiral symmetry constraints to the $\pi\Delta$ scattering amplitude. The contribution to the energy-independent isospin-symmetric part of the pion-baryon scattering amplitude can be described, on the one hand, by the pion-baryon sigma-term and, on the other hand, by the exchange of the $\sigma$ meson $$\begin{aligned}
\frac{g_{\sigma B}g_{\sigma\pi\pi}}{m_\sigma^2} \approx \frac12\frac{\Sigma_{\pi B}}{f_\pi^2m_\pi}\,,
\label{gsbb-sigterm}\end{aligned}$$ where $\Sigma_{\pi B}$ is the pion-baryon sigma-term, $f_\pi$ is the pion decay constant, $m_\pi$ is the pion mass, and $g_{\sigma\pi\pi}$ is the $\sigma\pi\pi$ coupling constant. A similar relation was used in [@BLRT94] \[see Eq. (23) there\] for the kaon-nucleon scattering. From the relation (\[gsbb-sigterm\]) we estimate the coupling parameter $$\begin{aligned}
x_{\sigma\Delta} \approx {\Sigma_{\pi \Delta}}/{\Sigma_{\pi N}}.
\label{xs-sigt}\end{aligned}$$ The sigma-terms are evaluated in the quark model [@Lubov] as $$\begin{aligned}
\Sigma_{\pi N}= 43.3\pm 4.4\,{\rm MeV}\,,\quad
\Sigma_{\pi \Delta} =32\pm 3\,{\rm MeV}\,.
\label{sigterm}\end{aligned}$$ Calculations in the framework of the chiral perturbation theory [@Cavalcante] give similar results $\Sigma_{\pi N}= 45.8\,{\rm MeV}$, and $\Sigma_{\pi \Delta} =32.1\,{\rm MeV}$. Equation (\[xs-sigt\]) with the parameters (\[sigterm\]) yields the interval for the $x_{\sigma\Delta}$ values, $ 0.90{\stackrel{\scriptstyle >}{\phantom{}_{\sim}}}x_{\sigma\Delta} {\stackrel{\scriptstyle >}{\phantom{}_{\sim}}}0.61$. The latter interval corresponds to a shallow attractive or even repulsive $\Delta$ potential $-43(-40)\,{\rm MeV}{\stackrel{\scriptstyle <}{\phantom{}_{\sim}}}U_\Delta(n_0){\stackrel{\scriptstyle <}{\phantom{}_{\sim}}}+10(+33)\,{\rm MeV}$ for our KVOR(MKVOR) models, provided we take $x_{\omega\Delta}=1$.
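The quoted interval follows from Eq. (\[xs-sigt\]) with the uncertainties of the sigma-terms (\[sigterm\]) combined in the extreme way; a two-line check:

```python
# Interval for x_sigma_Delta from Eq. (xs-sigt) with the quark-model
# sigma-terms of Eq. (sigterm): Sigma_piN = 43.3 +- 4.4 MeV,
# Sigma_piDelta = 32 +- 3 MeV; extremes of the ratio give the interval.
sig_N, dsig_N = 43.3, 4.4
sig_D, dsig_D = 32.0, 3.0

x_min = (sig_D - dsig_D) / (sig_N + dsig_N)  # smallest ratio
x_max = (sig_D + dsig_D) / (sig_N - dsig_N)  # largest ratio
print(round(x_min, 2), round(x_max, 2))      # ~0.61, ~0.90
```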
Studying the electron-nucleus scattering data, Ref. [@Koch1] introduced a density-dependent average binding potential $U_\Delta (n)\simeq - 55\, n(r)/n_0$MeV. Reference [@Nakamura] supported this estimate by the analysis of neutrino-induced pion production on carbon. On the other hand, from the study of the pion-nucleus scattering data, Ref. [@Horikawa] concluded that the real part of the $\Delta$-nucleus potential is as shallow as $-30$MeV. A similar estimate is suggested in Ref. [@EricsonWeise]. Since pions interact mainly close to the nucleus surface, larger values of the potential are expected at $n_0$, so for a linear density dependence one may expect that $U_{\Delta}(n_0)\sim U_N (n_0)$, in agreement with the estimates [@Migdal:1990vm; @Koch1]. Following analyses of electron-nucleus [@Koch; @Connell; @Wehrberger] and pion-nucleus [@Horikawa; @Nakamura] scattering and photoabsorption [@Alberico], the authors of [@Drago2014] estimated the range of uncertainty for the $\Delta$ potential as $-30\,{\rm MeV}+ U_N (n_0)< U_{\Delta}(n_0)<U_N(n_0)$, which with $U_N (n_0)\simeq -(50\mbox{--}60)$MeV leads to the constraint $-90$MeV$< U_{\Delta}(n_0)<-50$MeV. The authors of [@Song; @Ferini; @Cozma; @Guo], studying threshold conditions for pion and $\Delta$ production in heavy-ion collisions, arrived at the inequality $U_N(n_0)<U_{\Delta}(n_0)<\frac{2}{3}U_N(n_0)$, which leads to $-60\,{\rm MeV} < U_{\Delta}(n_0)<-40$MeV. The most involved calculation [@Riek:2008uw], based on a self-consistent and covariant many-body approach for the pion and $\Delta$ isobar propagation in ISM, adjusted the set of Migdal parameters from the study of photoproduction off nuclei and predicted $U_{\Delta}(n_0)=-50$MeV.
Below we will use the value $-50$MeV as the most realistic estimate of the $\Delta$ potential. We shall see that in this case the effects of $\Delta$s within our models of the EoS prove to be not so strong. To test the limits of the models we also allow for an enhancement of the $U_{\Delta}(n_0)$ attraction, varying it in the interval $-150\,{\rm MeV}\le U_\Delta(n_0)\le -50\,{\rm MeV}$.
For $\xi_{\sigma \Delta}=0$ at $n>n_{c,\Delta}$, which corresponds to $m^{*}_\Delta =m_\Delta$ for $n>n_{c,\Delta}$, the $\Delta$ baryons do not appear in any of the models considered below. Therefore, when studying possible $\Delta$ effects on the EoS we discard this possibility and use the more realistic choice $\xi_{\sigma \Delta}=1$ throughout the text. In Sect. \[Numerical\] we use the traditional choice for the $\Delta$ coupling constants, $x_{\omega\Delta}=x_{\rho\Delta}=1$, and then in Sect. \[sec:variation\] we allow for their variation.
KVORcut03, MKVOR and MKVOR\* models {#sec:eos}
====================================
We focus now on the two models KVORcut03 and MKVOR proposed in [@Maslov:2015msa; @Maslov:2015wba], which proved to satisfy well many constraints on the hadronic EoS, and we extend them by including $\Delta$ baryons. These two models utilize so-called cut mechanisms that slow down the growth of the $f$ field after it reaches a certain value as the density increases. The cut mechanism allows one to stiffen the EoS, as was recently demonstrated in [@Maslov:cut]. In the KVORcut03 model this is achieved by a sharp variation of the $\eta_\om (f)$ scaling function, whereas in the MKVOR model a sharp variation is included in the $\eta_\rho (f)$ scaling function. The latter is done to keep the EoS not too stiff in ISM, to fulfill the flow constraint from heavy-ion collisions [@Danielewicz:2002pu], and to make the EoS as stiff as possible in BEM, to safely satisfy the constraint on the maximum mass of a compact star. The $\rho$ field is coupled to the isospin density, which makes the $f$-saturation mechanism very sensitive to the composition of the BEM. As we shall see, the incorporation of $\Delta$ baryons leads in the MKVOR model (now labeled the MKVOR$\Delta$ model) to the problem that the nucleon effective mass in ISM drops to zero at some density (e.g., at $n\sim 6\,n_0$ for $U_\Delta (n_0)\sim -50$MeV), and for higher densities the description in terms of hadronic degrees of freedom becomes invalid. To prolong the hadronic description in ISM to higher densities we propose below a minimal modification of the MKVOR model (labeled MKVOR\*), which prevents the effective nucleon mass from vanishing at any density.
----------------------- ----------------- --------------- --------- -------------- --------- --------- --------- ---------------
EoS                     $\mathcal{E}_0$ $n_0$         $K$     $m_N^*(n_0)$ $J$     $L$     $K'$    $K_{\rm sym}$
\[MeV\] \[fm$^{-3}$\] \[MeV\] $[m_N]$ \[MeV\] \[MeV\] \[MeV\] \[MeV\]
KVORcut03 $- 16$ 0.16 275 0.805 32 71 422 -86
MKVOR $- 16$ 0.16 240 0.730 30 41 557 -158
----------------------- ----------------- --------------- --------- -------------- --------- --------- --------- ---------------
: Coefficients of the energy expansion (\[Eexpans\]) near $n_0$ for KVORcut03 and MKVOR models.
\[tab:sat-param\]
The properties of our models at the nuclear saturation density $n_0$ are illustrated in Table \[tab:sat-param\], where we collect the coefficients of the expansion of the binding energy per nucleon near $n_0$ for the KVORcut03 and MKVOR models, $$\begin{aligned}
& \mathcal{E} = \mathcal{E}_0 + \frac{1}{2}K\epsilon^2
-\frac{1}{6}K'\epsilon^3 +\beta^2\widetilde{\mathcal{E}}_{\rm sym}(n) +
O(\beta^4, \epsilon^4)\,,
\nonumber\\
& \widetilde{\mathcal{E}}_{\rm sym}(n)=J + L\epsilon +\frac{K_{\rm sym}}{2}\epsilon^2+\dots\,,
\label{Eexpans}\end{aligned}$$ in terms of the small parameters $\epsilon=(n-n_0)/(3n_0)$ and $\beta=(n_n-n_p)/n$. The parameters for the MKVOR\* and MKVOR models are identical.
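As a numerical illustration of the expansion (\[Eexpans\]), the symmetry-energy part can be evaluated with the Table \[tab:sat-param\] coefficients, e.g. at $n=2n_0$ ($\epsilon=1/3$). Keep in mind that (\[Eexpans\]) is an expansion around $n_0$, so this extrapolation is only indicative:

```python
# Symmetry-energy part of the expansion (Eexpans):
# E_sym(n) = J + L*eps + (K_sym/2)*eps^2, with eps = (n - n0)/(3*n0).
def e_sym(J, L, K_sym, eps):
    return J + L * eps + 0.5 * K_sym * eps**2

eps = 1.0 / 3.0  # n = 2*n0
print(round(e_sym(32.0, 71.0, -86.0, eps), 1))   # KVORcut03, ~50.9 MeV
print(round(e_sym(30.0, 41.0, -158.0, eps), 1))  # MKVOR,     ~34.9 MeV
```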
KVOR and KVORcut models
-----------------------
Now we introduce the scaling functions. First, we recall the choice of the scaling functions in the KVOR model [@Kolomeitsev:2004ff]: $$\begin{aligned}
&& \eta^{\rm KVOR}_\sigma = 1 + 2 \frac{C_\sigma^2}{ f^{2}}\, \big(\frac{b}{3} f^3 + \frac{c}{4} f^4\big)
\,,\quad
\eta^{\rm KVOR}_\omega = \Big[\frac{1 + z \bar{f}_0}{1 + z f}\Big]^\alpha\,,\quad \bar{f}_0=f(n_0)\,,
\nonumber\\
&& \eta^{\rm KVOR}_\rho = \Big[1 + 4\,\frac{C_\om^2}{C_\rho^2}\,(1-[\eta^{\rm KVOR}_\om (f)]^{-1})\Big]^{-1}\,.
\label{eta-KVOR}\end{aligned}$$ The scaling functions (\[eta-KVOR\]) are plotted in Fig. \[Fig-1-new\]. The $\eta^{\rm KVOR}_\sigma$ function is just a reparametrization of the $\sigma$ self-interaction potential $U(f)$ proposed by Boguta and Bodmer [@Boguta77] in terms of the scaling function. The function $\eta^{\rm KVOR}_\omega$ is chosen to be a decreasing function of $f$, smaller than 1 for $f>f_0$, which leads to an increase of the $\omega$-meson contribution to the energy density and to a stiffening of the EoS. The choice of $\eta^{\rm KVOR}_\rho$ is made to guarantee a monotonic decrease of the effective nucleon mass with increasing density in the BEM for densities relevant for NSs. Such an $m^*(n)$ decrease is in line with the ideas of the partial restoration of chiral symmetry and Brown-Rho scaling. An increase of $\eta^{\rm KVOR}_\rho$ with increasing $f$ allows one to suppress the symmetry energy and the proton fraction in the NS for $n>n_0$, helping to fulfill the DU constraint on the efficiency of the NS cooling, cf. [@Blaschke:2004vq; @Kolomeitsev:2004ff; @Grigorian:2005fn; @Klahn:2006ir; @Grigorian:2016leu].
![Scaling functions $\eta_\sigma$ (left panel), $\eta_\om$ (middle panel), and $\eta_\rho$ (right panel) as functions of the scalar field $f$ for the KVOR, KVORcut03, MKVOR and MKVOR\* models. For $\eta_\rho (f)$ we also show variations of the function defined in (\[zetaf\]) with parameters (\[tail123\]). Vertical and horizontal bars indicate the maximum values of $f$ ($f_{\rm lim}$) reachable at densities available in NSs.[]{data-label="Fig-1-new"}](Fig-1-f){width="14cm"}
For the KVORcut models the scaling functions were chosen in [@Maslov:2015wba] in the following form $$\begin{aligned}
&&\eta^{\rm KVORcut}_\sigma(f) = \eta^{\rm KVOR}_\sigma \,,\quad
\eta^{\rm KVORcut}_\omega(f) = \eta^{\rm KVOR}_\omega + a_\omega \theta_{b_\om}(f-f_\om)
\,,
\nonumber\\
&&\eta^{\rm KVORcut}_\rho (f) =\eta^{\rm KVOR}_\rho\,.\quad
\label{eta-KVORcut}\end{aligned}$$ Here we have introduced the switch functions $$\begin{aligned}
\theta_y(x)=\frac12 \big[1+ \tanh(y x)\big]\end{aligned}$$ with the limits $\theta_y(-\infty)=0$ and $\theta_y(+\infty)=1$. In the limit $y\to +\infty$ this function turns into the Heaviside step function, $\theta_y(x)\to (1+{\rm sign}(x))/2$. For the KVORcut03 model, the parameters determining the scaling functions and the EoS, see Eq. (\[En\]), are collected in Table \[tab:param-KVORcut03\]; here $\bar{f}_0=f(n_0)$.
$C_\sigma^2$ $C_\om^2$ $C_\rho^2$ $b \cdot 10^3$ $c \cdot 10^3$ $\alpha$ $z$ $a_\om$ $b_\om$ $f_\om$
-------------- ----------- ------------ ---------------- ---------------- ---------- -------- --------- --------- ---------
179.56 87.600 100.64 7.7354 0.34462 1 $-$0.5 0.11 46.78 0.365
: Parameters of the KVORcut03 model.
\[tab:param-KVORcut03\]
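The sharpness of the cut set by $b_\om$ is easy to see by evaluating the switch function with the KVORcut03 parameters (an illustrative snippet, not part of the model code):

```python
import math

def theta(y, x):
    """Switch function theta_y(x) = (1 + tanh(y*x))/2: 0 as y*x -> -inf, 1 as y*x -> +inf."""
    return 0.5 * (1.0 + math.tanh(y * x))

# With b_om = 46.78 and f_om = 0.365 from the KVORcut03 table, the switch
# theta_{b_om}(f - f_om) turns on over a narrow window of f around f_om:
for f in (0.30, 0.365, 0.43):
    print(f, round(theta(46.78, f - 0.365), 3))
```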
The functions $\eta_\sigma(f)$, $\eta_\omega(f)$, $\eta_\rho (f)$ for the models considered in this paper are presented in Fig. \[Fig-1-new\]. Vertical and horizontal bars indicate the maximum values of $f$ reachable in NSs for the EoSs under consideration. For these EoSs they correspond to central densities of stars with $M=M_{\rm max}$. For the models KVOR and KVORcut03 the functions $\eta_\rho (f)$, $\eta_\sigma(f)$ are smooth functions of $f$ and the cut procedure is applied to $\eta_\omega(f)$, which decreases rapidly in the interval $0.3 < f < 0.4$. The field variable $f$ proves to be restricted from above by the value $f_{\rm lim}$ (being slightly above 0.3) and depends very weakly on the isospin composition of the matter. With the $\eta_\sigma (f)$, $\eta_\om (f)$ and $\eta_\rho (f)$ functions under consideration there is a single solution $f(n)$. The functions $f(n)$, being solutions of Eq. (\[eq\_fn\]) for ISM and BEM, are shown in Fig. \[fig:eta\_r\] (left panel). For the KVORcut03 model in both cases $f(n)$ grows from zero at $n=0$ to the value $\simeq 0.3$ at $n\simeq 2n_0$, and with a further increase of the density the growth is terminated at the limiting value $f_{\rm lim}$, which is slightly above 0.3 both in ISM and BEM.
We have checked that in the KVOR and KVORcut models, as well as in the KVOR- and KVORcut-based models with hyperons and $\Delta$ baryons included, Eq. (\[eq\_fn\]) for $f$ has only one solution for any density and equilibrium isospin composition.
![ Scalar field $f$ as a function of the nucleon density $n$ in the ISM and BEM for KVORcut03, MKVOR, and MKVOR\* models. Note that in BEM the functions $f(n)$ for MKVOR and MKVOR\* models are identical.[]{data-label="fig:eta_r"}](Fig-2-f){width="5cm"}
MKVOR and MKVOR\* models
------------------------
The model MKVOR proposed in [@Maslov:2015msa; @Maslov:2015wba] is characterized by the following scaling functions: $$\begin{aligned}
\eta^{\rm MKVOR}_\sigma(f) &= \Big[1 - \frac{2}{3} C_\sigma^2 b f -
\frac{1}{2} C_\sigma^2 \Big(c -
\frac{8}{9} C_\sigma^2 b^2\Big) f^2 + \frac{1}{3} d f^3\Big]^{-1} \,,
\nonumber\\
\eta^{\rm MKVOR}_\omega(f) &= \eta^{\rm KVORcut}_\omega(f)\,,
\label{eta-MKVOR}\\
\eta^{\rm MKVOR}_\rho(f) &= a_\rho^{(0)} + a_\rho^{(1)} f +
\frac{a_\rho^{(2)} f^2}{1 + a_\rho^{(3)} f^2} +
\beta \exp\big(- \Gamma(f)(f - f_\rho)^2 \big)\,,
\nonumber\\
\Gamma(f) &= {\gamma }
\Big[{1 + \frac{d_\rho (f-\bar{f}_0)}{1 + e_\rho (f-\bar{f}_0)^2}
}\Big]^{-1}\,,
\nonumber\end{aligned}$$ with the parameters listed in Table \[tab:param-MKVOR\].
The scaling functions $\eta_\sigma (f)$, $\eta_\om (f)$ and $\eta_\rho (f)$ for the MKVOR model are shown in Fig. \[Fig-1-new\] (see also Fig. 4 in [@Maslov:2015wba]). Vertical and horizontal bars indicate the maximum values of $f$ reachable in NSs with the maximum masses. In the MKVOR model the “cut” mechanism limiting the growth of the $f$ field with a density increase is not operative in ISM, since $\eta_\sigma (f)$, $\eta_\om (f)$ are chosen as smooth functions of $f$. The strong variation of the scaling with $f$ is implemented in this model in the $\rho$-meson sector (in the $\eta_\rho (f)$ function). The $\rho$-meson term does not contribute in ISM. In the BEM, by contrast, the magnitude of the scalar field $f(n)$ becomes limited from above. This mechanism allows us to push up the maximum NS mass and simultaneously satisfy the constraint deduced from the analysis of the particle flows in heavy-ion collisions. The $\eta_\rho (f)$ determined by Eq. (\[eta-MKVOR\]) with parameters from Table \[tab:param-MKVOR\] is indicated in Fig. \[Fig-1-new\] by “tail 1”.
$C_\sigma^2$ $C_\om^2$ $C_\rho^2$ $b \cdot 10^3$ $c \cdot 10^3$ $d$ $\alpha$ $z$ $a_\om$ $b_\om$
-------------- ----------- ------------ ---------------- ---------------- ---------------- ---------------- ---------------- ---------- ----------
234.15 134.88 81.842 4.6750 $-$2.9742 $-$0.5 0.4 0.65 0.11 7.1
$f_\om$ $\beta$ $\gamma$ $f_\rho$ $a_\rho^{(0)}$ $a_\rho^{(1)}$ $a_\rho^{(2)}$ $a_\rho^{(3)}$ $d_\rho$ $e_\rho$
0.9 3.11 28.4 0.522 0.448 $-$0.614 3 0.8 $-$4 6
: Parameters of the MKVOR model.
\[tab:param-MKVOR\]
![ [*Left panel:*]{} Nucleon concentrations and magnitude of the scalar field, $f(n)$, as functions of the nucleon density in the BEM for the MKVOR model. For $n>3.21 n_0$, besides the original branch 1 (labeled as MKVOR) with the limit $\lim_{n\to 0}f(n)= 0$, two extra branches 2 and 3 appear, labeled as MKVOR(br2) and MKVOR(br3). Branches 1 and 2 correspond to local minima in $E(f)$, whereas branch 3 corresponds to a local maximum. Nucleon concentrations are shown for branches 1 and 2 only. [*Middle panel:*]{} pressure $P(n)$ for branches 1 and 2. The vertical line indicates points of equal energies, the horizontal line is the MC line. [*Right panel:*]{} The NS mass as a function of the central density for branch 1 (MKVOR) and for the EoS with a first-order phase transition from the MKVOR branch to the MKVOR(br2) branch. []{data-label="MKVORbranches"}](Fig-3-f){width="14cm"}
A general comment concerning the scaling functions is in order. The $\eta_\om (f)$ and $\eta_\rho (f)$ functions for the KVOR model were originally chosen in [@Kolomeitsev:2004ff] in a rather simple form (\[eta-KVOR\]) for pragmatic reasons. For such a choice of the scaling functions, in the KVOR and KVORcut-based models there always exists only one solution of Eq. (\[eq\_fn\]) for any $n$. In the MKVOR model a more complicated $f$-dependence of the scaling functions is chosen to satisfy the known experimental constraints, especially to better fulfill simultaneously the flow and maximum compact star mass constraints. In [@Maslov:2015msa; @Maslov:2015wba] we used the solution $f(n)$, which starts from the origin $f=0$, $n=0$. However, for the original choice of the $\eta_\rho (f)$ function (shown in Fig. \[Fig-1-new\] by the line labeled “tail 1”), besides the solution starting at the origin (branch 1) there appear two new solutions (branches 2 and 3) for densities $n>3.21 n_0$. All these branches of solutions for $f(n)$ in BEM are depicted in the left panel of Fig. \[MKVORbranches\]. Branches 1, 2, and 3 are determined as zeros of the function $D(f,n)=\frac{\partial E(f,n)}{\partial f}$. For branches 1 and 2 we find $(\frac{\partial D(f)}{\partial f})_{f_{1,2}}>0$, and hence branches 1 and 2 correspond to minima of the energy-density functional $E(f)$. Conversely, for branch 3 we have $(\frac{\partial D(f)}{\partial f})_{f_3}<0$, and therefore this branch corresponds to a maximum of $E(f)$. Thus, branch 3 can be disregarded. In the left panel of Fig. \[MKVORbranches\] we also show the neutron and proton concentrations for branches 1 (labeled MKVOR) and 2 (labeled MKVOR(br2)). In the middle panel of Fig. \[MKVORbranches\] we show the pressure of the BEM as a function of density, $P(n)$, for branches 1 and 2. At densities $n<n_1^{\rm MC}$ the system follows branch 1 (line MKVOR).
The transition from branch 1 to branch 2 is a first-order phase transition. Within the density range $n_1^{\rm MC}<n<n_2^{\rm MC}$ the pressure and baryon chemical potential follow the Maxwell construction (MC) line determined by the equations $P(n)=P(n_1^{\rm MC})=P(n_2^{\rm MC})$ and $\mu_B (n) =\mu_B (n_1^{\rm MC}) =\mu_B (n_2^{\rm MC})$.[^2] For $n>n_2^{\rm MC}$ the system follows branch 2 (line MKVOR(br2)). The vertical line indicates points of equal energy. In the right panel of Fig. \[MKVORbranches\] we show the NS mass as a function of the central density. We see that the NS configurations constructed with $f(n)$ taken along branch 1 (solid line) would lead to a higher NS mass at fixed central density than those constructed with the transition from branch 1 to branch 2 (dashed line). Thus the first-order phase transition from branch 1 to branch 2 is indeed energetically favorable in the given model. Note that three-branch solutions also appear when one considers ordinary RMF models in ISM at high temperature, see Fig. 3 in [@Glendenning87].
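Locating an MC point numerically amounts to finding the chemical potential at which the pressures of the two branches cross, since equal $P$ at equal $\mu_B$ is exactly the MC condition. The toy example below illustrates this with two invented quadratic $P(\mu)$ branches; it is not the MKVOR EoS, only a sketch of the procedure:

```python
# Toy Maxwell-construction search: the MC point is where P1(mu) = P2(mu),
# i.e. equal pressure at equal baryon chemical potential. The quadratic
# branches below are invented for illustration only.
def p1(mu):
    return 0.020 * (mu - 950.0)**2

def p2(mu):
    return 0.055 * (mu - 1000.0)**2

def mc_point(pa, pb, mu_lo, mu_hi, iters=200):
    """Bisection on g(mu) = pa(mu) - pb(mu); assumes a single sign change
    of g within [mu_lo, mu_hi]."""
    for _ in range(iters):
        mu = 0.5 * (mu_lo + mu_hi)
        if (pa(mu_lo) - pb(mu_lo)) * (pa(mu) - pb(mu)) > 0:
            mu_lo = mu
        else:
            mu_hi = mu
    return 0.5 * (mu_lo + mu_hi)

mu_mc = mc_point(p1, p2, 1001.0, 2000.0)
print(mu_mc, p1(mu_mc))  # crossing chemical potential and the MC pressure
```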
Working in the framework of a purely hadronic model, we see no compelling reason for a phase transition with a jump in the scalar-field magnitude to occur at $n$ of the order of several $n_0$. Therefore, we will avoid this possibility in the given paper, although a further study of such a transition might be of interest, if considered as a simplified model for a first-order hadron-quark phase transition.
In [@Maslov:2015msa; @Maslov:2015wba], we considered only the solution with $f$ corresponding to branch 1. Other branches correspond to the values of $f(n)$ larger than $f_{\rm lim}$, where $f_{\rm lim}$ is the maximum value on branch 1 reachable in the BEM in the center of the NS with $M=M_{\rm max}$. Therefore, additional unwanted solutions can be eliminated in the MKVOR and MKVOR-based models by an appropriate variation of the $\eta_\rho$ function for $f>f_{\rm lim}$. To demonstrate this we propose a modification of the $\eta_\rho$ scaling function $$\begin{aligned}
&\eta_\rho^{\rm MKVOR}(f) \to \left\{
\begin{array}{lc}
\eta_\rho^{\rm MKVOR}(f)\,, & f\le f_\rho^*\\
1/[a_0
+ a_1 z
+ a_2 z^2
+ a_3 z^3
+ a_4 z^4]
\,, & f >f_\rho^*
\end{array}
\right.\,,
\label{zetaf}\\
&\qquad\qquad\qquad z=f/f_\rho^*-1\,, \quad f^*_\rho = 0.64 \,,
\nonumber
\end{aligned}$$ where we change its “tail” for $f>f_\rho^* >f_{\rm lim}$. Parameters $a_{0}$, $a_{1}$, and $a_{2}$ follow from the continuity of the function and its first two derivatives in the point $f=f_\rho^*$: $$\begin{aligned}
& a_0 = \eta^{-1}_\rho(f^*_\rho), \quad
a_1 = -f^*_\rho\,\eta_\rho'(f^*_\rho)\,a_0^2, \quad
a_2 = a_1^2/ a_0 - a_0^2 \eta_\rho''(f^*_\rho)\, f^{*2}_\rho /2.
\label{tail-par1}\end{aligned}$$ Here we skip the superscript MKVOR on $\eta_\rho$ for the sake of brevity. Other parameters $a_3$ and $a_4$ control the slope of the tail of the scaling function. So, together with the original parametrization (\[eta-MKVOR\]), which we now label “tail 1”, we consider several other choices: $$\begin{aligned}
&\mbox{tail 2}: a_3=-10\,,\quad a_4=0\,;
\nonumber\\
&\mbox{tail 3}: a_3=0\,,\phantom{-1}\quad a_4=0\,;
\label{tail123}\\
&\mbox{tail 4}: a_3=0\,,\phantom{-1}\quad a_4=100\,.
\nonumber\end{aligned}$$ From now on, by the MKVOR model we will mean the model with $\eta_\rho$ having an appropriate continuation for $f>f_{\rm lim}$ that removes the multiple solutions, e.g., with one of the tails 2, 3, or 4 shown in Fig. \[Fig-1-new\]. We have verified that for the choices (\[tail123\]) the unwanted solutions with large values of $f$ are absent in all MKVOR-based models, which we studied previously in [@Maslov:2015msa; @Maslov:2015wba] (without and with hyperons) and consider below (without and with hyperons and $\Delta$s). For $f<f_{\rm lim}<f_\rho^*$, the $\eta_\rho(f)$ function coincides exactly with the originally introduced scaling function, see Fig. \[Fig-1-new\].
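The matching conditions (\[tail-par1\]) can be verified numerically: with $a_0$, $a_1$, $a_2$ built as above, the replacement tail $1/[a_0+a_1 z+a_2 z^2+\dots]$ agrees with a given $\eta_\rho(f)$ in value and first two derivatives at $f=f_\rho^*$. In the sketch below the sample $\eta(f)$ is an arbitrary smooth stand-in, not the actual $\eta_\rho^{\rm MKVOR}$:

```python
# Check of Eq. (tail-par1): the tail matches eta(f) and its first two
# derivatives at f = f_rho*. eta(f) is a smooth stand-in for illustration.
f_star = 0.64

def eta(f):
    return 0.448 - 0.614 * f + 3.0 * f**2 / (1.0 + 0.8 * f**2)

def d(fun, f, h=1e-5):   # central finite difference, first derivative
    return (fun(f + h) - fun(f - h)) / (2.0 * h)

def d2(fun, f, h=1e-4):  # central finite difference, second derivative
    return (fun(f + h) - 2.0 * fun(f) + fun(f - h)) / h**2

a0 = 1.0 / eta(f_star)
a1 = -f_star * d(eta, f_star) * a0**2
a2 = a1**2 / a0 - a0**2 * d2(eta, f_star) * f_star**2 / 2.0

def tail(f, a3=0.0, a4=0.0):
    z = f / f_star - 1.0
    return 1.0 / (a0 + a1 * z + a2 * z**2 + a3 * z**3 + a4 * z**4)

# value and first two derivatives agree at f = f_rho* up to numerical error;
# a3, a4 (the free slope parameters) do not affect the matching, since z = 0 there
print(abs(tail(f_star) - eta(f_star)))
print(abs(d(tail, f_star) - d(eta, f_star)))
print(abs(d2(tail, f_star) - d2(eta, f_star)))
```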
Below we will see that in the presence of $\Delta$ baryons, i.e., within the MKVOR$\Delta$ model, the effective nucleon mass vanishes at some density in the ISM. To cure this problem within our hadronic model we introduce an additional cut mechanism in the $\om$ sector, keeping the same $\eta_\sigma(f)$ and $\eta_\rho(f)$ as in the MKVOR model, the latter function with the tail modification (\[zetaf\]) ensuring the uniqueness of the $f(n)$ solution in BEM. In the modified MKVOR model, which we label MKVOR\*, we use $$\begin{aligned}
\label{etaMKVOR*}
& \eta^{\rm MKVOR*}_\omega(f) = \eta_\omega^{\rm MKVOR}(f)
\theta_{b_\om}(f_\om^*-f) + \frac{c_\om}{(f/f_\om^*)^{\alpha_\om}+1} \theta_{b_\om}(f-f_\om^*)\,,
\nonumber\\
&f_\om^*=0.95\,,\quad b_\om=100\,,\quad \alpha_\om=5.515\,,\quad c_\om=0.2299\,.\end{aligned}$$ For $f<f_\om^*$ the scaling function $\eta^{\rm MKVOR*}_\om(f)$ coincides with that of the original MKVOR model. For $f>f_\om^*=0.95$, $\eta^{\rm MKVOR*}_\om(f)$ sharply decreases. Thereby, we limit the rapid growth of the scalar field $f(n)$ with a density increase not only in BEM, as in the original MKVOR model, but also in the ISM.
The functions $f(n)$ for the MKVOR and MKVOR\* models in ISM and BEM are shown in Fig. \[fig:eta\_r\]. In the BEM the cut mechanism, implemented in the MKVOR model in the $\rho$ sector, fixes the magnitude of the scalar field at the level $f_{\rm lim}\approx 0.6$, and $m_N^{*} $ reaches the minimum value $\simeq 0.4m_N$ for $n {\stackrel{\scriptstyle >}{\phantom{}_{\sim}}}4 n_0$. Since the chosen cut value $f_\om^*=0.95$ is larger than $f_{\rm lim}$, all results for the MKVOR-based models and the corresponding MKVOR\*-based models coincide exactly in BEM. In the ISM the effective nucleon mass in the MKVOR model continuously decreases with a density increase (for $n=8\,n_0$ it reaches $\simeq 0.05\,m_N$). In the MKVOR\* model the cut mechanism is implemented in the $\omega$ sector and is operative in ISM. With $f_\om^*=0.95$, for $n=8\,n_0$ we have $m^*_N\simeq 0.1\,m_N$. The saturation in $f(n)$ sets in only for $n{\stackrel{\scriptstyle >}{\phantom{}_{\sim}}}5\,n_0$, and for smaller densities the quantities $f(n)$ in the MKVOR and MKVOR\* models follow the same curve in ISM. Due to that, the nucleon and kaon flow constraints, which restrict the allowed range of pressure in ISM in the density interval $n_0< n{\stackrel{\scriptstyle <}{\phantom{}_{\sim}}}4.5\, n_0$, see [@Maslov:2015msa; @Maslov:2015wba], are fulfilled in the MKVOR\* model as well as in the MKVOR one.
The following remark is in order. Unless we take finite-size effects into account, the effective meson masses and coupling constants enter the energy-density functional only in the combinations $\eta_M$. Thus, we can extract the $\chi_M (n)$ dependence only if we assume a particular dependence $m^*_M(f(n))$, such as the Brown-Rho scaling law (\[PhiN\]) in our case. Varying the latter, we would get different functions $\chi_M (n)$.
Results of numerical calculations {#Numerical}
=================================
KVORcut03-based models
----------------------
First we consider the influence of $\Delta$ baryons in ISM. In contrast to the standard non-linear Walecka models [@Boguta1982; @Kosov], the KVORcut03 model proves to be much less sensitive to the inclusion of $\Delta$ baryons. For the parameter set (\[x-QCU\]) the critical density for the appearance of $\Delta$s in the KVORcut03$\Delta$ model is shown in Fig. \[fig:ncD-ISM-cut03\]. We see that for the realistic values of the potential[^3] ($U_\Delta{\stackrel{\scriptstyle >}{\phantom{}_{\sim}}}-60\,{\rm MeV}$) the $\Delta$ baryons do not appear in the ISM up to very high densities. The reason for the robustness of the KVORcut03 model against the $\Delta$ appearance is that $f(n)$ stops growing for densities $n{\stackrel{\scriptstyle >}{\phantom{}_{\sim}}}2\, n_0$ and has a smaller magnitude than it would in the non-linear Walecka models with the same $m_N^*(n_0)$, see Figs. 1–3 in [@Maslov:2015wba]. This is a genuine feature of all “cut” models, which we have considered in [@Maslov:2015wba]. As a result, the effective $\Delta$ mass remains rather large, which inhibits the growth of the $\Delta$ population.
![Critical density for the appearance of $\Delta$ baryons, $n_{c,\Delta}$, as a function of the $\Delta$ potential $U_\Delta$ in the ISM for the KVORcut03$\Delta$ model with the $\Delta$ parameter set (\[x-QCU\]). []{data-label="fig:ncD-ISM-cut03"}](Fig-4-f){width="5cm"}
In Fig. \[fig:cut03-conc\] we show the composition of the BEM vs. the total baryon density for three different versions of the KVORcut03 model: with $\Delta$ baryons only (hyperons are artificially excluded), labeled KVORcut03$\Delta$, and with $\Delta$s and hyperons, incorporated according to the schemes (\[hyp-Hphi\]) and (\[hyp-Hphisig\]), labeled as KVORcut03H$\Delta\phi$ and KVORcut03H$\Delta\phi\sigma$, respectively. In the KVORcut03$\Delta$ model the $\Delta^-$ baryons appear in the BEM for the realistic value of the potential $U_\Delta=-50$MeV at densities $n>n_{{\rm c},\Delta^-} \simeq 5.4\, n_0$. Other $\Delta$ species ($\Delta^0$ and $\Delta^+$) do not appear up to the maximum densities reachable in NS interiors. In the presence of hyperons, i.e., in the KVORcut03H$\Delta\phi$ and KVORcut03H$\Delta\phi\sigma$ models, $\Delta$ baryons do not appear. A similar inhibiting action of hyperons on the $\Delta$ population was also noticed in [@Drago2014]. For the $\Delta$ potential of $-100$MeV, in all models $\Delta^-$s appear at approximately the same density, $n_{{\rm c},\Delta^-}\simeq 2.6\,n_0$. In the KVORcut03H$\Delta\phi$ model $\Delta^-$s appear at the same critical density as $\Lambda$s. In the KVORcut03H$\Delta\phi\sigma$ model $\Delta^-$s appear before hyperons. In both cases, in the presence of hyperons, the $\Delta^-$ concentration remains tiny (it does not exceed 5%). Other $\Delta$ species ($\Delta^0$ and $\Delta^+$) do not appear in the KVORcut03-based models in NSs.
In Fig. \[fig:cut03-Udep\] we show the dependence of the critical densities for the appearance of $\Delta^-$ and $\Delta^0$ baryons (left panel) and those for hyperons (right panel) on the value of the $\Delta$ potential. Vertical bars on the right panel indicate densities at which $n_{{\rm c},\Delta^-}$ coincides with the critical density of the corresponding hyperon species. In the KVORcut03$\Delta$ model the value $n_{{\rm c},\Delta^-}$ decreases monotonically from $n_{{\rm c},\Delta^-}= 5.4 \, n_0$ for $U_\Delta = -50$ MeV to $n_{{\rm c},\Delta^-}= 2.3 \, n_0$ for $U_\Delta = -100$ MeV, and to $n_{{\rm c},\Delta^-}= 1.6 \, n_0$ for $U_\Delta = -150$ MeV; we consider the latter deep potential unrealistic. In the KVORcut03H$\Delta\phi$ model for $U_\Delta > -95$ MeV and in the KVORcut03H$\Delta\phi\sigma$ model for $U_\Delta > -85$ MeV, $\Delta$s do not appear at any relevant densities. For $U_\Delta <-100$ MeV and for $U_\Delta <-85$ MeV the KVORcut03H$\Delta\phi$ and KVORcut03H$\Delta\phi\sigma$ models, respectively, follow the same curve as KVORcut03$\Delta$. This happens because for the KVORcut03H$\Delta\phi$ model at $U_\Delta < -100$MeV (for the KVORcut03H$\Delta\phi\sigma$ model at $U_\Delta < -85$MeV) the critical density for $\Delta^-$ becomes smaller (see the right panel of Fig. \[fig:cut03-Udep\]) than the smallest among the critical densities for hyperons, so the hyperons no longer inhibit the $\Delta$ population. On the right panel we also see that in the KVORcut03H$\Delta\phi$ model the hyperon species appear in the BEM with a growth of the density in the following order: first $\Lambda$s, then $\Xi^-$s, after them $\Sigma^+$s, and $\Xi^0$s as the last ones. In the KVORcut03H$\Delta\phi\sigma$ model the order changes: there are no $\Lambda$s, $\Xi^-$s appear first, then $\Xi^0$s and then $\Sigma^+$s.
![Dependence of the critical density for the $\Delta$ (left panel) and hyperon (right panel) appearance in BEM on the $\Delta$ potential for the KVORcut03$\Delta$, KVORcut03H$\Delta\phi$, and KVORcut03H$\Delta\phi\sigma$ models with the $\Delta$ parameters given by Eq. (\[x-QCU\]). Vertical bars on the right panel indicate densities, at which $n_{{\rm c},\Delta^-}$ coincides with the critical density of the corresponding hyperon species.[]{data-label="fig:cut03-Udep"}](Fig-6-f){width="10cm"}
As demonstrated in Ref. [@Maslov:2015wba], the critical density for the DU processes on nucleons for the KVORcut03 model is $2.85\,n_0$, with the corresponding star mass $1.68\,M_\odot$. The critical star masses for the DU reactions on hyperons in the KVORcut03H$\phi$ and KVORcut03H$\phi\sigma$ models are $1.51\,M_\odot$ and $1.91\,M_\odot$, respectively. So, these models satisfy both the “weak" ($M>1.35 M_{\odot}$) and “strong" ($M>1.5 M_{\odot}$) DU constraints introduced in [@Kolomeitsev:2004ff; @Klahn:2006ir]. The presence of $\Delta$ baryons would shift the critical densities for the appearance of hyperons and, therewith, the critical densities for the processes involving them, e.g., $H\to N+l^-+\bar{\nu}_l$ and $\Delta^-\to\Lambda + e +\bar{\nu}_e$, to even higher values. As pointed out in [@Prakash-DU], the DU processes on $\Delta^-$ ($\Delta^-\to n+ l^- +\bar{\nu}_l$) are forbidden if the DU processes on nucleons are forbidden, because $n_{\Delta^-}<n_p$. Therefore, to understand whether our model with $\Delta$ baryons satisfies the DU constraints, it is sufficient to consider how the presence of $\Delta$ baryons influences the critical density of the nucleon DU reactions. On the left panel of Fig. \[fig:cut03-Udep-1\] we show the critical density $n_{\rm DU}$ and the critical NS mass for the DU reactions on nucleons, $M_{\rm DU}$, as functions of the value of the $\Delta$ potential. For potentials $U_\Delta>-95$MeV in the KVORcut03H$\Delta\phi$ model and $U_\Delta>-93$MeV in the KVORcut03H$\Delta\phi\sigma$ model, $n_{\rm DU}$ is not influenced by the $\Delta$s. For deeper potentials the critical density $n_{\rm DU}$ and the corresponding star mass $M_{\rm DU}$ decrease with a decrease of the potential; $M_{\rm DU}$ becomes smaller than $1.5\,M_\odot$ for $U_\Delta< -109$MeV and $M_{\rm DU}<1.35\,M_\odot$ for $U_\Delta<-125$MeV.
For an unrealistically deep potential $U_\Delta < -110$MeV, $n_{\rm DU}$ and $M_{\rm DU}$ for KVORcut03$\Delta$, KVORcut03H$\Delta\phi$ and KVORcut03H$\Delta\phi\sigma$ models coincide with each other.
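The kinematic side of the DU argument can be made explicit: in degenerate matter the reaction $n\to p+e^-+\bar{\nu}_e$ is open only if momentum can be conserved at the Fermi surfaces, i.e., $p_{{\rm F},n}\le p_{{\rm F},p}+p_{{\rm F},e}$, which for $npe$ matter with $n_e=n_p$ yields the well-known threshold proton fraction $x_p=1/9$. A minimal numerical check of this triangle condition (the densities below are illustrative and are not taken from the model):

```python
import math

HBARC = 197.327  # MeV*fm

def p_fermi(n):
    """Fermi momentum (MeV) of a degenerate species with number density n (fm^-3)."""
    return HBARC * (3.0 * math.pi**2 * n) ** (1.0 / 3.0)

def du_open(n_n, n_p, n_e):
    """Triangle (momentum-conservation) condition for n -> p + e- + anti-nu_e."""
    return p_fermi(n_n) <= p_fermi(n_p) + p_fermi(n_e)

# npe matter with charge neutrality (n_e = n_p) at a total density of 3 n_0
# (n_0 = 0.16 fm^-3 assumed): the threshold proton fraction is 1/9 ~ 0.11
n = 3 * 0.16
for x_p in (0.10, 0.13):
    n_p = x_p * n
    print(x_p, du_open(n - n_p, n_p, n_p))  # -> 0.1 False, then 0.13 True
```

Since $p_{\rm F}\propto n^{1/3}$, the verdict depends only on the concentration ratios, not on the overall density, consistent with the threshold being a pure proton-fraction condition in $npe$ matter.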
On the right panel of Fig. \[fig:cut03-Udep-1\] we show the maximum mass of NSs as a function of the value of the $\Delta$ potential. For the KVORcut03$\Delta$ model $M_{\rm max}$ decreases from $2.17\,M_\odot$ at $U_\Delta =- 50$MeV to $2.13\,M_\odot$ at $U_\Delta\simeq -150$MeV but still remains well above the empirical constraint. For the KVORcut03H$\Delta\phi\sigma$ and especially for the KVORcut03H$\Delta\phi$ model the $U_\Delta$ dependence is very weak. For the KVORcut03H$\Delta\phi\sigma$ model, for $U_\Delta <-85$MeV, the maximum mass slightly decreases with a deepening of the potential but still remains above the empirical constraint, and for the KVORcut03H$\Delta\phi$ model the maximum star mass proves to be on the lower border of the allowed empirical constraint for all $U_\Delta$. Here we would like to pay attention to a peculiar behaviour of $M_{\rm max}(U_\Delta)$ in the interval $-150\,{\rm MeV}<U_\Delta <-130\,{\rm MeV}$: the maximum NS mass slightly increases with the deepening of $U_\Delta$.
![ Critical density and critical NS mass for the DU reactions on nucleons (left panel) and the maximum NS mass (right panel) as functions of the $\Delta$ potential for the KVORcut03$\Delta$, KVORcut03H$\Delta\phi$, and KVORcut03H$\Delta\phi\sigma$ models with the $\Delta$ parameters given by Eq. (\[x-QCU\]). On the left panel the curves for the KVORcut03$\Delta$ and KVORcut03H$\Delta\phi\sigma$ models coincide. The horizontal band on the right panel shows the uncertainty range for the measured mass of PSR J0348+0432 ($2.01\pm 0.04\,M_\odot$). []{data-label="fig:cut03-Udep-1"}](Fig-7-f){width="10cm"}
![The NS mass as a function of the central baryon density in KVORcut03, KVORcut03$\Delta$ (left panel), KVORcut03H$\phi$, KVORcut03H$\Delta\phi$ (middle panel), KVORcut03H$\phi\sigma$, and KVORcut03H$\Delta\phi\sigma$ (right panel) models for $U_\Delta=-150$MeV. The $\Delta$ parameters are taken as in Eq. (\[x-QCU\]). The horizontal band shows the uncertainty range for the mass of PSR J0348+0432 ($2.01\pm 0.04\,M_\odot$).[]{data-label="fig:cut03-Mn"}](Fig-8-f){width="14cm"}
![NS mass-radius plot for the same models as in Figs. \[fig:cut03-conc\] and \[fig:cut03-Mn\] and $U_\Delta=-50$MeV and $-150$MeV together with constraints from thermal radiation of the isolated NS RX J1856 [@Trumper] and from QPOs in the LMXBs 4U 0614+09 [@Straaten]. The $\Delta$ parameters are taken as in Eq. (\[x-QCU\]). The band shows the uncertainty range for the mass of pulsar J0348+0432 [@Antoniadis:2013pzd]. For $U_\Delta=-50$MeV the lines for the KVORcut03H$\phi$ and KVORcut03H$\Delta\phi$ models, and for the KVORcut03H$\phi\sigma$ and KVORcut03H$\Delta\phi\sigma$ models coincide, since $\Delta$s do not appear. []{data-label="fig:cut03-MR"}](Fig-9-f){width="14cm"}
In Refs. [@Drago2014; @Drago:2015cea] the authors argue that the appearance of $\Delta$s in a NS with a given central density $n_{\rm cen}$ results in a notable reduction of the NS mass. We find, however, that in the KVORcut03$\Delta$ model the star mass at a given central density decreases on average by $0.002\,M_\odot$ compared to that for the KVORcut03 model for $U_\Delta=-50$MeV and by $0.02\,M_\odot$ for $U_\Delta=-100$MeV. To see a stronger influence of $\Delta$s on $M$ we should allow for a still stronger $\Delta$ attraction. In Fig. \[fig:cut03-Mn\] we show the dependence of the NS mass on the central density for $U_\Delta=-150$MeV for the KVORcut03 and KVORcut03$\Delta$ (left panel), KVORcut03H$\phi$ and KVORcut03H$\Delta\phi$ (middle panel), and KVORcut03H$\phi\sigma$ and KVORcut03H$\Delta\phi\sigma$ models (right panel). We see that even for the unrealistically deep potential $U_\Delta=-150$MeV the mass reduction does not exceed $0.1\,M_\odot$ for all values of $n_{\rm cen}$, whereas the BEM composition is more sensitive to the value of $U_\Delta$ (see Fig. \[fig:cut03-conc\] and the discussion above).
In Fig. \[fig:cut03-MR\] we compare the mass-radius relations for NSs calculated in the KVORcut03 and KVORcut03$\Delta$ models for $U_\Delta =-50$MeV and $-150$MeV (left panel), for $U_\Delta =-150$MeV in the KVORcut03H$\phi$ and KVORcut03H$\Delta\phi$ models (middle panel), and in the KVORcut03H$\phi\sigma$ and KVORcut03H$\Delta\phi\sigma$ models (right panel). In the latter two cases we show the results for $U_\Delta =-150$MeV only, since for $U_\Delta =-50$MeV $\Delta$ baryons do not appear in these models. We see that in the KVORcut03$\Delta$ model with $U_\Delta =-50$MeV the radius $R$ at fixed $M$ is practically unchanged compared to that in the KVORcut03 model. Even for $U_\Delta =-150$MeV, in all considered models $R$ changes only slightly (by $<0.5$km) for almost all masses. The changes in $R$ at fixed $M$ are larger only for $M>2M_{\odot}$ in the KVORcut03 and KVORcut03$\Delta$ models.
Concluding this section, we summarize that for the chosen realistic value of the $\Delta$ potential ($U_\Delta =-50$MeV) the influence of $\Delta$s in the KVORcut03$\Delta$ model is minor, and in the KVORcut03H$\Delta\phi$ and KVORcut03H$\Delta\phi\sigma$ models $\Delta$s do not appear at all. The hyperons inhibit the appearance of $\Delta$s. Only for a very attractive potential $U_\Delta \sim-150$MeV do the $\Delta$ baryons start contributing sizeably within these models.
MKVOR\*-based models {#sec:MKVOR}
--------------------
The equations of state obtained in the MKVOR- and MKVOR\*-based models are more strongly affected by the $\Delta$ potential than the EoSs in the KVORcut-based models, because the effective nucleon mass in the former two models is smaller at a given density than in the latter models. Therefore, further focusing on the MKVOR\*-based models, we restrict ourselves to the consideration of potentials $U_\Delta >-100$MeV.
![ Effective nucleon mass as a function of the density in the ISM at various values of the $\Delta$ potential. The results obtained in the MKVOR model are shown by the dashed line and the results for the MKVOR$\Delta$ model are shown by solid lines for densities where $\Delta$ baryons can exist. The values of the potential $U_\Delta$ in MeV are indicated by labels on the lines. Horizontal ticks mark the points where the solid lines branch out from the dashed line for the intermediate values of $U_\Delta$.[]{data-label="fig:mkv-meff"}](Fig-10-f){width="5cm"}
In Fig. \[fig:mkv-meff\] we show the effective nucleon mass in the ISM as a function of the density for the MKVOR and MKVOR$\Delta$ models for various values of $U_\Delta$. We see that the effective nucleon mass reaches zero at some density $n=n_{{\rm c},f=1}(U_\Delta)$. Hence, for $n>n_{{\rm c},f=1}(U_\Delta)$ the hadron description of the ISM within the MKVOR$\Delta$-based models is impossible. Thus, the density $n_{{\rm c},f=1}(U_\Delta)$ is the endpoint of our hadronic EoS for a given $U_\Delta$. At this point the MKVOR model should be matched with a quark model in order to proceed to higher densities. To extend the purely hadronic description to higher densities, we minimally modify the $\omega$ sector of the MKVOR model and introduce a cut for $f>f^*_{\omega}=0.95$ according to Eq. (\[etaMKVOR\*\]). We denote the so-modified MKVOR$\Delta$ model as the MKVOR\*$\Delta$ model.
![[*Left panel:*]{} Effective nucleon mass as a function of the density in the ISM for the MKVOR\* model (dashed line) and the MKVOR\*$\Delta$ model (dotted and solid lines) for several values of the potential $U_\Delta$ indicated by labels in MeV. Bold dots show the values of $m^*_N$ related to the critical density $n_{c,\Delta}(U_\Delta)$ at which $\Delta$ baryons may exist in the ISM. [*Middle panel:*]{} Concentration of $\Delta$s in the ISM as a function of the density for the MKVOR\*$\Delta$ model. Full dots show the critical densities and concentrations for the appearance of $\Delta$s. The dash-dotted line connecting the full dots shows $n_{c,\Delta}(U_\Delta)$ as a function of $U_\Delta$, whose variation steps are indicated by vertical bars. [*Right panel:*]{} Pressure as a function of the density in the ISM for the MKVOR\* model (dashed line) and the MKVOR\*$\Delta$ model (dotted and solid lines). The hatched region indicates the nucleon flow constraints [@Danielewicz:2002pu] from heavy-ion collisions.[]{data-label="fig:mkvstar-meff"}](Fig-11-f){width="14cm"}
On the left panel of Fig. \[fig:mkvstar-meff\] we show the effective nucleon mass in the ISM as a function of the density for the MKVOR\* model (dashed line) and for the MKVOR\*$\Delta$ model (solid and dotted lines) for several values of $U_\Delta$. For $U_\Delta>-67$MeV, the effective mass $m_N^*$ in the MKVOR\*$\Delta$ model decreases monotonically with an increase of $n$ and approaches a limiting non-zero value $m_N^*[\rm lim]\simeq 0.079\, m_N$ for large $n$. For potentials deeper than $-67$MeV the curve $m_N^* (n)$ acquires a back-bending segment (dotted lines) between two points with $\rmd m_N^*/\rmd n =\infty$. One of these points is explicitly marked by the bold dot in the main frame on the left panel, whereas the presence of the second point is seen only in the inset, where the curve for $U_\Delta =-100$ MeV is shown. With a further increase of $n$, beyond the back-bending region, $m_N^* (n)$ decreases monotonically in the MKVOR\*$\Delta$ model (solid lines), tending to the same limiting non-zero value as for the MKVOR\* model.
On the middle panel of Fig. \[fig:mkvstar-meff\] we show the $\Delta$ baryon concentrations, $n_{\Delta}$, in the MKVOR\*$\Delta$ model for the ISM as functions of $n$, for the same values of $U_\Delta$ as on the left panel. The back-bending region for $U_\Delta <-67$MeV also manifests itself in this figure (dotted lines) between two points, $n_{\rm L}$ and $n_{\rm R}$ ($n_{\rm L}<n_{\rm R}$), at which $\rmd n_\Delta/\rmd n =\infty$. One of them, $n_{\rm R}$, corresponding to a smaller $n_\Delta$, is shown in the inset only for $U_\Delta=-100$MeV. The point $n_{\rm L}$ corresponds to a higher value of $n_\Delta$ and is indicated by solid dots in the main frame of Fig. \[fig:mkvstar-meff\]. For densities between these points the equation $\mu_N(n,n_\Delta)=\mu_\Delta(n,n_\Delta)$, determining the $\Delta$ abundance as a function of $n$, has several solutions (two or three, depending on the density). The density $n_{\rm L}$ is the smallest density at which the $\Delta$ baryons can exist in the ISM. With the deepening of the potential $U_\Delta$ this critical density is shifted to lower values and the corresponding starting concentration of $\Delta$s increases. For densities $n>n_{\rm L}$ on the upper branch of solutions $n_\Delta(n)$, shown by the solid line, $n_\Delta(n)$ increases monotonically with a density increase; the more attractive the potential $U_\Delta$, the higher the $\Delta$ concentration on this branch. For $U_\Delta\ge -67$MeV the density points $n_{\rm R}$ and $n_{\rm L}$ coalesce and disappear, and the back-bending region disappears too.
On the right panel of Fig. \[fig:mkvstar-meff\] we show the pressure as a function of the density for the MKVOR\* model (dashed line) and for the MKVOR\*$\Delta$ model (dotted and solid lines) for several potentials $U_\Delta$, for densities where $\Delta$s are present. For the MKVOR and MKVOR\* models the pressure $P(n)$ starts violating the particle-flow constraint of [@Danielewicz:2002pu] at $n>4.06\, n_0$ (the dashed line leaves the hatched region). We see that in the MKVOR\*$\Delta$ model with $-83\,{\rm MeV} <U_\Delta <-65$MeV the constraint is fulfilled for densities $n_0<n<4.5\, n_0$. This means that, if the constraint suggested in [@Danielewicz:2002pu] is confirmed by subsequent more detailed analyses, this circumstance could be considered as a constraint on $U_\Delta$. For $U_\Delta >-56$MeV, $P(n)$ undergoes a smooth bend at the critical point for the $\Delta$ appearance. Such a behaviour is typical for a third-order phase transition. On the contrary, for $U_\Delta <-56$MeV, the curve $P(n)$ demonstrates the behaviour typical for a first-order phase transition, with three solutions of the equation $P(n)=P_0={\rm const}$ in some interval of $P_0$. For $-67<U_\Delta <-56$MeV there exists an ordinary spinodal region with a negative incompressibility. Interestingly, for potentials $U_\Delta$ deeper than $-67$MeV there appears a specific back bending of the $P(n)$ curve for densities $n_{\rm L}<n< n_{\rm R}$, with $n_{\rm L(R)}$ introduced above. Note that at these densities we have $\rmd P/\rmd n =\infty$; $n_{\rm L}$ is marked by the dot in the main frame of the figure, and the presence of the second point, $n_{\rm R}$, is exemplified in the inset. There are two narrow spinodal regions close to these points, while the curve connecting them (dotted line) has a positive incompressibility.
A thermodynamical equilibrium between the states with and without $\Delta$s is established along a line on the $P$–$n$ diagram connecting points of equal pressures and equal baryon chemical potentials of both states: $P(n_1^{\rm MC})=P(n^{\rm MC}_2)$ and $\mu (n_1^{\rm MC})=\mu(n_2^{\rm MC})$. These Maxwell-construction (MC) lines are depicted by short-dashed lines on the right panel of Fig. \[fig:mkvstar-meff\]. Note that a back-bending behaviour of $P(n)$ has been found for ordinary RMF models in the ISM at high temperatures [@Glendenning87]. For $U_\Delta =-91.4$MeV the curve $P(n)$ touches the zero line at $n=3.15\,n_0$. For $U_\Delta <-91.4$MeV the function $P(n)$ crosses zero at two values of the density for $n>n_0$. One of these zeros (the left one) corresponds to an unstable state, the other (the right one) to a metastable state. Note that a first-order phase transition owing to the appearance of $\Delta$s, which we obtained within the MKVOR\*$\Delta$ model for $U_\Delta <-56$ MeV, could manifest itself as an increase of the pion yield at typical energies and momenta corresponding to the $\Delta\to \pi N$ decays in heavy-ion collision experiments.
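For a single-component EoS the double condition $P(n_1^{\rm MC})=P(n_2^{\rm MC})$, $\mu(n_1^{\rm MC})=\mu(n_2^{\rm MC})$ is equivalent to the equal-area rule and can be solved by nested bisection. The following sketch does this for a reduced van der Waals EoS (chosen purely for illustration; the MKVOR\*$\Delta$ functional itself is not reproduced here):

```python
import math

def pressure(v, T):
    """Reduced van der Waals pressure P(v, T)."""
    return 8.0 * T / (3.0 * v - 1.0) - 3.0 / v**2

def pressure_integral(v, T):
    """Antiderivative of pressure(v): enters the equal-area condition."""
    return (8.0 * T / 3.0) * math.log(3.0 * v - 1.0) + 3.0 / v

def bisect(f, a, b, it=200):
    """Root of f on [a, b], assuming a sign change."""
    fa = f(a)
    for _ in range(it):
        m = 0.5 * (a + b)
        if fa * f(m) <= 0.0:
            b = m
        else:
            a, fa = m, f(m)
    return 0.5 * (a + b)

def spinodal(T):
    """Volumes of the local minimum and maximum of P(v), i.e. dP/dv = 0."""
    dP = lambda v: -24.0 * T / (3.0 * v - 1.0) ** 2 + 6.0 / v**3
    roots, v_prev = [], 0.40
    for i in range(1, 400):
        v = 0.40 + 0.01 * i
        if dP(v_prev) * dP(v) < 0.0:
            roots.append(bisect(dP, v_prev, v))
        v_prev = v
    return roots[0], roots[1]

def maxwell(T):
    """Coexistence pressure and volumes: P(v1) = P(v2) = P*, mu(v1) = mu(v2)."""
    v_lo, v_hi = spinodal(T)
    def roots(Ps):
        v1 = bisect(lambda v: pressure(v, T) - Ps, 1.0 / 3.0 + 1e-9, v_lo)
        v2 = bisect(lambda v: pressure(v, T) - Ps, v_hi, 60.0)
        return v1, v2
    def area_diff(Ps):  # equal-area form of the equal-chemical-potential condition
        v1, v2 = roots(Ps)
        return pressure_integral(v2, T) - pressure_integral(v1, T) - Ps * (v2 - v1)
    Ps = bisect(area_diff, max(pressure(v_lo, T), 1e-6), pressure(v_hi, T))
    return (Ps,) + roots(Ps)

print(maxwell(0.9))  # coexistence pressure and volumes at reduced T = 0.9
```

The same two-condition root finding applies directly to a back-bent $P(n)$ of the kind discussed above; only the bracketing of the multiple roots has to respect the extra spinodal points.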
![ Paths of a first-order phase transition in the ISM for MKVOR\*$\Delta$ model for $U_\Delta =-90$MeV illustrated in various thermodynamical quantities. [*Panel A:*]{} Pressure $P(n)$ and chemical potential $\mu(n)$ as function of density for equilibrium concentration of $\Delta$ baryons following from Eq. (\[eq4muD\]). [*Panel B:*]{} Normalized energy density $E(n,n_\Delta)$ as a function of $\Delta$ concentration for a fixed total density indicated by labels (in $n_0$). [*Panel C:*]{} Pressure as a function of the chemical potential for the equilibrium $\Delta$ concentration. [*Panel D:*]{} Energy per baryon $E/n-m_N$ vs. total density for the equilibrium $\Delta$ concentration. Line styling of the corresponding parts of the curves is the same on all panels, e.g., thick lines show the equilibrium evolution of the system through the MC. []{data-label="Fig-P-new"}](Fig-12-f){width="14cm"}
In the case of a usual van der Waals EoS there is no back-bending region of the $P(n)$ curve for any density, and in the corresponding spinodal region the incompressibility is negative. In our case of the MKVOR\*$\Delta$ model the usual spinodal region exists only for potentials $-67\,{\rm MeV}<U_\Delta <-56$MeV. As we have mentioned, for $U_\Delta <-67$MeV, besides a spinodal region there appears an unusual back-bending region, where the incompressibility is again positive (between two points at which $dP/d n =\infty$). It is interesting to study this phenomenon in more detail. Therefore, in Fig. \[Fig-P-new\] we present various thermodynamic quantities in the phase-transition region for the MKVOR\*$\Delta$ model for $U_\Delta=-90$MeV. For this potential the pressure is positive for any density $n>n_0$.
On panel A of Fig. \[Fig-P-new\] we show $P(n)$ and $\mu_N (n)$. On panel B we illustrate the dependence of the energy density on the $\Delta$ concentration. On panel C we present the $P(\mu)$ dependence. On panel D the energy per particle is plotted as a function of the density. All these quantities are calculated for the equilibrium concentration of $\Delta$ baryons. Bold curves on all panels demonstrate the path of the system being at equilibrium. The horizontal segments on panels A and C corresponding to $P=P^{\rm MC}=49.6\,{\rm MeV/fm^3}$ and $\mu_N =\mu_{N}^{\rm MC}=1070$MeV are the MC lines; on panel C they correspond to a point labeled MC. The difference in the energy per particle and in the $\Delta$ concentration between the end points of the MC line can be inferred from the positions of the MC points on panels D and B, respectively. Labels “$\Delta$" and “no $\Delta$" mark the parts of the equilibrium curve (thick solid lines) with and without $\Delta$ baryons, respectively. Along the MC line on panel A one can speak only about an averaged density of the matter, which varies between $n_1^{\rm MC}=2.84\,n_0$ and $n_2^{\rm MC}=3.63\,n_0$ according to the equation $n=\bar{n}=n_1^{\rm MC}(1-f_\Delta)+f_\Delta n_2^{\rm MC}$, where $f_\Delta$ is the relative fraction of the volume occupied by the “$\Delta$” phase. The $\Delta$ concentration rises from $x_{\Delta,1}=0$ at the beginning of the MC line to $x_{\Delta,2}=0.43$ according to the equation $\bar{x}_\Delta=x_{\Delta,2} (n_2^{\rm MC}/\bar{n})(\bar{n}-n_1^{\rm MC})/(n_2^{\rm MC}-n_1^{\rm MC})$. To clarify the balance between the phases with and without $\Delta$s beyond the MC line, let us consider the system at two fixed pressures, $P=P_1>P^{\rm MC}$ and $P=P_2<P^{\rm MC}$ (short-dashed lines on panel A).
In the former case the system, being initially placed in state 1 without $\Delta$s (on the dash-dotted line) or in state $1''$ with a low $\Delta$ concentration, should after a while come to the stable state $1'$ (on the thick solid line) with an equilibrium concentration of $\Delta$s, since $\mu''_1>\mu_1>\mu'_1$. The corresponding chemical potentials are also indicated on the graphs $\mu(n)$ and $P(\mu)$. The state 1 with $P_1$ and $\mu_1$ corresponds to the state usually called an “overheated gas”. Similarly, if at the fixed pressure $P_2$ one starts in state $2'$ ($P_2,\,\mu'_2$) on the “$\Delta$” part of the thick solid curve with a large $\Delta$ concentration, the system will evolve to state 2 without $\Delta$s, since $\mu'_2>\mu_2$. The same happens if one starts in an intermediate state $2''$ on the back-bent piece of the solid line, since $\mu''_2>\mu'_2>\mu_2$. Continuing the analogy with the ordinary liquid-gas phase transition, state $2'$ can be called a “supercooled liquid". In equilibrium $P(\mu)$ should be maximal; hence the system undergoing a first-order phase transition follows in equilibrium the path shown by thick lines on panel C.
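The mixed-phase bookkeeping along the MC line quoted above is pure lever-rule arithmetic; a short sketch with the quoted endpoint values (densities in units of $n_0$, illustrative only):

```python
n1, n2 = 2.84, 3.63   # MC endpoint densities in units of n0 (values from the text)
x_d2 = 0.43           # Delta concentration at the upper MC endpoint

def mixed_phase(f_delta):
    """Average density and Delta concentration for a volume fraction
    f_delta (0 <= f_delta <= 1) occupied by the 'Delta' phase."""
    n_bar = n1 * (1.0 - f_delta) + f_delta * n2
    x_bar = x_d2 * (n2 / n_bar) * (n_bar - n1) / (n2 - n1)
    return n_bar, x_bar

# the endpoints reproduce the pure phases:
print(mixed_phase(0.0))  # -> (2.84, 0.0)
print(mixed_phase(1.0))  # -> (3.63, 0.43)
```

The factor $n_2/\bar{n}$ converts the volume fraction into a particle-number fraction, which is why $\bar{x}_\Delta$ is not simply linear in $\bar{n}$.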
To illustrate how the system chooses the appropriate concentration of $\Delta$ baryons, we consider the energy density of the system, $E(n,n_\Delta)$, as a function of $n_\Delta$ for various fixed values of the total density $n$. On panel B we plot the dimensionless ratio $E(n,n_\Delta)/E(n,0)$ to get rid of the common $n$ dependence. For densities $n{\stackrel{\scriptstyle <}{\phantom{}_{\sim}}}3.171\,n_0$ the curve increases monotonically with an increase of $n_\Delta$, with the global minimum at $n_\Delta =0$, which corresponds to the “no $\Delta$” curve on panel A. The density $n\approx 3.171\,n_0$ corresponds to the point $\rmd P/\rmd n=\infty$ and $\rmd \mu/\rmd n=\infty$ on panel A. For $3.171\,n_0{\stackrel{\scriptstyle <}{\phantom{}_{\sim}}}n {\stackrel{\scriptstyle <}{\phantom{}_{\sim}}}3.258\,n_0$, the curve $E(n={\rm const}, n_\Delta)$ has two local extrema, at which $\partial E(n,n_\Delta)/\partial n_\Delta=\mu_\Delta-\mu_N=0$; therefore, they correspond to the chemical equilibrium between $\Delta$s and nucleons in the ISM \[see Eq. (\[eq4muD\])\]. One extremum (at a smaller value of $n_\Delta$) is a local maximum of the energy density and the second one is a local minimum. The energy density at this minimum is, however, still higher than for $n_\Delta=0$, so the state without $\Delta$s is energetically preferable; see also panel D, where the “nose” formed by the two solutions with $n_\Delta\neq 0$ is above the dash-dotted line for $n_\Delta=0$ at $n<3.258\,n_0$. At $n\approx 3.258\,n_0$ the energy densities of the ISM without $\Delta$s and with the $\Delta$ concentration $n_\Delta/n\approx0.38$ become equal. This situation is shown on panel B by the curve labeled E and by the dots with label E on panels A, C, and D. For all higher densities the “$\Delta$" state is preferable, since its energy is smaller, and the $\Delta$ concentration increases with a growth of the density.
On panel B, in the density interval $3.389\,n_0<n<3.3957\,n_0$, there exist two local minima of $E$: one at a tiny concentration $n_\Delta/n{\stackrel{\scriptstyle <}{\phantom{}_{\sim}}}0.005$ (see the lower graph on panel D) and the other, much deeper one at $n_\Delta/n\sim 0.4$. The former state is metastable and the latter is stable. This density range corresponds to the spinodal instability region shown in the inset on the $P(n)$ graph on panel A. The dashed line connecting the extrema of $E(n={\rm const},n_\Delta)$ on panel B is related to the back-bending pieces on panel A. For densities $n{\stackrel{\scriptstyle >}{\phantom{}_{\sim}}}3.3957\,n_0$ there remains only one global minimum, at large $\Delta$ concentrations. On panel D the curve between the two MC points is determined by the lever-rule condition $\bar{\cal E}_\Delta={\cal E}_1\, (n_1^{\rm MC}/\bar{n})(n_2^{\rm MC}-\bar{n})/(n_2^{\rm MC}-n_1^{\rm MC})+ {\cal E}_2\, (n_2^{\rm MC}/\bar{n})(\bar{n}-n_1^{\rm MC})/(n_2^{\rm MC}-n_1^{\rm MC})$, where ${\cal E}_1=(E(n)/n)|_{n_1^{\rm MC}}$ on the curve “no $\Delta$" and ${\cal E}_2=(E(n)/n)|_{n_2^{\rm MC}}$ on the curve “$\Delta$".
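The way the system selects between the two minima of $E(n={\rm const},n_\Delta)$ can be mimicked with a toy double-well energy (a deliberately schematic stand-in, not the model functional): as a tilt parameter, playing the role of the density, is varied, the position of the global minimum jumps discontinuously, which is the hallmark of a first-order transition.

```python
def toy_energy(x, c):
    """Toy energy vs. Delta fraction x: a double well with minima near
    x = 0 and x = 1, tilted by a control parameter c (stand-in for n)."""
    return x**2 * (x - 1.0)**2 + c * x

def equilibrium_x(c, m=2001):
    """Global minimum over x in [0, 1] on a grid; the stationarity
    condition dE/dx = 0 plays the role of mu_Delta = mu_N."""
    xs = [i / (m - 1) for i in range(m)]
    return min(xs, key=lambda x: toy_energy(x, c))

print(equilibrium_x(+0.05))  # -> 0.0 : the 'no Delta' minimum is global
print(equilibrium_x(-0.05))  # -> 1.0 : the 'Delta' minimum is global
```

For $c>0$ the minimum at $x=0$ wins; for $c<0$ the minimum near $x=1$ wins, so the equilibrium $x$ jumps by a finite amount as $c$ crosses zero, just as the equilibrium $\Delta$ concentration jumps across the MC line.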
![Baryon concentrations and magnitude of the scalar field, $f(n)$, in the BEM for the MKVOR\*$\Delta$, MKVOR\*H$\Delta\phi$, and MKVOR\*H$\Delta\phi\sigma$ models for $U_{\Delta}=-50$ MeV (upper row) and $U_{\Delta}=-100$ MeV (lower row). The results are obtained with $\Delta$ parameters taken as in Eq. (\[x-QCU\]). []{data-label="fig:mkv-conc"}](Fig-13-f){width="14cm"}
In the BEM all results for the MKVOR- and MKVOR\*-based models coincide. In Fig. \[fig:mkv-conc\] we demonstrate $f(n)$ and the baryon concentrations in the MKVOR\*$\Delta$, MKVOR\*H$\Delta\phi$ and MKVOR\*H$\Delta\phi\sigma$ models in the BEM for two values of the $\Delta$ potential: $U_\Delta = - 50$ MeV and $U_\Delta = -100$ MeV. In all these models $f(n)$ first increases with an increase of the density and for $n{\stackrel{\scriptstyle >}{\phantom{}_{\sim}}}3n_0$ becomes approximately constant (about 0.6). In the MKVOR\*$\Delta$ model $\Delta^-$s appear at the density $2.51\, n_0$ for $U_\Delta = - 50$ MeV and at $1.74 \, n_0$ for $U_\Delta = -100$ MeV. Then the $\Delta^-$ concentration increases significantly with an increase of $n$. In both the MKVOR\*H$\Delta\phi$ and MKVOR\*H$\Delta\phi\sigma$ models $\Delta^-$s appear at smaller densities than hyperons, but their presence does not substantially change the NS composition compared to the case without $\Delta$s, cf. Fig. 25 in [@Maslov:2015wba]. With an increase of the $\Delta$ attraction from $-50$MeV to $-100$MeV we observe in all models a decrease of the critical density $n_{\rm c,\Delta^-}$ from $\sim 2.5\,n_0$ to $\sim 1.7\,n_0$. In the MKVOR\*H$\Delta\phi$ model, with a density increase there appear first $\Lambda$ and then $\Xi^-$ hyperons. The critical densities of their appearance increase with a decrease of $U_\Delta$. In the MKVOR\*H$\Delta\phi\sigma$ model only $\Xi^-$ hyperons arise. For $U_\Delta = - 100$ MeV, in all models there also appears a small fraction of $\Delta^0$s in the centers of the most massive NSs.
![The critical density for the appearance of $\Delta$ baryons (left panel) and hyperons (right panel) in BEM as a function of the $\Delta$ potential for the MKVOR\*H$\Delta\phi$, and MKVOR\*H$\Delta\phi\sigma$ models with the $\Delta$ parameters given by Eq. (\[x-QCU\]).[]{data-label="fig:mkv-Udep-nc"}](Fig-14-f){width="10cm"}
In Fig. \[fig:mkv-Udep-nc\] we demonstrate the dependence of the critical densities for the appearance of $\Delta$ baryons (left panel) and hyperons (right panel) on the value of the $\Delta$ potential. In the MKVOR\*$\Delta$-based models the critical density for $\Delta^-$ baryons depends much more weakly on $U_\Delta$ than in the KVORcut03$\Delta$-based models and is systematically smaller, cf. Fig. \[fig:cut03-Udep\]. The critical densities for $\Delta^{0}$ are also smaller in the MKVOR\*$\Delta$-based models. $\Delta^+$ and $\Delta^{++}$ baryons do not appear in any of the models even in the most massive NSs but could arise if $U_{\Delta}$ were deeper. The early appearance of $\Delta^-$s in the MKVOR\*$\Delta$-based models shifts $n_{\rm c,\Lambda}$ and $n_{\rm c,\Xi^-}$ to higher values; the deeper the potential $U_\Delta$, the stronger the shift.
![[*Left panel:*]{} Gravitational-baryon NS mass constraint for MKVOR\* and MKVOR\*$\Delta$ models. The double-hatched rectangle is the constraint for the pulsar J0737-3039(B) [@Podsiadlowski]. The two empty rectangles show the variation of the constraint, when the assumed loss of the baryon mass during the progenitor-star collapse amounts to $0.3\% M_\odot$ and $1\% M_\odot$. [*Right panel:*]{} Baryon mass as a function of the $\Delta$ potential for the NS with $M_{\rm G} = 1.249M_{\odot}$ for the MKVOR\*$\Delta$ model. Double-hatched and empty bands show the corresponding experimental constraints. []{data-label="fig:pods"}](Fig-15-f){width="10cm"}
Studies of pulsar B in the double pulsar system J0737-3039 [@Podsiadlowski] suggested a test of the nuclear matter EoS, provided that the formation mechanism of the PSR J0737-3039 system and the assumption of a negligible baryon loss of companion B during its creation are valid. In Fig. \[fig:pods\] we show the gravitational mass $M_{\rm G}$ versus the baryon mass $M_{\rm B}$ of a NS. The double-hatched rectangle (left panel) and band (right panel) show the constraint from [@Podsiadlowski]. The two empty rectangles on the left panel show the allowed variation of the constraint due to the assumed loss of the baryon number during the progenitor-star collapse equal to $0.3\% M_{\odot}$ (see the corresponding empty band on the right panel) and to $1\% M_{\odot}$. Approximately the same constraint box (from $0.3\% M_{\odot}$ to $1\% M_{\odot}$) was proposed in the work [@Kitaura:2005bt], whose authors found in their model that the mass loss of the collapsing O–Ne–Mg core during the explosion leaves the NS with a baryon mass of $M=1.36 \pm 0.002M_{\odot}$. However, many EoSs do not satisfy even this weaker constraint, see Ref. [@Klahn:2006ir]. The KVORcut03 curve marginally matches this “weak" constraint, cf. Fig. 17 in [@Maslov:2015wba]. Note that the curves for all KVORcut03-based models (with the inclusion of hyperons and $\Delta$s) for $U_\Delta >-100$ MeV coincide with the curve for the KVORcut03 model. The MKVOR model marginally fits the “strong" constraint (the curve touches the left boundary of the hatched box, cf. [@Maslov:2015wba]). For the MKVOR\*$\Delta$ model the agreement with the strong constraint improves, the more so the more attractive the assumed $\Delta$ potential is. A similar behaviour was also observed in Ref. [@Drago:2015cea]. We also allowed for a variation of $x_{\omega\Delta}$ and $x_{\rho\Delta}$ within the limits $0.9\leq x_{\omega\Delta},x_{\rho\Delta}\leq 1$. This dependence is shown in the figure.
![NS mass as a function of the central baryon density in the MKVOR\*, MKVOR\*$\Delta$ (left panel), MKVOR\*H$\phi$ and MKVOR\*H$\Delta\phi$ (middle panel), and MKVOR\*H$\phi\sigma$ and MKVOR\*H$\Delta\phi\sigma$ (right panel) models with the $\Delta$ parameters taken as in (\[x-QCU\]) for $U_{\Delta}=-50$MeV (solid lines) and $-100$MeV (dashed lines). The horizontal band on the right panel shows the uncertainty range for the mass of PSR J0348+0432 ($2.01\pm 0.04\,M_\odot$). []{data-label="fig:mkv-Mn"}](Fig-16-f){width="14cm"}
Figure \[fig:mkv-Mn\] shows the NS masses as functions of the central density with and without $\Delta$s for the MKVOR\*$\Delta$, MKVOR\*H$\Delta\phi$, and MKVOR\*H$\Delta\phi\sigma$ models. Although the presence of $\Delta$s affects the NS composition substantially, the star mass changes rather weakly. For a realistic value of the $\Delta$ potential, $U_\Delta=-50$MeV, the decrease of the NS mass is tiny. For a deep $\Delta$ potential, $U_\Delta=-100$MeV, the change of the NS mass does not exceed $0.2\,M_\odot$. The maximum NS mass changes even less, by ${\stackrel{\scriptstyle <}{\phantom{}_{\sim}}}0.05\,M_\odot$ only, so the maximum mass constraint is safely fulfilled even after the inclusion of $\Delta$ baryons and hyperons.
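The NS mass–central density curves discussed here follow from integrating the Tolman–Oppenheimer–Volkoff (TOV) equations for the model EoS. As a minimal illustration of that step (not the EoS of this paper), the sketch below integrates the TOV equations for a toy polytrope $P=K\varepsilon^2$ in geometrized units; the function name, the value $K=100\,{\rm km}^2$, and the central energy density used below are illustrative assumptions only.

```python
import numpy as np

MSUN_KM = 1.4766  # one solar mass in km (G = c = 1)

def tov_mass_radius(eps_c, K=100.0, dr=1e-3):
    """Integrate the TOV equations outward until the pressure vanishes.

    eps_c : central energy density in km^-2; K in km^2 (toy EoS P = K*eps^2).
    Returns (gravitational mass in M_sun, radius in km).
    """
    r = 1e-3                              # start slightly off-center
    P = K * eps_c**2                      # central pressure
    m = 4.0 / 3.0 * np.pi * r**3 * eps_c  # enclosed mass of the first shell
    P_stop = 1e-12 * P
    while P > P_stop:
        eps = np.sqrt(max(P, 0.0) / K)    # invert the toy EoS
        dPdr = -(eps + P) * (m + 4.0 * np.pi * r**3 * P) / (r * (r - 2.0 * m))
        dmdr = 4.0 * np.pi * r**2 * eps
        P += dPdr * dr                    # forward-Euler step
        m += dmdr * dr
        r += dr
    return m / MSUN_KM, r
```

For central energy densities of a few times nuclear density (of order $10^{-3}\,{\rm km}^{-2}$ in these units) this toy model yields masses and radii in the typical NS ballpark, which is all the sketch is meant to show.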
![The critical density and the critical NS mass for the DU reactions on nucleons (left panel) and the maximum masses of the NSs (right panel) as functions of the $\Delta$ potential for the MKVOR\*$\Delta$, MKVOR\*H$\Delta\phi$, and MKVOR\*H$\Delta\phi\sigma$ models with $\Delta$ parameters given by Eq. (\[x-QCU\]). Lines for MKVOR\*$\Delta$ and MKVOR\*H$\Delta\phi\sigma$ models coincide. The horizontal band on the right panel shows the uncertainty range for the mass of PSR J0348+0432 ($2.01\pm 0.04\,M_\odot$). []{data-label="fig:mkv-Udep-nd-mm"}](Fig-17-f){width="10cm"}
The critical density and the critical NS mass for the DU reactions on nucleons in BEM are shown in the left panel of Fig. \[fig:mkv-Udep-nd-mm\] as functions of the $\Delta$ potential. The general trend is the same as for the KVORcut03-based models: the deepening of the $U_\Delta$ potential leads to a larger proton concentration and an earlier onset of the DU reactions on nucleons. The DU constraint $M_{\rm DU}>1.35\,M_\odot$ proves to be fulfilled for $U_\Delta {\stackrel{\scriptstyle >}{\phantom{}_{\sim}}}-96$MeV, and the constraint $M_{\rm DU}>1.5\,M_\odot$ holds for $U_\Delta {\stackrel{\scriptstyle >}{\phantom{}_{\sim}}}-88$MeV. As seen in the right panel of Fig. \[fig:mkv-Udep-nd-mm\], the maximum mass of the NS decreases only slightly with a deepening of the potential $U_\Delta$ and remains substantially larger than the maximum among the well-measured pulsar masses ($2.01\pm 0.04\,M_\odot$ for PSR J0348+0432).
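The DU thresholds quoted above come from the model's proton fraction; the kinematic origin of such thresholds can be checked independently. The direct Urca reaction $n\to p+e^-+\bar\nu$ requires the Fermi momenta to satisfy $k_{Fn}\le k_{Fp}+k_{Fe}$, which for charge-neutral $npe$ matter gives the classic threshold proton fraction $x_{\rm DU}=1/9$. A minimal numerical check (the function name is illustrative):

```python
def du_threshold_npe():
    """Threshold proton fraction x = n_p/n for DU in charge-neutral npe matter.

    At threshold k_Fn = k_Fp + k_Fe with k_F ~ n^(1/3) and n_e = n_p, i.e.
    (1 - x)^(1/3) = 2 x^(1/3), whose root is the classic x_DU = 1/9.
    """
    f = lambda x: (1.0 - x) ** (1.0 / 3.0) - 2.0 * x ** (1.0 / 3.0)
    lo, hi = 1e-9, 0.5                 # f(lo) > 0 > f(hi): bracket the root
    for _ in range(200):               # plain bisection
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)
```

Including muons raises this threshold to about $0.14$; in the models above the relevant quantity is the density (and hence NS mass) at which the computed proton fraction crosses such a threshold.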
![NS mass as a function of radius for MKVOR\*, MKVOR\*$\Delta$ (left panel), MKVOR\*H$\phi$, MKVOR\*H$\Delta\phi$ (middle panel), MKVOR\*H$\phi\sigma$ and MKVOR\*H$\Delta\phi\sigma$ (right panel) models with the $\Delta$ parameters taken as in Eq. (\[x-QCU\]), for $U_{\Delta}=-50$MeV and $-100$MeV. The empirical constraints are the same as in Fig. \[fig:cut03-MR\]. []{data-label="fig:mkv-Udep-mr"}](Fig-18-f){width="14cm"}
Finally, Fig. \[fig:mkv-Udep-mr\] shows that the inclusion of $\Delta$s in MKVOR\*-based models, with or without hyperons, does not noticeably change the mass-radius relation for NSs for $U_\Delta=-50$MeV. For $U_\Delta=-100$MeV the radius of the NS with the mass $1.5\,M_\odot$ decreases by $\sim 0.5$km.
Concluding, the MKVOR\*-based models with $\Delta$ baryons included with a realistic value of the $\Delta$ potential, $U_\Delta=-50$MeV, remain in agreement with the astrophysical constraints, as do the models without $\Delta$s. In ISM the influence of $\Delta$s on the EoS proves to be stronger than in BEM, since in the ISM the effective baryon mass is smaller than in the BEM at the same baryon density.
Additional variation of $\Delta$ parameters {#sec:variation}
===========================================
The relation $g_{\omega\Delta}=g_{\rho\Delta}$ that follows from SU(6) symmetry can be relaxed if one assumes SU(3) symmetry. The SU(3) symmetrical Lagrangian involving the baryon decuplet $\Delta^{abc}_\nu$ and the vector-meson nonet $(V_\mu)^a_b$ ($a,b,c=1,2,3$ are the indices in the SU(3) flavor space) has only two terms with a vector coupling, $$\begin{aligned}
\mathcal{L}_{\Delta V}=
g_0\left(\bar{\Delta}_{acd}^\nu\gamma^\mu \Delta^{acd}_\nu\right) (V_\mu)^b_b+g_1\left(\bar{\Delta}_{acd}^\nu\gamma^\mu \Delta^{bcd}_\nu (V_\mu)^a_b \right)\,,\end{aligned}$$ where the summation over the indices is implied. With the standard definitions of the SU(3) multiplets as, e.g., in [@LutzK2002], we find the relations $$\begin{aligned}
g_{\om\Delta}=g_1+2g_0\,,\quad g_{\rho\Delta}=\frac23 g_1\,,\quad g_{\phi\Delta}=\sqrt{2}\, g_0\,.\end{aligned}$$ Taking into account the Okubo–Zweig–Iizuka suppression [@Okubo] of the $\phi$-meson coupling to non-strange baryons and requiring, therefore, $g_{\phi\Delta}=0$, we find the relation $g_{\rho\Delta}=\frac23 g_{\om\Delta}$. This relation can be rewritten as $$\begin{aligned}
x_{\rho\Delta}=\frac23 x_{\om\Delta}\frac{C_{\om}m_\om}{C_{\rho}m_\rho}
\,.
\label{x-QCUR}\end{aligned}$$ With the parameters for the models from Tables \[tab:param-KVORcut03\] and \[tab:param-MKVOR\] we get $x_{\rho \Delta}=0.63\,x_{\om \Delta}$ for the KVORcut03 model and $x_{\rho \Delta}=0.87\,x_{\om \Delta}$ for the MKVOR model, instead of the relation $x_{\om\Delta}=x_{\rho \Delta}=1$ that we used when exploiting the SU(6) symmetry. Therefore, to check the sensitivity of the results to these poorly known parameters, we now allow for a variation of $x_{\om\Delta}$ and $x_{\rho \Delta}$ near unity.
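The algebra behind the relation $g_{\rho\Delta}=\frac23 g_{\om\Delta}$ is short enough to verify mechanically: setting $g_{\phi\Delta}=\sqrt{2}\,g_0=0$ forces $g_0=0$, and the first two relations then fix the ratio. A minimal sketch (the function name and the overall scale of $g_1$ are illustrative):

```python
from fractions import Fraction

# SU(3) relations quoted in the text:
#   g_omD  = g1 + 2*g0,   g_rhoD = (2/3)*g1,   g_phiD = sqrt(2)*g0.
# OZI suppression demands g_phiD = 0, hence g0 = 0 and g_rhoD = (2/3)*g_omD.

def delta_vector_couplings(g0, g1):
    g_omD = g1 + 2 * g0
    g_rhoD = Fraction(2, 3) * g1
    return g_omD, g_rhoD

# With g0 = 0 the ratio is exactly 2/3, independent of the scale of g1:
g_omD, g_rhoD = delta_vector_couplings(0, Fraction(3))
assert g_rhoD == Fraction(2, 3) * g_omD
```

The numerical ratios $x_{\rho\Delta}/x_{\om\Delta}=0.63$ and $0.87$ quoted above then follow from Eq. (\[x-QCUR\]) once the model-specific values of $C_\om$, $C_\rho$ and the meson masses from the tables are inserted.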
In Fig. \[fig:cut03xr-xom\] we show the maximum NS mass as a function of the parameter $x_{\rho\Delta}$ at $x_{\om\Delta}=1$ and $U_{\Delta}=-100$MeV (left panel) and of the parameter $x_{\om\Delta}$ at $x_{\rho\Delta}=1$ and $U_{\Delta}=-100$MeV (right panel) for the KVORcut03$\Delta$-based models. We see that for the models with hyperons — KVORcut03H$\Delta\phi$ and KVORcut03H$\Delta\phi\sigma$ — the maximum NS mass is rather insensitive to the variation of $x_{\om\Delta}$ and $x_{\rho\Delta}$, whereas the maximum NS mass in the KVORcut03$\Delta$ model is more sensitive to these variations. We verified that for $U_{\Delta}=-50$MeV, $\Delta$ baryons do not appear in the KVORcut03H$\Delta\phi$ and KVORcut03H$\Delta\phi\sigma$ models, and that the dependence on the $x_{\om\Delta}$ and $x_{\rho\Delta}$ parameters in the KVORcut03$\Delta$ model is weaker for $U_{\Delta}=-50$MeV than for $U_{\Delta}=-100$MeV. For all relevant values of the coupling parameters the KVORcut03$\Delta$ and KVORcut03H$\Delta\phi\sigma$ models appropriately fulfill the maximum NS mass constraint. The KVORcut03H$\Delta\phi$ model and the KVORcut03H$\phi$ model without $\Delta$s fulfill this constraint only marginally.
![ Maximum NS mass as a function of the parameter $x_{\rho\Delta}$ at $x_{\om\Delta}=1$ and $U_{\Delta}=-100$MeV (left panel) and of the parameter $x_{\om\Delta}$ at $x_{\rho\Delta}=1$ and $U_{\Delta}=-100$MeV (right panel) for KVORcut03-based models. The horizontal band shows the uncertainty range for the mass of PSR J0348+0432 ($2.01\pm 0.04\,M_\odot$). []{data-label="fig:cut03xr-xom"}](Fig-19-f){width="10cm"}
In Fig. \[fig:mkv-xo\] we show the maximum NS mass as a function of the parameter $x_{\om\Delta}$ at $x_{\rho\Delta}=1$ for $U_{\Delta}=-50$MeV (left panel) and for $U_{\Delta}=-100$MeV (right panel) for the MKVOR\*-based models. In Fig. \[fig:mkv-xr\] we show the maximum NS mass as a function of the parameter $x_{\rho\Delta}$ at $x_{\om\Delta}=1$ for the same two values of $U_{\Delta}$. Here, all the models MKVOR\*$\Delta$, MKVOR\*H$\Delta\phi$, and MKVOR\*H$\Delta\phi\sigma$ appropriately fulfill the maximum NS mass constraint in the whole range of the varied parameters.
![Maximum NS mass as a function of the parameter $x_{\om\Delta}$ at $x_{\rho\Delta}=1$ for $U_{\Delta}=-50$MeV (left panel) and for $U_{\Delta}=-100$MeV (right panel) for MKVOR\*-based models. The horizontal band shows the uncertainty range for the mass of PSR J0348+0432 ($2.01\pm 0.04\,M_\odot$). []{data-label="fig:mkv-xo"}](Fig-20-f){width="10cm"}
![Maximum NS mass as a function of the parameter $x_{\rho\Delta}$ at $x_{\om\Delta}=1$ for $U_{\Delta}=-50$MeV (left panel) and for $U_{\Delta}=-100$MeV (right panel) for MKVOR\*-based models. The horizontal band shows the uncertainty range for the mass of PSR J0348+0432 ($2.01\pm 0.04\,M_\odot$). []{data-label="fig:mkv-xr"}](Fig-21-f){width="10cm"}
Conclusion
==========
In [@Maslov:2015msa; @Maslov:2015wba] we proposed several relativistic mean-field (RMF) models with scaled hadron masses and coupling constants depending self-consistently on the scalar mean field. These models are extensions of the KVOR model proposed in [@Kolomeitsev:2004ff] and then successfully tested in [@Klahn:2006ir] against various experimental constraints. Within these models all hadron masses are assumed to decrease universally with the growth of the scalar field, whereas the meson-nucleon coupling constants can vary differently. The aim of [@Maslov:2015msa; @Maslov:2015wba] was to construct an RMF model that satisfies the presently known experimental constraints put on the equation of state (EoS) by various analyses of atomic nuclei, heavy-ion collisions and pulsars. A special challenge is that the EoS of the beta-equilibrium matter (BEM) should be sufficiently stiff to support the existence of neutron stars (NSs) with masses $>2\,M_\odot$ and, simultaneously, the EoS of the isospin symmetrical matter (ISM) should respect the constraint derived from flows of particles produced in heavy-ion collisions [@Danielewicz:2002pu]. We exploited a novel mechanism for stiffening the EoS in the framework of an RMF model described in [@Maslov:cut] (named the cut mechanism), which assumes a limitation of the growth of the scalar field at densities above some chosen density. This is achieved by a special choice of the scaling functions.
In the given work we focused on extensions of the models KVORcut03 and MKVOR, which we formulated in [@Maslov:2015wba]. The KVORcut03 model exploits the cut mechanism in the $\om$ sector, whereas the MKVOR model uses the cut mechanism in the $\rho$ sector. In the previous works [@Maslov:2015msa; @Maslov:2015wba] we allowed for the occupation of the hyperon Fermi seas in dense BEM. We exploited the choice of the couplings of the hyperons (H) with the $\omega$, $\rho$ and $\phi$ fields in vacuum according to SU(6) symmetry. The $H\sigma$ coupling was constrained by the experimental information on the hyperon potentials in nuclei. We demonstrated in [@Maslov:2015wba] that with two choices for the inclusion of hyperons (in the KVORcut03H$\phi$ and KVORcut03H$\phi\sigma$, and MKVORH$\phi$ and MKVORH$\phi\sigma$ models) the experimental constraints on the EoS continue to be fulfilled. By this we resolved the so-called “hyperon puzzle" in the framework of the thus constructed RMF models: the EoS satisfies the experimental constraint on the minimal value of the maximum mass of the NSs. However, in the mentioned works we disregarded the possibility of the filling of the Fermi seas of $\Delta$ isobars. As argued in [@Drago2014; @Cai:2015hya; @Drago:2015cea], besides the hyperon puzzle there exists a similar $\Delta$ puzzle. Therefore, in the present paper we incorporate $\Delta$s in our models.
The coupling constants of the $\Delta$ resonances are poorly constrained empirically, owing to the unstable nature of the $\Delta$ particles and the complicated pion-nucleon dynamics in the medium. Based on the SU(6) symmetry relations, we exploited the universal choice of the couplings of $\Delta$s with the $\omega$ and $\rho$ fields in vacuum. The $\sigma\Delta$ coupling was constrained by choosing a value for the $\Delta$ potential at the nuclear saturation density, $U_\Delta (n_0)$, where $n_0\simeq 0.16$fm$^{-3}$. We varied the value $U_\Delta (n_0)$ within broad limits. Then we also allowed for a variation of the $\Delta$ coupling constants with the $\omega$ and $\rho$ fields. The $\phi\Delta$ coupling is set to zero.
We demonstrated that within the KVORcut03$\Delta$ model $\Delta$s do not appear in the ISM up to extremely high densities if we choose an appropriate value of the $\Delta$ potential, $U_\Delta (n_0)=U_N (n_0) \sim -50$ MeV, cf. Fig. \[fig:ncD-ISM-cut03\]. The critical density for the appearance of $\Delta$s decreases if we allow for a more attractive potential $U_\Delta (n_0)$ (which is not excluded by the data), but even for an unrealistically large attraction with $U_\Delta =-150$ MeV, the critical density for the appearance of $\Delta$s, $n_{c,\Delta}$, remains as high as $5n_0$. In the BEM, for the chosen realistic value of the potential, $U_\Delta (n_0)=-50$ MeV, $\Delta^-$ baryons arise only at densities $n> 5\, n_0$, cf. Fig. \[fig:cut03-conc\]. The other $\Delta$ species ($\Delta^0$ and $\Delta^+$) do not appear up to the maximum densities reachable in NS interiors.
In the presence of hyperons, $\Delta$ baryons do not appear in the KVORcut03H$\Delta\phi$ and KVORcut03H$\Delta\phi\sigma$ models for $U_\Delta (n_0)=-50$ MeV, but they could arise if $U_\Delta (n_0)$ were more attractive. Therefore, we artificially increased the $\Delta$-nucleon attraction, allowing $U_\Delta (n_0)$ to vary within the range of $-(50\mbox{--}150)$MeV, to investigate how it could affect the EoS in all our KVORcut03-based models. The critical value of the NS mass for the beginning of the DU reactions on nucleons proves to be above $1.5 M_{\odot}$ for $U_\Delta (n_0)>-109$ MeV, cf. the left panel of Fig. \[fig:cut03-Udep-1\]. The maximum NS mass in the KVORcut03$\Delta$ model for $U_\Delta (n_0) =-50$ MeV is 2.17 $M_{\odot}$, which is only $0.01\, M_\odot$ less than in the original KVORcut03 model. It decreases only slightly for more attractive potentials $U_\Delta$, cf. Fig. \[fig:cut03-Udep-1\], right. In the KVORcut03H$\Delta\phi\sigma$ model the maximum NS mass is $\simeq 2.08\, M_\odot$ and in the KVORcut03H$\Delta\phi$ model $\simeq 1.97\,M_\odot$, being in both cases almost independent of the value of $U_\Delta$. Thus, even for such an unrealistically attractive potential $U_\Delta (n_0)= -150$MeV, the maximum mass constraint remains satisfied (although marginally for the KVORcut03H$\phi$ and KVORcut03H$\Delta\phi$ models), cf. Figs. \[fig:cut03-Mn\] and \[fig:cut03-MR\]. The NS radius changes only slightly (by less than 0.5km) even for $U_\Delta =-150$MeV in the KVORcut03H$\phi$ and KVORcut03H$\Delta\phi$ models.
It turned out that within the MKVOR$\Delta$ model in ISM the nucleon effective mass $m_N^*$ vanishes at $n=n_{{\rm c},f=1}$, cf. Fig. \[fig:mkv-meff\] (e.g., $n_{{\rm c},f=1}\simeq 5.8\,n_0$ for $U_\Delta (n_0)=-50$ MeV). Thus, in the given model the hadron EoS should unavoidably be replaced by the quark one at higher densities. To extend the application of a hadronic model to densities $n>n_{{\rm c},f=1}$, we formulated a modification of the MKVOR model, which introduces the cut mechanism both in the $\om$ and $\rho$ sectors. We label it the MKVOR\* model; see the scaling functions and $f(n)$ in Figs. \[Fig-1-new\] and \[fig:eta\_r\], respectively. The MKVOR\* model differs from the MKVOR model in the scaling function in the $\om$ sector only for large values of the scalar field, $0.95<f<1$, which corresponds to densities $n{\stackrel{\scriptstyle >}{\phantom{}_{\sim}}}5n_0$. This limits the decrease of the nucleon effective mass in the ISM. For BEM, $f{\stackrel{\scriptstyle <}{\phantom{}_{\sim}}}0.6$, and the results for MKVOR\*-based models coincide with those for the corresponding MKVOR-based models.
The MKVOR\* model is more sensitive to the inclusion of $\Delta$s than the KVORcut03 model, since in the former model the effective nucleon mass is smaller. In the MKVOR\*$\Delta$ model, as in the MKVOR one, the effective nucleon mass in ISM demonstrates a back-bending behaviour in some density region, provided $U_\Delta$ is chosen to be more attractive than $-67$MeV. For $U_\Delta >-67$MeV the effective nucleon mass decreases monotonically with an increase of the density, cf. Fig. \[fig:mkvstar-meff\] (left). The $\Delta$ concentration demonstrates a similar behaviour, cf. Fig. \[fig:mkvstar-meff\] (middle). The pressure as a function of the density in ISM, cf. Fig. \[fig:mkvstar-meff\] (right), for $U_\Delta >-56$MeV has a behaviour typical of a third-order phase transition. For $U_\Delta <-56$MeV the transition to the state with a non-zero $\Delta$ concentration is of first order. For $-67\,{\rm MeV}<U_\Delta <-56$MeV there is one spinodal region, whereas for $U_\Delta <-67$MeV the $P(n)$ curve has a back bending in some density interval, and there exist two spinodal regions. This example is studied in detail, cf. Fig. \[Fig-P-new\]. The presence of a first-order phase transition owing to the appearance of $\Delta$s could manifest itself through an increase of the pion yield at the typical energies and momenta corresponding to $\Delta$ decays in heavy-ion collision experiments.
In BEM, $\Delta$s appear in the MKVOR\*$\Delta$ model already at $n=2.5\,n_0$ for $U_\Delta = - 50$MeV and at $n = 1.7\,n_0$ for $U_\Delta = - 100$MeV. In the MKVOR\*H$\Delta\phi$ and MKVOR\*H$\Delta\phi\sigma$ models $\Delta$s appear at smaller densities than hyperons, but their presence does not substantially change the NS composition compared with the case without $\Delta$s. The critical densities of the $\Lambda$ and $\Xi^-$ hyperons increase with a decrease of $U_\Delta$, opposite to what occurs for the concentration of $\Xi^0$. For $U_\Delta = -50$MeV, $\Xi^0$ hyperons do not arise, cf. Figs. \[fig:mkv-conc\] and \[fig:mkv-Udep-nc\]. Although the presence of $\Delta$s affects the NS composition substantially, the star mass changes rather weakly, cf. Fig. \[fig:mkv-Mn\]. For a realistic value of the $\Delta$ potential, $U_\Delta=-50$MeV, the NS mass decrease proves to be tiny. For a deep $\Delta$ potential, $U_\Delta=-100$MeV, the change of the NS mass does not exceed $0.2\,M_\odot$. The maximum NS mass changes even less (by ${\stackrel{\scriptstyle <}{\phantom{}_{\sim}}}0.05\,M_\odot$), so that the maximum mass constraint is safely fulfilled even after the inclusion of both $\Delta$ baryons and hyperons. The DU constraint $M_{\rm DU} > 1.5\,M_{\odot}$ proves to be fulfilled for $U_\Delta >- 88$MeV, cf. Fig. \[fig:mkv-Udep-nd-mm\] (left panel). The maximum mass of the NS decreases only slightly with a deepening of $U_\Delta$ and remains substantially larger than the maximum measured pulsar mass ($2.01 \pm 0.04M_\odot$ for PSR J0348+0432), cf. Fig. \[fig:mkv-Udep-nd-mm\], right. The inclusion of $\Delta$s in MKVOR-based models with or without hyperons does not noticeably change the mass-radius relation for NSs for $U_\Delta=-50$MeV. For $U_\Delta=-100$MeV the radius of the NS with the mass $1.5\,M_\odot$ decreases by $\sim 0.5$km, cf. Fig. \[fig:mkv-Udep-mr\].
Concluding, we included $\Delta$ isobars in the RMF models with scaled effective hadron masses and couplings. We demonstrated that for reasonable values of the $\Delta$ potential (in the range of $-(50\mbox{--}100)$MeV) and for the ratios of the coupling constants given by the SU(6) model ($x_{\om\Delta}=x_{\rho\Delta}=1$, see Eq. (\[x-QCU\])), the KVORcut03$\Delta$-based and MKVOR\*$\Delta$-based models appropriately satisfy the constraints considered previously in [@Maslov:2015msa; @Maslov:2015wba] within the KVORcut-based and MKVOR-based models with and without hyperons, excluding $\Delta$ isobars. Thus, we demonstrated that within our models the $\Delta$ puzzle is resolved, as is the hyperon puzzle.
Acknowledgement {#acknowledgement .unnumbered}
===============
We thank M. Borisov and F. Smirnov for the interest in this work. The reported study was funded by the Russian Foundation for Basic Research (RFBR) according to the research project No 16-02-00023-A. The work was also supported by the Slovak Grant No. VEGA-1/0469/15, by “NewCompStar”, COST Action MP1304 and by the Ministry of Education and Science of the Russian Federation (Basic part). Computing was partially performed in the High Performance Computing Center of the Matej Bel University using the HPC infrastructure acquired in Project ITMS 26230120002 and 26210120002 (Slovak infrastructure for high-performance computing) supported by the Research & Development Operational Programme funded by the ERDF. E.E.K. thanks the Laboratory of Theoretical Physics at JINR (Dubna) for warm hospitality and acknowledges the support by grant of the Plenipotentiary of the Slovak Government to JINR.
J. M. Lattimer, Ann. Rev. Nucl. Part. Sci. [**62**]{} (2012) 485.
S.E. Woosley, A. Heger, and T.A. Weaver, Rev. Mod. Phys. [**74**]{} (2002) 1015.
P. Danielewicz, R. Lacey, and W. G. Lynch, Science [**298**]{} (2002) 1592.
C. Fuchs, Prog. Part. Nucl. Phys. [**56**]{} (2006) 1.
H.P. Dürr, Phys. Rev. [**103**]{} (1956) 469.
J.D. Walecka, Ann. Phys. (N.Y.) [**83**]{} (1974) 491.
J. Boguta and A.R. Bodmer, Nucl. Phys. A [**292**]{} (1977) 413; J. Boguta and H. Stöcker, Phys. Lett. B [**120**]{} (1983) 289; P.-G. Reinhard, M. Rufa, J. Maruhn, W. Greiner, and J. Friedrich, Z. Phys. A [**323**]{} (1986) 13; W. Pannert, P. Ring, and J. Boguta, Phys. Rev. Lett. [**59**]{} (1987) 2420.
B.D. Serot and J.D. Walecka, Adv. Nucl. Phys. [**16**]{} (1986) 1; P.-G. Reinhard, Rep. Prog. Phys. [**52**]{} (1989) 439.
N.K. Glendenning, [*Compact Stars: Nuclear Physics, Particle Physics, and General Relativity,*]{} second ed., Springer-Verlag, New York, 2000.
F. Weber, [*Pulsars as Astrophysical Laboratories for Nuclear and Particle Physics,*]{} IoP Publishing, Bristol, 1999.
L.N. Savushkin, Phys. Part. Nucl. [**46**]{} (2015) 859.
V. Metag, Prog. Part. Nucl. Phys. [**30**]{} (1993) 75.
S. Typel and H.H. Wolter, Nucl. Phys. A [**656**]{} (1999) 331.
F. Hofmann, C.M. Keil, and H. Lenske, Phys. Rev. C [**64**]{} (2001) 034314.
T. Nikšić, D. Vretenar, P. Finelli, and P. Ring, Phys. Rev. C [**66**]{} (2002) 024306.
T. Gaitanos, M. Di Toro, S. Typel, V. Baran, C. Fuchs, V. Greco, and H.H. Wolter, Nucl. Phys. A [**732**]{} (2004) 24.
W. Long, J. Meng, N. Van Giai, and S.-G. Zhou, Phys. Rev. C [**69**]{} (2004) 034319.
G.A. Lalazissis, T. Nikšić, D. Vretenar, and P. Ring, Phys. Rev. C [**71**]{} (2005) 024312.
S. Typel, Phys. Rev. C [**71**]{} (2005) 064301.
M.D. Voskresenskaya and S. Typel, Nucl. Phys. A [**887**]{} (2012) 42.
X. Roca-Maza, X. Viñas, M. Centelles, P. Ring, and P. Schuck, Phys. Rev. C [**84**]{} (2011) 054309.
M. Dutra, O. Lourenço, S.S. Avancini, B.V. Carlson, A. Delfino, D.P. Menezes, C. Providência, S. Typel, and J.R. Stone, Phys. Rev. C [**90**]{} (2014) 055203.
M. Dutra, O. Lourenço and D.P. Menezes, Phys. Rev. C [**93**]{} (2016) 025806.
R. Rapp and J. Wambach, Adv. Nucl. Phys. [**25**]{} (2000) 1.
V. Koch, Int. J. Mod. Phys. E [**6**]{} (1997) 203.
G.E. Brown and M. Rho, Phys. Rev. Lett. [**66**]{} (1991) 2720; G.E. Brown and M. Rho, Phys. Rep. [**396**]{} (2004) 1.
E.E. Kolomeitsev and D.N. Voskresensky, Nucl. Phys. A [**759**]{} (2005) 373.
A. Ohnishi, N. Kawamoto, and K. Miura, Mod. Phys. Lett. A [**23**]{} (2008) 2459.
T. Klähn, D. Blaschke, S. Typel, E.N.E. van Dalen, A. Faessler, C. Fuchs, T. Gaitanos, H. Grigorian, A. Ho, E.E. Kolomeitsev, M.C. Miller, G. Röpke, J. Trümper, D.N. Voskresensky, F. Weber, and H.H. Wolter, Phys. Rev. C [**74**]{} (2006) 035802.
A. Akmal, V.R. Pandharipande, and D.G. Ravenhall, Phys. Rev. C [**58**]{} (1998) 1804.
S. Gandolfi, A.Y. Illarionov, S. Fantoni, J.C. Miller, F. Pederiva, and K. E. Schmidt, Mon. Not. R. Astron. Soc. [**404**]{} (2010) L35.
M. Dutra, O. Lourenço, J.S. Sa Martins, A. Delfino, J.R. Stone, and P.D. Stevenson, Phys. Rev. C [**85**]{} (2012) 035201.
W.G. Lynch, M.B. Tsang, Y. Zhang, P. Danielewicz, M. Famiano, Z. Li, and A.W. Steiner, Prog. Part. Nucl. Phys. [**62**]{} (2009) 427.
P. Demorest, T. Pennucci, S. Ransom, M. Roberts, and J. Hessels, Nature [**467**]{} (2010) 1081.
J. Antoniadis, P.C.C. Freire, N. Wex, T.M. Tauris, R.S. Lynch, M.H. van Kerkwijk, M. Kramer, and C. Bassa, Science [**340**]{} (2013) 6131.
D. Blaschke, H. Grigorian, and D.N. Voskresensky, Astron. Astrophys. [**424**]{} (2004) 979.
H. Grigorian, D.N. Voskresensky and D. Blaschke, Eur. Phys. J. A [**52**]{} (2016) 67.
G. Taranto, G.F. Burgio, and H.-J. Schulze, Mon. Not. R. Astron. Soc. [**456**]{} (2016) 1451.
P. Podsiadlowski, J.D.M. Dewi, P. Lesaffre, J.C. Miller, W.G. Newton, and J.R. Stone, Mon. Not. R. Astron. Soc. [**361**]{} (2005) 1243.
F.S. Kitaura, H.T. Janka, and W. Hillebrandt, Astron. Astrophys. [**450**]{} (2006) 345.
S. Bogdanov, Astrophys. J. [**762**]{} (2013) 96.
V. Hambaryan, R. Neuhäuser, V. Suleimanov, and K. Werner, J. Phys.: Conf. Series [**496**]{} (2014) 012015.
C.O. Heinke, H.N. Cohn, P.M. Lugger, N.A. Webb, W.C.G. Ho, J. Anderson, S. Campana, S. Bogdanov, D. Haggard, A.M. Cool, and J.E. Grindlay, Mon. Not. R. Astron. Soc. [**444**]{} (2014) 443.
A.S. Khvorostukhin, V.D. Toneev, and D.N. Voskresensky, Nucl. Phys. A [**791**]{} (2007) 180.
A.S. Khvorostukhin, V.D. Toneev, and D.N. Voskresensky, Nucl. Phys. A [**813**]{} (2008) 313.
G. Baym, C. Pethick, and P. Sutherland, Astrophys. J. [**170**]{} (1971) 317.
J. Schaffner-Bielich, Nucl. Phys. A [**804**]{} (2008) 309; H. Djapo, B.J. Schaefer, and J. Wambach, Phys. Rev. C [**81**]{} (2010) 035803.
M. Fortin, J.L. Zdunik, P. Haensel, and M. Bejger, Astron. Astrophys. 576 (2015) A68.
K.A. Maslov, E.E. Kolomeitsev, and D.N. Voskresensky, Phys. Lett. B [**748**]{} (2015) 369.
K.A. Maslov, E. E. Kolomeitsev, and D.N. Voskresensky, Nucl. Phys. A [**950**]{} (2016) 64.
G. Cattapan and L.S. Ferreira, Phys. Rept. [**362**]{} (2002) 303.
A.B. Migdal, Rev. Mod. Phys. [**50**]{} (1978) 107.
T. Ericson and W. Weise, [*Pions and Nuclei*]{}, Oxford Univ. Press, Oxford, 1988.
A.B. Migdal, E.E. Saperstein, M.A. Troitsky and D.N. Voskresensky, Phys. Rept. [**192**]{} (1990) 179.
J. Boguta, Phys. Lett. B [**109**]{} (1982) 251.
M. Cubero, M. Schönhofen, H. Feldmeier, and W. Nörenberg, Phys. Lett. B [**201**]{} (1988) 11.
D.N. Voskresensky, Nucl. Phys. A [**555**]{} (1993) 293.
S.L. Shapiro and S.A. Teukolsky, [*Black Holes, White Dwarfs, and Neutron Stars*]{}, Wiley-VCH, 1983, Section 8.11.
R.F. Sawyer, Astrophys. J. [**176**]{} (1972) 205.
H. Xiang and H. Guo, Phys. Rev. C [**67**]{} (2003) 038801.
Y.J. Chen, H. Guo and Y. Liu, Phys. Rev. C [**75**]{} (2007) 035806; Y.J. Chen and H. Guo, Comm. Theor. Phys. [**49**]{} (2008) 1283; Y.J. Chen, Y. Yuan and Y. Liu, Phys. Rev. C [**79**]{} (2009) 055802.
T. Schurhoff, S. Schramm, and V. Dexheimer, Astrophys. J. [**724**]{} (2010) L74.
A. Lavagno, Phys. Rev. C [**81**]{} (2010) 044909.
A. Drago, A. Lavagno, G. Pagliara and D. Pigato, Phys. Rev. C [**90**]{} (2014) 065809.
B. J. Cai, F. J. Fattoyev, B. A. Li, and W. G. Newton, Phys. Rev. C [**92**]{} (2015) 015802.
A. Drago, A. Lavagno, G. Pagliara, and D. Pigato, Eur. Phys. J. A [**52**]{} (2016) 40.
D.N. Voskresensky, Phys. Lett. B [**392**]{} (1997) 262.
K. Wehrberger, Phys. Rep. [**225**]{} (1993) 273.
D.S. Kosov, C. Fuchs, B.V. Martemyanov, and A. Faessler, Phys. Lett. B [**421**]{} (1998) 37.
J.C.T. De Oliveira, M. Kyotoku, M. Chiaparini, H. Rodrigues and S.B. Duarte, Mod. Phys. Lett. A [**15**]{} (2000) 1529.
D. Zschiesche, P. Papazoglou, S. Schramm, J. Schaffner-Bielich, H. Stöcker and W. Greiner, Phys. Rev. C [**63**]{} (2001) 025211.
S. Okubo, Phys. Lett. 5 (1963) 165; G. Zweig, CERN report TH-412 (1964); J. Iizuka, Prog. Theor. Phys. Suppl. 38 (1966) 21.
X. Jin, Phys. Rev. C [**51**]{} (1995) 2260.
J. O’Connell and R. Sealock, Phys. Rev. C [**42**]{} (1990) 2290.
G.E. Brown, C.H. Lee, M. Rho, V. Thorsson, Nucl. Phys. A [**567**]{} (1994) 937.
V.E. Lyubovitskij, Th. Gutsche, A. Faessler, and E.G. Drukarev, Phys. Rev. D [**63**]{} (2001) 054026 .
I.P. Cavalcante, M.R. Robilotta, J. Sá Borges, D. de O. Santos, and G.R.S. Zarnauskas, Phys. Rev. C [**72**]{} (2005) 065207.
J. Koch and N. Ohtsuka, Nucl. Phys. A [**435**]{} (1985) 765.
S. Nakamura, T. Sato, T.S. Lee, B. Szczerbinska, and K. Kubodera, Phys. Rev. C [**81**]{} (2010) 035502.
Y. Horikawa, M. Thies and F. Lenz, Nucl. Phys. A [**345**]{} (1980) 386.
K. Wehrberger, C. Bedau, and F. Beck, Nucl. Phys. A [**504**]{} (1989) 797.
W. Alberico, G. Gervino, and A. Lavagno, Phys. Lett. B [**321**]{} (1994) 177.
T. Song and C.M. Ko, arXiv:1403.7363 (2014).
G. Ferini, M. Colonna, T. Gaitanos, and M. Di Toro, Nucl. Phys. A [**762**]{} (2005) 147.
M.D. Cozma, Phys. Lett. B [**753**]{} (2016) 166.
W.-M. Guo, G.-Ch. Yong, and W. Zuo, Phys. Rev. C [**92**]{} (2015) 054619.
F. Riek, M.F.M. Lutz, and C.L. Korpa, Phys. Rev. C [**80**]{} (2009) 024902.
K.A. Maslov, E.E. Kolomeitsev, and D.N. Voskresensky, Phys. Rev. C [**92**]{} (2015) 052801.
H. Grigorian and D.N. Voskresensky, Astron. Astrophys. [**444**]{} (2005) 913.
D. N. Voskresensky, M. Yasuhira and T. Tatsumi, Phys. Lett. B [**541**]{} (2002) 93; D. N. Voskresensky, M. Yasuhira and T. Tatsumi, Nucl. Phys. A [**723**]{} (2003) 291; T. Maruyama, T. Tatsumi, D. N. Voskresensky, T. Tanigawa and S. Chiba, Nucl. Phys. A [**749**]{} (2005) 186.
N.K. Glendenning, Nucl. Phys. A [**469**]{} (1987) 600.
M. Prakash, M. Prakash, J.M. Lattimer, C.J. Pethick, Astrophys. J. [**390**]{} (1992) L77.
J.E. Trümper, V. Burwitz, F. Haberl, and V.E. Zavlin, Nucl. Phys. B (Proc. Suppl.) [**132**]{} (2004) 560.
S. van Straaten, E.C. Ford, M. van der Klis, M. Méndez, and P. Kaaret, Astrophys. J. [**540**]{} (2000) 1049.
M.F.M. Lutz and E.E. Kolomeitsev, Nucl. Phys. A [**700**]{} (2002) 193.
[^1]: The problem with the large contribution from nucleon DU reaction to NS cooling can be avoided, if one uses very large neutron or proton pairing gaps [@Taranto-DU].
[^2]: Here, we disregard a possibility of a mixed pasta phase following an observation of [@VYT] that with taking into account of finite size effects the description of the pasta phase might be close to description given by the MC.
[^3]: Shortening notation, below we will use $U_\Delta$ instead of $U_\Delta (n_0)$.
---
abstract: |
We study the Neighbor Aided Network Installation Problem (NANIP), introduced previously, which asks for a minimal-cost ordering of the vertices of a graph, where the cost of visiting a node is a function of the number of its neighbors that have already been visited. This problem has applications in resource management and disaster recovery. In this paper we analyze the computational hardness of NANIP. In particular, we show that this problem is NP-hard even when restricted to convex decreasing cost functions, give a linear approximation lower bound for the greedy algorithm, and prove a general sub-constant approximation lower bound. Then we give a new integer programming formulation of NANIP and empirically observe its speedup over the original integer program.
**Keywords**: Infrastructure Network; Disaster Recovery; Permutation Optimization; Neighbor Aided Network Installation Problem.
author:
- Alexander Gutfraind
- Jeremy Kun
- 'Ádám D. Lelkes'
- Lev Reyzin
bibliography:
- 'nanip.bib'
title: 'Network installation and recovery: approximation lower bounds and faster exact formulations'
---
Introduction
============
We motivate our study with an example from infrastructure networks. It is well known that many vital infrastructure systems can be represented as networks, including transport, communication and power networks. Large parts of these networks can be severely damaged in the event of a natural disaster. When faced with large-scale damage, authorities must develop a plan for restoring the networks. A particularly challenging aspect of the recovery is the lack of infrastructure, such as roads or power, necessary to support the recovery operations. For example, to clear and rebuild roads, equipment must be brought in, but many of the access roads are themselves blocked and damaged. Abstractly, as the recovery progresses, previously recovered nodes provide resources that help reduce the cost of rebuilding their neighbors. We call this phenomenon “neighbor aid”.
Recently, [@Gutfraind14] introduced and analyzed a simple model of neighbor aided recovery in terms of a convex discrete optimization problem called the *Neighbor Aided Network Installation Problem* (NANIP). We will henceforth use the terms “recover” and “install” interchangeably. For simplicity, we assume that during the recovery of a network all of its nodes and edges must be visited and restored. They asked how the recovery schedule can be optimized to minimize the total cost; this is also the question we address herein.
In the NANIP model, the cost of recovering a node depends only on the number of its already recovered neighbors, capturing the intuition that neighbor aid is the determining factor of the cost of rebuilding a new node. NANIP offers a stylized model for disaster recovery of networks (among other applications) but the interest in disaster recovery of networks is not new. A partial list of existing studies includes [@Guha99; @nurre2010restoring; @Lee07; @Adibi94; @Bertoli02; @coffrin2011strategic]. A common framework is to consider infrastructure systems as a set of interdependent network flows, and formulate the problem of minimizing the cost of repairing such damaged networks. Another class of models [@Hentenryck10] develops a stochastic optimization problem for stockpiling resources and then distributing them following a disaster. More abstract problems related to NANIP are the single processor scheduling problem [@Karp61], the linear ordering problem [@Mitchell96], and the study of tournaments in graph theory [@West01].
NANIP assumes that certain tasks are dependent and cannot be performed in parallel, but unlike many scheduling problems, there are no partial order constraints. Similarly to the traveling salesman problem (TSP) [@schrijver2005history], the NANIP problem also asks for an optimal permutation of the vertices of the graph but, unlike in the case of the traveling salesman problem, the cost associated with visiting a given node could depend on *all* of the nodes visited before the given node. Another key difference between NANIP and TSP is that in NANIP it is allowed to visit nodes that are not neighbors of any previously-visited nodes. As we will see, such disconnected traversals provide $\Omega(\log(n))$ multiplicative improvements over connected ones.
Since neighbor aid is assumed to reduce the cost of recovery, we are mainly interested in decreasing cost functions. Furthermore, since convexity for decreasing functions captures the “law of diminishing returns”, i.e. that as the number of recovered neighbors increases, the per-node value of the aid provided by one neighbor decreases, convex decreasing functions are of special interest. Although [@Gutfraind14] gave NP-hardness of NANIP for general cost functions via a straightforward reduction from Maximum Independent Set, the cost function used there was increasing, thus leaving the complexity of the convex decreasing case an open question. In this paper we show this problem is NP-hard as well. We also provide a new convex integer programming formulation and analyze the performance of the greedy algorithm, showing that its worst case approximation ratio is $\Theta(n)$.
Preliminaries
=============
An instance of NANIP is specified by an undirected graph $G=(V,E)$ and a real-valued function $f: \mathbb{N} \to \mathbb{R}_{\geq 0}$. The function $f$ represents the cost of installing a vertex $v$, where the argument is the number of neighbors of $v$ that have already been installed. Hence, the domain of $f$ is the non-negative integers, bounded by the maximum degree of $G$ (for terminology see [@West01]). The goal is to find a permutation of the nodes that minimizes the total cost of the network installation. The cost of installing node $v_t \in V$ under a permutation $\sigma$ of $V$ is given by $$f(r(v_t, G, \sigma))\,,$$ where $r(v_t, G, \sigma)$ is the number of nodes adjacent to $v_t$ in $G$ that appear before $v_t$ in the permutation $\sigma$. The total cost of installing $G$ according to $\sigma$ is given by $$C_G(\sigma) = \sum_{t=1}^{n} f(r(v_t, G, \sigma)).
\label{eq:general-NANIP}$$
The problem is illustrated in Fig. \[fig:illustration\]. Generally, the choice of $f$ depends on the application, and $f$ will often be convex decreasing.
[0.4]{} ![Illustrations of NANIP. (a) Simple instance. When $f(0)=2$, $f(1)=1$ and $f(k\geq2)=0$, the naive installation sequence $\sigma=(A,B,C,D,E)$ gives cost of $4=2+1+1+0+0$, but all optimal solutions have cost $3$. (b) Actual metro stations and their connections in the downtown Chicago “Loop”. With the same $f$, any optimal sequence must recover the Clark/Lake (CL) station before at least one of its neighbors.\[fig:illustration\]](simple "fig:"){width="50.00000%"}
[0.6]{} ![](Chicago_Loop_schematic "fig:"){width="100.00000%"}
We assume that $G$ is connected and undirected, unless we note otherwise. If $G$ has multiple connected components, NANIP could be solved on each component independently without affecting the total cost.
We begin by quoting a preliminary lemma from [@Gutfraind14] which establishes that all the arguments used in calculating the node costs must sum to $m$, the number of edges in the network.
\[lem:edge-decomp\] For any network $G$, and any permutation $\sigma$ of the nodes of $G$, $$\sum_{t=1}^n r(v_t,G,\sigma) = m \label{eq:edge-decomp}\,.$$
One application of this lemma is the case of a linear cost function $f(k)=ak+b$, for some real numbers $a$ and $b$. With such a function the optimization problem is trivial in that all installation permutations have the same cost.
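These definitions are easy to exercise in code. The following minimal sketch (the 5-node graph and the constants in $f$ are illustrative choices, not taken from the paper) computes $C_G(\sigma)$, checks Lemma \[lem:edge-decomp\] on every permutation, and confirms that a linear cost function makes all installation orders equally expensive:

```python
from itertools import permutations

# Hypothetical 5-node example (NOT the graph from Fig. 1); adjacency as sets.
G = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2, 4}, 4: {3}}
m = sum(len(nbrs) for nbrs in G.values()) // 2  # number of edges

def r(v, prefix):
    """Number of neighbors of v installed before v."""
    return sum(1 for u in prefix if u in G[v])

def cost(sigma, f):
    """Total installation cost C_G(sigma) = sum_t f(r(v_t))."""
    total, prefix = 0, set()
    for v in sigma:
        total += f(r(v, prefix))
        prefix.add(v)
    return total

# Lemma: the arguments r(v_t, G, sigma) sum to m for every permutation.
for sigma in permutations(G):
    prefix, args = set(), []
    for v in sigma:
        args.append(r(v, prefix))
        prefix.add(v)
    assert sum(args) == m

# Consequence: a linear cost f(k) = a*k + b gives the same total a*m + b*n
# for every installation order (here a = 3, b = 2).
f_lin = lambda k: 3 * k + 2
costs = {cost(sigma, f_lin) for sigma in permutations(G)}
assert costs == {3 * m + 2 * len(G)}
```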
In the next section we will prove hardness results about NANIP; let us recall some relevant definitions.
An optimization problem is called *strongly NP-hard* if it is NP-hard and the optimal value is a positive integer bounded by a polynomial of the input size.
An algorithm is an *efficient polynomial time approximation scheme (EPTAS)* for an optimization problem if, given a problem instance and an approximation factor $\varepsilon$, it runs in time $O(F(\varepsilon) n^c)$ for some constant $c$ and some function $F$ and finds a solution whose objective value is within an $\varepsilon$ fraction of the optimum. An EPTAS is called a *fully polynomial time approximation scheme (FPTAS)* if it runs in time polynomial in the size of the problem instance and $\frac{1}{\varepsilon}$.
A strongly NP-hard optimization problem cannot have an FPTAS unless P=NP: otherwise, if $n$ denotes the input size and $p$ denotes the polynomial such that the optimum value is bounded by $p(n)$, setting $\varepsilon=\frac{1}{2p(n)}$ for the FPTAS would yield an exact polynomial time algorithm.
Some NP-hard problems become efficiently solvable if a natural parameter is fixed to some constant. Such problems are called fixed parameter tractable.
FPT, the class of *fixed parameter tractable* problems, is the set of parametrized languages $L$ of pairs $\langle x,k\rangle$ for which there is an algorithm running in time $O(F(k)\, n^c)$, where $n=|x|$, for some function $F$ and constant $c$, that decides whether $\langle x,k\rangle\in L$.
An example of a fixed parameter tractable problem is the vertex cover problem (where the parameter is the size of the vertex cover). Problems believed to be fixed parameter intractable include the graph coloring problem (the parameter being the number of colors) and the clique problem (with the size of the clique as parameter).
For parametrized languages, there is a natural fixed parameter tractable analogue of polynomial time reductions. These so-called *fpt-reductions* are used to define hardness for classes of parametrized languages, similarly to how NP-hardness is defined using polynomial time reductions. One important class of parametrized languages is $W[1]$. For the definition of $W[1]$ and for more background on parametrized complexity, we refer the reader to the monograph of Downey and Fellows [@DowneyF13]. They proved that under standard complexity-theoretic assumptions, $W[1]$ is a strict superset of $FPT$; consequently, $W[1]$-hard problems are fixed parameter intractable. We will use this fact to show the fixed parameter intractability of NANIP.
Convex decreasing NANIP is NP-hard {#sec:computation}
==================================
We now consider the hardness of solving NANIP with convex decreasing cost functions.
\[thm:np-hard\] The Neighbor Aided Network Installation Problem is strongly NP-hard when $f$ is convex decreasing; as a consequence it admits no FPTAS.
We reduce from CLIQUE, that is, the problem of deciding given a graph $G =
(V,E)$ whether it contains as an induced subgraph the complete graph on $k$ vertices. Given a graph $G = (V,E)$ with $n=|V|$ and an integer $k$, we construct an instance of NANIP on a graph $G'$ with a convex cost function $f(i)$ as follows. Define $G'$ by adding $k$ new vertices $u_1, \dots, u_k$ to $G$ which are made adjacent to every vertex in $V$ but not to each other, establishing an independent set of size $k$. Define the cost function
$$f(i) = f_k(i) =
\begin{cases}
\hfill k-i \hfill & \text{ if $i \leq k$} \\
\hfill 0 \hfill & \text{ otherwise} \\
\end{cases}$$
Let $M = \sum_{i=0}^{k} f(i)=\frac{k(k+1)}2$. Since $f$ is decreasing and the $t$-th visited vertex has at most $t-1$ previously visited neighbors, $M$ is a lower bound on the cost incurred by the first $k$ vertices of any traversal of $G'$ (note $f(k)=0$, so the sum may equivalently stop at $k-1$). Moreover, in a traversal $\sigma$ whose first $k$ vertices yield cost exactly $M$, every newly visited vertex must be adjacent to every previously visited vertex, i.e. the first $k$ vertices form a $k$-clique.
Suppose that $G$ has a clique of size $k$, and denote by $v_1, \dots, v_k$ the vertices of the clique, with $v_{k+1}, \dots, v_n$ the remaining vertices of $G$. Then the following ordering is a traversal of $G'$ of cost exactly $M$: $$v_1, \dots, v_k, u_1, \dots, u_k, v_{k+1}, \dots, v_n \,.$$
Conversely, let $w_1, \dots, w_{n+k}$ be an ordering of the vertices of $G'$ achieving cost $M$. Then by the above, the vertices $w_1, \dots, w_{k}$ must form a $k$-clique in $G'$. If these $k$ prefix vertices are all vertices of $G$, we are done. Otherwise, the independence of the $u_i$’s implies that at most one $u_i$ appears in $w_1, \dots, w_{k}$; using more would incur a total cost greater than $M$. In this case the $k-1$ remaining vertices of the prefix form a $(k-1)$-clique of $G$. Since it is NP-hard to approximate CLIQUE within a polynomial factor [@Zuckerman06], this proves the NP-hardness of convex decreasing NANIP.
Moreover, since the optimum value of a NANIP instance obtained by this reduction is at most $k^2$ which is upper bounded by $n^2$, the size of the NANIP instance, it also follows that convex decreasing NANIP is strongly NP-hard and therefore does not admit an FPTAS.
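The construction in the proof can be sketched programmatically. Below, a hypothetical instance with $k=3$ (a triangle plus a pendant vertex; not an example from the paper) is augmented with the independent set $u_1,\dots,u_k$, and the clique-first traversal is verified to attain the target cost $M = k(k+1)/2$:

```python
# Sketch of the reduction for k = 3: G is a triangle {0,1,2} plus a pendant
# vertex 3 attached to vertex 0 (an illustrative instance).
k = 3
V = [0, 1, 2, 3]
E = {frozenset(e) for e in [(0, 1), (0, 2), (1, 2), (0, 3)]}

# G' adds independent vertices u_1..u_k (labels 4,5,6), each adjacent to all of V.
U = [4, 5, 6]
Ep = E | {frozenset((u, v)) for u in U for v in V}
adj = {w: {x for e in Ep if w in e for x in e if x != w} for w in V + U}

f = lambda i: max(k - i, 0)   # the convex decreasing cost f_k from the proof
M = k * (k + 1) // 2          # target cost, here 6

def cost(sigma):
    total, seen = 0, set()
    for w in sigma:
        total += f(len(adj[w] & seen))
        seen.add(w)
    return total

# Visiting the clique first, then the u's, then the rest attains cost M:
# the clique pays f(0)+f(1)+f(2) = 3+2+1, everything afterwards is free.
sigma = [0, 1, 2] + U + [3]
assert cost(sigma) == M
```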
The cost function $f_k(i)$ used in the proof of Theorem \[thm:np-hard\] is parametrized by $k$. Call $\textup{NANIP}_k$ the subproblem of NANIP with cost functions of finite support where the size of the support is $k$. Because we consider $\textup{NANIP}_k$ a subproblem of general NANIP, stronger parametrized hardness results for the former give insights about the latter. Indeed, the following corollary is immediate.
$\textup{NANIP}_k$ is $W[1]$-hard.
CLIQUE is $W[1]$-complete when parametrized by the size of the clique. $W[1]$-hardness is preserved by so-called $fpt$-reductions (see [@DowneyF13]), and the reduction from the proof of Theorem \[thm:np-hard\] is such a reduction.
In particular, under standard complexity assumptions this implies that $\textup{NANIP}_k$ is not fixed-parameter tractable and has no efficient polynomial-time approximation scheme (EPTAS). Now we will show that the same reduction can be used to obtain a stronger approximation lower bound of $(1 + n^{-c})$ for all $c > 0$. First a lemma.
Let $G'$ and $f$ be constructed as above, and let $\sigma$ denote a NANIP traversal. Let $V$ denote the vertices of $G$ and $U$ the vertices of the independent set. If $\sigma'$ is obtained from $\sigma$ by moving the vertices of $U$ to positions $k+1,\ldots,2k$ (without changing the precedence relations of the vertices in $V$), then $C_{G'}(\sigma')\le C_{G'}(\sigma)$.
Call a vertex *free* at a given point of the traversal if visiting it there incurs zero cost, i.e. it already has at least $k$ previously visited neighbors. Let $i_1 < \dots < i_k$ be the positions in $\sigma$ of the vertices from $U$, and write $u_1 = \sigma(i_1), \dots, u_k = \sigma(i_k)$.
**Case 1:** $i_1 > k$. In this case all the $u_i$ are free, as are all vertices visited after $\sigma(k)$. If $i_1 > k+1$, apply the cyclic permutation $\gamma_1 = (k+1, k+2, \dots, i_1)$ to move $u_1$ to position $k+1$. The cost of visiting $u_1$ is still zero, and the cost of the other manipulated vertices does not increase because they each gain one previously visited neighbor. Now repeat this manipulation with $\gamma_s = (k+s, k+s+1, \dots, i_s)$ for $s = 2,
\dots, k$. An identical argument shows the cost never increases, and at the end we have precisely $\sigma'$.
**Case 2:** $i_1 \leq k$. In this case $u_1$ is not free. Let $j$ be the index of the first $v \in V$ that occurs after $i_1$. Then apply the cyclic permutation $\xi = (i_1, i_1 + 1, \dots, j)$ to move $v$ before $u_1$. The cost of $v$ increases by at most $j - i_1$ (and this is not tight since it is possible that $j > k+1$). But all $\sigma(i_1), \sigma(i_1 + 1), \dots, \sigma(j-1) \in U$, and each gains a neighbor as a result of applying $\xi$, so their total cost decreases by exactly $j - i_1$ and the total cost of $\sigma$ does not increase. Now repeatedly apply $\xi$ (using the new values of $i_1, j$) until $i_1 = k+1$. Then apply case 1 to finish.
For all $c>0$, there is no efficient $(1+n^{-c})$-approximation algorithm for NANIP on graphs with $n$ vertices with convex decreasing cost functions, unless $\textup{P} = \textup{NP}$.
It is NP-hard to distinguish a clique number of at least $2^R$ from a clique number of at most $2^{\delta R}$ in graphs on $2^{(1+\delta)R}$ vertices ($\delta>0$) [@Zuckerman06]. We will reduce this problem to finding a $(1+n^{-c})$-approximation for NANIP. In particular, we will show that there is no efficient $C$-approximation algorithm for NANIP, where $$C
= \frac{k}{k+1} \left ( 1 + \frac{1}{k^{2\varepsilon}} \right )$$ and $k=n^{1/(1+\delta)}$.
This is equivalent to the statement of the theorem since by setting $\varepsilon=c/(2+2\delta)$, we get that there is no efficient $\frac{n^{1+\delta}}{n^{1+\delta}+1}(1+n^{-c})<(1+n^{-c})$-approximation algorithm for NANIP.
Let $G$ be a graph on $n=2^{(1+\delta)R}$ vertices containing a $k$-clique where $k=n^{1/(1+\delta)}=2^R$ and construct $G'$ from $G$ by adding a $k$-independent set as before, with $f(i)=\max(k-i, 0)$. Suppose we have an efficient $C$-approximation algorithm for NANIP. After running it on input $(G', k)$, modify the output sequence according to the previous lemma. Then all the nodes after the first $k$ are free, thus the cost of the sequence is determined by the first $k$ vertices. Since they all have fewer than $k$ preceding neighbors, the cost function for them is linear, implying that the total cost of the sequence depends only on the number of edges in between the first $k$ vertices.
The cost of the optimal NANIP sequence in $G'$ is $k(k+1)/2$, thus the cost of the sequence returned by the approximation algorithm is at most
$$\frac{k}{k+1}\left(1+\frac{1}{k^{2\varepsilon}}\right)\cdot \frac{k(k+1)}{2} =
\frac12\left(k^2+k^{2-2\varepsilon}\right).$$
Since $$\frac12\left(k^2+k^{2-2\varepsilon}\right)=k^2-\left(1-k^{-2\varepsilon}\right)\frac{k^2}{2},$$ it follows by [@Gutfraind14], Corollary 2, that there are more than $(1-k^{-2\varepsilon})k^2/2$ edges between the first $k$ vertices.
Turán’s theorem [@Turan1941] states that a graph on $k$ vertices that does not contain an $(r+1)$-clique can have at most $(1-\frac1r)k^2/2$ edges. The contrapositive implies that the induced subgraph on the first $k$ vertices of the NANIP sequence contains a $(k^{2\varepsilon}-1)$-clique. Since $k^{2\varepsilon}-1> 2^{\varepsilon R}$, this completes the proof.
Greedy analysis for convex NANIP
================================
In this section we discuss the approximation guarantees of the greedy algorithm on convex NANIP. The greedy algorithm is defined to choose the cheapest vertex at every step, breaking ties arbitrarily. A useful observation here is that, because $f$ is decreasing, the greedy algorithm always produces a connected traversal of a connected graph, in the sense that every prefix of the final traversal induces a connected subgraph. We call an algorithm which always produces a connected traversal a *connected algorithm*.
Our next theorem shows a rather surprising result: optimal recovery sometimes requires disconnected solutions, even for convex cost functions. Connected solutions can perform quite badly, having a cost that is an $\Omega(\log n)$ multiple of the optimum.
Connected algorithms have an approximation ratio $\Omega(\log(n))$ for convex NANIP problems.
We construct a particular instance for which a connected algorithm incurs cost $\Omega(\log(n))$ while the optimal route has constant cost. Define the graph $B(m)$ to be a complete binary tree $T$ with $m$ levels, and a pair of vertices $u,v$ such that the leaves of $T$ and $\{u,v\}$ form the complete bipartite graph $K_{2^{m-1}, 2}$. As an example, $B(3)$ is given in Figure \[fig:b3\].
Define the cost function $f$ by $f(0) = 2$, $f(1) = 1$, and $f(j) = 0$ for all $j \geq 2$. For this cost function it is clear that the minimum cost of a traversal of $B(m)$ is exactly 4, attained by first choosing the two vertices of $B(m)$ that are not part of the tree (each at cost $f(0)=2$), and then traversing the rest of the tree at zero cost. However, if a connected algorithm were forced to start at the root of the tree, it would incur cost $\Omega(m) = \Omega(\log(n))$ since every vertex would have at most one visited neighbor.
To force such an algorithm into this situation we glue two copies of $B(m)$ together so that their trees share a root. Then any connected ordering must start in one of the two copies, and to visit the other copy it must pass through the root, incurring a total cost of $\Omega(\log(n))$. On the other hand, the optimal traversal has total cost 8.
Further, the greedy algorithm, which simply chooses the cheapest vertex at each step and breaks ties arbitrarily, gives a $\Theta(n)$ approximation ratio in the worst case. To see this, note that in the construction from the theorem the only way a connected algorithm can achieve the logarithmic lower bound is by traveling directly from the root to the leaves. But by breaking ties arbitrarily, the greedy algorithm may visit every interior node in the tree before reaching the leaves, thus incurring a linear cost overall.
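Both phenomena can be observed on a small instance of the construction. The sketch below builds $B(3)$ (the vertex labels are our own choice), checks that the disconnected order starting with $u,v$ costs 4, and runs a greedy with a fixed smallest-label tie-break, which starts at the root and pays strictly more:

```python
# B(3): complete binary tree with 3 levels (root 0, internal 1-2, leaves 3-6)
# plus two extra vertices u = 7, v = 8 adjacent to every leaf.
adj = {w: set() for w in range(9)}
def edge(a, b):
    adj[a].add(b); adj[b].add(a)
for a, b in [(0, 1), (0, 2), (1, 3), (1, 4), (2, 5), (2, 6)]:
    edge(a, b)
for leaf in (3, 4, 5, 6):
    edge(leaf, 7); edge(leaf, 8)

f = lambda i: {0: 2, 1: 1}.get(i, 0)   # f(0)=2, f(1)=1, f(>=2)=0

def cost(sigma):
    total, seen = 0, set()
    for w in sigma:
        total += f(len(adj[w] & seen))
        seen.add(w)
    return total

def greedy():
    """Pick the cheapest vertex each step; ties broken by smallest label."""
    order, seen = [], set()
    while len(order) < len(adj):
        w = min((x for x in adj if x not in seen),
                key=lambda x: (f(len(adj[x] & seen)), x))
        order.append(w); seen.add(w)
    return order

# The disconnected order u, v, leaves, interior pays only f(0) twice: cost 4.
assert cost([7, 8, 3, 4, 5, 6, 1, 2, 0]) == 4
# Greedy (here starting from the root by tie-breaking) pays strictly more.
assert cost(greedy()) > 4
```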
Integer programming for NANIP {#section:IP}
=============================
In this section we describe a new integer programming (IP) formulation of the NANIP problem by adding in Miller-Tucker-Zemlin-type subtour elimination constraints [@miller1960integer]. An IP, of course, does not give a polynomial time algorithm, but can be sufficiently fast for some instances of practical interest. We then show that this formulation, experimentally, improves on the previous formulation by [@Gutfraind14].
A new integer program
---------------------
In what follows we will assume that the cost function $f$ is a continuous convex decreasing function $\mathbb{R}_{\geq 0} \to \mathbb{R}_{\geq 0}$ rather than one $\mathbb{N} \to \mathbb{R}_{\geq 0}$. It is necessary to extend $f$ to a continuous function for the LP relaxation to be well-defined. While there are many ways to do so, formulating the IP for a general continuous $f$ encapsulates all of them.
For an undirected graph $G = (V,E)$ on $n = |V|$ vertices, we introduce the arc set $A$ by replacing each undirected edge with two directed arcs. For all $(i,j)\in A$ define variables $e_{ij} \in \{ 0,1 \}$. The choice $e_{ij} = 1$ has the interpretation that $i$ is traversed before $j$ in a candidate ordering of the vertices, or that one chooses the directed arc $(i,j)$ and discards $(j,i)$. In order to maintain consistency of the IP we impose the constraint $e_{ij} = 1 - e_{ji}$ for all edges $(i,j)$ with $i < j$. Finally, we wish to enforce that choosing values for the $e_{ij}$ corresponds to defining a partial order on $V$ (i.e., that the subgraph of chosen arcs forms a DAG). We use the subtour elimination technique of Miller, Tucker, and Zemlin [@miller1960integer] and introduce variables $u_i$ for $i = 1, \dots, n$ with the constraints
$$\begin{aligned}
\label{eq:dag-constraint}
\begin{matrix}
u_i - u_j + 1 \leq n (1 - e_{ij}) & \forall (i,j) \in A \\
0 \leq u_i \leq n & i = 1, \dots, n
\end{matrix}\end{aligned}$$
Thus, if $i$ is visited before $j$ (i.e., $e_{ij}=1$) then $u_j \geq u_i + 1$; summing these constraints around any directed cycle of chosen arcs would give $0 \geq |\text{cycle}|$, so no directed cycle can be chosen. Now denote by $d_i = \sum_{(j,i) \in A} e_{ji}$, which is the number of neighbors of $v_i$ visited before $v_i$ in a candidate ordering of $V$. The objective function is the convex function $\sum_{i} f(d_i)$, and putting these together we have the following convex integer program:
$$\begin{aligned}
\textup{min } & \sum_i f(d_i) & \\
\textup{s.t. } & d_i = \sum_{(j,i) \in A} e_{ji} & i = 1, \dots, n \\
& e_{ij} = 1 - e_{ji} & (i,j) \in A, i < j \\
& u_i - u_j + 1 \leq n (1 - e_{ij}) & (i,j) \in A \\
& 0 \leq u_i \leq n & i = 1, \dots, n \\
& e_{ij} \in \{0,1\} & (i,j) \in A\end{aligned}$$
The integer program has a natural LP relaxation by replacing the integrality constraints with $0 \leq e_{ij} \leq 1$. Because $f$ is only evaluated at integer points, it is possible to replace $f(d_i)$ with a real-valued variable bound by a set of linear inequalities, as detailed in [@Gutfraind14].
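For concreteness, one standard way to carry out this replacement (a sketch following the convexity argument; the details are in [@Gutfraind14]) introduces a real variable $y_i$ for each node together with one secant inequality per integer point: $$\begin{aligned}
\textup{min } & \sum_i y_i & \\
\textup{s.t. } & y_i \geq f(j) + \big(f(j+1)-f(j)\big)(d_i - j) & j = 0, \dots, \Delta-1,\end{aligned}$$ where $\Delta$ is the maximum degree of $G$. Because $f$ is convex, the pointwise maximum of these secant lines is the piecewise linear interpolant of $f$, which agrees with $f$ at every integer point; hence at any optimum with integral $d_i$ we have $y_i = f(d_i)$, and the objective value is unchanged.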
Experimental results
--------------------
We compared the new IP formulation with the formulation of [@Gutfraind14] using the IBM ILOG CPLEX 12.4 solver, running with a single thread on an Intel(R) Core(TM) i5 CPU U 520 @ 1.07GHz with 3.84 GB of random access memory. The experiments used graphs on 15 nodes, where the number of edges was increased from 14 (a tree) until the running time exceeded 1 hour. For each edge density, we constructed 5 graphs and reported the average running time of the two formulations.
From the computational experiments it is clear that our formulation gives significant improvements. For instance, the solve time appears not to depend on the number of nodes in the graph (Fig. \[fig:iptime\](a)), unlike in the previous formulation. We are also able to solve NANIP instances with 45 edges in under an hour, whereas the previous formulation solved only instances with 30 edges in that time (Fig. \[fig:iptime\](b)).
(a)![A comparison of the formulations in [@Gutfraind14] and our new IP formulation with MTZ-type constraints, plotting running time vs. (a) the number of nodes and (b) the number of edges in the target graph. In (a) the number of edges was kept at 30 throughout, while in (b) the number of nodes was 15 throughout.\[fig:iptime\]](perf_ip_nodes "fig:"){width="45.00000%"} (b)![](perf_ip_edges "fig:"){width="45.00000%"}
Conclusion {#sec:concl}
==========
We analyzed the recently introduced Neighbor-Aided Network Installation Problem. We proved the NP-hardness of the problem for the practically most relevant case of convex decreasing cost functions, addressing an open problem raised in [@Gutfraind14]. We then showed that the worst case approximation ratio of the natural greedy algorithm is $\Theta(n)$. We also gave a new IP formulation for optimally solving NANIP, which outperforms previous formulations.
The approximability of NANIP remains an open problem. In particular, it is still not known whether an efficient $o(n)$ approximation algorithm exists for general convex decreasing cost functions. One obstacle to finding a good rounding algorithm is that the IP we presented has an infinite integrality gap. Indeed, the graph $K_n$ with the function $f(i) = \max(0, n/2 - i)$ has $\textup{OPT} = \Omega(n^2)$ but the linear relaxation has $\textup{OPT}_{LP} =
0$. So an approximation algorithm via LP rounding would require a different IP formulation.
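The $K_n$ claim about $\textup{OPT}$ is easy to check: in the complete graph the $t$-th visited vertex always has exactly $t-1$ visited neighbors, so every order has the same cost. A quick sketch:

```python
# In K_n every order gives the same cost, since the t-th visited vertex
# always has exactly t-1 previously visited neighbors.
def kn_cost(n):
    f = lambda i: max(0, n / 2 - i)
    return sum(f(t - 1) for t in range(1, n + 1))

# cost = sum_{i=0}^{n/2-1} (n/2 - i) = (n/2)(n/2+1)/2, i.e. Omega(n^2).
assert kn_cost(8) == 4 + 3 + 2 + 1
assert all(kn_cost(n) >= n * n / 8 for n in range(2, 60, 2))
```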
Acknowledgments and Funding {#acknowledgments-and-funding .unnumbered}
===========================
We thank our colleagues for insightful discussions. AG was supported in part by an ORISE fellowship at the Food and Drug Administration. CPLEX software was provided by IBM through the IBM Academic Initiative program.
---
abstract: 'In this study, we incorporate configuration mapping between simulation ensembles into the successive interpolation of multistate reweighting (SIMR) method in order to increase phase space overlap between neighboring simulation ensembles. This significantly increases computational efficiency over the original SIMR method in many situations. We use this approach to determine the coexistence curve of FCC-HCP Lennard-Jones spheres using direct molecular dynamics and SIMR. As previously noted, the coexistence curve is highly sensitive to the treatment of the van der Waals cutoff. Using a cutoff treatment, the chemical potential difference between phases is moderate, and SIMR quickly finds the phase equilibrium lines with good statistical uncertainty. Using a smoothed cutoff results in nonphysical errors in the phase diagram, while the use of particle mesh Ewald for the dispersion term results in a phase equilibrium curve that is comparable to previous results. The drastically closer free energy surfaces for this case test the limits of this configuration mapping approach to phase diagram prediction.'
author:
- 'Natalie P. Schieber'
- 'Michael R. Shirts'
bibliography:
- 'bib.bib'
title: 'Configurational Mapping Significantly Increases the Efficiency of Solid-Solid Phase Coexistence Calculations via Molecular Dynamics: Determining the FCC-HCP Coexistence Line of Lennard-Jones Particles'
---
Introduction
============
Polymorphism, or the ability of a crystal to pack into multiple metastable states, is important in materials study and design. Polymorphism affects properties of materials such as charge transport [@Stevens2015] and bioavailability [@Chen2009; @Bauer2001a]. When multiple metastable polymorphs with different properties are present, the calculation of solid-solid coexistence curves becomes important. Temperature and pressure transformations are present in materials such as pharmaceuticals [@Boldyrev2004; @Fabbiani2006], and metals [@Boehler2000; @Choukroun2010].
Traditional phase-coexistence calculation methods, such as the Gibbs ensemble method [@Panagiotopoulos1988; @Panagiotopoulos1995; @Panagiotopoulos2002], either are not applicable to solid-solid systems, or require a previously known coexistence point and suffer from increasing error due to the use of numerical integration [@Kofke1993; @Strachan1999]. We have previously introduced the Successive Interpolation of Multistate Reweighting (SIMR) method to predict solid-solid phase diagrams [@Schieber2018]. This methodology does not rely on lattice dynamics, and thus is applicable in systems that are far from harmonic. It calculates the phase diagram from direct calculation of the relative Gibbs free energy using a series of direct molecular dynamics or Monte Carlo simulations without any specialized sampling techniques. It can thus be wrapped around any molecular simulation code.
One drawback to this methodology is that it requires an overlap in the energy and volume phase space between adjacent temperature and pressure simulations. This presents a challenge in a number of situations, for example, when an extremely large pressure range is desired. Here, we present an extension of the SIMR method using a configurational mapping technique, inspired by the work of Tan and collaborators [@Tan2010; @Schultz2016; @Moustafa2015] that reduces the number of simulations required and therefore the computational cost.
We have applied this method to the solid-solid phase diagram of Lennard-Jones spheres, a common test system in molecular simulation. The Lennard-Jones potential is often used to approximate the solid phase of noble gases such as argon, as well as the highly spherical methane, and to test methodologies for more chemically complex solids [@Hansen1969; @Hoover1967; @GPollock1976]. Many methods have been used to successfully and accurately calculate the melting and vaporization lines of the Lennard-Jones system [@Agrawal1995; @Smit1992a; @Smit1991; @Mastny2007; @Morris2002; @Errington2004; @Luo2004; @Davidchack2003; @Schultz2018], such as the Gibbs ensemble method [@Smit1992a; @Smit1991; @Panagiotopoulos1995] (for vapor-liquid) and thermodynamic integration [@Mastny2007]. However, it is significantly harder to calculate solid-solid phase equilibria.
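For reference, the Lennard-Jones pair potential in its standard form (textbook material, not a result of this paper) is $u(r)=4\epsilon\left[(\sigma/r)^{12}-(\sigma/r)^{6}\right]$; a minimal sketch in reduced units:

```python
def lj(r, epsilon=1.0, sigma=1.0):
    """Lennard-Jones pair energy u(r) = 4*eps*((sigma/r)**12 - (sigma/r)**6)."""
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 * sr6 - sr6)

# Sanity checks against the analytic form: u(sigma) = 0 and the minimum
# sits at r = 2^(1/6)*sigma with well depth -epsilon.
r_min = 2.0 ** (1.0 / 6.0)
assert lj(1.0) == 0.0
assert abs(lj(r_min) + 1.0) < 1e-12
```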
In this paper, we focus on solid infinite crystal systems. While the coexistence line of the hexagonal close packed (HCP) and face centered cubic (FCC) phases of Lennard-Jones spheres has been calculated using a variety of approximations and methods, there is substantial variation in the results from these studies. All studies agree that the two stable phases of the solid Lennard-Jones spheres are extremely close in free energy, in the range of $\Delta G = 1-10 \times 10^{-4}$ per particle in reduced units throughout most of the phase diagram, meaning that very small uncertainties or errors result in large changes and uncertainties in the phase diagram.
A number of research groups have attempted to calculate the Lennard-Jones FCC/HCP phase diagram with a range of approximations, with generally inconsistent results. Choi et al. [@Choi1993] performed an early calculation using perturbation theory of the Lennard-Jones potential around the hard-sphere close packing to determine the phase boundary between the FCC and HCP solids and the liquid phase. Van der Hoef’s equations were then used to obtain the residual Helmholtz energy [@vanderhoef2000]. The configurational Helmholtz free energy can then be expanded as a perturbation series and thermodynamic properties can be calculated from the first two terms [@Kim1989; @Kang1986]. However, as can be seen in Figure \[fig:prevresults\], this approach was not consistent with later, more comprehensive approaches. The coexistence line between FCC and HCP LJ structures has also been calculated using dynamic lattice theory (DLT) [@Travesset2014]. However, this approach for obtaining the free energy uses a harmonic approximation, which is not valid at the level of the accuracy needed here.
Calculation of the fully anharmonic Helmholtz free energy have previously been performed using Monte Carlo simulations [@EPollock1976; @Adidharma2016; @Jackson2002]. In the work of Adidharma et al. [@Adidharma2016] canonical ensemble Monte Carlo simulations were performed at a variety of reduced temperatures and densities. The results from the simulations were then fit, using the energy and pressure from the simulations, and constants derived by Stillinger [@Stillinger2001], to the equation for the Helmholtz free energy of the Lennard-Jones solid derived by van der Hoef [@vanderhoef2000]. From the Helmholtz free energy, the coexistence line was then determined.
Lattice switch Monte Carlo is another approach that has been used to calculate the free energy difference between phases. In the lattice switch Monte Carlo method, a transformation between phases is proposed, which takes the molecules of one structure and converts the atomic positions and box vectors to those of the other phase. A range of multicanonical approaches must be applied in order to get sufficient exchange between the packings [@Bruce1997; @Bruce2000]. Once the free energy difference was calculated as a function of $T^*$ and $P^*$, the phase boundary is easily determined. This method has been used to calculate the FCC-HCP coexistence line [@Jackson2002] and study the instability of the BCC phase relative to FCC [@Underwood2015]. The results of Jackson are more consistent with later methods, but are still temperature shifted, especially for smaller box sizes; for larger box sizes, the method was too inefficient to run at higher densities.
One recent comprehensive study of the LJ phase diagram was an extension of the earlier dynamic lattice theory approach of Travesset [@Travesset2014; @Calero2016]. Rather than directly calculating the full free energy, the anharmonic contribution to the free energy was calculated using molecular dynamics with thermodynamic integration along a switching parameter, $\lambda$, where $\lambda=0$ corresponds to the harmonic potential energy $U^{DLT}$, and $\lambda=1$ is the full Lennard-Jones potential, as seen in equation 16 of Calero et al. [@Calero2016]. Using the DLT energy and 20–50 simulations with mixed-potential $\lambda$ values, the anharmonic contribution is calculated by thermodynamic integration. Once the harmonic (via DLT) and anharmonic contributions to the free energy have been calculated, a conversion was performed to find the corresponding pressure for use in the calculation of the temperature-pressure coexistence curve. The addition of the anharmonic free energy term to the previous DLT results illustrates how a small change in free energy value results in a large change in the coexistence curve, as seen by the difference in the ‘Travesset’ and ‘Calero’ lines in Figure \[fig:prevresults\]. However, even more recently, Schultz et al. [@Schultz2018] published a comprehensive study of the Lennard-Jones phase diagram, using both extensive direct simulation and analytic quasiharmonic approaches to explore the entire phase diagram of Lennard-Jones particles. Interestingly, the FCC-HCP equilibrium results were in stronger agreement with Jackson's (partial) results [@Jackson2002] than with the later results of Calero et al. [@Calero2016].
A comparison of many of these previous FCC-HCP coexistence lines over the full range of methods is shown in Figure \[fig:prevresults\] and shows large differences between methods. All data were taken directly from the temperature-pressure phase diagrams presented in the papers using WebPlotDigitizer [@Rohatgi2018], with the exception of the Schultz et al. results, which were taken from correlation lines 7 and 9 of Table I of Schultz et al. [@Schultz2018]. We note that, as verified with the authors, the $v^4$ that should be in the denominator of line 9 is incorrectly written as $v^2$. This was a transcription error in the table alone, not an error in the results of the study. However, none of the studies above published explicit error bars, making comparisons particularly difficult.
Methods
=======
SIMR phase diagram prediction method
------------------------------------
To obtain the phase diagrams of Lennard-Jones spheres using full molecular dynamics, we used the Successive Interpolation of Multistate Reweighting (SIMR) method [@Schieber2018]. This method combines the reduced free energy differences (where the reduced free energy is defined as $\beta G$) between temperature and pressure states within a polymorph with a reference Gibbs free energy difference between polymorphs at the same temperature, yielding the Gibbs free energy difference between polymorphs at all temperatures and pressures in the region of interest.
The reference Gibbs free energy difference value is determined using the pseudo-supercritical path method [@Zhang2012; @Eike2006; @Eike2005] (PSCP). This method determines the free energy required to take each polymorph from a real crystal to an ideal gas; the free energy between the two polymorphs is then the difference of those values.
The reduced free energy between states within a polymorph is found using the multistate Bennett acceptance ratio (MBAR) [@Shirts2008]. This method uses equation \[eq:mbar4\] to iteratively solve for the reduced free energy of each state $f_{i}$ with respect to each other state $f_{k}$, where $N_{k}$ is the number of configurations drawn from state $k$ and $u_{k}(x_{jn})$ is the reduced energy of configuration $n$ sampled in state $j$ and evaluated in state $k$. The reduced energy in the NPT ensemble is defined as $u_k(x_{jn}) = \beta_k \left[U(x_{jn}) + P_k V(x_{jn})\right]$. $$\label{eq:mbar4}
f_{i} = - \ln \sum_{j=1}^{K} \sum_{n=1}^{N_{j}} \frac{\exp[- u_{i}(x_{jn})]}{\sum_{k=1}^{K} N_{k} \exp[ f_{k} - u_{k}(x_{jn})]}$$ Using this definition of the reduced free energy, the Gibbs free energy difference between two polymorphs at state $i$ is given by equation \[eq:finaldg\], where $\Delta f_{ij}$ is the difference in reduced free energy between states $i$ and $j$, and $T_{ref}$ is a reference temperature at which $\Delta G_{ij}$ is known. Linear interpolation is then used to find the points where the difference between polymorphs is zero, which defines coexistence. The uncertainty in the coexistence points found with this method is given by equation \[eq:uncertainty\], where $\delta d$ is the magnitude of the uncertainty in the coexistence line perpendicular to the line, and $\delta \Delta G$ is the uncertainty in the free energy at a point along the coexistence line. Full details of this method can be found in Schieber et al. [@Schieber2018]. In theory, reweighting can be used to refine estimates in between coexistence points rather than direct interpolation, but this is less reliable in the case of configuration mapping, as described below.
$$\label{eq:finaldg}
\begin{split}
\Delta G_{ij}(T) = k_B T \Big ( \Delta f_{ij}(T) - \Delta f_{ij}(T_{ref}) \Big ) + \\
\frac{T}{T_{ref}} \Delta G_{ij}(T_{ref})
\end{split}$$
$$\label{eq:uncertainty}
\delta d = \frac{\delta \Delta G}{\sqrt{\left(\frac{\partial \Delta G}{\partial P}\right)^{2}+\left(\frac{\partial \Delta G}{\partial T}\right)^{2}}}$$
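As a concrete illustration, the self-consistent solution of equation \[eq:mbar4\] can be sketched in a few lines of NumPy. This is a minimal, illustrative implementation with our own function and variable names; production work uses the `pymbar` implementation [@Shirts2008].

```python
import numpy as np

def _logsumexp(a, axis):
    # numerically stable log-sum-exp along the given axis
    amax = np.max(a, axis=axis, keepdims=True)
    out = amax + np.log(np.sum(np.exp(a - amax), axis=axis, keepdims=True))
    return np.squeeze(out, axis=axis)

def mbar_free_energies(u_kn, N_k, tol=1e-10, max_iter=10000):
    """Self-consistent iteration of the MBAR equations (Eq. mbar4).

    u_kn : (K, N) array; u_kn[k, n] is the reduced energy of pooled
           configuration n evaluated in state k
    N_k  : (K,) array of configuration counts per state
    Returns reduced free energies f_k, shifted so that f_0 = 0.
    """
    K, N = u_kn.shape
    f_k = np.zeros(K)
    for _ in range(max_iter):
        # log of the denominator sum_k N_k exp(f_k - u_k(x_n)), per sample n
        log_denom = _logsumexp(f_k[:, None] + np.log(N_k)[:, None] - u_kn, axis=0)
        f_new = -_logsumexp(-u_kn - log_denom[None, :], axis=1)
        f_new -= f_new[0]
        if np.max(np.abs(f_new - f_k)) < tol:
            break
        f_k = f_new
    return f_new
```

Given the matrix of (possibly warped) reduced energies, the resulting differences $\Delta f_{ij} = f_j - f_i$ feed directly into equation \[eq:finaldg\].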
Configuration mapping
---------------------
One requirement for simulations used for the SIMR method is that simulations adjacent in temperature or pressure have a non-negligible amount of phase space overlap [@Schieber2018], as defined in equation \[eq:overlap4\]. Conceptually, this means that the simulations share some set of configurations that both sample. The overlap between states 1 and 2 depends on the probability of all configurations, $x$, in each of the two distributions, $P_{1}$ and $P_{2}$. Due to this requirement, the number of simulations performed depends on the width of the energy and volume distributions of the simulations. Systems with wider potential energy and volume distributions are likely to still achieve phase space overlap with wider spacing. The width of these distributions, and thus the allowable spacing in temperature and pressure between simulations, depends on factors such as the temperature, the pressure, and the size and flexibility of the molecule. In order to decrease the number of simulations, and therefore the computational resources required, it is desirable to increase the spacing between sampled states by increasing the phase space overlap $O_{1,2}$ between states. $$\label{eq:overlap4}
O_{1,2} = \int_{x \in \Gamma} \frac{P_{1}(x) P_{2}(x)}{P_{1}(x)+P_{2}(x)} \,dx$$ One potential way to increase phase space overlap between states is configuration mapping [@Tan2010; @Tan2010e; @Schultz2016; @Moustafa2015; @Paliwal2013; @Jar2002]. Configuration mapping transforms the set of coordinates sampled in one thermodynamic state into a set of coordinates that is more likely to have a low energy in the other thermodynamic state of interest, and evaluates the energy in the new state with the transformed configuration rather than the originally sampled configuration. We can then analytically calculate the free energy change for performing this mapping. This approach was used by Tan et al. [@Tan2010; @Tan2010e] to calculate the temperature dependence of the free energy of solids, by Paliwal et al. [@Paliwal2013] to calculate the Gibbs free energy of transformation between different water models, as well as in a differential form to calculate physical properties such as the heat capacity of HCP iron and the dielectric constant of the Stockmayer potential [@Schultz2016]. Configuration mapping was shown to significantly improve the precision of these calculations. Here, we propose to use this methodology to improve the phase space overlap for SIMR.
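The overlap integral of equation \[eq:overlap4\] can be estimated directly from the sampled configurations, since $P_2(x)/P_1(x) = \exp[\Delta f - \Delta u(x)]$ in terms of reduced energies and free energies. A minimal sketch (our own helper, not part of any published code):

```python
import numpy as np

def overlap_estimate(du_from_1, du_from_2, df):
    """Monte Carlo estimate of O_{1,2} from Eq. (overlap4).

    du_from_1 : u_2(x) - u_1(x) for configurations x sampled from state 1
    du_from_2 : u_2(x) - u_1(x) for configurations x sampled from state 2
    df        : reduced free energy difference f_2 - f_1 (e.g. from MBAR)

    Uses the identities O = E_1[P_2/(P_1+P_2)] = E_2[P_1/(P_1+P_2)]
    and averages the two estimators; identical states give the
    maximum value O = 1/2.
    """
    sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
    est_1 = np.mean(sigmoid(df - du_from_1))  # P_2/(P_1+P_2) over state-1 samples
    est_2 = np.mean(sigmoid(du_from_2 - df))  # P_1/(P_1+P_2) over state-2 samples
    return 0.5 * (est_1 + est_2)
```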
Mathematically, we define a transformation $T(x)$, with Jacobian $J(x)$, which is allowed to depend on the current configuration $x$, and $\Delta U(x)$ is the difference in potential energy between the configuration when evaluated in the original and mapped states [@Moustafa2015], $\Delta U(x) = U(T(x)) - U(x)$. In general, the potential can also change [@Paliwal2013], but in this study, we only change the temperature and pressure between states.
For a simple one-step transformation using the Zwanzig equation, the Helmholtz free energy difference in terms of the mapping can be written as: $$\label{eq:mapping}
\Delta A = -k_B T \ln\langle |J(x)| e^{-\beta \Delta U(x)} \rangle$$ More generally, when using configuration mapping to calculate free energy differences with multistate reweighting, we can derive equivalent formulas by replacing the reduced energy $u(x)=\beta U(x) + \beta PV$ used in MBAR [@Shirts2008] or BAR [@Bennett1976] in the NPT ensemble with a “warped” reduced energy, defined in Eq. \[eq:warped\] by analogy with “warped bridge sampling”, a version of this technique used in statistics [@Meng2002] to calculate the free energy difference between states. In Eq. \[eq:warped\], $i$ is the state the configuration was drawn from, $j$ is the target state the energy is evaluated in, $T_{ij}(x)$ is the transformed set of coordinates that were sampled from state $i$, and $|J_{ij}(x)|$ is the determinant of the Jacobian of the transformation $T_{ij}$. $$\label{eq:warped}
u_{ij}^{w} = u_{j}\left(T_{ij}(x_{i})\right) - \ln |J_{ij}(x_{i})|$$ Equation \[eq:warped\] is applicable for all transformations that have a nonsingular Jacobian, but only a relatively small proportion of transformations actually increase phase space overlap. A good transformation is one that is both relatively easy to implement and results in a significant increase in overlap. With a good transformation, the efficiency of the free energy calculation between states $i$ and $j$ can be greatly improved.
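To make equation \[eq:warped\] concrete, consider a toy one-dimensional example with two harmonic “states” of different widths: the linear map $x \to (\sigma_2/\sigma_1)x$ with log-Jacobian $\ln(\sigma_2/\sigma_1)$ produces perfect overlap, so the free energy estimate becomes exact for every sample. This is an illustrative sketch only; all names are ours.

```python
import numpy as np

def warped_reduced_energy(x, map_ij, u_j, log_jac_ij):
    """Eq. (warped): evaluate the mapped configuration in the target state.
    All callables are hypothetical stand-ins for the actual pipeline."""
    return u_j(map_ij(x)) - log_jac_ij(x)

# Two harmonic "states" u_i(x) = x^2 / (2 sigma_i^2); the analytic reduced
# free energy difference is f_2 - f_1 = -ln(sigma_2/sigma_1).
sigma_1, sigma_2 = 1.0, 2.0
u_1 = lambda x: x**2 / (2 * sigma_1**2)
u_2 = lambda x: x**2 / (2 * sigma_2**2)
scale = sigma_2 / sigma_1

rng = np.random.default_rng(1)
x = rng.normal(0.0, sigma_1, size=1000)          # samples from state 1
u_w = warped_reduced_energy(x, lambda y: scale * y, u_2,
                            lambda y: np.log(scale))
# Zwanzig/EXP estimate of f_2 - f_1 using the warped energies; with this
# map the integrand is constant, so the estimate is exact per sample.
df_est = -np.log(np.mean(np.exp(-(u_w - u_1(x)))))
```

Here the warped energy satisfies $u^w(x) = u_1(x) - \ln(\sigma_2/\sigma_1)$ for every $x$, so the estimator has zero variance, which is the ideal that a good mapping approaches.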
In this study, we applied this coordinate mapping approach to the system of Lennard-Jones spheres to map between states defined by different temperatures and pressures. In the case of point particles such as Lennard-Jones spheres, we only need to map the locations of the particles themselves, without dealing with any internal degrees of freedom. This scaling is applied between every pair of simulations, as the average box vectors change both through thermal expansion and through compression/expansion due to changes in pressure.
To implement this type of transformation, first, the trajectories from states $i$ and $j$ are read. The desired transformation between the two states is determined, which usually requires some information from both trajectories to be efficient. This transformation is then applied to the coordinates in trajectory $i$. In our case, these new coordinates are written to a new trajectory file, and the energy of each frame of the trajectory is then reevaluated. The warped reduced energy is calculated using the warped energy as well as the new $P$ and $T$, and the reduced free energy is calculated using equation \[eq:mbar4\]. For the multistate reweighting process used in SIMR, this transformation/reevaluation is performed for every pair of states in the (not necessarily regular) $(T, P)$ grid.
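Schematically, the loop over state pairs that assembles the warped reduced energies for MBAR might look as follows. The callables `reduced_energy`, `map_pair`, and `log_jac_pair` are hypothetical stand-ins for the trajectory-reevaluation pipeline described above, not actual functions from any package.

```python
import numpy as np

def build_warped_u_kn(states, trajs, reduced_energy, map_pair, log_jac_pair):
    """Assemble the (K, N_total) warped reduced-energy matrix for MBAR.

    states         : list of (T, P) tuples defining the grid
    trajs          : list of configuration arrays; trajs[i] sampled at states[i]
    reduced_energy : callable u(x, T, P) returning a reduced energy
    map_pair       : callable T_ij(x, i, j) applying the configuration map
    log_jac_pair   : callable returning ln |J_ij| for the pair (i, j)
    """
    K = len(states)
    N = sum(len(t) for t in trajs)
    u_kn = np.zeros((K, N))
    col = 0
    for i, traj in enumerate(trajs):
        for x in traj:
            # evaluate this configuration, mapped, in every target state j
            for j, (T, P) in enumerate(states):
                x_mapped = map_pair(x, i, j)
                u_kn[j, col] = reduced_energy(x_mapped, T, P) - log_jac_pair(i, j)
            col += 1
    return u_kn
```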
The transformation we used is defined by: $$\begin{aligned}
\label{eq:ljmap}
\vec{r}_{i,r} &=& \vec{r}_{i} B_{i}^{-1} \nonumber \\
\Delta \vec{r}_{i,r}^{w} &=& \Delta \vec{r}_{i,r} \left(\frac{T_{j}}{T_{i}}\right)^{1/2} \nonumber \\
\vec{r}_{i,r}^{w} &=& \vec{r}_{i,r} + \left(\Delta \vec{r}_{i,r}^{w} - \Delta \vec{r}_{i,r}\right) \nonumber \\
\vec{r}_{i}^{w} &=& \vec{r}_{i,r}^{w}B_{j}\end{aligned}$$ where $\vec{r}_{i}^{w}$ is the new coordinate in the target ensemble, $\vec{r}_{i}$ is the original coordinate, and $B_{i}$ and $B_{j}$ are the average box vector matrices of the original and target trajectories. If two simulations differ in temperature, we also scale the deviation of each particle from its equilibrium position, as derived by Tan et al. [@Tan2010e]. This temperature scaling is carried out using the reduced coordinates $\vec{r}_{i,r}$, i.e., the fractional coordinates within the box, making this a three-step process. In Eq. \[eq:ljmap\], $\Delta \vec{r}_{i}$ is the deviation of the molecule from its equilibrium position, $\Delta \vec{r}_{i} = \vec{r}_{i} - \langle \vec{r}_i\rangle$, and $T_{i}$ and $T_{j}$ are the temperatures of the initial and target trajectories. The energy contribution of this Jacobian is $\frac{3N-3}{2}\log \frac{T_j}{T_i} + 3N\log \frac{|B_j|}{|B_i|}$, where $N$ is the number of particles. The ‘$-3$’ occurs because of the removal of translational center-of-mass motion in the simulation. Note that in this case a constant Jacobian is used for all configurations, though in principle it can be configuration-dependent. Once all of the $u_{ij}^{w}$ values have been calculated, equation \[eq:mbar4\] can be used directly to calculate the reduced free energy differences between states.
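The three-step mapping of Eq. \[eq:ljmap\] and the constant log-Jacobian quoted above can be sketched as follows, assuming $(N, 3)$ coordinate arrays and $3\times 3$ box matrices in the row-vector convention of the equation (the function names are ours):

```python
import numpy as np

def map_configuration(r_i, mean_r_i, B_i, B_j, T_i, T_j):
    """Three-step mapping of Eq. (ljmap).

    r_i      : (N, 3) particle coordinates sampled in state i
    mean_r_i : (N, 3) average (equilibrium) coordinates in state i
    B_i, B_j : (3, 3) average box-vector matrices of states i and j
    T_i, T_j : temperatures of states i and j
    """
    B_inv = np.linalg.inv(B_i)
    r_red = r_i @ B_inv                      # reduced (fractional) coordinates
    mean_red = mean_r_i @ B_inv
    dr = r_red - mean_red                    # deviation from equilibrium site
    dr_w = dr * np.sqrt(T_j / T_i)           # temperature scaling of deviations
    r_red_w = r_red + (dr_w - dr)
    return r_red_w @ B_j                     # back to Cartesian in target box

def log_jacobian(N, B_i, B_j, T_i, T_j):
    """Constant log-Jacobian quoted in the text (COM motion removed)."""
    return (1.5 * (N - 1) * np.log(T_j / T_i)
            + 3 * N * np.log(np.linalg.det(B_j) / np.linalg.det(B_i)))
```

As a sanity check, mapping between identical states is the identity, and a pure box rescaling with equal temperatures simply rescales the coordinates.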
An example of the effect of this mapping on the energy-volume distribution for two temperature and pressure states in this LJ system can be seen in Figure \[fig:ljpdoverlap\]. This figure shows the difference between the overlap in energy and volume achieved between two states unmapped and using configuration mapping. The two unmapped trajectories show no overlap of their energy and volume distributions. The mapped distributions, however, show significant overlap in energy and density, which in most cases will translate directly to overlap of configuration phase space.
The cost of energy reevaluations is very low compared to the cost of simulations. For example, in one test on a standard laptop with GROMACS, the cost of mapping and reevaluating uncorrelated samples between two states from a 9 ns simulation is approximately $7.4$ CPU-min. The cost of running one LJ particle mesh Ewald (PME) simulation itself for 9 ns is approximately 56 CPU-hrs. For a set of 187 states, as examined here, the mapping cost between all pairs of states is then $\sim 4300$ CPU-hrs, which is the cost of about 75–80 additional simulations. However, by using mapping, we need at least 15–40$\times$ fewer simulations than we otherwise would have needed with SIMR, as discussed below, though the exact numbers will depend significantly on the specific code used.
When this particular mapping was applied to the system of Lennard-Jones spheres, the number of states required to achieve overlap decreased significantly. Without mapping, the minimum pressure spacing required to achieve sufficient overlap for the calculation of MBAR to converge with a finite value was approximately 1 $P^*$. With mapping, this could be increased to 25 $P^*$ for the cutoff-based Lennard-Jones simulations and $12.5 P^*$ for the higher precision PME calculations, which translates directly into an efficiency increase of 12–25$\times$ by decreasing the number of simulations required. Although overlap in the temperature dimension was already sufficient for MBAR convergence with simulations every $0.066 T^*$ ($0.033 T^*$ for PME simulations), mapping decreased the uncertainty of free energies between neighboring states in the $T$ direction by roughly a factor of 1.31 to 1.48 (for example, at $P^*=127$ with simulations spaced at $T^* = 0.006$ or $T^* = 0.026$, respectively), leading to an additional efficiency improvement of between $\sim 1.31^2 \approx 1.71$ and $\sim 1.48^2 \approx 2.19$, for an overall efficiency gain of 15–40$\times$.
Simulation details
------------------
The Lennard-Jones phase diagram was produced using a system of 1200 LJ spheres and the standard Lennard-Jones 12-6 potential. The systems were set up with 10 layers of atoms in the $x$ and $y$ directions and 12 layers of atoms in the $z$ direction, for a total of 300 FCC unit cells and 200 HCP unit cells. In a limited study by Jackson et al. [@Jackson2002], a change of system size from 216 to 1728 atoms gave rise to a shift of about 0.1 $T^*$ in the FCC-HCP coexistence line. Since our system size is nearer the upper end of this range, the size dependence will be relatively small compared to the uncertainty, as discussed in more detail later when we examine possible reasons for differences from previous results. The Lennard-Jones parameters for OPLS-UA methane were used in the simulations themselves ($\sigma = 0.373$ nm, $\epsilon = 1.2304$ kJ/mol, $m = 16.043$ amu) [@Jorgensen1984], though all results are reported in reduced units. The range of the phase diagram was chosen to correlate with previous lattice dynamics studies of Lennard-Jones spheres [@Travesset2014]. The temperature in reduced units ranged from 0.066 to 0.466 and the pressure from 0.003 to 508.9, which corresponds to 10 to 70 K and 1 to 200001 bar in real units. This temperature range was chosen in order to include the region of predicted coexistence without including the region of melting. The pressure range was chosen to include the maximum HCP temperature stability point and the reentrant behavior, and to include high enough pressures that the coexistence line is approximately linear and can be extrapolated. Simulations were initially spaced every 0.066 $T^*$ and 25.44 $P^*$. For PME simulations, simulations were spaced every 0.033 $T^*$ and $12.72 P^*$. The largest current limitation on the number of states that can be simulated is the memory available to run MBAR: the current implementation of `pymbar` [@Shirts2008] fails with a memory error if the input matrix is too large.
All production molecular dynamics simulations of Lennard-Jones spheres were performed with GROMACS 5.1.2 [@Berendsen1995; @Abraham2015], using a velocity Verlet integrator and Nosé-Hoover temperature control [@Evans1985] with a time constant of 1.485 reduced units (2 ps). Isotropic Martyna-Tobias-Tuckerman-Klein (MTTK) [@Martyna1996] pressure control with a time constant of 7.24 reduced units (10 ps) was used. This combination of integrator and pressure control was chosen because it was shown to be the most stable for NPT simulation of small LJ systems in GROMACS at high pressure. For PSCP simulations, the `sd` integrator (Langevin dynamics) and thermostat and the Parrinello-Rahman barostat [@Parrinello1981] were used to avoid nonergodicities at the fully restrained and non-interacting states of the PSCP. Two types of simulations were run to determine the phase diagram: with cutoffs, and using particle mesh Ewald (PME). All PME simulations were run for 9 million steps at a time step length of 0.00297 $t^*$ (4 fs), for a total simulation time of 26,728 $t^*$. The cutoff used in these PME simulations was 3.5 $\sigma$. Potential switch simulations were run with the same time step for 11,879 $t^*$. For cutoff-based simulations, the van der Waals interactions were treated with a potential switch cutoff of 1.119 or 0.9325 nm, which corresponds to $3.0 \sigma$ or $2.5 \sigma$, respectively. The potential was smoothly switched to zero at the cutoff over a range of 0.02 nm using the vdw-modifier `potential-switch` keyword. Because of the smaller relative energy difference between phases, all PME simulations were run for a factor of 2.5$\times$ longer than the potential-switch simulations to improve statistical precision.
Rather than increasing the cutoff size and extrapolating to infinity, we used particle mesh Ewald for the dispersion interactions to incorporate long-range effects. This method calculates only the short-range interactions directly; long-range interactions are calculated with a 3D fast Fourier transform on a grid [@Essmann1995; @Petersen1995]. The smooth version of this method, as implemented by Essmann et al. [@Essmann1995], uses a B-spline interpolation on a grid to increase computational efficiency. This method was implemented for the dispersion term in GROMACS by Wennberg et al. [@Wennberg2013; @Wennberg2015]. For all PME simulations, a Fourier grid of $n_x = 36$, $n_y=32$, and $n_z=36$ and a PME grid interpolation order of 6 were used. The direct-space summation was cut off at $3.5 \sigma$ (1.3055 nm), with the potential switched starting at $3.47319 \sigma$ (1.2955 nm) to guarantee smooth integration of the equations of motion, and with the tolerance of the direct-space error at the cutoff (`ewald-rtol-lj`) set to $1.0 \times 10^{-6}$.
Two PSCP calculations were carried out, one at 127 $P^*$ and 0.33 $T^*$ and one at 152 $P^*$ and 0.4 $T^*$, in the NVT ensemble with a $P \Delta \langle V \rangle$ correction term to convert between Helmholtz and Gibbs free energies. Simulations were carried out in NVT at the average volume of the corresponding equilibrated NPT simulation. The (127 $P^*$, 0.33 $T^*$) phase point was used for the phase diagram calculation, with the other point used to check cycle closure. In the PSCP, intermolecular interactions were turned off quartically, while harmonic restraints to the average lattice positions were simultaneously turned on. A set of 25 intermediates was used, with the force constant turned on quadratically to a maximum value of 113.076 in reduced units ($1000~\mathrm{kJ}\cdot\mathrm{mol}^{-1}\cdot \mathrm{nm}^{-2}$), as per the protocol of Dybeck et al. [@Dybeck2016].
Our more challenging phase diagram, generated using PME, is the result of one additional set of low-pressure and low-temperature simulations, performed after the initial simulations, which doubled the number of simulations in the region below $200 P^*$ and $0.3 T^*$. Unsimulated states, for which the average box vectors and displacement vectors are not known, cannot be incorporated into MBAR, because the mapping procedure requires equilibrium averages at each state, which must be obtained from simulation. To calculate the uncertainty, 200 bootstrap samples of simulation configurations were generated. The uncertainty in the $\Delta G$ per particle between the FCC and HCP phases at each point was then determined from the standard deviation of the reduced free energy using the bootstrapped input. This was then converted to the uncertainty in the width of the coexistence line and plotted perpendicular to the line, as errors along the line do not actually affect the line, as described in Schieber et al. [@Schieber2018]. Alternately, using each of the bootstrapped free energies, a new phase equilibrium line can be drawn for each bootstrap resampling. Error bounds can then be generated by taking the lines that represent a given confidence interval away from the median line. This is most accurately done in the perpendicular direction, though because in this case the phase line is mostly vertical and noise in the line makes tangent determination challenging, we look at the confidence interval in the temperature dimension. We use a linear spline through the phase equilibrium points obtained using SIMR to interpolate the line at any temperature of interest. A comparison of the two types of error analysis can be found in section II of the supplementary material (using a $1\sigma$ confidence interval).
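The bootstrap procedure for the coexistence-line uncertainty can be sketched as follows: for each bootstrapped $\Delta G(T)$ curve along an isobar, the zero crossing is located by linear interpolation, and the spread of the crossings gives the uncertainty (a simplified sketch with our own, hypothetical names):

```python
import numpy as np

def crossing_temperature(T, dG):
    """Linearly interpolate the first sign change of dG(T) along an isobar."""
    s = np.sign(dG)
    idx = np.where(s[:-1] * s[1:] < 0)[0]
    if idx.size == 0:
        return np.nan        # no coexistence crossing in this range
    k = idx[0]
    frac = dG[k] / (dG[k] - dG[k + 1])
    return T[k] + frac * (T[k + 1] - T[k])

def bootstrap_coexistence_uncertainty(T, dG_boot):
    """Mean and std. dev. of the coexistence temperature over replicates.

    dG_boot : (n_boot, len(T)) array of resampled dG(T) curves along an isobar
    """
    Tx = np.array([crossing_temperature(T, dg) for dg in dG_boot])
    Tx = Tx[np.isfinite(Tx)]
    return np.mean(Tx), np.std(Tx)
```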
We note very good consistency between the two error estimates above $100 P^*$; below this pressure, the bootstrap estimate becomes inconsistent because of low overlap, leading to large fluctuations in the bootstrap uncertainty.
Thermodynamic cycle closure was used to validate the process for the PME line, using two separate PSCP calculations at two different points. At 127 $P^*$ and 0.40 $T^*$ the PSCP value is $-0.000898(9)$. At 127 $P^*$ and 0.27 $T^*$ the PSCP value is $0.000166(9)$. Using the second PSCP as the reference value in SIMR, the calculated $\Delta G$ at 127 $P^*$ and 0.40 $T^*$ is $-0.00086(6)$, which is within uncertainty of the PSCP value at that point, indicating cycle closure to high precision. After initial simulations and one round of additional simulations at areas of low overlap, at $T^* = 0.20$ the average uncertainty in $\Delta\mu$ was $0.0004$ reduced units, similar to that shown in Figure 6 of Calero et al. [@Calero2016], which ranges from approximately $0.00025$ to $0.0006$, as seen in our Figure \[fig:dgvp\]. All free energies were calculated using the `pymbar 3.0.3` implementation of MBAR [@Shirts2008].
Results
=======
Molecular dynamics phase diagrams of Lennard-Jones spheres
----------------------------------------------------------
We see in Figure \[fig:ljcompcutoff\] that, using mapping, the coexistence line is determined with good precision. The phase coexistence line is significantly affected by the treatment of the cutoff. Cutoffs should not affect the liquid-vapor transition significantly, since those phases are essentially uncorrelated beyond the cutoffs investigated here. However, in solids, cutoff effects in calculations using LJ spheres are important [@Calero2016; @Trokhymchuk1999]. The coexistence line predicted by SIMR and configuration mapping using a $2.5 \sigma$ cutoff has a region of HCP stability that is higher in pressure and temperature, and spans a wider range of pressures, than the coexistence line predicted using a $3.0 \sigma$ cutoff, as seen in Figure \[fig:ljsimr\]. Both of these results are a poor match for literature results extrapolated to long cutoff in the high pressure region, as seen in Figure \[fig:ljcompcutoff\]. The reentrant behavior resulting from the SIMR method is much sharper than in literature results with extrapolated large cutoffs, and the high pressure, low temperature region is not well approximated by this method. The poor match of the coexistence lines using a potential switch cutoff is due to the uneven increase in the contribution of the dispersion energy as the pressure is increased. The sudden inclusion of an entire shell of atoms under the cutoff value as pressure increases causes nonphysical behavior in the energy difference between phases. This is a known effect, analyzed for example by Jackson et al. [@Jackson2002], and we analyze the reasons and resulting issues in more detail in Section I of the supplementary material.
By using particle mesh Ewald for the dispersion term, we obtain a more accurate molecular dynamics phase diagram of FCC and HCP LJ spheres without the inconsistencies introduced by a potential cutoff. This phase diagram shows the same stability trends and reentrant behavior as literature results which in theory include the full statistical mechanics. The resulting PME SIMR phase diagram is approximately one standard deviation away from the results of Calero et al. [@Calero2016] (an average of $0.0234 T^*$ at $P^*=250$), and is within uncertainty for most of the pressure range. The maximum HCP temperature stability is somewhat higher than their results. We note that although Calero et al. do not plot uncertainties in their phase diagram, their Figure 6 includes free energy differences between phases along the $P^*=290$ isobar; this can be used to back-calculate an uncertainty of about 0.01–0.015 in $T^*$ over this part of the range, slightly smaller than ours. The maximum HCP stability temperature in our simulations is, however, lower than that of Schultz et al. [@Schultz2018].
There are a number of factors that could potentially explain the discrepancies between other statistically mechanically complete results and our results; however, we find that most of them do not affect the phase diagram. The major methodological difference between previous anharmonic results and our PME SIMR results is the treatment of the cutoff, in particular our use of PME to account for long range interactions, rather than extrapolation to infinite cutoff. The use of PME and the parameters used for the PSCP were validated against the results of Stillinger et al. [@Stillinger2001], with which multiple groups have obtained highly consistent results [@Schultz2018]. Our PME parameters gave energies at the FCC and HCP lattice minima within $0.0001 E^*$ of the Stillinger results, and reproduced the $\Delta E^*$ between the phases to within $7.3 \times 10^{-7} E^*$. Changing the PME cutoff parameters changed the average potential energy by well less than its statistical uncertainty. The difference in energy between HCP and FCC at the lattice minimum was $\Delta E^* = -8.70 \times 10^{-4}$. Increasing the Ewald direct space cutoff by 0.25 $\sigma$ changed this energy difference by $8.5 \times 10^{-6}$, less than 1% of the total relative energy, and significantly below the statistical uncertainty. Increasing the number of Fourier points by 50% changed the energy by $1.34 \times 10^{-5}$, approximately $1.5\%$ relative error. Additionally, these terms cancel: when both increases in accuracy of the calculation are applied, the total change in energy is $3.4 \times 10^{-7}$, less than 0.5% relative error. This degree of relative bias in the energy calculation should be roughly invariant, or even decrease, at higher pressures, as the energy becomes dominated by the repulsive term, which is entirely short-ranged.
Using a 1200 atom system and the PME settings described above, the initial structures were minimized for volumes ranging from $182.58 \mathrm{nm}^3$ to $547.71 \mathrm{nm}^3$. The minimum enthalpies were then compared against the values in Stillinger [@Stillinger2001], and the coexistence pressure was found. The $T^*=0$ coexistence volume agreed to within $1.50 \times 10^{-4} V^* $ and the enthalpy at that coexistence point agreed to within $0.136 E^*$, which is within 0.2%. Our results were also statistically tested for consistency by extrapolating the coexistence line at high pressures (where it is roughly linear) down to the 0 K point, though we did not simulate in this high pressure region. We found that our extrapolated coexistence line crosses the $T^*=0$ line between $P^*=845$ and $P^* = 958$, which is within uncertainty of the literature results of $P^* = 878$.
There are several other system differences which could in theory be causing the lower temperature coexistence in our results. One such effect is system size. Schultz et al. found that the system size effects are almost entirely harmonic. [@Schultz2018] To study these effects, we added a harmonic $\Delta G$ correction, equal to the difference in the harmonic free energy of each phase between a system of 1200 atoms and a system of infinite cutoff and size, with data graciously provided by the authors of Schultz et al. [@Schultz2018]. However, this correction term only shifted the phase diagram up in temperature by an average value of $0.002 T^*$.
The effect of system size and cutoff was also tested at the $T^*=0$ line. A series of PME cutoffs between $3.2 \sigma$ and $5.3 \sigma$ were tested, and the $T^*=0$ coexistence crossing was identified. The coexistence pressure was shifted by $5 P^*$ between the smallest and largest cutoffs; using the cutoff value of $3.2 \sigma$, the coexistence pressure is $878.26 P^*$, and at $3.6 \sigma$ the pressure is $883.26 P^*$. Changing the system size from 1200 to 9600 atoms produced a change in energy at coexistence of $7.27 \times 10^{-6} E^*$, and moving from the smallest to the largest cutoff produced a change of $7.61 \times 10^{-6} E^*$. Moving from a system size of 1200 to 9600 atoms produced a shift in coexistence of $7 P^*$, which we considered to be sufficiently accurate, given that our main focus is the $T^*>0$ portion of the phase boundary.
Another difference from the results of Schultz et al. is that they considered anisotropic expansion of the HCP box, which slightly shifts the phase coexistence curve in the direction of the HCP phase. However, their results (in Table III of Schultz et al. [@Schultz2018]) show that this effect changes the location of the FCC-HCP-vapor triple point by $\Delta T^*=0.0004$ and the location of the $T^*=0$ intersection of the coexistence curve by $\Delta P^*=0.01$, and thus is several orders of magnitude below the other differences considered in this paper.
After analyzing these other effects, we find that the maximum HCP stability temperature is almost entirely determined by the reference value obtained from the PSCP. If the PSCP reference value were shifted down by $0.0007$ $E^*$, our results would be within uncertainty of the results of Schultz et al., with our maximum HCP stability raised to $T^* = 0.402$ compared to $0.40$ in that study. Alternatively, if we use the coexistence point of Schultz et al. [@Schultz2018] of $P^*=127, T^*=0.4$ (obtained using the correlations found in Table I) as a reference, instead of the PSCP calculation, the maximum HCP stability moves to $0.42(2)$, and the $T^*=0$ coexistence line still intersects between 853 and 1238 $P^*$, both within uncertainty of literature results. The $P^*=0$ intersection could not be calculated due to poor overlap between the low pressure region and the simulated data set. Though this error in the reference $\Delta G$ generated using the PSCP is small, it is several times the uncertainty in the calculation, indicating that some source of bias exists. This bias could be caused by several factors, including the lack of anisotropy in the PSCP simulations, finite size effects in that calculation, or very minor errors in the Parrinello-Rahman barostat implemented in GROMACS, which have been noted earlier [@Shirts2013]. We emphasize that any potential errors are so small as to likely only be noticeable in calculations of this precision.
Other methods could be used to generate the PSCP, for example, lattice-switch Monte Carlo at a single $(T^*,P^*)$ point, though the PSCP method is the most general for arbitrary crystal packings. In fact, the moves used in lattice-switch Monte Carlo themselves define a configuration mapping between the two phases, and the free energy difference between the two phases can be determined directly from two simulations using Eq. \[eq:warped\], without ever actually implementing lattice-switch Monte Carlo. However, preliminary testing demonstrated that this approach was far too inaccurate for the LJ phase diagram because of the low overlap between the mapped FCC$\rightarrow$HCP and HCP$\rightarrow$FCC ensembles. This low overlap is not surprising given that lattice-switch Monte Carlo requires additional acceleration methods to yield reasonable results [@Jackson2002].
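Estimating a free-energy difference directly from two mapped ensembles amounts to a Bennett acceptance ratio (BAR) calculation on the forward and reverse energy differences. Below is a minimal self-contained sketch on synthetic Gaussian "work" values with a known answer, standing in for the mapped FCC$\rightarrow$HCP and HCP$\rightarrow$FCC data; it is not this paper's production analysis.

```python
import math
import random

# Sketch: Bennett acceptance ratio (BAR) on forward and reverse work
# values, as would come from the two mapped ensembles.  Synthetic Gaussian
# samples with a known free-energy difference stand in for simulation
# output (k_B T = 1, equal sample sizes).

def bar(w_f, w_r, lo=-50.0, hi=50.0, iters=100):
    """Solve the BAR self-consistency equation by bisection."""
    def fermi(x):
        return 0.0 if x > 700 else 1.0 / (1.0 + math.exp(x))
    def imbalance(df):
        return (sum(fermi(w - df) for w in w_f)
                - sum(fermi(w + df) for w in w_r))
    for _ in range(iters):             # imbalance(df) is increasing in df
        mid = 0.5 * (lo + hi)
        if imbalance(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

random.seed(7)
true_df, sigma, n = 2.0, 1.0, 5000
# Gaussian work distributions consistent with the Crooks relation:
w_f = [random.gauss(true_df + 0.5 * sigma**2, sigma) for _ in range(n)]
w_r = [random.gauss(-true_df + 0.5 * sigma**2, sigma) for _ in range(n)]
est = bar(w_f, w_r)
print(est)
```

With good overlap (narrow work distributions), the estimate converges tightly; when the mapped ensembles barely overlap, as observed here for the lattice-switch mapping, the same estimator becomes far too noisy to be useful.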
One challenge with studying the FCC and HCP structures using PME is the extremely small free energy difference between the polymorphs, as seen in Figure \[fig:dgvp\], almost two orders of magnitude smaller than in the cutoff simulations. These small energy differences mean that longer simulations ($2.5\times$ the length of the potential-switch simulations) and more closely spaced state points are required to obtain sufficiently precise results. At pressures below $100 P^*$, the difference in free energy was small enough that sufficient precision to clearly resolve the phase diagram could not be achieved. This was due to consistently poor phase space overlap at these states, where the larger amount of atomic movement causes the mapping to work poorly.
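How close the two polymorphs sit in energy can already be seen from static lattice sums. The sketch below compares truncated-LJ lattice energies of ideal FCC and HCP crystals at the pair-minimum nearest-neighbour spacing; the $5\sigma$ cutoff and spacings are illustrative, not the simulated state points.

```python
import math

# Sketch: static LJ lattice energies of ideal FCC and HCP at the
# pair-minimum nearest-neighbour distance, hard-truncated at 5 sigma.
# Illustrates how close the polymorphs are in energy; illustrative only.

def lattice_energy(a1, a2, a3, basis, rc, n=8):
    """LJ energy per atom: half the sum over neighbours of the origin atom."""
    rc2, e = rc * rc, 0.0
    for i in range(-n, n + 1):
        for j in range(-n, n + 1):
            for k in range(-n, n + 1):
                for b in basis:
                    x = i * a1[0] + j * a2[0] + k * a3[0] + b[0]
                    y = i * a1[1] + j * a2[1] + k * a3[1] + b[1]
                    z = i * a1[2] + j * a2[2] + k * a3[2] + b[2]
                    r2 = x * x + y * y + z * z
                    if r2 == 0.0 or r2 > rc2:
                        continue
                    inv6 = 1.0 / r2 ** 3
                    e += 4.0 * (inv6 * inv6 - inv6)
    return 0.5 * e

d = 2 ** (1 / 6)                       # nearest-neighbour distance
rc = 5.0

# FCC: cubic cell, 4-atom basis, lattice constant a = sqrt(2) * d.
a = math.sqrt(2.0) * d
fcc = lattice_energy((a, 0, 0), (0, a, 0), (0, 0, a),
                     [(0, 0, 0), (a/2, a/2, 0), (a/2, 0, a/2), (0, a/2, a/2)],
                     rc)

# HCP: hexagonal cell with ideal c/a = sqrt(8/3), 2-atom basis.
c = d * math.sqrt(8.0 / 3.0)
hcp = lattice_energy((d, 0, 0), (-d / 2, d * math.sqrt(3) / 2, 0), (0, 0, c),
                     [(0, 0, 0), (0, d / math.sqrt(3), c / 2)],
                     rc)
print(fcc, hcp)
```

The two static energies agree to within a tiny fraction of the cohesive energy, which is why resolving the finite-temperature free-energy difference demands such tight statistical precision.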
Discussion and Conclusions
==========================
This configuration mapping approach is likely to be useful for systems of point particles, whatever the potential may be, as the same mapping presented here will be valid. Based on previous results with configuration mapping on rigid water molecules in liquid water [@Paliwal2013], this approach is likely to work with small rigid molecules as well, but it is not clear how well it would work for more complex molecules with internal degrees of freedom whose configuration ensembles change between phases. A number of improvements and extensions may also be possible. For example, adaptive choices of the simulation points, as discussed in Schieber et al. [@Schieber2018], can further decrease the clock time required at the expense of wall time. Many of the state-to-state mappings still lead to essentially negligible overlap, and hence may be unnecessary; it may be possible to *not* map those states together, saving some of the time spent mapping while losing negligible efficiency, though determining exactly which states to exclude or include is somewhat complicated.
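One cheap way to decide which state pairs are worth mapping is an overlap diagnostic. The sketch below scores adjacent (synthetic) reduced-energy distributions with a histogram overlap coefficient and drops pairs below a threshold; the distributions, threshold, and score are illustrative stand-ins for an MBAR-based overlap estimate.

```python
import random

# Sketch: skip mappings between state pairs whose distributions barely
# overlap.  The score is a histogram overlap coefficient sum(min(p, q));
# the data and the 0.2 threshold are illustrative placeholders.

def hist_overlap(xs, ys, nbins=40):
    """Overlap coefficient of two sample sets on a common grid."""
    lo, hi = min(min(xs), min(ys)), max(max(xs), max(ys))
    w = (hi - lo) / nbins or 1.0
    def hist(s):
        h = [0] * nbins
        for v in s:
            h[min(int((v - lo) / w), nbins - 1)] += 1
        return [cnt / len(s) for cnt in h]
    return sum(min(p, q) for p, q in zip(hist(xs), hist(ys)))

random.seed(3)
# Mapped reduced-energy samples at three adjacent pressure states (synthetic):
states = [[random.gauss(mu, 1.0) for _ in range(4000)] for mu in (0.0, 0.7, 6.0)]

pairs = [(i, i + 1) for i in range(len(states) - 1)]
kept = [p for p in pairs if hist_overlap(states[p[0]], states[p[1]]) > 0.2]
print(kept)
```

Here the first pair overlaps well and is kept, while the second is far apart and would be excluded from mapping, saving that mapping cost at negligible loss of information.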
The phase diagram of the solid phases of Lennard-Jones spheres has been predicted many times in the literature, with large discrepancies between methods. The Successive Interpolation of Multistate Reweighting (SIMR) method is another method that can be used to determine the coexistence line of solid-solid transitions using full molecular dynamics simulations. However, this method depends on phase space overlap between adjacent simulations. Configuration mapping is a way to increase phase space overlap, and therefore the computational efficiency of phase diagram prediction.
Using a potential-switch van der Waals cutoff, the coexistence curves were efficiently generated using this method to reasonable precision compared to the scale of $\Delta \mu$ between phases, demonstrating the efficiency of SIMR plus configuration mapping in a standard problem for point particles, such as those found in simulations of metals or inorganic materials. However, this cutoff approach introduces nonphysical behavior in the energy difference versus pressure curves, which can be understood by examining the radial distribution functions. As the pressure increases, more layers of atoms, and thus peaks in the RDF, are brought under the value of the cutoff, which nonphysically affects the Gibbs free energy difference and thus the predicted coexistence line. Using particle mesh Ewald to calculate long range interactions avoids nonphysical behavior in the energy differences between polymorphs, without requiring extremely long cutoffs. The extremely small difference in potential energy and Gibbs free energy between phases, and the small difference in slope between the free energy surfaces, make the determination of this more accurate coexistence line challenging. In particular, at low pressures, the increased movement of the atoms decreases the effectiveness of the mapping, making overlap poor. Current limitations in analyzing larger numbers of states with MBAR prevent fully characterizing the phase diagram at lower pressures, though the uncertainty and bias are already very low (at or below $0.001\,E^*$).
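The RDF-peak mechanism can be illustrated by counting the FCC coordination shells (the RDF peak positions of a cold crystal) that fall inside a fixed cutoff as the lattice compresses. The spacings and the cutoff below are illustrative, not fitted to the simulated state points.

```python
# Sketch: as pressure compresses the lattice, whole coordination shells
# (RDF peaks) cross a fixed cutoff radius, jolting the truncated energy.
# Shell radii are enumerated for FCC; spacings/cutoff are illustrative.

def fcc_shells(d_nn, n=5):
    """Sorted distinct neighbour distances of FCC with nn spacing d_nn."""
    basis = [(0, 0, 0), (0.5, 0.5, 0), (0.5, 0, 0.5), (0, 0.5, 0.5)]
    a = d_nn * 2 ** 0.5
    r2s = set()
    for i in range(-n, n + 1):
        for j in range(-n, n + 1):
            for k in range(-n, n + 1):
                for bx, by, bz in basis:
                    r2 = ((i + bx) ** 2 + (j + by) ** 2 + (k + bz) ** 2) * a * a
                    if r2 > 0.0:
                        r2s.add(round(r2, 9))
    return sorted(r2 ** 0.5 for r2 in r2s)

rc = 3.0                                        # fixed cutoff, sigma units
low_p = sum(1 for r in fcc_shells(1.12) if r < rc)   # open lattice
high_p = sum(1 for r in fcc_shells(1.00) if r < rc)  # compressed lattice
print(low_p, high_p)
```

Each extra shell that slips under the cutoff adds a discrete attractive contribution, so the truncated Gibbs free energy difference between polymorphs acquires the nonphysical pressure dependence described above.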
When using particle mesh Ewald, the phase diagram produced by the addition of configuration mapping to the SIMR method shows independence of cutoff and, despite the issues with statistical convergence, consistent trends in agreement with the current somewhat diverging literature results. The HCP phase is most stable at moderate pressures and low temperatures, with the FCC phase more stable at high temperatures and extreme pressures. The coexistence curve displays reentrant behavior consistent with previous results. The maximum temperature of HCP stability is lower than the likely most accurate results of Schultz et al. [@Schultz2016], probably due to bias in the PSCP value used to generate our coexistence curve. We found that a change in the reference value on the order of 0.0007 $E^*$ brings our curve to within uncertainty of these literature results. These results demonstrate that the method, although not perfect for the present calculation, should be effective for most problems of practical interest.
Supplementary Material
======================
Supplementary material online includes a discussion of the changes in free energy due to cutoffs, including an analysis of how the radial distribution functions change as a function of pressure, as well as a comparison between the uncertainty estimates determined by bootstrapping over phase diagram lines and by propagation of the free energy error perpendicular to the tangent line. We also include the GROMACS input files used for simulations of Lennard-Jones particles for reference.
Acknowledgments
===============
This work used the Extreme Science and Engineering Discovery Environment (XSEDE), which is supported by National Science Foundation grant number OCI-1053575. Specifically, it used the Bridges system, which is supported by NSF award number ACI-1445606, at the Pittsburgh Supercomputing Center (PSC). This work was supported financially by NSF through the grants NSF-CBET 1351635 and NSF-DGE 1144083. We thank Nate Abraham (CU Boulder) for discussions and comparisons, and Andrew Schultz (SUNY Buffalo) for helpful assistance in comparing to the results of Schultz et al.